pytorch_open_registration_example

Error: Could not run 'aten::normal_' with arguments from the 'PrivateUse1' backend

Open · andakai opened this issue 1 year ago • 0 comments

I can run the code in this repository successfully, but when I run the snippet below:

import torch
# Importing the extension module builds/loads it via ninja and registers
# the custom PrivateUse1 kernels and device hooks.
from utils.custom_device_mode import foo_module, enable_foo_device

a = torch.randn(4, device='privateuseone')

I get the following error. How can I solve it?

Using /home/david/.cache/torch_extensions/py38_cu121 as PyTorch extensions root...
Emitting ninja build file /home/david/.cache/torch_extensions/py38_cu121/custom_device_extension/build.ninja...
Building extension module custom_device_extension...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module custom_device_extension...
Custom aten::empty.memory_format() called!
Custom allocator's delete() called!
Traceback (most recent call last):
  File "test.py", line 4, in <module>
    a = torch.randn(4, device='privateuseone')
NotImplementedError: Could not run 'aten::normal_' with arguments from the 'PrivateUse1' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::normal_' is only available for these backends: [CPU, CUDA, Meta, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at /home/david/PytorchTrans/pytorch/build/aten/src/ATen/RegisterCPU.cpp:31085 [kernel]
CUDA: registered at /home/david/PytorchTrans/pytorch/build/aten/src/ATen/RegisterCUDA.cpp:44060 [kernel]
Meta: registered at /dev/null:219 [kernel]
SparseCsrCPU: registered at /home/david/PytorchTrans/pytorch/build/aten/src/ATen/RegisterSparseCsrCPU.cpp:1128 [kernel]
SparseCsrCUDA: registered at /home/david/PytorchTrans/pytorch/build/aten/src/ATen/RegisterSparseCsrCUDA.cpp:1269 [kernel]
BackendSelect: fallthrough registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at /home/david/PytorchTrans/pytorch/build/aten/src/ATen/RegisterFunctionalization_0.cpp:21491 [kernel]
Named: fallthrough registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
Conjugate: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/ADInplaceOrViewType_0.cpp:4733 [kernel]
AutogradOther: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradCPU: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradCUDA: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradHIP: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradXLA: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradMPS: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradIPU: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradXPU: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradHPU: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradVE: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradLazy: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradMeta: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradMTIA: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradPrivateUse1: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradPrivateUse2: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradPrivateUse3: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
AutogradNestedTensor: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/VariableType_0.cpp:15862 [autograd kernel]
Tracer: registered at /home/david/PytorchTrans/pytorch/torch/csrc/autograd/generated/TraceType_1.cpp:15894 [kernel]
AutocastCPU: fallthrough registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/functorch/BatchRulesRandomness.cpp:383 [kernel]
Batched: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:37 [kernel]
FuncTorchGradWrapper: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at /home/david/PytorchTrans/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]
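
From the message it looks like torch.randn first allocates through aten::empty.memory_format (which my build handles, hence the "Custom aten::empty.memory_format() called!" line) and then fills the tensor in place through aten::normal_, which has no PrivateUse1 kernel. I'm guessing the fix is to register one in the extension myself. Below is a minimal sketch of what I have in mind; custom_normal_ and the CPU round-trip are placeholders rather than code from this repo, and it assumes the extension also registers copy support (aten::_copy_from) for PrivateUse1 the way the example does:

#include <torch/extension.h>

// Hypothetical in-place normal_ kernel: sample on CPU, then copy the
// result back to the custom device. A real backend would invoke its own
// RNG here instead of bouncing through the CPU.
at::Tensor& custom_normal_(at::Tensor& self, double mean, double std,
                           c10::optional<at::Generator> generator) {
  at::Tensor cpu_tmp = at::empty(self.sizes(), self.options().device(at::kCPU));
  cpu_tmp.normal_(mean, std, generator);
  self.copy_(cpu_tmp);  // relies on the extension's copy kernel
  return self;
}

// Make aten::normal_ visible to the PrivateUse1 dispatch key, so that
// torch.randn(..., device='privateuseone') can find it.
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl("normal_", &custom_normal_);
}

Does that look like the right direction?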

andakai · May 29 '23 16:05