Why does using DCNv3 raise `RuntimeError: Not implemented on the CPU`?
Finished processing dependencies for DCNv3==1.0
(yolo8_cyy) nd@aiso-PR4904P:~/yt_data/cyy/yolov8/yolov8/ultralytics/nn/modules/ops_dcnv3$ python test.py
/home/nd/anaconda3/envs/yolo8_cyy/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
forward double
- True check_forward_equal_with_pytorch_double: max_abs_err 8.16e-10 max_rel_err nan
forward float
- True check_forward_equal_with_pytorch_float: max_abs_err 1.75e-09 max_rel_err nan
backward double: channels 1
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.19e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 1.51e-09 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 3.26e-09 max_rel_err nan
backward double: channels 16
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.49e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 1.02e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 4.34e-08 max_rel_err nan
backward double: channels 30
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.34e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 1.86e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 8.20e-08 max_rel_err nan
backward double: channels 32
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.04e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 2.79e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 9.50e-08 max_rel_err nan
backward double: channels 64
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.04e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 1.23e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 1.79e-07 max_rel_err nan
backward double: channels 71
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 8.94e-08 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 2.24e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 1.49e-07 max_rel_err nan
backward double: channels 1025
- True input_grad check_backward_equal_with_pytorch_double: max_abs_err 1.64e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_double: max_abs_err 7.75e-07 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_double: max_abs_err 2.38e-06 max_rel_err nan
backward float: channels 1
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 1.49e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 2.68e-09 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 9.31e-09 max_rel_err nan
backward float: channels 16
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 2.68e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 7.45e-09 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 1.04e-07 max_rel_err nan
backward float: channels 30
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 1.79e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 4.56e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 1.64e-07 max_rel_err nan
backward float: channels 32
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 3.28e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 5.22e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 1.64e-07 max_rel_err nan
backward float: channels 64
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 2.98e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 4.47e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 3.58e-07 max_rel_err nan
backward float: channels 71
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 2.68e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 8.20e-08 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 3.73e-07 max_rel_err nan
backward float: channels 1025
- True input_grad check_backward_equal_with_pytorch_float: max_abs_err 1.64e-07 max_rel_err nan
- True offset_grad check_backward_equal_with_pytorch_float: max_abs_err 2.15e-06 max_rel_err nan
- True mask_grad check_backward_equal_with_pytorch_float: max_abs_err 7.63e-06 max_rel_err nan
time cost: im2col_step 128; input torch.Size([512, 64, 64, 64]); points 9 foward time cost: 0.012824912071228028
time cost: im2col_step 256; input torch.Size([512, 64, 64, 64]); points 9 foward time cost: 0.012808279991149902
time cost: im2col_step 512; input torch.Size([512, 64, 64, 64]); points 9 foward time cost: 0.012804050445556641
    return forward_call(*input, **kwargs)
  File "/home/nd/yt_data/cyy/yolov8/yolov8/ultralytics/nn/modules/ops_dcnv3/modules/dcnv3.py", line 326, in forward
    x = DCNv3Function.apply(
  File "/home/nd/anaconda3/envs/yolo8_cyy/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 110, in decorate_fwd
    return fwd(*args, **kwargs)
  File "/home/nd/yt_data/cyy/yolov8/yolov8/ultralytics/nn/modules/ops_dcnv3/functions/dcnv3_func.py", line 39, in forward
    output = DCNv3.dcnv3_forward(
RuntimeError: Not implemented on the CPU
The test above shows the op works, so why do I still get "Not implemented on the CPU" when training? Shouldn't it be running on the GPU?
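For context, the error in the traceback means the custom DCNv3 kernel was handed a CPU tensor: the compiled extension only provides a CUDA implementation, so the module and all its inputs must be on a CUDA device before the forward pass. Below is a minimal sketch of that device check; `pick_device` and the tensor `x` are illustrative names, not part of the DCNv3 code.

```python
import torch

def pick_device() -> torch.device:
    # DCNv3's dcnv3_forward has no CPU kernel, so a CUDA device is required;
    # falling back to CPU here would reproduce the RuntimeError above.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()

# Hypothetical input tensor; in YOLOv8 this would be the feature map fed
# into the DCNv3 module. Both the module (via .to(device)) and the input
# must be moved to the same CUDA device.
x = torch.randn(1, 64, 32, 32).to(device)
print(x.device.type)  # must be "cuda" for DCNv3 to run
```

If this prints `cpu` (for example, because the model was loaded without `.to("cuda")`, or validation/export runs with `device="cpu"`), the DCNv3 forward will raise exactly the error shown above even though `test.py` passed on the GPU.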