
lunar crash

Open nerowah123 opened this issue 2 years ago • 0 comments

Lunar crashes with this error:

```
Traceback (most recent call last):
  File "C:\Users\nero\Desktop\lunar_cheater.fun\lunar-main\lunar.py", line 74, in <module>
    main()
  File "C:\Users\nero\Desktop\lunar_cheater.fun\lunar-main\lunar.py", line 22, in main
    lunar.start()
  File "C:\Users\nero\Desktop\lunar_cheater.fun\lunar-main\lib\aimbot.py", line 160, in start
    results = self.model(frame)
  File "C:\Users\nero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\nero\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\nero/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 571, in forward
    y = non_max_suppression(y if self.dmb else y[0],
  File "C:\Users\nero/.cache\torch\hub\ultralytics_yolov5_master\utils\general.py", line 816, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "C:\Users\nero\AppData\Local\Programs\Python\Python310\lib\site-packages\torchvision\ops\boxes.py", line 40, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.
This could be because the operator doesn't exist for this backend, or was omitted during the
selective/custom build process (if using custom build). If you are a Facebook employee using
PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.
'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect,
Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU,
AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, Tracer,
AutocastCPU, Autocast, Batched, VmapMode, Functionalize].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:125 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:35 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:39 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:47 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:51 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:55 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:43 [backend fallback]
AutogradMLC: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:59 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:68 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:293 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:461 [backend fallback]
Autocast: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:305 [backend fallback]
Batched: registered at ..\aten\src\ATen\BatchingRegistrations.cpp:1059 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:52 [backend fallback]
```
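Editor's note, not part of the original report: this `NotImplementedError` is the usual symptom of a CPU-only torchvision wheel installed alongside a CUDA build of torch, so `torchvision::nms` has no CUDA kernel registered. PyTorch wheels mark their backend with a local-version suffix (`+cu113`, `+cpu`); a minimal sketch (the `builds_match` helper is hypothetical) that compares those suffixes:

```python
def builds_match(torch_ver: str, tv_ver: str) -> bool:
    """Return True when two wheel version strings target the same backend.

    PyTorch wheels carry a local-version suffix after '+': '+cuXXX' marks a
    CUDA build, '+cpu' (or no suffix) a CPU build. A CUDA torch paired with
    a CPU torchvision reproduces the missing-'torchvision::nms' error above.
    """
    def backend(version: str) -> str:
        # Everything after the first '+' names the build target; default to CPU.
        return version.split("+", 1)[1] if "+" in version else "cpu"
    return backend(torch_ver) == backend(tv_ver)

# Compare the strings reported by torch.__version__ / torchvision.__version__:
print(builds_match("1.11.0+cu113", "0.12.0+cpu"))    # False -> mismatch, likely this crash
print(builds_match("1.11.0+cu113", "0.12.0+cu113"))  # True  -> builds agree
```

If the suffixes disagree, reinstalling both packages from the same CUDA index usually resolves it, e.g. `pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113` (cu113 is an assumption here; use the tag matching the installed CUDA toolkit).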

nerowah123 commented Apr 12 '22 00:04