Bug: CUDA capability sm_86 is not compatible
I'm trying to run LaMa inside WSL2, and here's the error I'm getting. What's the correct way to install it for WSL?
UserWarning:
NVIDIA RTX A6000 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA RTX A6000 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
[2023-04-27 07:50:54,283][saicinpainting.training.data.datasets][INFO] - Make val dataloader default from /mnt/c/Code/lama/LaMa_test_images/
0%| | 0/47 [00:00<?, ?it/s]
[2023-04-27 07:50:54,415][__main__][CRITICAL] - Prediction failed due to CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.:
Traceback (most recent call last):
File "bin/predict.py", line 83, in main
batch['mask'] = (batch['mask'] > 0) * 1
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
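The warning at the top is the root cause: the PyTorch build was compiled only for capabilities sm_37 through sm_70, while the RTX A6000 reports sm_86, so the first tensor op that launches a CUDA kernel (`(batch['mask'] > 0) * 1` here) fails with "no kernel image is available". The check PyTorch is effectively performing can be sketched with a small helper (`build_supports_device` is a hypothetical name, not a PyTorch API):

```python
def build_supports_device(arch_list, capability):
    """Return True if the build's compiled arch list covers the device's
    compute capability, e.g. (8, 6) for sm_86."""
    # PyTorch encodes compiled capabilities as strings like 'sm_70'.
    return "sm_%d%d" % capability in arch_list

# The arch list from the warning above vs. the RTX A6000's (8, 6):
print(build_supports_device(["sm_37", "sm_50", "sm_60", "sm_70"], (8, 6)))  # False
```

If this returns False for your GPU, no reordering of calls or `CUDA_LAUNCH_BLOCKING=1` will help; the only fix is installing a PyTorch build compiled for your capability.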
I've encountered the same problem here, but with an RTX 3090. Any workaround?
After creating the environment, install PyTorch and the CUDA toolkit with the following command:
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
It takes conda a while to resolve the conflicts, but this worked with my RTX 3080.
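After reinstalling, it's worth confirming that the new build actually ships sm_86 kernels before rerunning prediction. A minimal sanity check, using the real `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()` APIs and guarded so it degrades gracefully when torch or a GPU is absent:

```python
# RTX 30xx cards and the A6000 report compute capability (8, 6) -> 'sm_86'.
needed = "sm_%d%d" % (8, 6)

try:
    import torch
    if torch.cuda.is_available():
        arch_list = torch.cuda.get_arch_list()  # e.g. ['sm_37', ..., 'sm_86']
        cap = "sm_%d%d" % torch.cuda.get_device_capability(0)
        print(cap, "supported by this build:", cap in arch_list)
    else:
        print("CUDA not available in this environment")
except ImportError:
    print("torch is not installed in this environment")
```

If `sm_86` (or your card's capability) appears in the arch list, the "no kernel image" error should be gone.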
I have the same problem:
RuntimeError: CUDA error: no kernel image is available for execution on the device