RuntimeError: CUDA error: invalid device ordinal
I am getting this error even though CUDA itself looks fine:
```
Python 3.8.16 (default, Mar 2 2023, 03:21:46)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'NVIDIA GeForce RTX 2080 Ti'
>>>
(visgpt)
```
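For context, "invalid device ordinal" just means the code asked for a GPU index that does not exist on this machine. A minimal reproduction (the index 4 here is only an example of any index other than 0):

```python
import torch

# With a single GPU, only index 0 is valid; any other index raises
# "RuntimeError: CUDA error: invalid device ordinal".
torch.zeros(1, device="cuda:4")
```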
visual_chatgpt.py hard-codes `cuda:${device}`. I only have one graphics card; replacing every `cuda:\d` with `cuda:0` works fine.
https://github.com/microsoft/visual-chatgpt/blob/aadb7f2e2ee7baa455f0369f334ed4804b48a5cd/visual_chatgpt.py#L804
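If you'd rather not edit by hand, a one-off patch script does the same substitution (a minimal sketch; it assumes you run it from the repo root with the default filename):

```python
import re
from pathlib import Path

# Rewrite every "cuda:<digit>" device string to "cuda:0" so all models
# target the single available GPU.
path = Path("visual_chatgpt.py")
path.write_text(re.sub(r"cuda:\d", "cuda:0", path.read_text()))
print("patched", path)
```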
And then you will get an out-of-memory error, hhhhhh. Try this one: https://github.com/rupeshs/visual-chatgpt/tree/add-colab-support#visual-chatgpt-colab-support
Still not working even with everything set to cuda:0. Is there any other way?
You need to comment out some of the models; otherwise, your graphics memory won't be able to hold all of them at once. See the sketch below.
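Roughly like this, inside the bot's `__init__` where the tool models are created (a rough sketch: the attribute and class names below are placeholders, so check them against your copy of visual_chatgpt.py):

```python
# Keep only the tools you actually use; every commented-out line is one
# less model occupying GPU memory. Names here are illustrative.
self.i2t = ImageCaptioning(device="cuda:0")   # keep: image captioning
self.t2i = T2I(device="cuda:0")               # keep: text-to-image
# self.image2canny = Image2Canny()            # disabled to save VRAM
# self.canny2image = Canny2Image(device="cuda:0")
# self.image2pose = Image2Pose()
# self.pose2image = Pose2Image(device="cuda:0")
```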
@Freshield Could you please show an example in the code? Thanks.
```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.76 GiB total capacity; 9.59 GiB already allocated; 25.94 MiB free; 9.64 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
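As the message suggests, you can try `max_split_size_mb` before trimming models; it has to be set before the first CUDA allocation, and the 128 below is just a starting value to experiment with:

```python
import os

# Must be set before the allocator initialises, i.e. before the first
# tensor is placed on the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var so the allocator picks it up
```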
@75andybermond @d0znpp You can check this link, https://github.com/rupeshs/visual-chatgpt/blob/add-colab-support/visual_chatgpt.py, to see how to modify the main Python script and avoid the CUDA out-of-memory error.