QualityScaler
AssertionError: Torch not compiled with CUDA enabled
import torch  # (torch directory replaced by your bundled version)
print(f'available devices: {torch.cuda.device_count()}')
print(f'current device: {torch.cuda.current_device()}')
Result:
print(f'current device: { torch.cuda.current_device()}')
  File "C:\Users\rmast\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 388, in current_device
    _lazy_init()
  File "C:\Users\rmast\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\cuda\__init__.py", line 164, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
To install torch with CUDA, look at this page: https://pytorch.org/get-started/locally/
You need to select OS, CUDA version, and Python package manager.
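For example, selecting Windows, pip, and CUDA 11.3 there yields a command along these lines (the exact command depends on your selection):

pip3 install torch --extra-index-url https://download.pytorch.org/whl/cu113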
Installing within the unpacked QualityScaler zip, without Python on the machine, seems quite hard.
What should I do to get it done?
You have to install Python 3.8.10.
After several attempts at installing Python 3.8.10 with tcl/tk and running pip install -r requirements.txt, I now have:

I commented out mica, but the upscale error remains. No clear hints in the log.
Probably the problem is that version 2.2 uses pytorch-directml, which uses DirectX 12, not CUDA.
Try versions < 2.0, or just modify the code of 2.2 where the device = "dml" -> device = "cuda".
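A minimal sketch of that edit (the device string is from the suggestion above; the surrounding lines are illustrative, not the exact QualityScaler 2.2 code):

import torch

# Before (pytorch-directml, DirectX 12 backend): device = "dml"
# After (CUDA backend):
device = "cuda" if torch.cuda.is_available() else "cpu"
tensor = torch.zeros(1).to(device)  # quick check that the backend actually works
print(f"tensor placed on: {tensor.device}")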
I'm trying to get DirectML running. https://github.com/martinet101/win32mica/blob/6ec96560c75e11d97b38f86c01a2f6068836d010/src/win32mica/init.py
if sys.platform == "win32" and sys.getwindowsversion().build >= 22000:
    ...
else:
    print(f"Win32Mica Error: {sys.platform} version {sys.getwindowsversion().build} is not supported")
    return 0x32

So MICA is only supported on Windows 11.
Ah sorry, forgot to say: the AI models are not bundled on GitHub because they are too heavy. You can download them here: https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D -> BSRGAN.pth and RealSR_JPEG.pth
According to CUDA 11 minor version compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/index.html#minor-version-compatibility
You should strive to minimize the number of CUDA 11 versions included (in Torch and alongside it), and only upgrade if new CUDA 11 features really add value.
If I have an 11.4 driver, it should work with 11.1 and 11.3 builds as well. (I haven't tried it yet.)
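For example, to see which CUDA runtime a torch build carries and whether it pairs with the installed driver, something like this works (a quick sketch):

import torch

print(f"torch build: {torch.__version__}")                   # e.g. 1.12.0+cu113
print(f"CUDA runtime in this build: {torch.version.cuda}")   # e.g. 11.3
print(f"CUDA available to torch: {torch.cuda.is_available()}")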
I tried version 1.5.0 against the 11.4 driver and it seems to work; however, it keeps running after telling me it's finished.

The maximum I can go is 3x. Those results are more readable.
Fused vs. original:

> Probably the problem is that version 2.2 uses pytorch-directml, which uses DirectX 12, not CUDA.
> Try versions < 2.0, or just modify the code of 2.2 where the device = "dml" -> device = "cuda".
I think you mean reverting this commit: https://github.com/Djdefrag/QualityScaler/commit/66b6f13eca96c3a97a48871850754b01b7403ab2
Don't revert the entire commit, just modify 2.2 by replacing the "dml" string with "cuda"; that way you get all the new features that came with 2.2, but with the CUDA backend.
Does that use a new CUDA backend via DirectML? Since your zip file contents have the big CUDA driver removed?
If you are using the .py script you can choose which backend you want by just installing the library and modifying the code.
Yes, in the .zip there is only pytorch-directml, no CUDA libraries.
I globally replaced "dml" with "cuda", but during the run there is no detailed error pointing to the line where a missing library is suggested. I also commented out the mica lines and put the trained models in the directory. I also tried changing import torch to import torch.cuda, but no detailed errors...
Strange, maybe something is broken in the installed pip packages. You can try cleaning all installed packages and installing everything again:
- pip freeze > uninstall.txt
- pip uninstall -y -r uninstall.txt
- pip install -r requirements.txt --upgrade
Pay attention that you can only install one PyTorch version (either pytorch or pytorch-directml).
Use this requirements.txt, then install pytorch: pip3 install torch --extra-index-url https://download.pytorch.org/whl/cu113
I guess that means installing torch-1.12.0+cu113-cp38-cp38-linux_x86_64.whl
I will perform those steps and see...
No luck. I previously removed pytorch-directml and mica from the original requirements.txt.
This is the result of your steps:
(qs) C:\Users\rmast\QualityScaler>pip list
Package Version
------------------------- ------------
altgraph 0.17.2
certifi 2022.6.15
charset-normalizer 2.1.0
colorama 0.4.5
decorator 4.4.2
engineering-notation 0.6.0
future 0.18.2
idna 3.3
imageio 2.19.3
imageio-ffmpeg 0.4.7
moviepy 1.0.3
numpy 1.23.0
opencv-python-headless 4.5.5.64
pefile 2022.5.30
Pillow 9.2.0
pip 21.2.2
proglog 0.1.10
pyinstaller 5.1
pyinstaller-hooks-contrib 2022.7
pypiwin32 223
python-tkdnd 0.2.1
pywin32 304
pywin32-ctypes 0.2.0
requests 2.28.1
setuptools 61.2.0
sv-ttk 0.1
tk-tools 0.16.0
torch 1.12.0+cu113
tqdm 4.64.0
ttkwidgets 0.12.1
typing_extensions 4.3.0
urllib3 1.26.9
wheel 0.37.1
wincertstore 0.2
WMI 1.5.1
Only seconds after pushing the button, no matter what factor (1x, 2x, or whatever), I get an error, and there is no activity in the directory or in nvidia-smi.
You can also try PyTorch LTS with CUDA 10.2: pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio===0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu102
I've seen 11.3 do something in 1.5.0.
Another option would be to download PyCharm and put a breakpoint on process_upscale_multiple_images_torch
You can use VS Code, it's much better.
I'll try VS Code another time; it must be good as well. I got a cuDNN error from upscale_image_and_save that raised an exception which wasn't caught before and wasn't written to the log at line 1070. The right way to capture that exact exception is:
try:
    upscale_image_and_save(...)  # the call that raised the cuDNN error
except Exception as e:
    write_in_log_file("<p>Error: %s</p>" % str(e))
I see the log file gets cleared on every line written to it. You would probably want a second logfile recording what's going on if you really need a log with a single predefined line in it. You could also write the details to the popup.
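For example, a second log could simply be opened in append mode; a minimal sketch (append_to_debug_log is a hypothetical helper, not QualityScaler code):

import datetime

def append_to_debug_log(message, path="debug_log.txt"):
    # Opening with "a" appends instead of truncating, so earlier lines survive.
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(path, "a", encoding="utf-8") as log:
        log.write(f"[{timestamp}] {message}\n")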
I found the cuDNN stuff in the torch directory. I replaced my torch directory with the torch directory contained in the 1.5.0 version, and now it works with the CUDA 11.4 driver! I had to globally replace both 'dml' and "dml" with 'cuda' and "cuda".
If I want a 4x upscale I have to take a segment from a previous upscale and put that into the process; only then does the 4x upscale work. So the needed memory size estimation needs adjustment as well.

Yes, the only way to communicate between the upscale process (the one that knows what is happening) and the GUI process (the one that can modify the GUI and write the little yellow message) is using a log file.
Wow! Great!
If I remember well, the VRAM calculation depends on torch.cuda.getMemory(), something like that. On my personal GPU I saw in some tests that at most a 600px image with the AI model consumes 6 GB of VRAM, so I assumed: 300px -> 3 GB, 500px -> 5 GB, 1000px -> 10 GB, and so on, dynamically calculated based on the image and the available memory. If the image is bigger than the memory limit, the app cuts it into 4 or 9 or 16 parts, etc.
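As a rough sketch of that heuristic (an assumption based on the description above, not the actual QualityScaler code; the ~1 GB per 100 px rule comes from the tests just mentioned):

import math
import torch

def tiles_needed(image_side_px, gb_per_100_px=1.0):
    # Total VRAM on the first CUDA device, in GB.
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    needed_gb = image_side_px / 100 * gb_per_100_px  # ~1 GB per 100 px of side
    if needed_gb <= total_gb:
        return 1  # the whole image fits, no tiling needed
    n = math.ceil(needed_gb / total_gb)  # shrink the side until one tile fits
    return n * n  # cut into 4, 9, 16, ... parts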
> Yes, the only way to communicate between the upscale process (the one that knows what is happening) and the GUI process (the one that can modify the GUI and write the little yellow message) is using a log file.
I've seen things like in-memory streams used for communication between threads.
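For example, with the standard library a queue can replace the log file between the two processes; a minimal sketch (upscale_worker is illustrative, not QualityScaler code):

from multiprocessing import Process, Queue

def upscale_worker(status_queue):
    status_queue.put("upscaling...")
    # ... the actual upscale work would go here ...
    status_queue.put("done")

if __name__ == "__main__":
    status_queue = Queue()
    worker = Process(target=upscale_worker, args=(status_queue,))
    worker.start()
    while True:
        message = status_queue.get()  # blocks until the worker sends something
        print(message)  # the GUI would show this as the little yellow message
        if message == "done":
            break
    worker.join()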
For sure there are better methods, but I try to devote as little time as possible to my side projects, because my main job already steals a lot of the day and devoting more than an hour would drive me crazy haha.
I thought I would do the same exercise for pytorch-directml compiled with CUDA and looked up some wheel for it. However, I can only find very tiresome compile-it-yourself procedures requiring a very specific old version of Visual Studio (to download with MSDN?) and a lot of manual installs like cuDNN, which even needs an NVIDIA account.
The official pytorch-directml page has some shady build states: https://pypi.org/project/pytorch-directml/
The only supported platforms seem to be Jetson and WSL.
Yeah, a little too complicated