CodeFormer
Background upscaling with realesrgan doesn't seem to work.
--bg_upsampler realesrgan gives me a black background. Looks like an issue. I am running it on a GTX 1650 Ti with 4 GB VRAM. Can you please help? I have made sure PyTorch with CUDA support is installed, and all libraries are up and running.
Lol I have the same problem, no background, only black image with face.
- Ubuntu 20.04.5 LTS
- nVidia 1650
- Anaconda3-2022.10-Linux-x86_64.sh (conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia)
I resolved the issue with the following trick: just add these two lines after the import statements in the file "inference_codeformer.py" and save it.
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True
The issue has to do with broken float16 computation on all GTX 16xx cards. The above trick seems to have resolved it. Using the same trick, I am also able to run Stable Diffusion without any optimization arguments.
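For reference, the top of the modified script would look roughly like this (the surrounding import list is illustrative; only the two cuDNN lines are the actual fix from this thread):

```python
import torch  # the script already imports torch; other imports omitted

# Workaround for GTX 16xx cards: force the cuDNN backend on with
# autotuned kernel selection, which avoids the float16 code path
# that produces black backgrounds on these GPUs.
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

print(torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
```

Both flags are plain module-level settings, so they take effect for everything that runs after them; placing them right after the imports guarantees they are set before any model is loaded.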
omg :) it works, thx
Memory problem?
Error CUDA out of memory. Tried to allocate 930.00 MiB (GPU 0; 3.81 GiB total capacity; 871.98 MiB already allocated; 931.44 MiB free; 1.67 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
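Following the hint in the error message, one thing worth trying before running the script is setting PyTorch's allocator option to reduce fragmentation. The 128 MiB value below is just an example to tune, and the inference command in the comment is a placeholder based on this thread:

```shell
# Allocator hint from the CUDA OOM message above; tune the value for your card.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
# Then run the usual command, e.g. (paths are placeholders):
# python inference_codeformer.py --bg_upsampler realesrgan --input_path inputs/
```

This only changes how PyTorch carves up cached memory; it does not add VRAM, so very large source images may still fail on a 4 GB card.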
I have never tried this on Linux. I am using Windows and am able to upscale source images of up to 1080p; with 4 GB of video memory, that is as far as I could go. Make sure you are not setting the Nvidia card as the primary card on Ubuntu, so the OS won't occupy the card for basic UI rendering. Enable the "NVIDIA On-Demand" or power-saver option in the control panel. If that doesn't help, try it on Windows; it will work for a source image up to 1080p.
I see, thank you!
Edit inference_codeformer.py and add this after the imports:
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True