BasicVSR_PlusPlus
CUDA out of memory
Hi @ckkelvinchan, I have tried running inference on an RTX 2060 with 6 GB of VRAM, but it fails with an out-of-memory error. I used video.mp4 as input. Is there any parameter or option, such as a tile or batch-size limit, that trades longer inference time for lower memory so inference can still complete?
Hello, you may use a smaller --max-seq-len here. With this value, the sequence is cut into multiple sub-sequences for processing.
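For intuition, here is a minimal sketch of the idea behind that option, assuming a model that maps a (1, T, C, H, W) frame tensor to upscaled frames; `chunked_inference` and the exact call signature are illustrative, not the repo's actual code:

```python
import torch

def chunked_inference(model, frames, max_seq_len):
    """Process a long frame sequence in chunks to bound peak GPU memory.

    frames: float tensor of shape (1, T, C, H, W); max_seq_len: frames per chunk.
    """
    outputs = []
    for i in range(0, frames.size(1), max_seq_len):
        chunk = frames[:, i:i + max_seq_len]   # (1, <=max_seq_len, C, H, W)
        with torch.no_grad():                  # inference only, no gradient buffers
            out = model(chunk)
        outputs.append(out.cpu())              # move results off the GPU immediately
        torch.cuda.empty_cache()               # release cached blocks between chunks
    return torch.cat(outputs, dim=1)
```

The trade-off is that recurrent propagation cannot cross chunk boundaries, so a very small --max-seq-len reduces the temporal information available to each frame.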
@muhammad-ahmed-ghani Hello, have you solved this problem?
Yeah, as @ckkelvinchan suggested above (https://github.com/ckkelvinchan/BasicVSR_PlusPlus/issues/10#issuecomment-1116952418), set the max sequence length to the smallest value your GPU memory allows.
@muhammad-ahmed-ghani My GPU memory is 24 GB. I tried setting this parameter to 1, but the Linux system killed the process, and I don't know why. All I found was that the GPU memory was always under heavy load.
@Dylan-Jinx Maybe the resolution of your input video is too large. The current code supports only x4 upsampling; to work at x2 you would need to modify the code and retrain the network. You can test with a lower-resolution video, or retrain the network for your target upsampling factor.
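If you just want to test with a lower resolution, a minimal pre-pass along these lines works (the directory names are illustrative; OpenCV is assumed to be installed):

```python
import os
import cv2

src_dir, dst_dir = 'data/input/test2', 'data/input/test2_small'  # illustrative paths
os.makedirs(dst_dir, exist_ok=True)
for name in sorted(os.listdir(src_dir)):
    img = cv2.imread(os.path.join(src_dir, name))
    # INTER_AREA is the usual choice for downscaling; dsize is (width, height) in cv2.
    small = cv2.resize(img, (1200, 800), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(dst_dir, name), small)
```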
My case:
GPU OOM occurred when the input image size was 1920 x 1080, but inference ran fine after I changed the input resolution to 1200 x 800 (a back-of-envelope memory estimate follows after my environment details below).
My script:
python demo/restoration_video_demo.py configs/restorers/basicvsr_plusplus/basicvsr_plusplus_c64n7_8x1_600k_reds4.py pth/basicvsr_plusplus_c64n7_8x1_600k_reds4_20210217-db622b2f.pth data/input/test2/ data/output/test2/ --max-seq-len=1
My hardware:
- GPU: RTX 3080 Laptop
- GPU RAM: 16G
- OS: Ubuntu 22.04
- RAM: 32GB
My env:
- python 3.7
- torch 1.10.0
- torchaudio 0.10.0
- torchvision 0.11.0
- mmcv-full 1.4.8
My mmediting code branch: https://github.com/open-mmlab/mmediting/tree/master
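For a rough sense of why 1920 x 1080 can fail at x4 even with --max-seq-len=1: BasicVSR++ propagates features at input resolution before upsampling, and the "c64" in the config name suggests 64 feature channels. A back-of-envelope estimate, which ignores flow fields, alignment buffers, and model weights, so the real footprint is considerably higher:

```python
# Rough per-tensor memory, assuming float32 and the 64 feature channels implied
# by "c64" in the config name; actual usage depends on architecture internals.
h, w, channels, bytes_per_float = 1080, 1920, 64, 4
feat = h * w * channels * bytes_per_float
print(f"{feat / 2**30:.2f} GiB per 64-channel feature map")  # ~0.49 GiB

# The x4 float32 output frame (3 channels) before conversion to uint8:
out = (h * 4) * (w * 4) * 3 * bytes_per_float
print(f"{out / 2**30:.2f} GiB per output frame")             # ~0.37 GiB
```

Many such intermediates are alive at once (two propagation directions, second-order connections, optical flow, upsampling buffers), so 16 GB can be exhausted at 1920 x 1080, while dropping to 1200 x 800 shrinks every one of these tensors by roughly a factor of 2.2.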
Hello, you may use a smaller --max-seq-len here. With this value, the sequence is cut into multiple sub-sequences for processing.
Hey, I think your link is not valid anymore. Can you update it? Thanks!