Deep-Flow-Guided-Video-Inpainting
LiteFlowNet with cupy instead of FlowNet2 with compiled modules
I've replaced all the FlowNet2 files with the corresponding LiteFlowNet implementations from the PyTorch reimplementation. This implementation is based on CuPy rather than the compiled flownet2 modules, so compiling those modules is no longer a problem. In addition, LiteFlowNet is 30 times smaller in model size, 1.36 times faster at runtime, and outperforms FlowNet2 on the challenging Sintel final pass and KITTI benchmarks [1].
Quick note: I didn't mean to request a merge into the master branch. I just wanted to let you know that there is another option when compiling the flownet2 modules fails. Maybe you could create a new LiteFlowNet branch or reference my fork in the FAQ answer.
Here is the flamingo demo, generated using LiteFlowNet:
Thank you very much for your contribution, and sorry for my late reply. I will consider creating a new branch for your request.
Thanks again for your interest in our project.
Thank you so much for this contribution!
For anyone who comes across this thread in the future - here's how I got the software to work (Manjaro Linux, kernel 5.18 + CUDA 11.7 + Nvidia GPU, without Anaconda). You will likely need to tweak some dependencies:
```shell
git clone https://github.com/lukas-blecher/Deep-Flow-Guided-Video-Inpainting && cd Deep-Flow-Guided-Video-Inpainting
python -m venv venv  # Python 3.9
source venv/bin/activate
pip install -r new_requirements.txt
```
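A quick sanity check before continuing can save debugging later; this minimal snippet just confirms which interpreter is active (inside the venv, `sys.prefix` should point into the `venv` directory created above):

```shell
# Print the interpreter version and its prefix; with the venv activated,
# the prefix should end in /venv (directory name assumed from the setup above).
python3 -c 'import sys; print(sys.version.split()[0], sys.prefix)'
```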
Attachments: new_requirements.txt, result_pip_freeze.txt
Download the models and demos from the links below and place them in the correct folders (e.g. models/pretrained_models; see the original instructions):
- https://drive.google.com/drive/folders/1a2FrHIQGExJTHXxSIibZOGMukNrypr_g
- https://drive.google.com/drive/folders/1zIamN-DzvknZLf5QAGCfvWs7a6qUqaaC
- https://github.com/lukas-blecher/CleanPlateBlender/blob/master/weights/README.md
In models/LiteFlowNet_Models/correlation/correlation.py, replace `@cupy.util.memoize(for_each_device=True)` with `@cupy.memoize(for_each_device=True)` (newer CuPy releases removed the `cupy.util` namespace; the decorator now lives at the top level).
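The rename can also be scripted with `sed`. A minimal sketch, demonstrated on a scratch copy rather than the repository file (substitute the real path, models/LiteFlowNet_Models/correlation/correlation.py, in your checkout):

```shell
# /tmp/correlation_demo.py stands in for the real correlation.py.
printf '@cupy.util.memoize(for_each_device=True)\ndef kernel_Correlation():\n    pass\n' > /tmp/correlation_demo.py

# Rewrite cupy.util.memoize -> cupy.memoize in place (GNU sed syntax;
# on BSD/macOS sed use: sed -i '' ...).
sed -i 's/cupy\.util\.memoize/cupy.memoize/g' /tmp/correlation_demo.py

# Show the updated decorator line.
grep '@cupy.memoize' /tmp/correlation_demo.py
```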
The call is slightly different from the original instructions (because of network-default.pytorch and --LiteFlowNet):
```shell
CUDA_VISIBLE_DEVICES=0 python tools/video_inpaint.py \
    --frame_dir ./demo/lady-running/frames \
    --MASK_ROOT ./demo/lady-running/mask_bbox.png \
    --img_size 448 896 \
    --DFC \
    --LiteFlowNet \
    --pretrained_model_liteflownet ./pretrained_models/network-default.pytorch \
    --Propagation \
    --PRETRAINED_MODEL_1 ./pretrained_models/resnet50_stage1.pth \
    --PRETRAINED_MODEL_2 ./pretrained_models/DAVIS_model/davis_stage2.pth \
    --PRETRAINED_MODEL_3 ./pretrained_models/DAVIS_model/davis_stage3.pth \
    --MS --th_warp 3 --FIX_MASK
```