Revisiting-Temporal-Alignment-for-Video-Restoration
Difficulties Running
I appreciate the code release; however, it was pretty difficult to get running. I managed in the end, but the effort raised the bar for reproducing the results. I think it would be worth revisiting a few things.
Here are some changes that would help others use the code:
- Provide some information about your build environment/requirements
- Make DCNv2 a git submodule, and ideally make its import work independently of the script's location
- Add symlinks to libs and utils in each model directory if the scripts are going to import them
- Remove hard-coded inputs, especially paths pointing to your home directory, etc.
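On the import point above, one alternative to per-model symlinks is to prepend the repository root to `sys.path` at the top of each script. This is only a sketch of that idea; `add_repo_root` is a hypothetical helper, and it assumes the model scripts live one directory below the repo root (next to `libs` and `utils`):

```python
import os
import sys

def add_repo_root(script_path):
    """Prepend the repo root (the parent of the model directory) to
    sys.path so `import libs` / `import utils` resolve from any model
    script without symlinks. Returns the computed root for inspection."""
    repo_root = os.path.dirname(os.path.dirname(os.path.abspath(script_path)))
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)
    return repo_root

# In each model script, before the libs/utils imports:
# add_repo_root(__file__)
```

This keeps the scripts runnable from any working directory, at the cost of one extra stanza per entry point.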
I also found that there isn't a sequence_test.py for each model. I worked on creating my own for VDN, but it seems the feed-forward network requires a noise map and, as best I can tell, it is the generated noise. Is this correct?
> I also found that there isn't a sequence_test.py for each model. I worked on creating my own for VDN, but it seems the feed-forward network requires a noise map and, as best I can tell, it is the generated noise. Is this correct?
Sorry for that. In fact, the noise map refers to the noise level: all of its elements hold the same value (e.g. 20). As described in our paper, "In this task, we aim to remove Gaussian white noises with known noise levels".
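For anyone else writing their own test script, a constant noise map like the one described above can be sketched as follows. `make_noise_map` is a hypothetical helper, not part of the repo, and whether the value should additionally be scaled (e.g. divided by 255 to match normalized frames) depends on how the training code preprocesses inputs, so check that before use:

```python
import numpy as np

def make_noise_map(height, width, noise_level=20.0):
    """Build a constant noise map in NCHW layout: every element holds
    the known Gaussian noise level (e.g. 20). Scaling to match the
    frame normalization (e.g. / 255) may be required by the model."""
    return np.full((1, 1, height, width), float(noise_level),
                   dtype=np.float32)

# Typically concatenated with the noisy frame along the channel axis
# before the forward pass; the exact interface depends on the model.
noise_map = make_noise_map(128, 128, noise_level=20)
```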
I see. Thanks for clarifying. I think I can manage. Would you like a PR with some changes to review or is my feedback here sufficient?
And would it be possible to release the missing sequence_test.py for VDN? In the README it's the only one specified which isn't there in the repo.
> I see. Thanks for clarifying. I think I can manage. Would you like a PR with some changes to review or is my feedback here sufficient?
I'd appreciate a PR. Thanks for the good suggestions; I will upload the missing script soon.