EMVD
Efficient Multi-Stage Video Denoising With Recurrent Spatio-Temporal Fusion. CVPR 2021.
From my experiments, there is no need to reproduce the paper's network structure, including the color transform and frequency transform: using only a simple avgpool and conv, I already exceeded the results of a network with the same structure as the paper. The fuse, denoise, and refine stages are worth borrowing, especially fuse; the rest you can design freely.
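As a rough illustration of that simplification (a minimal NumPy sketch, not the repo's actual code; the function name and shapes are assumptions), a 2×2 average pooling can stand in for the low-frequency (LL) band of the paper's frequency transform:

```python
import numpy as np

def avgpool2x2(x):
    """2x2 average pooling over an (H, W, C) image.

    A simple stand-in for the LL band of the paper's frequency
    transform: each output pixel is the mean of a 2x2 input block.
    Assumes H and W are even.
    """
    h, w, c = x.shape
    # Group pixels into 2x2 blocks, then average within each block.
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
```

The channel-mixing role of the color transform would similarly be played by an ordinary 1×1 conv learned end to end.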
Hello, first of all, respect for your amazing work. I saw that your paper mentions VBM4D results; could you share that code as a reference? Thanks a lot. Email: [email protected]
the YUVW transfer matrix is

M = [[ 0.5,     0.5,     0.5,     0.5   ],
     [-0.5,     0.5,     0.5,    -0.5   ],
     [ 0.65,    0.2784, -0.2784, -0.65  ],
     [-0.2784,  0.65,   -0.65,    0.2784]]

but your code has cfa = np.array(...
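For reference, a quick NumPy check confirms the matrix quoted above is numerically orthogonal, which is what lets its transpose invert the transform (a verification sketch, not code from the repo):

```python
import numpy as np

# The YUVW transform matrix quoted above.
M = np.array([
    [ 0.5,     0.5,     0.5,     0.5   ],
    [-0.5,     0.5,     0.5,    -0.5   ],
    [ 0.65,    0.2784, -0.2784, -0.65  ],
    [-0.2784,  0.65,   -0.65,    0.2784],
])

# M @ M.T is the identity up to rounding in the 0.2784/0.65
# coefficients, so M.T serves as the inverse transform.
print(np.allclose(M @ M.T, np.eye(4), atol=1e-4))  # → True
```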
Good job! I ran into the checkerboard effect during my training. Has anyone else encountered this, too?
Hey, what is the license of the project?
I didn't find any learning rate adjustment in the training code.
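If one wanted to add a schedule, a simple step decay is easy to bolt on (a hypothetical sketch; the repo appears to train at a fixed learning rate, and every name and value below is an assumption, not the authors' setting):

```python
def step_decay_lr(base_lr, iteration, decay_every=100_000, gamma=0.5):
    """Hypothetical step-decay schedule for the training loop:
    multiply the learning rate by `gamma` every `decay_every`
    iterations. Values here are illustrative, not from the repo."""
    return base_lr * (gamma ** (iteration // decay_every))
```

In PyTorch the same effect is usually obtained with `torch.optim.lr_scheduler.StepLR` and a `scheduler.step()` call per iteration or epoch.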
Is there a pre-trained model available for use? Thanks a lot.
Thanks for the reproduction. Two small questions: 1. How many iterations did you train with this code to reach 42.02 PSNR? What hardware did you use, and how long did training take? 2. The EMVD paper uses synthetic data, but you seem to use only the real CRVD data; does this affect the results?
I have also been reproducing this paper recently. As the paper says, only the first frame is initialized with the sigma of the noisy frame's LL band. So when training over the frame loop, at https://github.com/Baymax-chen/EMVD/blob/975a2f46b20798fc981bceccc1885f63aad6d870/structure.py#L220 only the first frame should be computed this way; for subsequent frames, the value should be passed in from the result computed on the previous frame.
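The recurrence described above could be sketched like this (plain-Python pseudocode with assumed names, not the repo's structure.py; `denoise_step` abstracts one pass of the network):

```python
def process_clip(frames, sigma0, denoise_step):
    """Run a recurrent denoising pipeline over a clip.

    sigma0: noise level estimated from the *first* noisy frame's LL band.
    denoise_step: one network step; returns (fused_output, updated_sigma).
    Only frame 0 uses sigma0; each later frame reuses the output and
    sigma produced while processing the previous frame.
    """
    prev_out, sigma = None, sigma0
    outputs = []
    for frame in frames:
        prev_out, sigma = denoise_step(frame, prev_out, sigma)
        outputs.append(prev_out)
    return outputs
```

The point is that `sigma` is threaded through the loop rather than re-estimated from each noisy frame.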
Hello, the model code is relatively complex and difficult to modify to handle images of conventional size with either 1 channel (grayscale) or 3 channels (RGB)....