The offset mean is larger than 100 in the PCD Align module's DCNv2. Could you give me some advice on how to minimize it?
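For reference, here is a minimal sketch of how such a check might look (the threshold and the idea of calling this on the offset tensor during training are assumptions, not part of EDVR's actual API):

```python
import torch

def check_dcn_offset(offset: torch.Tensor, threshold: float = 100.0) -> float:
    """Return the mean absolute value of a DCN offset tensor.

    `offset` is the (B, 2 * deformable_groups * K * K, H, W) tensor
    produced by the offset-prediction conv. A mean far above the
    threshold usually means the PCD alignment has diverged.
    """
    mean_abs = offset.abs().mean().item()
    if mean_abs > threshold:
        print(f"DCN offset mean {mean_abs:.1f} exceeds {threshold}; "
              "consider resuming from an earlier checkpoint.")
    return mean_abs
```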

huihuiustc opened this issue 5 years ago · 9 comments

huihuiustc · Jun 04 '19

Same here.

DLwbm123 · Jun 05 '19

Yes, indeed we also found that training with DCN is unstable. We will write down the issues we met during the competition in this repo later; unstable training is one of them. There are still a lot of things that can be improved in EDVR, and we are exploring some of them.

During the competition, we trained the large model from smaller ones and used a smaller learning rate for the DCN. Even with these tricks, over-large offsets occasionally appeared, and we simply resumed from a normal checkpoint whenever they did.
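A minimal sketch of the smaller-DCN-learning-rate trick in PyTorch, assuming the offset-prediction parameters can be picked out by a name filter (the `"dcn"`/`"offset"` substrings below are assumptions; adapt them to the actual module names):

```python
import torch

def build_optimizer(model, base_lr=1e-4, dcn_lr_mult=0.1):
    # Separate the DCN offset-prediction parameters into their own
    # group with a reduced learning rate; everything else keeps base_lr.
    dcn_params, other_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if "dcn" in name or "offset" in name:  # assumed name filter
            dcn_params.append(param)
        else:
            other_params.append(param)
    return torch.optim.Adam([
        {"params": other_params, "lr": base_lr},
        {"params": dcn_params, "lr": base_lr * dcn_lr_mult},
    ])
```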

xinntao · Jun 05 '19

What do you mean by training the large model from smaller ones? Is it this, from your paper: "We initialize deeper networks by parameters from shallower ones for faster convergence"?

For instance: we use kaiming_normal to initialize all parameters, then freeze the TSA and Reconstruction modules so that requires_grad is set only on the PCD Align and PreDeblur modules.
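A minimal sketch of that freezing scheme, assuming the submodules are exposed as attributes named `pcd_align` and `pre_deblur` (the actual attribute names in EDVR may differ):

```python
def freeze_all_but_pcd_and_predeblur(model):
    # Freeze every parameter first...
    for param in model.parameters():
        param.requires_grad = False
    # ...then re-enable gradients only on the modules being pretrained.
    for module in (model.pcd_align, model.pre_deblur):  # assumed names
        for param in module.parameters():
            param.requires_grad = True
```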

Thanks for your attention.

huihuiustc · Jun 05 '19

  1. Yes, we first train shallower ones.
  2. We will release some models and also the training code for training from scratch, but their performance is not as good as that of the competition models.

xinntao · Jun 07 '19

Thanks for the reply; this is really impressive work and research.

We are trying to first replace the deformable convolutions with regular convolutions to train an initial model, then use that model to initialize the network, and then freeze some of the model blocks before continuing training.
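A minimal sketch of that swap, assuming the DCN class exposes `in_channels`/`out_channels` like a regular conv (this warm-up trick is the commenter's idea, not part of the official repo):

```python
import torch.nn as nn

def replace_dcn_with_conv(module: nn.Module, dcn_cls) -> None:
    """Recursively swap deformable convs for plain 3x3 convs.

    `dcn_cls` is whatever DCN class the model uses (e.g. DCNv2);
    assumes it exposes in_channels/out_channels like nn.Conv2d.
    """
    for name, child in module.named_children():
        if isinstance(child, dcn_cls):
            setattr(module, name, nn.Conv2d(
                child.in_channels, child.out_channels,
                kernel_size=3, padding=1))
        else:
            replace_dcn_with_conv(child, dcn_cls)
```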

huihuiustc · Jun 07 '19

Actually, the DCN is relatively important, so you can first train a small network with DCN (w/o TSA). We are running these experiments and will release them as soon as possible.

xinntao · Jun 10 '19

1. "We trained the large model from smaller ones and used a smaller learning rate for dcn." Do you mean something like this: step 1, train 5 front / 10 back blocks with DCN+TSA at lr = 1e-4 (model S, shallow); step 2, train 5 front / 40 back blocks with DCN+TSA at lr(DCN) = 5e-5 (e.g.) and lr(others) = 1e-4, with the parameters of S copied into model D (deep) for everything except the 30 extra back blocks (see the sketch after these questions)?

2. "You can first train a small network with DCN (w/o TSA)." Do you mean that only the DCN needs to be pretrained, and the parameters after the DCN are not needed (not useful for the deeper model)? For example, I could train 5 front blocks with DCN, without TSA, and with a very shallow SR network after the DCN. Once the DCN is pretrained, the parameters after it can be discarded, and I can put whatever SR network I like after the DCN?

3. This pretrained-DCN trick cannot give the final model D a deeper or wider DCN module than model S has (I mean, changing the feature-extraction layers before the DCN), because the DCN parameters need to be copied. Is that right?

4. For the second step, there are two choices for the DCN: use a smaller learning rate for it, or freeze the DCN module entirely. The second choice would save a lot of training time and GPU memory. Is it suitable? @xinntao
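For question 1, a minimal sketch of the shallow-to-deep hand-off using non-strict state-dict loading (the checkpoint layout is an assumption; the extra back residual blocks are taken to be the only new parameters):

```python
import torch

def init_deep_from_shallow(deep_model, shallow_ckpt_path):
    state = torch.load(shallow_ckpt_path, map_location="cpu")
    # Non-strict loading copies every parameter whose name and shape
    # match; the extra back residual blocks of the deep model simply
    # keep their fresh initialization.
    missing, _ = deep_model.load_state_dict(state, strict=False)
    print("freshly initialized:", missing)
    return deep_model
```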

splinter21 · Jun 11 '19

We have updated the training code and configs and provide training scripts for the model with Channel = 128 and Back RB = 10. The learning-rate scheme is different from the one used in the competition, but it is more effective.

  1. train with the script train_EDVR_woTSA_M.yml
  2. then train with the script train_EDVR_M.yml

You can try this.
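The hand-off between the two steps can be done the same way as the shallow-to-deep copy above: non-strict loading, so that only the new TSA fusion parameters start fresh (the `EDVR` constructor flag and checkpoint name below are hypothetical):

```python
import torch

# Step 2 initialization: load the step-1 (woTSA) weights into the full
# model; only the TSA fusion parameters start from scratch.
full_model = EDVR(with_tsa=True)  # hypothetical constructor flag
state = torch.load("EDVR_woTSA_M.pth", map_location="cpu")
full_model.load_state_dict(state, strict=False)
```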

xinntao · Jun 14 '19

> Thanks for the reply; this is really impressive work and research.
>
> We are trying to first replace the deformable convolutions with regular convolutions to train an initial model, then use that model to initialize the network, and then freeze some of the model blocks before continuing training.

Have you succeeded? How well does it work?

tongjuntx · Jun 15 '20