
Question about the PWC-Net+ fine-tuning schedules


Hello,

First of all, thank you for your wonderful work and the detailed insights in the supplemental material of your papers. I am currently doing my MSc thesis on optical flow interpolation using deep learning, and I have learned a lot by reading your work.

I'm currently trying the simplest approach to optical flow interpolation with deep learning: starting with the plain FlowNetS architecture and complicating things once the concept works (since I use inputs different from the classical estimation problem, e.g. sparse matches (locations) + sparse flow; see the sketch below).
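To make that input setup concrete, here is a minimal sketch of one way to feed sparse matches and sparse flow to a FlowNetS-style encoder, assuming PyTorch; the function name, shapes, and the dense-map-plus-mask encoding are my own illustration, not from any released code:

```python
# Hypothetical sketch (assumptions: PyTorch; sparse flow rasterized into a
# dense map with a validity mask). Not the thread author's actual code.
import torch

def build_input(img1, img2, sparse_flow, valid_mask):
    """img1, img2: (B, 3, H, W); sparse_flow: (B, 2, H, W), zero at pixels
    without a match; valid_mask: (B, 1, H, W) in {0, 1}."""
    # FlowNetS normally takes the 6-channel concatenation of both frames;
    # here we append the sparse flow and its mask -> 9 channels in total.
    return torch.cat([img1, img2, sparse_flow, valid_mask], dim=1)
```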

I would like to know which fine-tuning schedule you used to fine-tune on MPI-Sintel alone, after S_long (FlyingChairs) and S_fine (FlyingThings3D), i.e. not the mixed Sintel + KITTI + HD1K setting used for the ROB challenge. From what I have seen in the recently uploaded PWC-Net+ protocols for KITTI and Sintel, it seems you are using more learning rate disruptions, as in Figure 8 (see below) of the paper "Models matter, so does training" (to be precise, 5 disruptions for Sintel and 4 for KITTI). However, in Figure 4 of the paper you used only 2 disruptions, with different learning rates, for Sintel.

Figure 4: [learning rate schedule from the paper, with 2 disruptions]

Figure 8: [learning rate schedule from the paper, with 5 (Sintel) / 4 (KITTI) disruptions]
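For concreteness, here is a minimal sketch of what such a "disrupted" schedule looks like: after each decay phase the learning rate is bumped back up (here to half the previous peak) and decayed again. The cycle length, peak rates, and decay boundaries below are illustrative placeholders, not the values from the paper:

```python
# Illustrative piecewise learning-rate schedule with repeated disruptions.
# All numbers are made-up placeholders, not the PWC-Net+ protocol values.

def disrupted_lr(step, base_lr=1e-4, cycle_len=150_000, n_cycles=5):
    """Return the learning rate at a given training step.

    Each cycle starts at a peak that halves with every restart, then
    decays in steps (x0.5 at 45%, 65%, and 85% of the cycle), mimicking
    the overall shape of the Figure 8 schedules.
    """
    cycle = min(step // cycle_len, n_cycles - 1)
    pos = (step % cycle_len) / cycle_len   # position within the cycle
    peak = base_lr * (0.5 ** cycle)        # each restart halves the peak
    decay = sum(pos >= b for b in (0.45, 0.65, 0.85))
    return peak * (0.5 ** decay)
```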

Could you let me know, if you have tried it, whether fine-tuning on Sintel alone (final + clean) works better with the schedule with 5 learning rate disruptions (as in Fig. 8, which you used for the mixed ROB batches) or with just 2 (as in Fig. 4, as initially done for Sintel only)?

Note: sadly, I could not manage to get your PyTorch implementation to work on the cluster I'm using, so I'm using the TensorFlow re-implementation here instead.

Thanks in advance and do not hesitate to ask for any clarification! Cheers. Ferran.

fperezgamonal avatar May 30 '19 08:05 fperezgamonal

Hi Ferran,

Thank you for your interest in our work.

The learning rate schedule in Figure 4 was used to obtain the Sintel results reported in our CVPR'18 paper (PWC-Net), and the one in Figure 8 for the extended TPAMI journal paper (PWC-Net+).

It's a good question whether the learning rate schedule in Figure 8 would work for fine-tuning on Sintel only. Unfortunately, I haven't tried it. My guess is that the model would overfit more to the Sintel training data, and I don't know how it would generalize to the test data.

If you have the computing resources and time, I highly encourage you to test the idea (and report the results here).

Best regards, Deqing

deqings avatar Jun 10 '19 18:06 deqings

@deqings One more question: which model does PWC-Net+ fine-tune from, PWC-Net or PWC-Net_ROB?

xianshunw avatar Aug 17 '19 15:08 xianshunw