Results 32 comments of RedHair

@xghc1991 Which data did you train with? I am only using the driving data in the SceneFlow dataset.

No, I am using data from here. https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html

@xghc1991 yes, I can. How did you set hyper-parameters?

I updated the network and trained the model on the driving_finalpass dataset (containing 4,400 stereo images) for one epoch. The mean absolute error is around 7.

Yes, this problem arises from a dimension inconsistency. Since there are four downsampling layers in total, the height and width (and also the disparity) must be divisible by 32. Currently, the input...
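To illustrate the divisibility constraint above, here is a minimal NumPy sketch of padding an input so its height and width become multiples of 32 before feeding it to the network. The function name `pad_to_multiple` and the example image size are illustrative, not from the repo.

```python
import numpy as np

def pad_to_multiple(img: np.ndarray, multiple: int = 32) -> np.ndarray:
    """Zero-pad an H x W x C image on the bottom/right so that
    H and W are both divisible by `multiple`."""
    h, w = img.shape[:2]
    pad_h = (-h) % multiple  # rows needed to reach the next multiple
    pad_w = (-w) % multiple  # columns needed to reach the next multiple
    return np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

# Example: a 540 x 960 frame (SceneFlow resolution); 540 is not
# divisible by 32, so it gets padded up to 544.
img = np.ones((540, 960, 3), dtype=np.float32)
padded = pad_to_multiple(img)
print(padded.shape)  # (544, 960, 3)
```

The padding can be cropped off the predicted disparity map afterwards, so it does not affect the evaluated region.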

I didn't copy the functions to the backend library. Could you post the error message? The pretrained model is only for experimentation; it should have been deleted.

@Lejboelle I have modified convolution3DTranspose.py. Feel free to tell me if you still have problems. @heyalqh No, I haven't. I ran out of memory with the original parameter settings. It seems...

The original paper concatenates the left and the right features across all disparities. In your case, it just concatenates features from D = 0 to D = 400.

About your first question, I think you're right. As indicated by the paper, the dimension of the cost volume is DxHxWx2F, which means each feature channel pair spans a DxHxW array. For...
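The concatenation described in the two comments above can be sketched in NumPy: for each disparity d, the left feature map is paired with the right feature map shifted by d columns, yielding a DxHxWx2F volume. This is a minimal illustration assuming dense per-pixel features; `build_cost_volume` and the array shapes are hypothetical, not taken from the repo.

```python
import numpy as np

def build_cost_volume(left_feat: np.ndarray,
                      right_feat: np.ndarray,
                      max_disp: int) -> np.ndarray:
    """Build a D x H x W x 2F cost volume by concatenating left features
    with right features shifted by each disparity d in [0, max_disp)."""
    h, w, f = left_feat.shape
    volume = np.zeros((max_disp, h, w, 2 * f), dtype=left_feat.dtype)
    for d in range(max_disp):
        # left half of the channels: the unshifted left features
        volume[d, :, :, :f] = left_feat
        # right half: the right feature at column x - d aligns with the
        # left feature at column x; columns x < d have no match and stay 0
        volume[d, :, d:, f:] = right_feat[:, : w - d]
    return volume

left = np.random.rand(4, 8, 2).astype(np.float32)
right = np.random.rand(4, 8, 2).astype(np.float32)
cv = build_cost_volume(left, right, max_disp=3)
print(cv.shape)  # (3, 4, 8, 4): D x H x W x 2F
```

In GC-Net the features are computed at half resolution, so D here would be half the maximum disparity in image coordinates; the loop form is for clarity, and a real implementation would do this with tensor ops inside the graph.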

Hi, unfortunately I haven't trained the model with the Scene_Flow data. It seems that the model will run out of memory if the hyper-parameters are set too high. In addition,...