
Quantitative experiments feedback

Open bertjiazheng opened this issue 6 years ago • 2 comments

Hi,

Thanks for sharing your code and dataset. Here is some feedback from our quantitative experiments.

Instead of training the network step by step, we directly train the whole network for 100 epochs with a larger batch size (16). We obtain comparable performance:

| Testset | 3D IoU | CE | PE |
| --- | --- | --- | --- |
| PanoContext | 84.15 | 0.64 | 1.80 |
| Stanford | 83.39 | 0.74 | 2.39 |
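For readers unfamiliar with the columns above, a minimal sketch of the pixel error (PE) metric is given below: the percentage of pixels whose predicted layout class (e.g. ceiling/wall/floor) disagrees with the ground truth. The function name and the toy label sequences are illustrative, not from the LayoutNet v2 codebase.

```python
def pixel_error(pred, gt):
    """Percentage of per-pixel layout labels that differ between
    prediction and ground truth (a sketch of the PE metric)."""
    assert len(pred) == len(gt) and len(gt) > 0
    wrong = sum(p != g for p, g in zip(pred, gt))
    return 100.0 * wrong / len(gt)

# Toy example: 6 pixels, 1 mislabeled -> PE of 100/6 percent.
pred = [0, 0, 1, 1, 2, 2]
gt   = [0, 0, 1, 2, 2, 2]
print(pixel_error(pred, gt))
```

Corner error (CE) is defined analogously over predicted layout corner positions (normalized L2 distance), and 3D IoU compares the estimated and ground-truth room volumes.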

The dataset setting is a bit different: we use the mixed PanoContext and Stanford dataset (the same as LayoutNet and HorizonNet). In your newest paper, you also consider test data from another dataset.

Best, Jia

bertjiazheng avatar Oct 30 '19 10:10 bertjiazheng

@bertjiazheng Thanks for your effort. To clarify:

  1. LayoutNet v2 uses the same dataset setting as LayoutNet, as in the ablation study in Tab. 6 of our newest arXiv paper. That is why we re-train HorizonNet in Tab. 4 & 5: only HorizonNet uses a different setting.
  2. Since you are able to use a larger batch size, you can train LayoutNet v2 with a ResNet-50 encoder for the best performance. We address this in Sec. 5.2.1 of our newest arXiv paper, mentioned in this repo.

zouchuhang avatar Oct 30 '19 13:10 zouchuhang

Thanks for your response. I misunderstood the dataset setting all along and missed the updated version of your paper.

bertjiazheng avatar Oct 30 '19 13:10 bertjiazheng