LayoutNetv2
Quantitative experiments feedback
Hi,
Thanks for sharing your code and dataset. Here is some feedback from our quantitative experiments.
Instead of training the network step by step, we directly train the whole network for 100 epochs with a larger batch size (16), and we get comparable performance (a rough sketch of this training setup follows the table):
| Test set | 3D IoU (%) | Corner Error (%) | Pixel Error (%) |
|---|---|---|---|
| PanoContext | 84.15 | 0.64 | 1.80 |
| Stanford | 83.39 | 0.74 | 2.39 |
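For reference, here is a minimal sketch of what we mean by training everything jointly (a toy stand-in model with dummy data, not the actual code in this repo; the losses and hyper-parameters are only illustrative):

```python
import torch
import torch.nn as nn

class ToyLayoutNet(nn.Module):
    """Stand-in for the real encoder/decoder, just to illustrate joint training."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.edge_head = nn.Conv2d(8, 1, 1)    # boundary-map branch
        self.corner_head = nn.Conv2d(8, 1, 1)  # corner-map branch

    def forward(self, x):
        feat = self.encoder(x)
        return torch.sigmoid(self.edge_head(feat)), torch.sigmoid(self.corner_head(feat))

model = ToyLayoutNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
bce = nn.BCELoss()

for epoch in range(100):                      # 100 epochs over the whole network
    pano = torch.rand(16, 3, 64, 128)         # batch size 16 (dummy panoramas)
    edge_gt = torch.rand(16, 1, 64, 128)
    corner_gt = torch.rand(16, 1, 64, 128)
    edge_pred, corner_pred = model(pano)
    # Sum the branch losses and back-propagate through everything at once,
    # instead of optimizing the branches step by step.
    loss = bce(edge_pred, edge_gt) + bce(corner_pred, corner_gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```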
Our dataset setting is a bit different: we use the mixed PanoContext and Stanford dataset (the same setting as LayoutNet and HorizonNet). In your newest paper, you also consider test data from another dataset.
Best, Jia
@bertjiazheng Thanks for your effort. To clarify:
- LayoutNet v2 uses the same dataset setting as LayoutNet, as in the ablation study in Table 6 of our newest arXiv paper. That's why we re-train HorizonNet in Tables 4 and 5, since only HorizonNet uses a different setting.
- Since you can use a larger batch size, you can train LayoutNet v2 with a ResNet-50 encoder for the best performance (see the sketch after this list). We address this in Sec. 5.2.1 of the newest arXiv version mentioned in this repo.
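For reference, a rough sketch of one common way to plug in a ResNet-50 encoder (the class below and its placeholder usage are illustrative, not the actual code in this repo):

```python
import torch
import torch.nn as nn
import torchvision

class ResNet50Encoder(nn.Module):
    """Illustrative ResNet-50 backbone: keep the convolutional stages, drop avgpool/fc."""
    def __init__(self, pretrained=False):
        super().__init__()
        backbone = torchvision.models.resnet50(pretrained=pretrained)
        self.features = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, x):
        return self.features(x)   # (B, 2048, H/32, W/32) feature map

encoder = ResNet50Encoder()
feat = encoder(torch.rand(2, 3, 256, 512))
print(feat.shape)                 # torch.Size([2, 2048, 8, 16])
```

The decoder/heads would then consume these features; on newer torchvision you would pass `weights=...` instead of `pretrained=...`.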
Thanks for your response. I misunderstood the dataset setting all along and missed the updated version of your paper.