Thanks for sharing your code and dataset. Here is some feedback from our quantitative experiments.
Instead of training the network stage by stage, we directly train the whole network for 100 epochs with a larger batch size (16) and get comparable performance (a minimal training sketch is given after the table):
| Testset     | 3D IoU | CE   | PE   |
|-------------|--------|------|------|
| PanoContext | 84.15  | 0.64 | 1.80 |
| Stanford    | 83.39  | 0.74 | 2.39 |
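
For reference, here is a minimal sketch of the training configuration described above (the whole network optimized jointly for 100 epochs with batch size 16, rather than stage by stage). The model and dataset below are placeholders, not the actual LayoutNet v2 / HorizonNet code:

```python
# Hypothetical sketch of the end-to-end training setup described above.
# The network and data are stand-ins; only the schedule (100 epochs, batch 16,
# joint optimization of all stages) reflects the setting reported in this issue.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network and data standing in for the layout model and panorama dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 128, 1))
dataset = TensorDataset(torch.randn(64, 3, 64, 128), torch.randn(64, 1))

loader = DataLoader(dataset, batch_size=16, shuffle=True)   # larger batch size (=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for epoch in range(100):                 # train the whole network directly
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()                   # gradients flow through all stages jointly
        optimizer.step()
```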
Our dataset setting is slightly different: we use the mixed PanoContext and Stanford dataset (the same as LayoutNet and HorizonNet). In your newest paper, you also evaluate on test data from another dataset.
Best,
Jia
LayoutNet v2 uses the same dataset setting as LayoutNet, as in the ablation study in Table 6 of our newest arXiv paper. That's why we re-train HorizonNet in Tables 4 & 5, since only HorizonNet uses a different setting.
Since you can afford a larger batch size, you can train LayoutNet v2 with the ResNet-50 encoder for the best performance. We've addressed this in Sec. 5.2.1 of the newest arXiv paper mentioned in this repo.
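
For anyone else reading, here is a minimal sketch of what swapping in a ResNet-50 encoder could look like. The class name and interface below are hypothetical and assume a simple drop-in feature extractor; they are not taken from the LayoutNet v2 repo:

```python
# Hypothetical encoder sketch: a ResNet-50 backbone with its classification head
# removed, used as a convolutional feature extractor. Names and wiring are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ResNet50Encoder(nn.Module):
    """Feature extractor built from a ResNet-50 backbone (classification head dropped)."""
    def __init__(self, pretrained: bool = True):
        super().__init__()
        backbone = models.resnet50(pretrained=pretrained)
        # Keep everything up to the last conv stage; drop avgpool and fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)  # (B, 2048, H/32, W/32)

encoder = ResNet50Encoder(pretrained=False)
feat = encoder(torch.randn(2, 3, 512, 1024))  # equirectangular-sized input
print(feat.shape)                             # torch.Size([2, 2048, 16, 32])
```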