Quantitative experiments feedback #2

Open
bertjiazheng opened this issue Oct 30, 2019 · 2 comments

Comments

@bertjiazheng

Hi,

Thanks for sharing your code and dataset. Here is some feedback from our quantitative experiments.

Instead of training the network stage by stage, we directly train the whole network end-to-end for 100 epochs with a larger batch size (16). We can get comparable performance (a minimal training sketch follows the table):

| Testset     | 3D IoU | CE   | PE   |
|-------------|--------|------|------|
| PanoContext | 84.15  | 0.64 | 1.80 |
| Stanford    | 83.39  | 0.74 | 2.39 |
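
Concretely, the schedule is a single joint loop. Here is a minimal sketch, assuming a PyTorch-style setup; `LayoutNetV2`, `PanoLayoutDataset`, and the two-branch loss are hypothetical stand-ins, not this repo's actual interfaces:

```python
import torch
import torch.nn.functional as F
from torch.optim import Adam
from torch.utils.data import DataLoader

from model import LayoutNetV2          # hypothetical import, stands in for the repo's model class
from dataset import PanoLayoutDataset  # hypothetical import, stands in for the repo's dataset class

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = LayoutNetV2().to(device)
loader = DataLoader(PanoLayoutDataset(split='train'),
                    batch_size=16, shuffle=True, num_workers=4)
optimizer = Adam(model.parameters(), lr=1e-4)

# One joint 100-epoch schedule instead of stage-wise pre-training of each branch.
for epoch in range(100):
    for pano, edge_gt, corner_gt in loader:
        pano = pano.to(device)
        edge_gt, corner_gt = edge_gt.to(device), corner_gt.to(device)
        edge_pred, corner_pred = model(pano)  # assumed sigmoid outputs in [0, 1]
        # Joint loss over both branches; the actual repo may weight the terms differently.
        loss = (F.binary_cross_entropy(edge_pred, edge_gt)
                + F.binary_cross_entropy(corner_pred, corner_gt))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```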

The dataset setting is a bit different: we use the mixed PanoContext and Stanford dataset (the same as LayoutNet and HorizonNet). In your newest paper, you also consider test data from another dataset.
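
For reference on the columns in the table above: 3D IoU is the volumetric IoU between the predicted and ground-truth 3D layouts; CE (corner error) and PE (pixel error) follow the usual LayoutNet-style definitions. A minimal sketch of the two 2D metrics, assuming corner correspondences are already matched and labels are per-pixel surface classes:

```python
import numpy as np

def corner_error(pred_corners, gt_corners, h, w):
    """CE: mean L2 distance between matched corner pairs,
    normalized by the image diagonal, in percent."""
    dists = np.linalg.norm(pred_corners - gt_corners, axis=1)
    return 100.0 * dists.mean() / np.sqrt(h ** 2 + w ** 2)

def pixel_error(pred_labels, gt_labels):
    """PE: fraction of pixels whose surface label (ceiling/floor/wall)
    disagrees with ground truth, in percent."""
    return 100.0 * np.mean(pred_labels != gt_labels)
```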

Best,
Jia

@zouchuhang
Owner

@bertjiazheng Thanks for your effort. To clarify:

  1. LayoutNet v2 uses the same dataset setting as LayoutNet, as in the ablation study in Tab. 6 of our newest arXiv paper. That's why we re-trained HorizonNet in Tab. 4 & 5, since only HorizonNet uses a different setting.
  2. Since you can afford a larger batch size, you can train LayoutNet v2 with the ResNet-50 encoder for the best performance. We've addressed this in Sec. 5.2.1 of our newest arXiv paper, mentioned in this repo; see the sketch below.
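
A minimal sketch of such an encoder swap, assuming a torchvision backbone; `LayoutNetV2` and its `encoder` attribute are illustrative assumptions, not this repo's actual interface:

```python
import torch.nn as nn
import torchvision.models as models

from model import LayoutNetV2  # hypothetical import, stands in for the repo's model class

class ResNet50Encoder(nn.Module):
    """ResNet-50 trunk with the average pool and fc head removed,
    so it returns a (N, 2048, H/32, W/32) feature map."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(pretrained=True)
        self.features = nn.Sequential(*list(backbone.children())[:-2])

    def forward(self, x):
        return self.features(x)

# Hypothetical usage: swap the encoder, then train end-to-end as above.
model = LayoutNetV2()
model.encoder = ResNet50Encoder()
```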

@bertjiazheng
Author

Thanks for your response. I misunderstood the dataset setting all along and missed the updated version of your paper.
