
Could the BatchSize be the reason why the training AP remains at 0? #372

Open · muluofanhua opened this issue Jan 2, 2025 · 0 comments

muluofanhua commented Jan 2, 2025

When training Group DETR, the training AP remains at 0. I referred to some articles, and it seems that this issue might be related to the batch size. I used a single A6000 GPU with a batch size of 4, while the original setup used 8 GPUs with a total batch size of 16. Could a smaller batch size affect the training process?
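
A smaller batch size by itself usually does not pin AP at exactly 0, but when the effective batch size changes, the learning rate is commonly rescaled to match it (the linear scaling rule). Below is a minimal sketch of that rule; `REFERENCE_LR` and the function name are illustrative assumptions, not Group DETR's actual config values.

```python
# Sketch of the linear learning-rate scaling rule for a reduced batch size.
# REFERENCE_LR is a hypothetical base LR assumed to be tuned for the
# original effective batch size; check the repo's config for the real value.

REFERENCE_BATCH_SIZE = 16   # original setup: 8 GPUs x 2 images per GPU
REFERENCE_LR = 1e-4         # hypothetical base LR for that batch size

def scaled_lr(batch_size: int) -> float:
    """Scale the learning rate linearly with the effective batch size."""
    return REFERENCE_LR * batch_size / REFERENCE_BATCH_SIZE

print(scaled_lr(4))  # 2.5e-05 for a single-GPU batch size of 4
```

Alternatively, accumulating gradients over 4 steps would restore the original effective batch size of 16 without changing the learning rate.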

muluofanhua changed the title from “BatchSize是否导致训练AP始终为0” (“Does the batch size cause the training AP to remain at 0?”) to “Could the BatchSize be the reason why the training AP remains at 0?” Jan 2, 2025