When training Group DETR, the training AP remains at 0. I referred to some articles, and it seems that this issue might be related to the batch size. I used a single A6000 GPU with a batch size of 4, while the original setup used 8 GPUs with a total batch size of 16. Could a smaller batch size affect the training process?
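For reference, a common remedy when shrinking the total batch size is the linear learning-rate scaling rule (Goyal et al., 2017): scale the base LR by the ratio of your batch size to the reference one. Below is a minimal sketch; the reference batch size of 16 comes from the original setup described above, but the base LR value and any config key names are assumptions, so check them against the repo's actual config before relying on this.

```python
# Minimal sketch of the linear learning-rate scaling rule.
# Assumption: the reference schedule used total batch size 16 with a
# DETR-family base LR of 1e-4; verify both against Group DETR's config.

REFERENCE_BATCH_SIZE = 16   # 8 GPUs x 2 images/GPU in the original setup
REFERENCE_LR = 1e-4         # assumed base LR; not confirmed from this repo

def scaled_lr(actual_batch_size: int,
              ref_batch_size: int = REFERENCE_BATCH_SIZE,
              ref_lr: float = REFERENCE_LR) -> float:
    """Scale the learning rate linearly with the total batch size."""
    return ref_lr * actual_batch_size / ref_batch_size

if __name__ == "__main__":
    # Single A6000 with total batch size 4 -> LR scaled down by 4x.
    print(f"Suggested LR for batch size 4: {scaled_lr(4):.2e}")  # 2.50e-05
```

Note also that DETR-family detectors typically converge slowly, so AP sitting at 0 for the first few epochs is not unusual on its own; the scaled LR mainly guards against instability or stalled training over a full schedule.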
muluofanhua changed the title from "BatchSize是否导致训练AP始终为0" ("Does the batch size cause the training AP to remain 0?") to "Could the BatchSize be the reason why the training AP remains at 0?" on Jan 2, 2025.