Hello! I've been trying to get MMDetection up and running with a custom dataset, specifically to train a SOLOv2 model. Training on a subset of the COCO2017 dataset works fine, but when I train on my own dataset I get very strange-looking metrics and completely broken performance: running inference with the model trained on my dataset simply marks the entire image as a single instance with a confidence score of 1.0. I'm using auto-scaled LR.
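For context, by auto-scaled LR I mean the built-in option in MMDetection 2.x; I haven't tuned it, so my config contains roughly the sketch below (the base_batch_size of 16 is the usual default in the shipped configs, not something I chose):

# Rough sketch of how auto-scaling of the learning rate is enabled in the config
# (MMDetection 2.x): the LR from the base config gets rescaled by
# (actual total batch size) / base_batch_size at the start of training.
auto_scale_lr = dict(enable=True, base_batch_size=16)

As far as I understand, recent 2.x versions can also enable this from the command line via the --auto-scale-lr flag of tools/train.py.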
Here are the loss values and COCO segmentation evaluation results from one training epoch on the COCO2017 dataset:
Epoch [1][3000/3000] lr: 1.250e-03, eta: 6:38:58, time: 0.236, data_time: 0.015, memory: 4687, loss_mask: 1.4996, loss_cls: 0.5037, loss: 2.0033, grad_norm: 11.7340
2023-04-05 13:31:25,677 - mmdet - INFO -
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.023
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.004
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.008
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.034
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.034
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.034
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.005
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.034
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.070
Here are the results from one training epoch on my custom dataset:
2023-04-04 16:36:33,391 - mmdet - INFO - Epoch [1][200/200] lr: 2.993e-03, eta: 15:42:30, time: 2.417, data_time: 0.361, memory: 11198, loss_mask: 3.0337, loss_cls: 0.1355, loss: 3.1692, grad_norm: 2.2686
2023-04-04 16:42:41,258 - mmdet - INFO -
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.000
And finally, here is my config for my custom dataset:
I've looked at the COCO annotations for my dataset using coco-viewer and they appear to be formatted correctly, so I'm not really sure what's happening here. The losses changed from epoch to epoch and kept going down, but the AP/AR metrics stayed exactly the same. Are there any recommendations for what might be causing this?

Additionally, what exactly do these different evaluation metrics mean? What does the area field reflect, and why are there separate metrics for different areas? Why would only the small-area rows have negative values? If it helps, all of the instances in my training set are roughly the same size pre-augmentation, since my data is synthetically generated (see the quick annotation check below).

I'm still fairly new to ML development, so any and all assistance is appreciated!
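For what it's worth, a quick way to inspect this (sketch below; the annotation path is a placeholder for my actual file) is to load the annotations with pycocotools, list the categories, and bucket the instance areas against the COCO small/medium/large thresholds (32^2 and 96^2 pixels):

from collections import Counter

from pycocotools.coco import COCO

# Placeholder path for my actual training annotation file.
ANN_FILE = 'data/my_dataset/annotations/train.json'

coco = COCO(ANN_FILE)

# The category ids/names here should line up with the `classes` tuple in the
# MMDetection config, otherwise labels get silently mismatched.
print('categories:', [(c['id'], c['name']) for c in coco.loadCats(coco.getCatIds())])

# Bucket every annotation into the area ranges used by the COCO evaluator:
# small < 32^2 px, medium in [32^2, 96^2), large >= 96^2 px.
buckets = Counter()
for ann in coco.loadAnns(coco.getAnnIds()):
    area = ann['area']
    if area < 32 ** 2:
        buckets['small'] += 1
    elif area < 96 ** 2:
        buckets['medium'] += 1
    else:
        buckets['large'] += 1

print('instances per area bucket:', dict(buckets))

The output at least lets me confirm which size buckets my instances actually fall into.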