COCO mAP metric #2901
Conversation
Hi @AlexanderChaptykov, thanks for your comment. Yes, that would be nice, but I need a short time to add a few commits beforehand.
@vfdev-5, finally it's ready to be reviewed.
i.e. ObjectDetectionMap and its dependencies
- Removed allow_multiple...
- Renamed average_operand
- Renamed _measure_recall... to _compute_recall...
- Fixed some nasty errors in the docs
- Removed generic detection logic; only that of COCO remains
- Updated tests
Force-pushed faa10a4 to 9a45edc.
Force-pushed 9a45edc to 0444933.
Hi @vfdev-5, finally this seems to be ready for review. Failure reasons:
@sadra-barikbin sounds great!
Let's do the following:

```python
# precision_integrand = precision.flip(-1).cummax(dim=-1).values.flip(-1)
if precision.device.type == "mps":
    # Manual fallback to CPU if precision is on MPS due to the error:
    # NotImplementedError: The operator 'aten::_cummax_helper' is not
    # currently implemented for the MPS device
    device = precision.device
    precision_integrand = precision.flip(-1).cpu()
    precision_integrand = precision_integrand.cummax(dim=-1).values
    precision_integrand = precision_integrand.to(device=device).flip(-1)
else:
    precision_integrand = precision.flip(-1).cummax(dim=-1).values.flip(-1)
```
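For context (my own summary, not from the thread): the flip/cummax/flip pattern computes the COCO-style precision envelope, i.e. for each recall level the maximum precision attained at that or any higher recall. A toy check with made-up numbers:

```python
import torch

# Made-up precision values over increasing recall.
precision = torch.tensor([0.9, 0.6, 0.8, 0.5, 0.7])

# Envelope: p_interp(r) = max over r' >= r of p(r').
envelope = precision.flip(-1).cummax(dim=-1).values.flip(-1)
print(envelope)  # tensor([0.9000, 0.8000, 0.8000, 0.7000, 0.7000])
```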
I haven't seen that previously. If we have large tensors in the test, we can skip those tests on the MPS device.
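A sketch of what such a skip could look like; the test name is hypothetical, and only the MPS availability check is standard PyTorch:

```python
import pytest
import torch

@pytest.mark.skipif(
    torch.backends.mps.is_available(),
    reason="aten::_cummax_helper is not implemented for MPS",
)
def test_object_detection_map_large_tensors():  # hypothetical test name
    ...
```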
A few comments.
Thanks a lot for still working on this PR, Sadra!
(Resolved review threads on ignite/metrics/vision/object_detection_average_precision_recall.py)
A few minor updates and let's try to merge this great PR, @sadra-barikbin!
Thanks a lot for finally making this possible!
```python
self,
iou_thresholds: Optional[Union[Sequence[float], torch.Tensor]] = None,
rec_thresholds: Optional[Union[Sequence[float], torch.Tensor]] = None,
num_classes: int = 91,
```
Shouldn't this match the MS COCO number of classes, 80 or 81?
Yes, you're right. The model used in the tests had 91 classes.
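For background (my summary, not from the thread): COCO 2017 annotates 80 classes, but its category IDs are sparse and run up to 90, and torchvision's COCO-pretrained detectors output 91 logits (background plus unused IDs), which is presumably where the 91 in the tests came from. The standard COCO evaluation grids for the other two constructor arguments are well known; a minimal sketch:

```python
import torch

# COCO convention: 10 IoU thresholds 0.50:0.05:0.95 and
# 101 recall thresholds 0.00:0.01:1.00.
iou_thresholds = torch.linspace(0.5, 0.95, 10)
rec_thresholds = torch.linspace(0.0, 1.0, 101)
```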
Concerning the failing MPS tests: let's skip the tests on MPS.
The link
We can replace it with: https://docs.wandb.ai/ref/python/init
@vfdev-5, any blockers on this?
@sadra-barikbin, can you meanwhile please fix the issue with
Description:
Mean Average Precision (mAP) metric for object detection on the COCO dataset. A hypothetical usage sketch follows; the class name and input format below are guesses based on the file touched in this PR and may differ from the merged API:
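```python
import torch

# Hypothetical import, inferred from the PR's file path.
from ignite.metrics.vision.object_detection_average_precision_recall import (
    ObjectDetectionAvgPrecisionRecall,
)

metric = ObjectDetectionAvgPrecisionRecall(num_classes=91)

# One image: a prediction and a roughly matching ground-truth box
# (assumed dict-of-tensors format, boxes in xyxy).
y_pred = [{"bbox": torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
           "scores": torch.tensor([0.9]),
           "labels": torch.tensor([1])}]
y = [{"bbox": torch.tensor([[12.0, 12.0, 48.0, 48.0]]),
      "labels": torch.tensor([1])}]

metric.update((y_pred, y))
print(metric.compute())
```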
Check list: