Metrics (accuracy, IoU, dice_coef) are currently computed over whole images, without any distinction between classes. To improve the quality of the evaluation, and to allow comparison with state-of-the-art results and contest leaderboards (see the Mapillary, CityScapes, or AerialImage examples), we should add class-specific metric results to `training_metrics.csv`.
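As a minimal sketch of what the per-class computation could look like (the function name, signature, and NaN handling below are assumptions for illustration, not the project's actual API), assuming predictions and ground truth come as integer-labelled pixel arrays:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, nb_classes):
    """Hypothetical per-class accuracy, IoU and Dice computation.

    y_true, y_pred: integer-labelled arrays of identical shape,
    one class label per pixel. Returns a dict keyed by class index.
    """
    metrics = {}
    for c in range(nb_classes):
        true_mask = y_true == c
        pred_mask = y_pred == c
        intersection = np.logical_and(true_mask, pred_mask).sum()
        union = np.logical_or(true_mask, pred_mask).sum()
        # Recall-style accuracy restricted to class c: share of
        # ground-truth pixels of this class that were predicted as it.
        accuracy = intersection / true_mask.sum() if true_mask.sum() else np.nan
        iou = intersection / union if union else np.nan
        denom = true_mask.sum() + pred_mask.sum()
        dice = 2 * intersection / denom if denom else np.nan
        metrics[c] = {"accuracy": accuracy, "iou": iou, "dice_coef": dice}
    return metrics
```

Each class would then contribute its own columns (e.g. `iou_0`, `iou_1`, ...) to `training_metrics.csv`, alongside the existing whole-image values.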
It would also be worthwhile to add the best instance metrics to the `best-instance-<img_size>-<aggregation>.json` file (produced by `paramoptim.py` when exploring hyperparameters).
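For the JSON side, a rough sketch of the merge step could be (the `per_class` key and helper below are hypothetical; the actual layout of the best-instance file may differ):

```python
import json

def add_class_metrics(json_path, class_metrics):
    """Hypothetical helper: store per-class metrics in the
    best-instance JSON produced by paramoptim.py."""
    with open(json_path) as fobj:
        best_instance = json.load(fobj)
    best_instance["per_class"] = class_metrics  # assumed key name
    with open(json_path, "w") as fobj:
        json.dump(best_instance, fobj, indent=2)
```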