Reconfirming the evaluation metric #49
Comments
It's this:
This is the same as the metric provided by Brendon's
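(The snippet referenced in "It's this:" is not reproduced in this thread. As a hedged illustration only, a multiclass F1 score is typically computed in scikit-learn with `f1_score`; the toy labels below are invented, and both `average` settings debated later in the thread are shown rather than asserting which one the contest uses.)

```python
import numpy as np
from sklearn.metrics import f1_score

# Invented stand-ins for true and predicted facies labels.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 0])

# Micro average: pools true/false positives across all classes.
print(f1_score(y_true, y_pred, average='micro'))

# Weighted average: per-class F1, weighted by each class's support.
print(f1_score(y_true, y_pred, average='weighted'))
```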
Thanks for the info @kwinkunks. I was assuming accuracy for so long. Would anybody like to help me with an R package to calculate the F1 score for multiclass? I'd highly appreciate it.
@thanish: I do not know R, but perhaps this?
Thanks @mycarta, that really helped :)
@kwinkunks I thought the F1 score was using average='weighted'.
@kwinkunks just following up on this before I submit and see my score (so it won't seem so biased!). Hopefully it's weighted instead of micro. Micro will give a bias toward highly populated labels; in this case all facies are equally important, and some of the classes have heavily skewed distributions. I extracted some text below from here:
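To make the averaging question concrete, here is a hedged toy example (invented class counts, not contest data) showing how the micro, macro, and weighted averages can diverge when one class dominates and a rare class is mostly missed:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical skewed labels: class 0 dominates, class 2 is rare.
y_true = np.array([0] * 90 + [1] * 8 + [2] * 2)

# Predictions that do well on the big class but poorly on the rare one.
y_pred = np.array([0] * 88 + [1] * 2    # class 0: 88 right, 2 wrong
                  + [1] * 6 + [0] * 2   # class 1: 6 right, 2 wrong
                  + [2] * 1 + [0] * 1)  # class 2: only 1 of 2 recovered

# The three averaging modes give noticeably different scores here.
for avg in ('micro', 'macro', 'weighted'):
    print(avg, f1_score(y_true, y_pred, average=avg))
```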
I think it might have already been discussed in #4, but just to reconfirm: what is the evaluation metric for this contest? It's F1, which is 2 * (precision * recall) / (precision + recall), right?
And not accuracy, which would be (sum of the diagonal of the confusion matrix) / (total number of test, i.e. blind, samples)?
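For reference, a small hedged sketch (invented labels, not the blind-well data) contrasting the two quantities described above. Note that for single-label multiclass problems the micro-averaged F1 is numerically equal to accuracy, which is part of why the averaging choice discussed in the comments matters:

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Invented true and predicted labels standing in for blind-well facies.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = confusion_matrix(y_true, y_pred)

# Accuracy: sum of the diagonal of the confusion matrix / total samples.
print(np.trace(cm) / cm.sum(), accuracy_score(y_true, y_pred))

# F1 = 2 * (precision * recall) / (precision + recall), computed per class
# and then averaged. With average='micro' this equals accuracy for
# single-label multiclass data; 'weighted' generally does not.
print(f1_score(y_true, y_pred, average='micro'),
      f1_score(y_true, y_pred, average='weighted'))
```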