Releases: chakki-works/seqeval
v1.2.2
v1.2.1
v1.2.0
Enable computing macro/weighted/per-class F1, recall, and precision (#61)
F1 score

```python
>>> from seqeval.metrics import f1_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> f1_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> f1_score(y_true, y_pred, average='macro')
0.75
>>> f1_score(y_true, y_pred, average='weighted')
0.6666666666666666
```
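The micro average above can be verified by hand: seqeval scores at the entity level, so micro averaging pools true positives, false positives, and false negatives over all entity types before computing the metric. A minimal sketch of that arithmetic for the example data (the counts below are read off the example, not returned by any seqeval API):

```python
# Gold entities in the example: MISC at (2,3), MISC at (4,4), PER at (0,1).
# Predicted entities:           MISC at (2,3), MISC at (4,5), PER at (0,1).
tp = 2  # MISC(2,3) and PER(0,1) match the gold spans exactly
fp = 1  # predicted MISC(4,5) has no exact gold match
fn = 1  # gold MISC(4,4) was never predicted

precision = tp / (tp + fp)  # 2/3
recall = tp / (tp + fn)     # 2/3
f1 = 2 * precision * recall / (precision + recall)  # 0.666..., matching the doctest
```

Note that a predicted span must match a gold span exactly (same type, same boundaries) to count as a true positive, which is why MISC(4,5) scores as both a false positive and a miss of MISC(4,4).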
Precision

```python
>>> from seqeval.metrics import precision_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> precision_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> precision_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> precision_score(y_true, y_pred, average='macro')
0.75
>>> precision_score(y_true, y_pred, average='weighted')
0.6666666666666666
```
Recall

```python
>>> from seqeval.metrics import recall_score
>>> y_true = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'O', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'B-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 1. ])
>>> recall_score(y_true, y_pred, average='micro')
0.6666666666666666
>>> recall_score(y_true, y_pred, average='macro')
0.75
>>> recall_score(y_true, y_pred, average='weighted')
0.6666666666666666
```
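The macro and weighted averages in all three doctests follow directly from the per-class scores (`average=None`). Macro is the unweighted mean over entity types; weighted is the mean weighted by each type's gold support. A minimal sketch using the per-class values and supports from the example (2 gold MISC entities, 1 gold PER entity):

```python
# Per-class scores from the example above (average=None): MISC = 0.5, PER = 1.0
per_class = [0.5, 1.0]
support = [2, 1]  # gold entity counts per type: 2 MISC, 1 PER

# Macro: plain mean over classes, every type counts equally.
macro = sum(per_class) / len(per_class)  # 0.75

# Weighted: mean weighted by gold support, frequent types count more.
weighted = sum(s * w for s, w in zip(per_class, support)) / sum(support)  # 0.666...
```

This is why `weighted` coincides with `micro` here: the only imperfect class (MISC) also carries most of the support.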