tan-js changed the title from *Allow for us to set mode="Strict" for seqeval* to *mode="Strict" for seqeval?* on Jun 11, 2024
Hi again @tomaarsen,
Based on your thesis, the strict evaluation metric is used:
"Strict evaluation metrics are applied, relying on both the correctness of the entity boundary and the entity class"
However, when I inspected evaluation.py, I don't see the `mode="strict"` parameter being set.

I admit that I might be missing something simple. I tried to pass my own `compute_metrics` function to `trainer`, but I can't get it to work, although the same function worked with a standard `transformers.Trainer`.

I even tried to copy the entire `compute_f1_via_seqeval` function and pass it as the `compute_metrics` argument of `trainer`, after editing it to set `results = seqeval.compute(mode='strict')`, but I still got errors related to structuring the data correctly or passing the required variables.

For now, my dirty solution would be to edit the evaluation.py script directly.
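For reference, this is roughly the `compute_metrics` I was trying to pass, which does work for me with a plain `transformers.Trainer`. It's a simplified sketch: `label_list` is a placeholder for the model's actual labels, and I'm not sure the trainer here feeds predictions in this shape, which may be where my errors come from:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # placeholder labels

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=2)

    # Drop special tokens (label == -100) before scoring
    true_predictions = [
        [label_list[p] for p, gold in zip(pred, golds) if gold != -100]
        for pred, golds in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[gold] for p, gold in zip(pred, golds) if gold != -100]
        for pred, golds in zip(predictions, labels)
    ]

    results = seqeval.compute(
        predictions=true_predictions,
        references=true_labels,
        mode="strict",
        scheme="IOB2",
    )
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
    }
```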
Is there an easier way to do this? Or am I missing something that shows that the 'strict' evaluation metric is already being used?
Thank you for your time.