This repository has been archived by the owner on Mar 19, 2024. It is now read-only.
Hi,
I am trying to train an NLP classification model, and a common error is that the model predicts "decrease" whenever "increase" appears in the input text. I looked into the embeddings of both words, and the following is the nearest-neighbor list for "increase":
[(0.8970754742622375, 'decrease'),
(0.8135992288589478, 'increases'),
(0.7706713080406189, 'increased'),
(0.7596212029457092, 'increasing'),
(0.7075006365776062, 'decreases'),
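For reference, a nearest-neighbor list like the one above is just a cosine-similarity ranking over the embedding matrix. A minimal sketch with plain NumPy, where the vocabulary and vector values are made up for illustration (real vectors would come from your trained model):

```python
import numpy as np

# Toy vocabulary and embeddings (illustrative values only).
# "decrease" is deliberately placed close to "increase" to
# mimic the neighborhood shown in the issue.
vocab = ["increase", "decrease", "increases", "rock"]
emb = np.array([
    [1.0, 0.9, 0.1],   # increase
    [0.9, 1.0, 0.1],   # decrease -- very close to "increase"
    [0.8, 0.6, 0.3],   # increases
    [0.0, 0.1, 1.0],   # rock -- unrelated
])

def nearest_neighbors(word, k=3):
    """Return the k words most cosine-similar to `word`."""
    i = vocab.index(word)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[i]
    order = np.argsort(-sims)
    return [(float(sims[j]), vocab[j]) for j in order if j != i][:k]

print(nearest_neighbors("increase"))
```

With vectors like these, the antonym tops the list, because distributional embeddings place words with similar contexts ("increase"/"decrease" both appear next to quantities) close together regardless of polarity.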
As can be seen, "decrease" is the nearest neighbor. I believe my model would work better if antonyms were further apart. Any suggestions on how I can reduce errors caused by this?
Regards,
Deepti
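One direction (not specific to this project) is to post-process the embeddings so that known antonym pairs are pushed apart while each vector stays close to its original position, in the spirit of "counter-fitting". Below is a minimal gradient-style sketch on toy 2-D vectors; the word names, antonym lexicon, learning rates, and iteration count are all assumptions for illustration:

```python
import numpy as np

# Toy pretrained vectors where the antonyms sit close together
# (illustrative values, not from a real model).
vectors = {
    "increase": np.array([1.0, 0.9]),
    "decrease": np.array([0.9, 1.0]),
}
antonym_pairs = [("increase", "decrease")]  # assumed antonym lexicon

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

originals = {w: v.copy() for w, v in vectors.items()}
lr = 0.1
for _ in range(50):
    for a, b in antonym_pairs:
        va, vb = vectors[a], vectors[b]
        # Repulsion term: step each antonym away from the other.
        diff = va - vb
        vectors[a] = va + lr * diff
        vectors[b] = vb - lr * diff
    # Attraction term: pull gently back toward the original
    # vectors so the rest of the neighborhood is preserved.
    for w in vectors:
        vectors[w] += 0.05 * (originals[w] - vectors[w])

print(cos(vectors["increase"], vectors["decrease"]))
```

Starting from a cosine similarity of about 0.99, the pair ends up well separated after the loop. In practice you would run this over a full antonym lexicon (e.g. extracted from WordNet) and tune the two step sizes so that synonym and analogy structure in the rest of the space is not damaged.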