
Fixes the incorrect token prediction distribution from _all_scores_for_token() in sequence_tagger_model.py #3449

Merged

Conversation

mdmotaharmahtab

This PR fixes issue #3448. Previously, the tag probability distribution returned for each token by the _all_scores_for_token() function in sequence_tagger_model.py was incorrect, because the length of each sentence in the batch was calculated incorrectly. This PR makes a small change to _all_scores_for_token() so that the lengths are computed correctly.
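The thread does not include the diff itself, but the failure mode it describes can be illustrated with a small, hypothetical sketch: when the per-token score rows for a whole batch live in one flat matrix, they have to be split back into sentences using each sentence's true token count, otherwise every token after the first sentence gets paired with the wrong distribution. The names below (`group_scores_by_sentence`, `sentence_lengths`) are illustrative stand-ins, not flair's actual internals.

```python
# Illustrative sketch only -- not the actual patch from this PR.
# It mimics the shape of the problem in _all_scores_for_token(): a score
# matrix covering every token in the batch must be split back into
# per-sentence chunks using each sentence's real length.
from typing import List

import torch


def group_scores_by_sentence(
    scores: torch.Tensor,          # shape: (total_tokens_in_batch, num_labels)
    sentence_lengths: List[int],   # true token count of each sentence
) -> List[torch.Tensor]:
    """Split a flat (token, label) score matrix into one chunk per sentence.

    The offset must advance by each sentence's own length; deriving lengths
    from padded tensor dimensions (the kind of bug described in #3448)
    shifts every chunk after the first and assigns tokens the wrong rows.
    """
    per_sentence_scores = []
    offset = 0
    for length in sentence_lengths:
        per_sentence_scores.append(scores[offset : offset + length])
        offset += length  # advance by the real length, not a padded one
    return per_sentence_scores


# Example: two sentences with 3 and 2 tokens over 4 labels.
if __name__ == "__main__":
    scores = torch.rand(5, 4).softmax(dim=-1)
    chunks = group_scores_by_sentence(scores, sentence_lengths=[3, 2])
    assert [c.shape[0] for c in chunks] == [3, 2]
```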

@alanakbik alanakbik requested a review from whoisjones May 2, 2024 07:05
@alanakbik
Collaborator

@mdmotaharmahtab thanks a lot for this PR!

@whoisjones can you review?

@mdmotaharmahtab
Author

I have examined the failing unit tests and found that they also fail on the master branch (so rebasing on master did not help), so they may not be related to this PR. Do I need to work on these tests as part of this PR? @whoisjones @alanakbik

@MdMotahar
Contributor

Hello. I would like to know whether this PR will be reviewed any time soon. @whoisjones @alanakbik

@Mahtab-delineate

Should I rebase with main?

@helpmefindaname
Collaborator

Thank you for your contribution @mdmotaharmahtab

@helpmefindaname helpmefindaname merged commit 469bdcd into flairNLP:master Nov 29, 2024
1 check passed
Labels: bug (Something isn't working)