Hello,

I am trying to use a pretrained tokenizer to test encoding with the train_byte_level_bpe.py file (from the "Restoring model from learned vocab/merges" example).

Even though I created my environment from the conda.yml file, I get an error when calling `.encode()`:

`TypeError: encode() got an unexpected keyword argument 'pad_to_max_length'`

From what I found, this may be related to the version of the tokenizers or transformers library. Do you know how I can solve it?
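For reference, here is roughly the call that fails, and what I believe the newer `tokenizers` API expects instead (the file paths and max length below are placeholders, not the exact values from my setup):

```python
from tokenizers import ByteLevelBPETokenizer

# Load the vocab/merges produced by train_byte_level_bpe.py
# (placeholder paths -- adjust to your own output files).
tokenizer = ByteLevelBPETokenizer("./vocab.json", "./merges.txt")

# This is the call that raises the TypeError: `pad_to_max_length`
# is a (since-deprecated) `transformers` keyword argument, and the
# standalone `tokenizers` encode() does not accept it.
# encoding = tokenizer.encode("Hello, world!", pad_to_max_length=True)

# In `tokenizers`, padding and truncation are configured on the
# tokenizer itself rather than passed to encode():
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(length=512)

encoding = tokenizer.encode("Hello, world!")
print(encoding.tokens)
```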
Thanks a lot for your help