Text looping when using inference, and generation is fully deterministic #4
Comments
Hi, thanks for noticing this. It's been bothering me for a while since I saw it. I just got a new clue: those three settings work on the demo of databricks-dolly, while they do not work when … And to avoid looping output, you can try to increase the Repetition Penalty and/or Beams. Hope it helps!
OK, I found out why.
Not related to this, but is adding an option for a text-only dataset possible? Sorry to bother you, but I'm not good enough at Python to implement it.
Do you mean supporting the "Plain Text" format with datasets loaded from files? If so, I'm also thinking about this, as I want to fine-tune some models specialized in writing code, and it'll be more convenient to write code samples in a plain text format instead of dealing with all the … BTW, don't worry about your familiarity with Python; I'm also not good at it, and many pieces of code in this repo are co-authored by ChatGPT lol.
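As a rough idea of what such a loader could look like: the sketch below splits a plain text file into training samples on blank lines. The function name and the blank-line separator convention are assumptions for illustration, not part of this repo.

```python
import os
import tempfile

def load_plain_text_samples(path, separator="\n\n"):
    """Split a plain text file into training samples.

    Splitting on blank lines is an assumed convention; a different
    separator (or one-sample-per-line) would work just as well.
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [s.strip() for s in text.split(separator) if s.strip()]

# Demo with a temporary file standing in for a user-supplied dataset.
with tempfile.NamedTemporaryFile(
    "w", suffix=".txt", delete=False, encoding="utf-8"
) as f:
    f.write("def add(a, b):\n    return a + b\n\ndef sub(a, b):\n    return a - b\n")
    path = f.name

samples = load_plain_text_samples(path)
os.unlink(path)
print(len(samples))  # 2
```

With code samples in particular, a blank-line separator can collide with blank lines inside a function body, so a more distinctive delimiter line between samples may be safer in practice.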
Closing this since the issue has been resolved in the …
While using text inference to test my LoRA, when I regenerate with a changed temperature, Top P and so on, the output is still the same as before. Tested on unhelpful-ai.
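One likely cause of this symptom (a guess, not a confirmed diagnosis of this repo's code): if generation runs in greedy or beam mode rather than sampling mode, temperature and Top P have no effect, because the argmax of the logits is unchanged by dividing them by any positive temperature. A minimal self-contained sketch, with made-up token names and scores:

```python
import math
import random

def sample_next(logits, temperature=1.0, do_sample=True, seed=None):
    """Pick the next token either greedily or by temperature sampling."""
    if not do_sample:
        # Greedy decoding: argmax is invariant to any temperature > 0,
        # so changing the temperature cannot change the output.
        return max(logits, key=logits.get)
    rng = random.Random(seed)
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    # Inverse-CDF sampling over the temperature-scaled distribution.
    r = rng.random()
    acc = 0.0
    for t, e in exps.items():
        acc += e / z
        if r <= acc:
            return t
    return t

logits = {"yes": 2.0, "no": 1.9, "maybe": 0.1}

# With sampling disabled, the result is identical for any temperature.
g1 = sample_next(logits, temperature=0.2, do_sample=False)
g2 = sample_next(logits, temperature=1.5, do_sample=False)
print(g1 == g2)  # True: the sliders had no effect
```

So if regenerating with different temperature/Top P always yields the same text, it is worth checking whether sampling is actually enabled in the generation call (in Hugging Face `transformers` terms, whether `do_sample=True` is being passed).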