Bug: Empty response in interactive mode #993
batmanonline started this conversation in General
When running with the `-t 12 -i -r "### Human:"` flags, llama returns control: CPU activity drops to 0 and the user types a new input.
However, llama then keeps responding to the previous input (or returns no response at all), completely ignoring the new input.
From that point on llama completely breaks the chat: it even generates the reverse prompt "### Human:" itself and starts completing the questions that the human was supposed to write.
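
For context, a minimal sketch of the kind of invocation being described; the model path and prompt below are placeholders for illustration, and only the `-t 12 -i -r "### Human:"` flags come from this report:

```sh
# Hypothetical reproduction command: model path and prompt are assumptions,
# not taken from the original report; the -t/-i/-r flags are the ones cited above.
./main -m ./models/vicuna-7b-q4_0.bin \
       -t 12 -i -r "### Human:" \
       -p "### Human: Hello, how are you?"
```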
Note: tested this many times with Vicuna and GPT4All models.
Note: this might be related to issues #990 and #941.