cannot instantiate local gpt4all model in chat #348
Comments
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗
The generic error message is itself a known issue: #238
Related: zylon-ai/private-gpt#691. I'm able to get past the error by downgrading:
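The exact downgrade command was lost in scraping, but based on the follow-up comment (which mentions version 0.3.4), it would have been a pip pin along these lines — treat the version spec as an assumption:

```shell
# Hypothetical downgrade: pin gpt4all to the pre-1.0 API line.
# The exact version from the original comment was not preserved;
# 0.3.x is inferred from the later mention of 0.3.4.
pip install "gpt4all<1.0"

# Confirm which version actually got installed.
pip show gpt4all
```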
But then I encounter new issues with the error message:
which sounds like jupyter-ai targets a newer privateGPT API that isn't present in version 0.3.4. I will keep hunting; I'm sure there's a version of gpt4all this was all tested and developed against and worked. Edit: Found it, use the v1 API, not v0.3. Do:
Edit: Hmm, v1.0.0 appears to not be able to … Testing versions:
So … Edit: A better solution that I've seen is to use a locally hosted GPU model.
Is this bug resolved? Or is any other workaround available?
Hello, could you help me figure out why I cannot use the local gpt4all model? I'm using the ggml-gpt4all-l13b-snoozy language model without an embedding model, and have the model downloaded to .cache/gpt4all/ (although via a symbolic link, since I'm on a cluster with a limited home-directory quota). When I type /ask hello world in the chat, it gives me the following error:

Below is some system info, FYI:
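For reference, the symlink arrangement described above can be sketched as follows. The scratch path is an assumption — substitute whatever large-quota filesystem your cluster provides:

```python
import os
from pathlib import Path

# Assumed large-quota scratch location; adjust for your cluster.
scratch_dir = Path(os.environ.get("SCRATCH", "/tmp")) / "gpt4all"

# Where gpt4all (and therefore jupyter-ai) looks for downloaded models.
cache_dir = Path.home() / ".cache" / "gpt4all"

scratch_dir.mkdir(parents=True, exist_ok=True)
cache_dir.parent.mkdir(parents=True, exist_ok=True)

# Point ~/.cache/gpt4all at scratch so the multi-GB model file
# does not count against the home-directory quota.
if not cache_dir.exists() and not cache_dir.is_symlink():
    cache_dir.symlink_to(scratch_dir, target_is_directory=True)

print(cache_dir, "->", cache_dir.resolve())
```

If ~/.cache/gpt4all already exists as a real directory, move its contents into the scratch directory and remove it before creating the link.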