llama-cpp-server broken #547
Comments
See my comment: #539 (comment)
@MichaelClifford @Gregory-Pereira Is that something you can look at? It would also make sense to add a test for this to the CI/testing framework. WDYT?
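(Not from the issue, just a rough idea of what such a CI check could look like: a smoke test against the OpenAI-compatible `/v1/models` route that llama-cpp-python's server exposes. The host, port, and timeout below are placeholders, not values taken from this project.)

```python
# Hypothetical smoke test for a running llama-cpp server container.
# Assumes the server is already up and listening on localhost:8001
# (placeholder address, adjust to the CI environment).
import json
import urllib.request


def test_models_endpoint_responds():
    # A 200 response with a non-empty "data" list means the server started
    # and successfully loaded at least one model.
    with urllib.request.urlopen("http://localhost:8001/v1/models", timeout=30) as resp:
        assert resp.status == 200
        body = json.load(resp)
    assert body.get("data"), "server is up but reports no loaded models"
```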
I will take a look at this tonight.
@Gregory-Pereira do you want me to take this one over?
The temporary fix is to add back the …
This might be usable in a smarter implementation: https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/scripts/gguf-new-metadata.py
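(For context: that script rewrites GGUF key/value metadata. The gguf-py package it ships with can also be used to inspect a model's existing metadata before and after rewriting; a minimal sketch, with a placeholder model path:)

```python
# List the metadata keys and value types of a GGUF file using gguf-py.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # placeholder path
for name, field in reader.fields.items():
    # field.types holds the GGUF value type(s) for this key, e.g. STRING or UINT32
    print(name, [t.name for t in field.types])
```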
Reopened to keep this here for the long-term fix.
Got this while running from the main branch in Podman AI Lab: