Ollama create Error: unsupported architecture #6
Hi, this model has exactly the same architecture as llama3.2V. In theory, you can run this model on any platform that supports llama3.2V. For reference, the command `ollama run llama3.2-vision` runs without problems.
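For anyone trying to reproduce the error in the title, the failing step is presumably a safetensors import along these lines; the directory path and model tag are assumptions, but `ollama create -f Modelfile` with a `FROM <directory>` pointing at a local checkpoint is standard Ollama usage:

```sh
# Hypothetical reproduction sketch: import the downloaded safetensors
# checkpoint into Ollama (path and tag are assumptions).
cat > Modelfile <<'EOF'
FROM ./Llama-3.2V-11B-cot
EOF

# On this checkpoint, the import step is where the reported
# "Error: unsupported architecture" appears.
ollama create llama3.2v-cot -f Modelfile
```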
Could you upload a GGUF version as well? I am not sure how the official llama3.2V was converted.
It seems that one can use `pip install model2gguf`. I will try to upload one later. (I haven't tried that before, honestly.)
Ehh... just tried the package; it uses llama.cpp in the backend as well. It does not work. I got the same error: `INFO:hf-to-gguf:Loading model: Llama-3.2V-11B-cot`
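The `INFO:hf-to-gguf:` prefix comes from llama.cpp's own `convert_hf_to_gguf.py`, so the package is effectively running the standard conversion path, roughly as sketched below (paths and output filename are assumptions):

```sh
# Standard llama.cpp HF-to-GGUF conversion, the source of the
# "INFO:hf-to-gguf:" log lines quoted above (paths are assumptions).
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# For this checkpoint the converter stops with the same
# unsupported-architecture error reported in the comment above.
python llama.cpp/convert_hf_to_gguf.py ./Llama-3.2V-11B-cot \
    --outfile Llama-3.2V-11B-cot-f16.gguf --outtype f16
```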
I get the architecture error both when creating models with Ollama and when converting to GGUF with llama.cpp. It looks like the model has the same architecture as llama3.2V; could you help me with this?
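One quick way to check the architecture claim is to read the `architectures` field of the checkpoint's `config.json`; the directory name is an assumption, and for Llama-3.2-Vision-style checkpoints the expected value is `MllamaForConditionalGeneration`:

```sh
# Print the architecture declared by the local checkpoint
# (directory name is an assumption).
python -c "import json; print(json.load(open('Llama-3.2V-11B-cot/config.json'))['architectures'])"
```

If this prints the same value as the official llama3.2-vision checkpoint, the failure is in the converter's architecture support rather than in the model weights themselves.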