When converting llama-13b to llama-13b-hf, the script fails with "LlamaConfig not found"
```
root@kml-dtmachine-10688-prod:/home/jinzhiliang/webdownload/chatgml/model2download# python convert_llama_weights_to_hf.py --input_dir ./llama-13b --model_size 13B --output_dir ./llama-13b-hf
Traceback (most recent call last):
  File "convert_llama_weights_to_hf.py", line 23, in <module>
    from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer
ImportError: cannot import name 'LlamaConfig' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py)
```
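This ImportError usually means the installed `transformers` release predates LLaMA support: `LlamaConfig`, `LlamaForCausalLM`, and `LlamaTokenizer` were added in transformers v4.28.0, so any older install raises exactly this error. A minimal sketch for checking the installed version before running the conversion script, using only the standard library (the 4.28.0 threshold is the assumption here):

```python
# Check whether the installed transformers release is new enough for the
# Llama classes (assumption: they landed in transformers 4.28.0).
from importlib import metadata

REQUIRED = (4, 28, 0)

try:
    version = metadata.version("transformers")
except metadata.PackageNotFoundError:
    version = None

if version is None:
    print("transformers is not installed; run: pip install transformers")
else:
    # Compare only the numeric major.minor.patch prefix of the version string.
    parts = tuple(int(p) for p in version.split(".")[:3] if p.isdigit())
    if parts < REQUIRED:
        print(f"transformers {version} is too old; "
              "run: pip install --upgrade transformers")
    else:
        print(f"transformers {version} should include LlamaConfig")
```

If the check reports an old version, upgrading (`pip install --upgrade transformers`) and re-running `convert_llama_weights_to_hf.py` should resolve the import failure, since that conversion script ships with `transformers` itself.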
How can this model be used with ollama?