Loading model file ../pythia/pythia-hf/pytorch_model-00001-of-00003.bin
Traceback (most recent call last):
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1483, in <module>
    main()
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1419, in main
    model_plus = load_some_model(args.model)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 1278, in load_some_model
    models_plus.append(lazy_load_file(path))
                       ^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 887, in lazy_load_file
    return lazy_load_torch_file(fp, path)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 843, in lazy_load_torch_file
    model = unpickler.load()
            ^^^^^^^^^^^^^^^^
  File "/home/hyanxo/projects/llama.cpp/convert.py", line 832, in find_class
    return self.CLASSES[(module, name)]
           ~~~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: ('torch', 'ByteStorage')
Can you please convert this model to GGUF?
I tried to use llama.cpp's convert.py with the following command:
It gives me the error shown above.