I updated main.py with decapoda-research/llama-13b-hf in all the spots that had 7B.
It downloaded the sharded parts fine,
but now I'm getting this config issue. Any advice would be appreciated.
File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/gradio/helpers.py", line 587, in tracked_fn
response = fn(*args)
File "/home/orwell/simple-llama-finetuner/main.py", line 82, in generate_text
load_peft_model(peft_model)
File "/home/orwell/simple-llama-finetuner/main.py", line 35, in load_peft_model
model = peft.PeftModel.from_pretrained(
File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/peft/peft_model.py", line 135, in from_pretrained
config = PEFT_TYPE_TO_CONFIG_MAPPING[PeftConfig.from_pretrained(model_id).peft_type].from_pretrained(model_id)
File "/home/orwell/miniconda3/envs/llama-finetuner/lib/python3.10/site-packages/peft/utils/config.py", line 101, in from_pretrained
raise ValueError(f"Can't find config.json at '{pretrained_model_name_or_path}'")
ValueError: Can't find config.json at ''
The config file appears in the cache the same as it does for 7B, so I'm assuming I'm missing something, I'm just not sure what.
Thank you again.
And you need to change the model-loading functions in main.py as follows:
def load_base_model():
    global model
    print('Loading base model...')
    # Load the 13B checkpoint in 8-bit on the GPU
    model = transformers.LLaMAForCausalLM.from_pretrained(
        'decapoda-research/llama-13b-hf',
        load_in_8bit=True,
        torch_dtype=torch.float16,
        device_map={'': 'cuda'}
    )
def load_peft_model(model_name):
    global model
    print('Loading peft model ' + model_name + '...')
    # Wrap the already-loaded base model with the LoRA adapter weights
    model = peft.PeftModel.from_pretrained(
        model, model_name,
        torch_dtype=torch.float16,
        device_map={'': 0}
    )
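For reference, a minimal usage sketch of how these two functions fit together. This assumes the imports that main.py already has, and 'lora-assistant-13b' is just a placeholder for whatever adapter directory you trained, not a real name from the repo:

import torch
import transformers
import peft

model = None  # module-level global shared by load_base_model() / load_peft_model()

load_base_model()                      # pulls decapoda-research/llama-13b-hf in 8-bit
load_peft_model('lora-assistant-13b')  # placeholder adapter dir; must contain adapter_config.json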
We are working now, although I must say it turned out to be a UI issue, I think.
In the Inference tab,
the LoRA model field must have NOTHING in it, not "None", which is what I had on mine, otherwise it cannot find the config file. Turns out that was the issue. I clicked the little X and then it started working. No idea why on that one. I think it might be because up to this point I only have a 7B LoRA, so when that field says "None" it is looking in the 7B folder, not the 13B one?
In either case it is now working. Thank you for your help.
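For anyone hitting the same thing, a defensive tweak that would make the empty/"None" case harmless is to skip the PEFT load when no adapter is selected. This is only a sketch of that idea, not how the repo currently handles it:

def load_peft_model(model_name):
    global model
    # Treat an empty field or the literal string 'None' as "no LoRA adapter selected"
    if not model_name or model_name.strip().lower() == 'none':
        print('No peft model selected, using base model only')
        return
    print('Loading peft model ' + model_name + '...')
    model = peft.PeftModel.from_pretrained(
        model, model_name,
        torch_dtype=torch.float16,
        device_map={'': 0}
    )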