/root/anaconda3/envs/lmflow/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /root/anaconda3/envs/lmflow did not contain libcudart.so as expected! Searching further paths...
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda-11.4/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 114
/root/anaconda3/envs/lmflow/lib/python3.9/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
warn(msg)
CUDA SETUP: Loading binary /root/anaconda3/envs/lmflow/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so...
Loading checkpoint shards: 100%|██████████████████████████████████████████████| 3/3 [00:20<00:00, 6.96s/it]
Saving alpaca model to ./
Traceback (most recent call last):
File "/home/LMFlow/cn_llama/packages/new/alpaca-weight/merge_step2_patch2alpaca.py", line 32, in <module>
trainer = Seq2SeqTrainer(model=model)
File "/root/anaconda3/envs/lmflow/lib/python3.9/site-packages/transformers/trainer_seq2seq.py", line 72, in __init__
if self.args.generation_config is not None:
AttributeError: 'TrainingArguments' object has no attribute 'generation_config'
When this code was written, LLaMA support was not yet in the main branch of transformers. The pull request from @bruvduroiu has since been merged. Feel free to try again, and thanks. @bruvduroiu @tomasmcz @ChrisXULC
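For context on the traceback: `Seq2SeqTrainer.__init__` reads `self.args.generation_config`, an attribute that exists on `Seq2SeqTrainingArguments` in newer transformers releases but not on the plain `TrainingArguments` that ends up being used when only `model=` is passed. A minimal stdlib sketch of the mismatch (the class names mirror the transformers API, but these are illustrative stand-ins, not the library code):

```python
from dataclasses import dataclass

# Stand-ins that mirror the shape of the transformers argument classes;
# the real TrainingArguments / Seq2SeqTrainingArguments are far larger.
@dataclass
class TrainingArguments:
    output_dir: str = "./"

@dataclass
class Seq2SeqTrainingArguments(TrainingArguments):
    # Newer transformers releases add this field on the seq2seq variant.
    generation_config: object = None

# Reproducing the failure: a plain TrainingArguments has no
# generation_config, so `args.generation_config` raises AttributeError.
plain = TrainingArguments()
assert not hasattr(plain, "generation_config")

# The caller-side fix: build the seq2seq-specific arguments and pass
# them explicitly, e.g. Seq2SeqTrainer(model=model, args=s2s_args).
s2s_args = Seq2SeqTrainingArguments()
assert hasattr(s2s_args, "generation_config")
```

In the actual project, the equivalent fix is to upgrade to a transformers version where the merged pull request is included, or to pass an explicit `Seq2SeqTrainingArguments` instance to `Seq2SeqTrainer`.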