
training costs for fine-tuning LLaMA (CapsFus-LLaMA) #6

Open
YoojLee opened this issue Apr 17, 2024 · 2 comments

YoojLee commented Apr 17, 2024

Hi, thanks for such great work!
I would like to ask about the training cost of fine-tuning LLaMA2-13B on the caption fusion task. If possible, please let me know which GPUs you used and how many days (or hours) it took!

yqy2001 (Member) commented Apr 17, 2024

Thank you for your interest. Fine-tuning takes about 1-2 days on 8 A800-80G GPUs using Alpaca's codebase, since only 2M samples are needed (2 epochs).
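
For a rough sense of what those numbers imply (not from the authors; a back-of-envelope sketch assuming the 1-2 day figure covers both epochs over the 2M samples):

```python
# Back-of-envelope throughput implied by the reported numbers
# (assumption: the 1-2 day wall-clock figure covers both epochs).
samples = 2_000_000
epochs = 2
gpus = 8

for days in (1, 2):
    seconds = days * 24 * 3600
    total_passes = samples * epochs          # sample passes over training
    per_sec = total_passes / seconds         # aggregate throughput
    per_gpu = per_sec / gpus                 # per-A800 throughput
    print(f"{days} day(s): {per_sec:.0f} samples/s total, "
          f"{per_gpu:.1f} samples/s per GPU")
```

That works out to roughly 23-46 samples/s aggregate, i.e. about 3-6 samples/s per GPU, which is a plausible range for full fine-tuning of a 13B model.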

YoojLee (Author) commented Apr 18, 2024

Thanks for the quick reply!
