Support finetuning with LoRA #431
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
@katalinic-gc I would like to use this to finetune whisper-large-v2 on a fairly large dataset (700k examples). Is this usable as is (if I fix the merge conflicts)? What steps from https://www.graphcore.ai/posts/fine-tune-openais-whisper-automatic-speech-recognition-asr-model would need to be different with this approach?
It won't be usable for large on IPUs due to OOM. We are internally working on supporting that; if and when available, we'll announce it.
What does this PR do?
To enable it, apply the snippet below, which is essentially identical to upstream usage.
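The PR's original snippet isn't reproduced in this extract, so here is a minimal sketch of what "identical to upstream" LoRA enablement looks like with the PEFT library; the checkpoint, target modules, and hyperparameters below are illustrative assumptions, not values taken from this PR.

```python
# Sketch of enabling LoRA via upstream PEFT; values are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (assumed)
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights remain trainable
```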
Some online finetuning walkthroughs also include additional options in the training args, e.g. `(IPU)Seq2SeqTrainingArguments`.
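For reference, a hedged sketch of the kind of training-argument options such walkthroughs commonly add when wrapping a model with PEFT; `remove_unused_columns=False` and `label_names=["labels"]` are typical in those walkthroughs but are assumptions here, not something this PR is confirmed to require. The example uses the upstream `Seq2SeqTrainingArguments`; the IPU variant would take additional IPU-specific options not shown.

```python
# Illustrative only: options PEFT-based walkthroughs often pass to the
# training arguments; whether they are needed with this PR is unconfirmed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-lora-out",
    per_device_train_batch_size=8,
    learning_rate=1e-3,
    num_train_epochs=3,
    remove_unused_columns=False,  # keep dataset columns the PEFT forward needs
    label_names=["labels"],       # set explicitly since PEFT wraps the model
)
```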
Before submitting