Add support for 24GB VRAM fine tuning via 8bit optimizers #162

Status: Open · wants to merge 3 commits into main
README.md: 10 additions & 0 deletions

@@ -149,6 +149,16 @@ The following properties are defined in the top level of the model configuration
 - `training`
   - The training configuration for the model; varies based on `model_type`. Provides parameters for training as well as demos.
 
+
+### Optimizer config
+The optimizer config, inside the `training` subsection of the model config, selects between optimizer implementations, including 8-bit optimizers that make fine-tuning possible with 24GB of VRAM.
+
+- `backend`
+  - The optimizer library to use, currently one of `"bnb"` or `"default"`.
+- `type`
+  - The optimizer class name to use. With the `bnb` backend, this enables `"AdamW8bit"` and other 8-bit optimizers.
+
+
 ## Dataset config
 `stable-audio-tools` currently supports two kinds of data sources: local directories of audio files, and WebDataset datasets stored in Amazon S3. More information can be found in [the dataset config documentation](docs/datasets.md)
 
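For illustration, a hypothetical optimizer config using these keys might look like the sketch below, written as the dict that `create_optimizer_from_config` (see the utils.py diff further down) consumes. The `backend` and `type` keys are documented above; the hyperparameters under `config` are example values, not values from this PR, and are forwarded as keyword arguments to the optimizer constructor.

    # A hypothetical optimizer config using the bnb backend. Everything under
    # "config" is passed through to the optimizer constructor, here
    # bnb.optim.AdamW8bit; lr/betas/weight_decay are illustrative values.
    optimizer_config = {
        "backend": "bnb",
        "type": "AdamW8bit",
        "config": {
            "lr": 1e-4,
            "betas": [0.9, 0.999],
            "weight_decay": 0.001,
        },
    }

With `backend` omitted or set to `"default"`, `type` is resolved against `torch.optim` (or DeepSpeed's `FusedAdam`) exactly as before, so existing configs keep working unchanged.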
setup.py: 1 addition & 0 deletions

@@ -11,6 +11,7 @@
         'aeiou==0.0.20',
         'alias-free-torch==0.0.6',
         'auraloss==0.4.0',
+        'bitsandbytes==0.35.0',
         'descript-audio-codec==1.0.0',
         'einops==0.7.0',
         'einops-exts==0.0.4',
stable_audio_tools/training/utils.py: 12 additions & 5 deletions

@@ -84,13 +84,20 @@ def create_optimizer_from_config(optimizer_config, parameters):
"""

optimizer_type = optimizer_config["type"]
optimizer_backend = optimizer_config.get("backend", "")

if optimizer_type == "FusedAdam":
from deepspeed.ops.adam import FusedAdam
optimizer = FusedAdam(parameters, **optimizer_config["config"])
else:
optimizer_fn = getattr(torch.optim, optimizer_type)
if optimizer_backend == "bnb":
import bitsandbytes as bnb
optimizer_fn = getattr(bnb.optim, optimizer_type)
optimizer = optimizer_fn(parameters, **optimizer_config["config"])
else:
if optimizer_type == "FusedAdam":
from deepspeed.ops.adam import FusedAdam
optimizer = FusedAdam(parameters, **optimizer_config["config"])
else:
optimizer_fn = getattr(torch.optim, optimizer_type)
optimizer = optimizer_fn(parameters, **optimizer_config["config"])

return optimizer

def create_scheduler_from_config(scheduler_config, optimizer):
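As a minimal usage sketch (assuming `bitsandbytes` is installed): for the config above, the `bnb` branch resolves `getattr(bnb.optim, "AdamW8bit")`, so the call is equivalent to constructing the optimizer directly. The model here is a hypothetical stand-in for whatever is being fine-tuned.

    # Equivalent of the bnb branch for type "AdamW8bit": look up the class in
    # bnb.optim and pass the "config" dict through as keyword arguments.
    import bitsandbytes as bnb
    import torch.nn as nn

    model = nn.Linear(1024, 1024)  # stand-in for the real model
    optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4, weight_decay=0.001)

8-bit optimizers store Adam's two per-parameter state tensors in 8-bit rather than 32-bit precision, cutting optimizer-state memory roughly 4x, which is what brings fine-tuning within a 24GB VRAM budget.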