Merge changes #169

Merged 55 commits on Jul 27, 2024. Showing changes from 1 commit of 55.

Commits:
bbd2f9d
[tests] fix typo in pag tests (#8845)
a-r-r-o-w Jul 12, 2024
3b37fef
[Docker] include python3.10 dev and solve header missing problem (#8865)
sayakpaul Jul 16, 2024
e87bf62
[`Cont'd`] Add the SDE variant of ~~DPM-Solver~~ and DPM-Solver++ to …
tolgacangoz Jul 17, 2024
f6cfe0a
modify pocs. (#8867)
sayakpaul Jul 17, 2024
0f09b01
[Core] fix: shard loading and saving when variant is provided. (#8869)
sayakpaul Jul 17, 2024
c2fbf8d
[Chore] allow auraflow latest to be torch compile compatible. (#8859)
sayakpaul Jul 17, 2024
e15a8e7
Add AuraFlowPipeline and KolorsPipeline to auto map (#8849)
Beinsezii Jul 17, 2024
c1dc2ae
Fix multi-gpu case for `train_cm_ct_unconditional.py` (#8653)
tolgacangoz Jul 17, 2024
12625c1
[docs] pipeline docs for latte (#8844)
a-r-r-o-w Jul 18, 2024
a41e4c5
[Chore] add disable forward chunking to SD3 transformer. (#8838)
sayakpaul Jul 18, 2024
e02ec27
[Core] remove `resume_download` from Hub related stuff (#8648)
sayakpaul Jul 18, 2024
eb24e4b
Add option to SSH into CPU runner. (#8884)
DN6 Jul 18, 2024
588fb5c
SSH into cpu runner fix (#8888)
DN6 Jul 18, 2024
3f14117
SSH into cpu runner additional fix (#8893)
DN6 Jul 18, 2024
c009c20
[SDXL] Fix uncaught error with image to image (#8856)
asomoza Jul 19, 2024
3b04cdc
fix loop bug in SlicedAttnProcessor (#8836)
shinetzh Jul 20, 2024
461efc5
[fix code annotation] Adjust the dimensions of the rotary positional …
wangqixun Jul 20, 2024
fe79489
allow tensors in several schedulers step() call (#8905)
catwell Jul 20, 2024
56e772a
Use model_info.id instead of model_info.modelId (#8912)
Wauplin Jul 20, 2024
1a8b3c2
[Training] SD3 training fixes (#8917)
sayakpaul Jul 21, 2024
267bf65
🌐 [i18n-KO] Translated docs to Korean (added 7 docs and etc) (#8804)
Snailpong Jul 22, 2024
f4af03b
[Docs] small fixes to pag guide. (#8920)
sayakpaul Jul 22, 2024
5802c2e
Reflect few contributions on `ethical_guidelines.md` that were not re…
mreraser Jul 22, 2024
af40004
[Tests] proper skipping of request caching test (#8908)
sayakpaul Jul 22, 2024
77c5de2
Add attentionless VAE support (#8769)
Gothos Jul 23, 2024
c5fdf33
[Benchmarking] check if runner helps to restore benchmarking (#8929)
sayakpaul Jul 23, 2024
f57b27d
Update pipeline test fetcher (#8931)
DN6 Jul 23, 2024
8b21fee
[Tests] reduce the model size in the audioldm2 fast test (#7846)
ariG23498 Jul 23, 2024
7710415
fix: checkpoint save issue in advanced dreambooth lora sdxl script (#…
akbaig Jul 23, 2024
7a95f8d
[Tests] Improve transformers model test suite coverage - Temporal Tra…
rootonchair Jul 23, 2024
cf55dcf
Fix Colab and Notebook checks for `diffusers-cli env` (#8408)
tolgacangoz Jul 23, 2024
3bb1fd6
Fix name when saving text inversion embeddings in dreambooth advanced…
DN6 Jul 23, 2024
50d21f7
[Core] fix QKV fusion for attention (#8829)
sayakpaul Jul 24, 2024
41b705f
remove residual i from auraflow. (#8949)
sayakpaul Jul 24, 2024
93983b6
[CI] Skip flaky download tests in PR CI (#8945)
DN6 Jul 24, 2024
2c25b98
[AuraFlow] fix long prompt handling (#8937)
sayakpaul Jul 24, 2024
cdd12bd
Added Code for Gradient Accumulation to work for basic_training (#8961)
RandomGamingDev Jul 25, 2024
4a782f4
[AudioLDM2] Fix cache pos for GPT-2 generation (#8964)
sanchit-gandhi Jul 25, 2024
d8bcb33
[Tests] fix slices of 26 tests (first half) (#8959)
sayakpaul Jul 25, 2024
5fbb4d3
[CI] Slow Test Updates (#8870)
DN6 Jul 25, 2024
3ae0ee8
[tests] speed up animatediff tests (#8846)
a-r-r-o-w Jul 25, 2024
527430d
[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)
sayakpaul Jul 25, 2024
0bda1d7
Update TensorRT img2img community pipeline (#8899)
asfiyab-nvidia Jul 25, 2024
1fd647f
Enable CivitAI SDXL Inpainting Models Conversion (#8795)
mazharosama Jul 25, 2024
62863bb
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976)
yiyixuxu Jul 25, 2024
9b8c860
fix guidance_scale value not equal to the value in comments (#8941)
efwfe Jul 25, 2024
50e66f2
[Chore] remove all is from auraflow. (#8980)
sayakpaul Jul 26, 2024
d87fe95
[Chore] add `LoraLoaderMixin` to the inits (#8981)
sayakpaul Jul 26, 2024
2afb2e0
Added `accelerator` based gradient accumulation for basic_example (#8…
RandomGamingDev Jul 26, 2024
bce9105
[CI] Fix parallelism in nightly tests (#8983)
DN6 Jul 26, 2024
1168eaa
[CI] Nightly Test Runner explicitly set runner for Setup Pipeline Mat…
DN6 Jul 26, 2024
57a021d
[fix] FreeInit step index out of bounds (#8969)
a-r-r-o-w Jul 26, 2024
5c53ca5
[core] AnimateDiff SparseCtrl (#8897)
a-r-r-o-w Jul 26, 2024
ca0747a
remove unused code from pag attn procs (#8928)
a-r-r-o-w Jul 26, 2024
73acebb
[Kolors] Add IP Adapter (#8901)
asomoza Jul 26, 2024
Added Code for Gradient Accumulation to work for basic_training (huggingface#8961)

Added a line allowing gradient accumulation to work in the basic_training example.

RandomGamingDev authored Jul 25, 2024
commit cdd12bde173f1c4589d981e16a7a6f467b137870
1 change: 1 addition & 0 deletions in docs/source/en/tutorials/basic_training.md

```diff
@@ -340,6 +340,7 @@ Now you can wrap all these components together in a training loop with 🤗 Accelerate
 ...         loss = F.mse_loss(noise_pred, noise)
 ...         accelerator.backward(loss)

+...         if (step + 1) % config.gradient_accumulation_steps == 0:
 ...         accelerator.clip_grad_norm_(model.parameters(), 1.0)
 ...         optimizer.step()
 ...         lr_scheduler.step()
```
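The added line gates gradient clipping and the optimizer step on a micro-batch counter, so gradients from several backward passes accumulate before a single parameter update. A torch-free sketch of that pattern follows; the names `should_step`, `train`, and `micro_batch_grads` are illustrative, not from the diffusers tutorial, and a scalar parameter stands in for the model so the update arithmetic is visible.

```python
def should_step(step: int, gradient_accumulation_steps: int) -> bool:
    # Mirrors the check added in the commit: apply the optimizer only
    # once every `gradient_accumulation_steps` micro-batches.
    return (step + 1) % gradient_accumulation_steps == 0


def train(param: float, micro_batch_grads, accumulation_steps: int, lr: float = 0.1) -> float:
    """Toy accumulation loop: sum gradients, then apply one update."""
    grad_sum = 0.0
    for step, grad in enumerate(micro_batch_grads):
        grad_sum += grad  # backward() adds into the stored gradients
        if should_step(step, accumulation_steps):
            param -= lr * grad_sum  # optimizer.step()
            grad_sum = 0.0          # optimizer.zero_grad()
    return param


# Four micro-batches with accumulation of 2 -> exactly two optimizer
# steps: param -= 0.1 * (1 + 1), then param -= 0.1 * (2 + 2).
updated = train(1.0, [1.0, 1.0, 2.0, 2.0], accumulation_steps=2)
```

In the real tutorial loop, `accelerator.backward(loss)` plays the role of `grad_sum += grad`, and `optimizer.step()` / `optimizer.zero_grad()` apply and reset the accumulated gradients; note that the lines following the added `if` must be indented under it for the condition to actually guard them.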