Merge changes #169

Merged 55 commits on Jul 27, 2024

Changes from all commits (55 commits)
bbd2f9d
[tests] fix typo in pag tests (#8845)
a-r-r-o-w Jul 12, 2024
3b37fef
[Docker] include python3.10 dev and solve header missing problem (#8865)
sayakpaul Jul 16, 2024
e87bf62
[`Cont'd`] Add the SDE variant of ~~DPM-Solver~~ and DPM-Solver++ to …
tolgacangoz Jul 17, 2024
f6cfe0a
modify pocs. (#8867)
sayakpaul Jul 17, 2024
0f09b01
[Core] fix: shard loading and saving when variant is provided. (#8869)
sayakpaul Jul 17, 2024
c2fbf8d
[Chore] allow auraflow latest to be torch compile compatible. (#8859)
sayakpaul Jul 17, 2024
e15a8e7
Add AuraFlowPipeline and KolorsPipeline to auto map (#8849)
Beinsezii Jul 17, 2024
c1dc2ae
Fix multi-gpu case for `train_cm_ct_unconditional.py` (#8653)
tolgacangoz Jul 17, 2024
12625c1
[docs] pipeline docs for latte (#8844)
a-r-r-o-w Jul 18, 2024
a41e4c5
[Chore] add disable forward chunking to SD3 transformer. (#8838)
sayakpaul Jul 18, 2024
e02ec27
[Core] remove `resume_download` from Hub related stuff (#8648)
sayakpaul Jul 18, 2024
eb24e4b
Add option to SSH into CPU runner. (#8884)
DN6 Jul 18, 2024
588fb5c
SSH into cpu runner fix (#8888)
DN6 Jul 18, 2024
3f14117
SSH into cpu runner additional fix (#8893)
DN6 Jul 18, 2024
c009c20
[SDXL] Fix uncaught error with image to image (#8856)
asomoza Jul 19, 2024
3b04cdc
fix loop bug in SlicedAttnProcessor (#8836)
shinetzh Jul 20, 2024
461efc5
[fix code annotation] Adjust the dimensions of the rotary positional …
wangqixun Jul 20, 2024
fe79489
allow tensors in several schedulers step() call (#8905)
catwell Jul 20, 2024
56e772a
Use model_info.id instead of model_info.modelId (#8912)
Wauplin Jul 20, 2024
1a8b3c2
[Training] SD3 training fixes (#8917)
sayakpaul Jul 21, 2024
267bf65
🌐 [i18n-KO] Translated docs to Korean (added 7 docs and etc) (#8804)
Snailpong Jul 22, 2024
f4af03b
[Docs] small fixes to pag guide. (#8920)
sayakpaul Jul 22, 2024
5802c2e
Reflect few contributions on `ethical_guidelines.md` that were not re…
mreraser Jul 22, 2024
af40004
[Tests] proper skipping of request caching test (#8908)
sayakpaul Jul 22, 2024
77c5de2
Add attentionless VAE support (#8769)
Gothos Jul 23, 2024
c5fdf33
[Benchmarking] check if runner helps to restore benchmarking (#8929)
sayakpaul Jul 23, 2024
f57b27d
Update pipeline test fetcher (#8931)
DN6 Jul 23, 2024
8b21fee
[Tests] reduce the model size in the audioldm2 fast test (#7846)
ariG23498 Jul 23, 2024
7710415
fix: checkpoint save issue in advanced dreambooth lora sdxl script (#…
akbaig Jul 23, 2024
7a95f8d
[Tests] Improve transformers model test suite coverage - Temporal Tra…
rootonchair Jul 23, 2024
cf55dcf
Fix Colab and Notebook checks for `diffusers-cli env` (#8408)
tolgacangoz Jul 23, 2024
3bb1fd6
Fix name when saving text inversion embeddings in dreambooth advanced…
DN6 Jul 23, 2024
50d21f7
[Core] fix QKV fusion for attention (#8829)
sayakpaul Jul 24, 2024
41b705f
remove residual i from auraflow. (#8949)
sayakpaul Jul 24, 2024
93983b6
[CI] Skip flaky download tests in PR CI (#8945)
DN6 Jul 24, 2024
2c25b98
[AuraFlow] fix long prompt handling (#8937)
sayakpaul Jul 24, 2024
cdd12bd
Added Code for Gradient Accumulation to work for basic_training (#8961)
RandomGamingDev Jul 25, 2024
4a782f4
[AudioLDM2] Fix cache pos for GPT-2 generation (#8964)
sanchit-gandhi Jul 25, 2024
d8bcb33
[Tests] fix slices of 26 tests (first half) (#8959)
sayakpaul Jul 25, 2024
5fbb4d3
[CI] Slow Test Updates (#8870)
DN6 Jul 25, 2024
3ae0ee8
[tests] speed up animatediff tests (#8846)
a-r-r-o-w Jul 25, 2024
527430d
[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)
sayakpaul Jul 25, 2024
0bda1d7
Update TensorRT img2img community pipeline (#8899)
asfiyab-nvidia Jul 25, 2024
1fd647f
Enable CivitAI SDXL Inpainting Models Conversion (#8795)
mazharosama Jul 25, 2024
62863bb
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976)
yiyixuxu Jul 25, 2024
9b8c860
fix guidance_scale value not equal to the value in comments (#8941)
efwfe Jul 25, 2024
50e66f2
[Chore] remove all is from auraflow. (#8980)
sayakpaul Jul 26, 2024
d87fe95
[Chore] add `LoraLoaderMixin` to the inits (#8981)
sayakpaul Jul 26, 2024
2afb2e0
Added `accelerator` based gradient accumulation for basic_example (#8…
RandomGamingDev Jul 26, 2024
bce9105
[CI] Fix parallelism in nightly tests (#8983)
DN6 Jul 26, 2024
1168eaa
[CI] Nightly Test Runner explicitly set runner for Setup Pipeline Mat…
DN6 Jul 26, 2024
57a021d
[fix] FreeInit step index out of bounds (#8969)
a-r-r-o-w Jul 26, 2024
5c53ca5
[core] AnimateDiff SparseCtrl (#8897)
a-r-r-o-w Jul 26, 2024
ca0747a
remove unused code from pag attn procs (#8928)
a-r-r-o-w Jul 26, 2024
73acebb
[Kolors] Add IP Adapter (#8901)
asomoza Jul 26, 2024
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug-report.yml
@@ -73,7 +73,7 @@ body:
         - ControlNet @sayakpaul @yiyixuxu @DN6
         - T2I Adapter @sayakpaul @yiyixuxu @DN6
         - IF @DN6
-        - Text-to-Video / Video-to-Video @DN6 @sayakpaul
+        - Text-to-Video / Video-to-Video @DN6 @a-r-r-o-w
         - Wuerstchen @DN6
         - Other: @yiyixuxu @DN6
         - Improving generation quality: @asomoza
1 change: 1 addition & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -49,6 +49,7 @@ Core library:
 Integrations:
 - deepspeed: HF Trainer/Accelerate: @SunMarc
+- PEFT: @sayakpaul @BenjaminBossan
 HF projects:
5 changes: 3 additions & 2 deletions .github/workflows/benchmark.yml
@@ -19,10 +19,11 @@ jobs:
     strategy:
       fail-fast: false
       max-parallel: 1
-    runs-on: [single-gpu, nvidia-gpu, a10, ci]
+    runs-on:
+      group: aws-g6-4xlarge-plus
     container:
       image: diffusers/diffusers-pytorch-compile-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host --gpus 0
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
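For context, the `runs-on:` change in benchmark.yml switches the job from a list of self-hosted runner labels to GitHub Actions' runner-group form. A minimal sketch of a workflow job using that form — the group name, image, and container options are taken from the diff; the workflow name, trigger, and job name are hypothetical:

```yaml
# Hypothetical workflow illustrating the runner-group form of `runs-on`
# used in the benchmark.yml change above.
name: example-benchmarks

on:
  workflow_dispatch:

jobs:
  benchmark:
    # Old style targeted runners by label list, e.g.:
    #   runs-on: [single-gpu, nvidia-gpu, a10, ci]
    # New style targets a runner group configured in the org settings:
    runs-on:
      group: aws-g6-4xlarge-plus
    container:
      image: diffusers/diffusers-pytorch-compile-cuda
      options: --shm-size "16gb" --ipc host --gpus 0
    steps:
      - name: Checkout
        uses: actions/checkout@v3
```

The dropped `-v /mnt/hf_cache:/mnt/cache/` bind mount in the same hunk is consistent with this move: a managed runner group does not carry the self-hosted host's cache directory.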
107 changes: 21 additions & 86 deletions .github/workflows/nightly_tests.yml
@@ -7,7 +7,7 @@ on:

env:
DIFFUSERS_IS_CI: yes
HF_HOME: /mnt/cache
HF_HUB_ENABLE_HF_TRANSFER: 1
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
PYTEST_TIMEOUT: 600
@@ -18,19 +18,17 @@

jobs:
setup_torch_cuda_pipeline_matrix:
name: Setup Torch Pipelines Matrix
runs-on: diffusers/diffusers-pytorch-cpu
name: Setup Torch Pipelines CUDA Slow Tests Matrix
runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ]
container:
image: diffusers/diffusers-pytorch-cpu
outputs:
pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.8"
- name: Install dependencies
run: |
pip install -e .
@@ -50,36 +48,34 @@ jobs:
path: reports

run_nightly_tests_for_torch_pipelines:
name: Torch Pipelines CUDA Nightly Tests
name: Nightly Torch Pipelines CUDA Tests
needs: setup_torch_cuda_pipeline_matrix
strategy:
fail-fast: false
max-parallel: 8
matrix:
module: ${{ fromJson(needs.setup_torch_cuda_pipeline_matrix.outputs.pipeline_test_matrix) }}
runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
options: --shm-size "16gb" --ipc host --gpus 0
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: NVIDIA-SMI
run: nvidia-smi

- name: Install dependencies
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
python -m uv pip install pytest-reportlog
- name: Environment
run: |
python utils/print_env.py
- name: Nightly PyTorch CUDA checkpoint (pipelines) tests
- name: Pipeline CUDA Test
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -90,38 +86,36 @@ jobs:
--make-reports=tests_pipeline_${{ matrix.module }}_cuda \
--report-log=tests_pipeline_${{ matrix.module }}_cuda.log \
tests/pipelines/${{ matrix.module }}
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt
cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: pipeline_${{ matrix.module }}_test_reports
path: reports

- name: Generate Report and Notify Channel
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_nightly_tests_for_other_torch_modules:
name: Torch Non-Pipelines CUDA Nightly Tests
name: Nightly Torch CUDA Tests
runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
options: --shm-size "16gb" --ipc host --gpus 0
defaults:
run:
shell: bash
strategy:
max-parallel: 2
matrix:
module: [models, schedulers, others, examples]
module: [models, schedulers, lora, others, single_file, examples]
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
@@ -133,8 +127,8 @@ jobs:
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
python -m uv pip install peft@git+https://github.com/huggingface/peft.git
python -m uv pip install pytest-reportlog
- name: Environment
run: python utils/print_env.py

@@ -158,7 +152,6 @@ jobs:
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m uv pip install peft@git+https://github.com/huggingface/peft.git
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v --make-reports=examples_torch_cuda \
--report-log=examples_torch_cuda.log \
@@ -181,64 +174,7 @@ jobs:
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
run_lora_nightly_tests:
name: Nightly LoRA Tests with PEFT and TORCH
runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Install dependencies
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
python -m uv pip install peft@git+https://github.com/huggingface/peft.git
python -m uv pip install pytest-reportlog
- name: Environment
run: python utils/print_env.py

- name: Run nightly LoRA tests with PEFT and Torch
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
CUBLAS_WORKSPACE_CONFIG: :16:8
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_lora_cuda \
--report-log=tests_torch_lora_cuda.log \
tests/lora
- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_lora_cuda_stats.txt
cat reports/tests_torch_lora_cuda_failures_short.txt
- name: Test suite reports artifacts
if: ${{ always() }}
uses: actions/upload-artifact@v2
with:
name: torch_lora_cuda_test_reports
path: reports

- name: Generate Report and Notify Channel
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_flax_tpu_tests:
name: Nightly Flax TPU Tests
@@ -294,14 +230,14 @@ jobs:
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_nightly_onnx_tests:
name: Nightly ONNXRuntime CUDA tests on Ubuntu
runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-onnxruntime-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
options: --gpus 0 --shm-size "16gb" --ipc host

steps:
- name: Checkout diffusers
@@ -318,11 +254,10 @@ jobs:
python -m uv pip install -e [quality,test]
python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
python -m uv pip install pytest-reportlog
- name: Environment
run: python utils/print_env.py

- name: Run nightly ONNXRuntime CUDA tests
- name: Run Nightly ONNXRuntime CUDA tests
env:
HF_TOKEN: ${{ secrets.HF_TOKEN }}
run: |
@@ -349,7 +284,7 @@ jobs:
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
run_nightly_tests_apple_m1:
name: Nightly PyTorch MPS tests on MacOS
@@ -411,4 +346,4 @@ jobs:
if: always()
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY