Merge changes #157

Merged
56 commits merged on May 9, 2024
Changes from all commits (56 commits):
e963621
[PixArt] fix small nits in pixart sigma (#7767)
sayakpaul Apr 25, 2024
b833d0f
[Tests] mark UNetControlNetXSModelTests::test_forward_no_control to b…
sayakpaul Apr 25, 2024
142f353
Fix lora device test (#7738)
sayakpaul Apr 25, 2024
1816880
[docs] Reproducible pipelines (#7769)
stevhliu Apr 25, 2024
fa750a1
[docs] Refactor image quality docs (#7758)
stevhliu Apr 25, 2024
ebc99a7
Convert RGB to BGR for the SDXL watermark encoder (#7013)
btlorch Apr 26, 2024
e24e54f
[docs] Fix AutoPipeline docstring (#7779)
stevhliu Apr 26, 2024
0d2d424
Add PixArtSigmaPipeline to AutoPipeline mapping (#7783)
Beinsezii Apr 26, 2024
8e4ca1b
[Docs] Update image masking and face id example (#7780)
fabiorigano Apr 26, 2024
9d16daa
Add DREAM training (#6381)
AmericanPresidentJimmyCarter Apr 27, 2024
56bd7e6
[Scheduler] introduce sigma schedule. (#7649)
sayakpaul Apr 27, 2024
5029673
Update InstantStyle usage in IP-Adapter documentation (#7806)
JY-Joy Apr 28, 2024
235d34c
Check for latents, before calling prepare_latents - sdxlImg2Img (#7582)
nileshkokane01 Apr 29, 2024
b1c5817
Add debugging workflow (#7778)
DN6 Apr 29, 2024
a38dd79
[Pipeline] Fix error of SVD pipeline when num_videos_per_prompt > 1 (…
wuyushuwys Apr 29, 2024
eb96ff0
Safetensor loading in AnimateDiff conversion scripts (#7764)
DN6 Apr 29, 2024
8af793b
Adding TextualInversionLoaderMixin for the controlnet_inpaint_sd_xl p…
jschoormans Apr 29, 2024
83ae24c
Added get_velocity function to EulerDiscreteScheduler. (#7733)
RuiningLi Apr 29, 2024
f53352f
Set main_input_name in StableDiffusionSafetyChecker to "clip_input" (…
clinty Apr 29, 2024
31d9f9e
[Tests] reduce the model size in the ddim fast test (#7803)
ariG23498 Apr 30, 2024
21f023e
[Tests] reduce the model size in the ddpm fast test (#7797)
ariG23498 Apr 30, 2024
b02e211
[Tests] reduce the model size in the amused fast test (#7804)
ariG23498 Apr 30, 2024
3fd31ee
[Core] introduce _no_split_modules to `ModelMixin` (#6396)
sayakpaul Apr 30, 2024
26a7851
Add B-Lora training option to the advanced dreambooth lora script (#7…
linoytsaban Apr 30, 2024
725ead2
SSH Runner Workflow Update (#7822)
DN6 Apr 30, 2024
b8ccb46
Fix CPU offload in docstring (#7827)
tolgacangoz Apr 30, 2024
0d08370
[docs] Community pipelines (#7819)
stevhliu Apr 30, 2024
c1edb03
Fix for pipeline slow test fetcher (#7824)
DN6 May 1, 2024
8909ab4
[Tests] fix: device map tests for models (#7825)
sayakpaul May 1, 2024
21a7ff1
update the logic of `is_sequential_cpu_offload` (#7788)
yiyixuxu May 1, 2024
5915c29
[ip-adapter] fix ip-adapter for StableDiffusionInstructPix2PixPipelin…
yiyixuxu May 1, 2024
435d37c
[Tests] reduce the model size in the audioldm fast test (#7833)
ariG23498 May 2, 2024
c1b2a89
Fix key error for dictionary with randomized order in convert_ldm_une…
yunseongcho May 2, 2024
3ffa7b4
Fix hanging pipeline fetching (#7837)
DN6 May 2, 2024
03ca113
Update download diff format tests (#7831)
DN6 May 2, 2024
3c85a57
Update CI cache (#7832)
DN6 May 2, 2024
44ba90c
move to new runners (#7839)
glegendre01 May 2, 2024
ce97d7e
Change GPU Runners (#7840)
glegendre01 May 2, 2024
0d7c479
Update deps for pipe test fetcher (#7838)
DN6 May 2, 2024
fa489ea
[Tests] reduce the model size in the blipdiffusion fast test (#7849)
ariG23498 May 3, 2024
6a47958
Respect `resume_download` deprecation (#7843)
Wauplin May 3, 2024
3e35628
Remove installing python again in container (#7852)
DN6 May 3, 2024
5823736
Add Ascend NPU support for SDXL fine-tuning and fix the model saving …
HelloWorldBeginner May 3, 2024
49b959b
[docs] LCM (#7829)
stevhliu May 3, 2024
7fa3e5b
Ci - change cache folder (#7867)
glegendre01 May 6, 2024
0d23645
[docs] Distilled inference (#7834)
stevhliu May 6, 2024
23e0915
Fix for "no lora weight found module" with some loras (#7875)
asomoza May 7, 2024
8edaf3b
7879 - adjust documentation to use naruto dataset, since pokemon is n…
bghira May 7, 2024
c221714
Modification on the PAG community pipeline (re) (#7876)
HyoungwonCho May 8, 2024
d50baf0
Fix image upcasting (#7858)
tolgacangoz May 8, 2024
f29b934
Check shape and remove deprecated APIs in scheduling_ddpm_flax.py (#7…
ppham27 May 8, 2024
818f760
[Pipeline] AnimateDiff SDXL (#6721)
a-r-r-o-w May 8, 2024
35358a2
fix offload test (#7868)
yiyixuxu May 8, 2024
75aab34
Allow users to save SDXL LoRA weights for only one text encoder (#7607)
dulacp May 8, 2024
c1c4269
Remove dead code and fix f-string issue (#7720)
tolgacangoz May 8, 2024
caf9e98
Fix several imports (#7712)
tolgacangoz May 9, 2024
50 changes: 25 additions & 25 deletions .github/workflows/nightly_tests.yml
@@ -19,7 +19,7 @@ env:
jobs:
setup_torch_cuda_pipeline_matrix:
name: Setup Torch Pipelines Matrix
- runs-on: ubuntu-latest
+ runs-on: diffusers/diffusers-pytorch-cpu
outputs:
pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
steps:
@@ -67,19 +67,19 @@ jobs:
fetch-depth: 2
- name: NVIDIA-SMI
run: nvidia-smi

- name: Install dependencies
run: |
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
python -m uv pip install pytest-reportlog

- name: Environment
run: |
python utils/print_env.py

- name: Nightly PyTorch CUDA checkpoint (pipelines) tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -88,9 +88,9 @@ jobs:
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_pipeline_${{ matrix.module }}_cuda \
--report-log=tests_pipeline_${{ matrix.module }}_cuda.log \
tests/pipelines/${{ matrix.module }}

- name: Failure short reports
if: ${{ failure() }}
run: |
@@ -103,7 +103,7 @@ jobs:
with:
name: pipeline_${{ matrix.module }}_test_reports
path: reports

- name: Generate Report and Notify Channel
if: always()
run: |
@@ -112,7 +112,7 @@ jobs:

run_nightly_tests_for_other_torch_modules:
name: Torch Non-Pipelines CUDA Nightly Tests
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
@@ -139,7 +139,7 @@ jobs:
run: python utils/print_env.py

- name: Run nightly PyTorch CUDA tests for non-pipeline modules
if: ${{ matrix.module != 'examples'}}
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
# https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -148,7 +148,7 @@ jobs:
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_${{ matrix.module }}_cuda \
--report-log=tests_torch_${{ matrix.module }}_cuda.log \
tests/${{ matrix.module }}

- name: Run nightly example tests with Torch
@@ -161,13 +161,13 @@ jobs:
python -m uv pip install peft@git+https://github.com/huggingface/peft.git
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v --make-reports=examples_torch_cuda \
--report-log=examples_torch_cuda.log \
examples/

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt
cat reports/tests_torch_${{ matrix.module }}_cuda_failures_short.txt

- name: Test suite reports artifacts
@@ -185,7 +185,7 @@ jobs:

run_lora_nightly_tests:
name: Nightly LoRA Tests with PEFT and TORCH
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
@@ -218,13 +218,13 @@ jobs:
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "not Flax and not Onnx" \
--make-reports=tests_torch_lora_cuda \
--report-log=tests_torch_lora_cuda.log \
tests/lora

- name: Failure short reports
if: ${{ failure() }}
run: |
cat reports/tests_torch_lora_cuda_stats.txt
cat reports/tests_torch_lora_cuda_failures_short.txt

- name: Test suite reports artifacts
@@ -239,12 +239,12 @@ jobs:
run: |
pip install slack_sdk tabulate
python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY

run_flax_tpu_tests:
name: Nightly Flax TPU Tests
runs-on: docker-tpu
if: github.event_name == 'schedule'

container:
image: diffusers/diffusers-flax-tpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
@@ -274,7 +274,7 @@ jobs:
python -m pytest -n 0 \
-s -v -k "Flax" \
--make-reports=tests_flax_tpu \
--report-log=tests_flax_tpu.log \
tests/

- name: Failure short reports
@@ -298,11 +298,11 @@

run_nightly_onnx_tests:
name: Nightly ONNXRuntime CUDA tests on Ubuntu
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-onnxruntime-cuda
options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

steps:
- name: Checkout diffusers
uses: actions/checkout@v3
@@ -321,15 +321,15 @@

- name: Environment
run: python utils/print_env.py

- name: Run nightly ONNXRuntime CUDA tests
env:
HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
run: |
python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
-s -v -k "Onnx" \
--make-reports=tests_onnx_cuda \
--report-log=tests_onnx_cuda.log \
tests/

- name: Failure short reports
@@ -344,7 +344,7 @@
with:
name: ${{ matrix.config.report }}_test_reports
path: reports

- name: Generate Report and Notify Channel
if: always()
run: |
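The recurring edit in this file (and in the workflows below) replaces a single Docker-runner label such as docker-gpu with an array of self-hosted runner labels. In GitHub Actions, an array value for runs-on routes the job to a self-hosted runner that carries every listed label. A minimal sketch of the pattern — the job and step names here are illustrative, not taken from this diff:

  jobs:
    gpu_smoke_test:                               # hypothetical job name
      runs-on: [single-gpu, nvidia-gpu, t4, ci]   # all four labels must match one runner
      container:
        image: diffusers/diffusers-pytorch-cuda   # image already used throughout this PR
      steps:
        - name: NVIDIA-SMI
          run: nvidia-smi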
6 changes: 3 additions & 3 deletions .github/workflows/pr_test_fetcher.yml
@@ -15,7 +15,7 @@ concurrency:
jobs:
setup_pr_tests:
name: Setup PR Tests
- runs-on: docker-cpu
+ runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ]
container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
@@ -73,7 +73,7 @@ jobs:
max-parallel: 2
matrix:
modules: ${{ fromJson(needs.setup_pr_tests.outputs.matrix) }}
- runs-on: docker-cpu
+ runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ]
container:
image: diffusers/diffusers-pytorch-cpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
@@ -123,7 +123,7 @@ jobs:
config:
- name: Hub tests for models, schedulers, and pipelines
framework: hub_tests_pytorch
- runner: docker-cpu
+ runner: [ self-hosted, intel-cpu, 8-cpu, ci ]
image: diffusers/diffusers-pytorch-cpu
report: torch_hub

43 changes: 22 additions & 21 deletions .github/workflows/push_tests.yml
@@ -21,22 +21,23 @@ env:
jobs:
setup_torch_cuda_pipeline_matrix:
name: Setup Torch Pipelines CUDA Slow Tests Matrix
- runs-on: ubuntu-latest
+ runs-on: [ self-hosted, intel-cpu, 8-cpu, ci ]
+ container:
+ image: diffusers/diffusers-pytorch-cpu
outputs:
pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2
- - name: Set up Python
- uses: actions/setup-python@v4
- with:
- python-version: "3.8"
- name: Install dependencies
run: |
- pip install -e .
- pip install huggingface_hub
+ python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
+ python -m uv pip install -e [quality,test]
- name: Environment
run: |
python utils/print_env.py
- name: Fetch Pipeline Matrix
id: fetch_pipeline_matrix
run: |
@@ -60,7 +61,7 @@
runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
- options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 --privileged
+ options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0 --privileged
steps:
- name: Checkout diffusers
uses: actions/checkout@v3
@@ -114,10 +115,10 @@

torch_cuda_tests:
name: Torch CUDA Tests
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
- options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+ options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
@@ -166,10 +167,10 @@

peft_cuda_tests:
name: PEFT CUDA Tests
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-pytorch-cuda
- options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+ options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
@@ -219,7 +220,7 @@
runs-on: docker-tpu
container:
image: diffusers/diffusers-flax-tpu
- options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
+ options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/ --privileged
defaults:
run:
shell: bash
@@ -263,10 +264,10 @@

onnx_cuda_tests:
name: ONNX CUDA Tests
- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]
container:
image: diffusers/diffusers-onnxruntime-cuda
- options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+ options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/ --gpus 0
defaults:
run:
shell: bash
@@ -311,11 +312,11 @@
run_torch_compile_tests:
name: PyTorch Compile CUDA tests

- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]

container:
image: diffusers/diffusers-pytorch-compile-cuda
- options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
+ options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

steps:
- name: Checkout diffusers
@@ -352,11 +353,11 @@
run_xformers_tests:
name: PyTorch xformers CUDA tests

- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]

container:
image: diffusers/diffusers-pytorch-xformers-cuda
- options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
+ options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

steps:
- name: Checkout diffusers
@@ -393,11 +394,11 @@
run_examples_tests:
name: Examples PyTorch CUDA tests on Ubuntu

- runs-on: docker-gpu
+ runs-on: [single-gpu, nvidia-gpu, t4, ci]

container:
image: diffusers/diffusers-pytorch-cuda
- options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
+ options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

steps:
- name: Checkout diffusers
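The other edit repeated across the jobs above swaps the host side of the cache volume mount: /mnt/hf_cache becomes /mnt/cache/.cache/huggingface (or .../huggingface/diffusers), still mounted at /mnt/cache/ inside the job container. The container options: string is handed to Docker when the job container is created, so -v follows Docker's host-path:container-path bind-mount syntax, and Hugging Face caches written under /mnt/cache/ inside the container land in the shared host directory. A roughly equivalent docker invocation, as a sketch only (the real command is assembled by the Actions runner):

  docker run --gpus 0 --shm-size 16gb --ipc host \
    -v /mnt/cache/.cache/huggingface:/mnt/cache/ \
    diffusers/diffusers-pytorch-cuda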
46 changes: 46 additions & 0 deletions .github/workflows/ssh-runner.yml
@@ -0,0 +1,46 @@
name: SSH into runners

on:
workflow_dispatch:
inputs:
runner_type:
description: 'Type of runner to test (a10 or t4)'
required: true
docker_image:
description: 'Name of the Docker image'
required: true

env:
IS_GITHUB_CI: "1"
HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
HF_HOME: /mnt/cache
DIFFUSERS_IS_CI: yes
OMP_NUM_THREADS: 8
MKL_NUM_THREADS: 8
RUN_SLOW: yes

jobs:
ssh_runner:
name: "SSH"
runs-on: [single-gpu, nvidia-gpu, "${{ github.event.inputs.runner_type }}", ci]
container:
image: ${{ github.event.inputs.docker_image }}
options: --gpus all --privileged --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

steps:
- name: Checkout diffusers
uses: actions/checkout@v3
with:
fetch-depth: 2

- name: NVIDIA-SMI
run: |
nvidia-smi

- name: Tailscale # In order to be able to SSH when a test fails
uses: huggingface/tailscale-action@v1
with:
authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }}
slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }}
slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
waitForSSH: true
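Because this new workflow is triggered via workflow_dispatch, it can be started from the command line as well as the Actions UI. For example, with the GitHub CLI and write access to the repository (the input values below are only the documented examples — a t4 runner and a diffusers CUDA image):

  gh workflow run ssh-runner.yml -f runner_type=t4 -f docker_image=diffusers/diffusers-pytorch-cuda

The Tailscale step then keeps the job alive waiting for an SSH connection; the Slack inputs suggest connection details are posted to the configured channel.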