Merge changes #123

Merged
merged 31 commits into from
Nov 10, 2023

Conversation

@Skquark Skquark commented Nov 10, 2023

No description provided.

DN6 and others added 30 commits November 7, 2023 12:46
update custom diffusion attn processor
* fix model xformers test

* update
update free model hooks
* fix

* Update src/diffusers/models/attention.py

Co-authored-by: Patrick von Platen <[email protected]>

---------

Co-authored-by: Patrick von Platen <[email protected]>
* explicit torch dependency check

* update

* update

* update
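
For reference, a minimal sketch of what an explicit torch dependency check can look like. The helper name and error message are illustrative assumptions, not necessarily the library's exact code:

```python
import importlib.util

def is_torch_available() -> bool:
    # Explicitly verify that the torch package can be found
    # before importing anything that depends on it.
    return importlib.util.find_spec("torch") is not None

if not is_torch_available():
    raise ImportError("This module requires PyTorch; install it with `pip install torch`.")
```
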
… with a batch size > 1 (#5677)

* fix embeds

* remove todo

* add: test

* better name
* fix: import bug

* fix

* fix

* fix import utils for lcm

* fix: pixart alpha init

* Fix

---------

Co-authored-by: Patrick von Platen <[email protected]>
* debug

* support non-square images

* add: test

* fix: test

---------

Co-authored-by: Patrick von Platen <[email protected]>
* Refactor LCMScheduler.step such that prev_sample == denoised at the last timestep in the schedule.

* Make timestep scaling when calculating boundary conditions configurable.

* Reparameterize timestep_scaling to be a multiplicative rather than division scaling.

* make style

* fix dtype conversion

* make style

---------

Co-authored-by: Patrick von Platen <[email protected]>
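
For context, a minimal sketch of a multiplicative timestep scaling in the LCM boundary-condition calculation, as described by the commit messages above. The parameter names and default values here are assumptions, not necessarily the scheduler's exact code:

```python
import torch

def scalings_for_boundary_conditions(timestep, timestep_scaling: float = 10.0, sigma_data: float = 0.5):
    # Multiplicative reparameterization: multiply the timestep by a
    # configurable factor instead of dividing by one.
    scaled_timestep = timestep * timestep_scaling
    c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
    c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
    return c_skip, c_out

# At timestep 0 the skip coefficient is 1 and the output coefficient is 0,
# so the consistency-model output collapses onto the input sample there.
c_skip, c_out = scalings_for_boundary_conditions(torch.tensor(0.0))
```
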

* Fix typos, improve, and update; Kandinsky doesn't use fp16 due to deprecation; ogkalu and kohbanye don't have safetensors; add make_image_grid for better visualization

* Update inpaint.md

* Remove erroneous Space

* Update docs/source/en/using-diffusers/conditional_image_generation.md

Co-authored-by: Steven Liu <[email protected]>

* Update img2img.md

* load_image() already converts to RGB

* Update depth2img.md

* Update img2img.md

* Update inpaint.md

---------

Co-authored-by: Steven Liu <[email protected]>
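
A short sketch of the `make_image_grid` visualization helper mentioned above. The stand-in images are placeholders; in the docs they come from `load_image()` or a pipeline call:

```python
from PIL import Image
from diffusers.utils import make_image_grid

# Stand-in images; note that load_image() already converts its result to RGB,
# so no extra .convert("RGB") call is needed in the real docs examples.
init_image = Image.new("RGB", (512, 512), color=(30, 30, 30))
result_image = Image.new("RGB", (512, 512), color=(200, 200, 200))

# Lay the before/after images side by side for easier comparison.
grid = make_image_grid([init_image, result_image], rows=1, cols=2)
grid.save("comparison.png")
```
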
…ion. (#5651)

* I added a new docstring to the class. This makes it easier for other developers to understand what it does and where it is used.

* Update src/diffusers/models/unet_2d_blocks.py

This change was suggested by the maintainer.

Co-authored-by: Sayak Paul <[email protected]>

* Update src/diffusers/models/unet_2d_blocks.py

Add suggested text

Co-authored-by: Sayak Paul <[email protected]>

* Update unet_2d_blocks.py

I changed the Parameters heading to Args.

* Update unet_2d_blocks.py

Set proper indentation in this file.

* Update unet_2d_blocks.py

Made a small change to the act_fun argument line.

* I ran the black command to reformat the code

* Update unet_2d_blocks.py

Added a docstring similar to the one in the original diffusion repository.

* I removed the dummy variable defined in both the encoder and decoder.

* I ran the black package to reformat the file

* Remove the redundant line from the adapter.py file.

* Used the black package to reformat the file

* Replacing the nn.Mish activation function with a get_activation function allows developers to more easily choose the right activation function for their task. Additionally, removing redundant variables can improve code readability and maintainability.

* I tried to fix this failing check: Fast tests for PRs / Fast PyTorch Models & Schedulers CPU tests (pull_request)

* Update src/diffusers/models/resnet.py

Co-authored-by: YiYi Xu <[email protected]>

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Dhruv Nair <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
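
As an illustration of the activation refactor described above (swapping a hard-coded nn.Mish for a name-based lookup), here is a hedged sketch; the mapping, helper name, and stub block are assumptions rather than the library's exact code:

```python
import torch.nn as nn

# Hypothetical name-to-module lookup; the real library may expose a similar helper.
_ACTIVATIONS = {"mish": nn.Mish, "silu": nn.SiLU, "gelu": nn.GELU, "relu": nn.ReLU}

def get_activation(act_fn: str) -> nn.Module:
    """Return an activation module by name instead of hard-coding nn.Mish()."""
    try:
        return _ACTIVATIONS[act_fn.lower()]()
    except KeyError:
        raise ValueError(f"Unsupported activation function: {act_fn}")

class ResBlockStub(nn.Module):
    """Toy block showing how act_fn becomes a constructor argument."""

    def __init__(self, channels: int, act_fn: str = "mish"):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.nonlinearity = get_activation(act_fn)

    def forward(self, x):
        return self.nonlinearity(self.conv(x))
```
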
skip rendering

Co-authored-by: yiyixuxu <yixu310@gmail,com>
* [LCM] Fix img2img

* make fix-copies

* make fix-copies

* make fix-copies

* up
* fix mask feature condition.

* debug

* remove identical test

* set correct

* Empty-Commit
* up

* up

* up

* Empty-Commit

* fix keyword argument call.

---------

Co-authored-by: Sayak Paul <[email protected]>
* Add adapter fusing + PEFT to the docs

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul <[email protected]>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul <[email protected]>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/en/tutorials/using_peft_for_inference.md

* Update docs/source/en/tutorials/using_peft_for_inference.md

---------

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>
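
A hedged sketch of the adapter loading and fusing flow those docs cover. The checkpoint, LoRA repo ids, adapter names, and prompt are placeholders:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load two LoRA adapters and give them names (repo ids here are placeholders).
pipe.load_lora_weights("some-user/toy-lora", adapter_name="toy")
pipe.load_lora_weights("some-user/pixel-lora", adapter_name="pixel")

# Combine the adapters with per-adapter weights, then fuse them into the base
# weights for faster inference.
pipe.set_adapters(["toy", "pixel"], adapter_weights=[1.0, 0.5])
pipe.fuse_lora()

image = pipe("a toy robot in pixel-art style", num_inference_steps=30).images[0]

# Undo the fusion if the original base weights are needed again.
pipe.unfuse_lora()
```
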
* fix prompt bug

* add test
* bugfix peft lora

* Apply suggestions from code review

---------

Co-authored-by: Patrick von Platen <[email protected]>
* consistency decoder

* rename

* Apply suggestions from code review

Co-authored-by: Sayak Paul <[email protected]>

* Update src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py

* uP

* Apply suggestions from code review

* uP

* uP

* uP

---------

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
* uP

* Update src/diffusers/models/consistency_decoder_vae.py

* uP

* uP
* lcm add tests

* uP

* Fix all

* uP

* Add

* all

* uP

* uP

* uP

* uP

* uP

* uP

* uP
* add: lcm docs.

* correct path

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <[email protected]>

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <[email protected]>

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <[email protected]>

* up

* add

---------

Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
* add lcm scripts

* Co-authored-by: [email protected]
* Fix typos, update, trim trailing whitespace

* Trim trailing whitespaces

* Update docs/source/en/optimization/memory.md

Co-authored-by: Steven Liu <[email protected]>

* Update docs/source/en/optimization/memory.md

Co-authored-by: Steven Liu <[email protected]>

* Update _toctree.yml

* Update adapt_a_model.md

* Reverse

* Reverse

* Reverse

* Update dreambooth.md

* Update instructpix2pix.md

* Update lora.md

* Update overview.md

* Update t2i_adapters.md

* Update text2image.md

* Update text_inversion.md

* Update create_dataset.md

* Update create_dataset.md

* Update create_dataset.md

* Update create_dataset.md

* Update coreml.md

* Delete docs/source/en/training/create_dataset.md

* Original create_dataset.md

* Update create_dataset.md

* Delete docs/source/en/training/create_dataset.md

* Add original file

* Delete docs/source/en/training/create_dataset.md

* Add original one

* Delete docs/source/en/training/text2image.md

* Delete docs/source/en/training/instructpix2pix.md

* Delete docs/source/en/training/dreambooth.md

* Add original files

---------

Co-authored-by: Steven Liu <[email protected]>
* [Docs] Running the pipeline twice does not appear to be the intention of these examples

One is with `cross_attention_kwargs` and the other (next line) removes it

* [Docs] Clarify that these are two separate examples

One using `scale` and the other without it
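
To make the distinction concrete, the two separate examples look roughly like this (a sketch; the checkpoint, LoRA repo id, and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/example-lora")  # placeholder LoRA repo

prompt = "a pokemon with blue eyes"

# Example 1: apply the LoRA at half strength via cross_attention_kwargs.
image_scaled = pipe(prompt, cross_attention_kwargs={"scale": 0.5}).images[0]

# Example 2: run the same pipeline without a scale, i.e. full LoRA strength.
image_full = pipe(prompt).images[0]
```
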
* post release

* fix: variant test

* up

* fix: test
@Skquark Skquark merged commit dead0d7 into Skquark:main Nov 10, 2023
3 of 8 checks passed