Merge changes #117

Merged
merged 25 commits into from
Oct 10, 2023

Conversation

Skquark
Owner

@Skquark Skquark commented Oct 10, 2023

No description provided.

sayakpaul and others added 25 commits October 5, 2023 14:29
* add: entry for DDPO support.

* move to training

* address steven's comments.
#5238)

Min-SNR Gamma: correct the fix for SNR weighted loss in v-prediction by adding 1 to SNR rather than the resulting loss weights

Co-authored-by: bghira <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
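
For context, the corrected weighting can be sketched as follows. This is a minimal illustration of the idea, not the exact training-script code; `snr`, `snr_gamma`, and `prediction_type` are assumed inputs (a per-timestep SNR tensor, the Min-SNR-gamma hyperparameter, and the scheduler's prediction type):

```python
import torch

def min_snr_loss_weights(snr: torch.Tensor, snr_gamma: float, prediction_type: str) -> torch.Tensor:
    # Min-SNR-gamma: clamp the SNR at gamma, then normalize by the SNR.
    weights = torch.clamp(snr, max=snr_gamma)
    if prediction_type == "v_prediction":
        # The fix: add 1 to the SNR in the denominator,
        # rather than adding 1 to the resulting loss weights.
        return weights / (snr + 1.0)
    return weights / snr
```
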
bump tolerance on shape test
…usionLatentUpscalePipeline (#5194)

* add from single file

* clean up

* make style

* add single file loading for upscaling
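
For reference, single-file loading for the latent upscaler looks roughly like this once the mixin is wired up; the checkpoint path is a placeholder, not a file from this PR:

```python
from diffusers import StableDiffusionLatentUpscalePipeline

# Load the upscaler from a single .safetensors/.ckpt checkpoint
# instead of a diffusers-format model folder.
pipe = StableDiffusionLatentUpscalePipeline.from_single_file(
    "/path/to/latent-upscaler.safetensors"  # placeholder path
)
```
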
fix: torch.compile() for lora conv
* start

* finish draft

* add section

* edits

* feedback

* make fix-copies

* rebase
* Update pipeline_wuerstchen_prior.py

* prior_num_inference_steps updated

* height, width, num_inference_steps, and guidance_scale synced

* parameters synced

* latent_mean, latent_std, and resolution_multiple synced

* prior_num_inference_steps changed

* Formatted pipeline_wuerstchen_prior.py

* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py

---------

Co-authored-by: Kashif Rasul <[email protected]>
…ainting mode when EulerAncestralDiscreteScheduler is used (#5305)

* fix(gligen_inpaint_pipeline): 🐛 Wrap the timestep() 0-d tensor in a list to convert to 1-d tensor. This avoids the TypeError caused by trying to directly iterate over a 0-dimensional tensor in the denoising stage

* test(gligen/gligen_text_image): unit test using the EulerAncestralDiscreteScheduler

---------

Co-authored-by: zhen-hao.chu <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
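
The failure mode here is generic to PyTorch: a 0-dimensional tensor cannot be iterated over, while wrapping it in a list (or unsqueezing it) gives a 1-d tensor that can. A standalone sketch of the pattern, not the pipeline code itself:

```python
import torch

t = torch.tensor(981)      # 0-d timestep tensor
# iter(t)                  # TypeError: iteration over a 0-d tensor

t_1d = torch.stack([t])    # wrap in a list and stack -> shape (1,); t.unsqueeze(0) works too
for step in t_1d:          # now safe to iterate in the denoising stage
    print(step.item())
```
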
* Update train_custom_diffusion.py

* make style

* Empty-Commit

---------

Co-authored-by: Sayak Paul <[email protected]>
* Reduce number of down block channels

* Remove debug code

* Set new expected image slice values for sdxl euler test
* decrease UNet2DConditionModel & ControlNetModel blocks

* decrease UNet2DConditionModel & ControlNetModel blocks

* decrease even more blocks & number of norm groups

* decrease vae block out channels and number of norm groups

* fix code style

---------

Co-authored-by: Sayak Paul <[email protected]>
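
These commits shrink the test fixtures; the general pattern is to instantiate the models with tiny `block_out_channels` and few norm groups so the tests run quickly on CPU. A hedged sketch with illustrative values (not the exact ones used in the diffusers test suite):

```python
from diffusers import UNet2DConditionModel

# Tiny UNet for fast tests: small channel counts and few norm groups.
unet = UNet2DConditionModel(
    sample_size=32,
    in_channels=4,
    out_channels=4,
    layers_per_block=1,
    block_out_channels=(8, 16),
    norm_num_groups=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=16,
)
```
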
* improvement: add typehints and docs to diffusers/models/activations.py

* improvement: add typehints and docs to diffusers/models/resnet.py
* add missing docstrings

* chore: run make quality

* improvement: include docs suggestion by @yiyixuxu

---------

Co-authored-by: Patrick von Platen <[email protected]>
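
As an illustration of the style these commits introduce (typed signatures plus argument docs), here is a hypothetical helper in the same spirit; it is not the actual code in `diffusers/models/activations.py`:

```python
import torch.nn as nn

def get_activation(act_fn: str) -> nn.Module:
    """Return an activation module for a given name.

    Args:
        act_fn: One of "silu", "mish", "gelu", or "relu".

    Returns:
        The corresponding `nn.Module` instance.
    """
    activations = {"silu": nn.SiLU(), "mish": nn.Mish(), "gelu": nn.GELU(), "relu": nn.ReLU()}
    if act_fn not in activations:
        raise ValueError(f"Unsupported activation function: {act_fn}")
    return activations[act_fn]
```
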
Update adapter.md to fix links to adapter pipelines
* Fix fuse Lora

* improve a bit

* make style

* Update src/diffusers/models/lora.py

Co-authored-by: Benjamin Bossan <[email protected]>

* ciao C file

* ciao C file

* test & make style

---------

Co-authored-by: Benjamin Bossan <[email protected]>
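
For reference, the fused-LoRA path these fixes touch is exercised roughly as below; the model id and LoRA path are placeholders, and this is a sketch rather than the repository's test code:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder model id
).to("cuda")

# Fuse the LoRA weights into the base linear/conv layers, then compile the UNet;
# fusing removes the extra LoRA branches so torch.compile() sees plain modules.
pipe.load_lora_weights("/path/to/lora")  # placeholder path
pipe.fuse_lora()
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("an astronaut riding a horse").images[0]
```
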
`jnp.array` is a function, not a type:
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html
so it never makes sense to use `jnp.array` in a type annotation.

Presumably the intent was to write `jnp.ndarray` aka `jax.Array`. Change uses of `jnp.array` to `jnp.ndarray`.

Co-authored-by: Peter Hawkins <[email protected]>
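
A minimal standalone sketch of the difference (not taken from the repository):

```python
import jax.numpy as jnp

# Wrong: jnp.array is the array-construction function, not a type,
# so it carries no meaning in an annotation.
# def scale(x: jnp.array) -> jnp.array: ...

# Right: annotate with the array type, jnp.ndarray (a.k.a. jax.Array).
def scale(x: jnp.ndarray, factor: float) -> jnp.ndarray:
    return x * factor

y = scale(jnp.ones((2, 3)), 2.0)
```
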
Update requirements_sdxl.txt

Add missing 'datasets'

Co-authored-by: Sayak Paul <[email protected]>
…5340)

fix problem with 'accelerator.is_main_process' when running on multiple GPUs or NPUs

Co-authored-by: jiaqiw <[email protected]>
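
For context, the `is_main_process` guard is typically used as below in the training scripts; the save logic is a placeholder, and this is a sketch rather than the exact code touched by the fix:

```python
from accelerate import Accelerator

accelerator = Accelerator()

# ... training loop elided ...

# Only the main process should write checkpoints/logs when running
# on multiple GPUs or NPUs; other ranks skip this block.
if accelerator.is_main_process:
    print("saving checkpoint from the main process only")  # placeholder save logic

# Keep all processes in sync before continuing.
accelerator.wait_for_everyone()
```
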
@Skquark Skquark merged commit 3dca18f into Skquark:main Oct 10, 2023
1 of 4 checks passed