forked from huggingface/diffusers
Merge changes #117
Merged
Conversation
add: entry for DDPO support; move to training; address Steven's comments.
(#5238) Min-SNR Gamma: correct the fix for SNR-weighted loss in v-prediction by adding 1 to the SNR rather than to the resulting loss weights. Co-authored-by: bghira <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
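For context, a minimal sketch of the corrected weighting (illustrative helper, not the training script's exact code): with v-prediction the divisor becomes SNR + 1, i.e. the "+1" is applied to the SNR before dividing, not added to the finished weights.

```python
import torch

def min_snr_loss_weights(snr: torch.Tensor, gamma: float, v_prediction: bool) -> torch.Tensor:
    # Illustrative sketch of Min-SNR-gamma weighting; `snr` is the per-timestep SNR.
    clamped = torch.minimum(snr, torch.full_like(snr, gamma))
    if v_prediction:
        # corrected v-prediction form: add 1 to the SNR, then divide
        return clamped / (snr + 1.0)
    # epsilon-prediction form
    return clamped / snr
```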
bump tolerance on shape test
…usionLatentUpscalePipeline (#5194): add from single file; clean up; make style; add single file loading for upscaling
fix: torch.compile() for lora conv
start; finish draft; add section; edits; feedback; make fix-copies; rebase
tiny fixes
Update pipeline_wuerstchen_prior.py: prior_num_inference_steps updated; height, width, num_inference_steps, and guidance_scale synced; parameters synced; latent_mean, latent_std, and resolution_multiple synced; prior_num_inference_steps changed; formatted pipeline_wuerstchen_prior.py; update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py. Co-authored-by: Kashif Rasul <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
…ainting mode when EulerAncestralDiscreteScheduler is used (#5305): fix(gligen_inpaint_pipeline): 🐛 wrap the timestep() 0-d tensor in a list to convert it to a 1-d tensor, avoiding the TypeError caused by trying to iterate directly over a 0-dimensional tensor in the denoising stage; test(gligen/gligen_text_image): unit test using the EulerAncestralDiscreteScheduler. Co-authored-by: zhen-hao.chu <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
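A minimal reproduction of the failure mode (not the pipeline's exact code): iterating over a 0-d timestep tensor raises a TypeError, so converting it to 1-d first, e.g. by wrapping it in a list, avoids the error.

```python
import torch

t = torch.tensor(981)      # 0-d timestep tensor
# list(t)                  # TypeError: iteration over a 0-d tensor
t_1d = torch.stack([t])    # wrapping in a list and stacking gives a 1-d tensor of shape (1,)
for step in t_1d:          # iteration in the denoising stage now works
    print(int(step))
```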
Update train_custom_diffusion.py; make style; Empty-Commit. Co-authored-by: Sayak Paul <[email protected]>
Reduce number of down block channels; remove debug code; set new expected image slice values for sdxl euler test
decrease UNet2DConditionModel & ControlNetModel blocks; decrease even more blocks & number of norm groups; decrease vae block out channels and number of norm groups; fix code style. Co-authored-by: Sayak Paul <[email protected]>
improvement: add typehints and docs to diffusers/models/activations.py and diffusers/models/resnet.py
add missing docstrings; chore: run make quality; improvement: include docs suggestion by @yiyixuxu. Co-authored-by: Patrick von Platen <[email protected]>
Update adapter.md to fix links to adapter pipelines
Fix fuse LoRA; improve a bit; make style; update src/diffusers/models/lora.py (Co-authored-by: Benjamin Bossan <[email protected]>); ciao C file; ciao C file; test & make style. Co-authored-by: Benjamin Bossan <[email protected]>
`jnp.array` is a function, not a type: https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html so it never makes sense to use `jnp.array` in a type annotation. Presumably the intent was to write `jnp.ndarray` aka `jax.Array`. Change uses of `jnp.array` to `jnp.ndarray`. Co-authored-by: Peter Hawkins <[email protected]>
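A small before/after sketch of the annotation change (hypothetical function, illustrative only):

```python
import jax.numpy as jnp

# Before: `jnp.array` is the array-construction function, not a type,
# so using it as an annotation is misleading.
# def scale(x: jnp.array, factor: float) -> jnp.array: ...

# After: annotate with the array type, `jnp.ndarray` (an alias of `jax.Array`).
def scale(x: jnp.ndarray, factor: float) -> jnp.ndarray:
    return x * factor
```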
Update requirements_sdxl.txt: add missing 'datasets'. Co-authored-by: Sayak Paul <[email protected]>
…5340) fix problem with 'accelerator.is_main_process' when running on multiple GPUs or NPUs. Co-authored-by: jiaqiw <[email protected]>
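For context, the usual pattern this touches (a generic sketch, not the training script's exact fix): main-process-only work such as checkpoint saving is gated on `accelerator.is_main_process` so the other ranks skip it.

```python
from accelerate import Accelerator

accelerator = Accelerator()
# ... training loop elided ...
if accelerator.is_main_process:
    # only the main process writes the checkpoint; the path below is hypothetical
    accelerator.save_state("output/checkpoint")
accelerator.wait_for_everyone()  # keep all processes in sync afterwards
```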