Merge changes #157 (Merged)
Forked from huggingface/diffusers
Conversation
Fix small nits in PixArt Sigma.
…e flaky (#7771): decorate UNetControlNetXSModelTests::test_forward_no_control with is_flaky.
Fix the LoRA device test. Co-authored-by: Dhruv Nair <[email protected]>
Reproducibility: address feedback, fix a path, and add a GitHub link.
Refactor: update code snippets and outputs, fix paths in the guide, and align the toctree title.
Convert the channel order to BGR for the watermark encoder and convert the watermarked BGR images back to RGB (fixes #6292). Revert the channel order before stacking images to work around negative strides not being supported. Co-authored-by: Sayak Paul <[email protected]>
fix Co-authored-by: YiYi Xu <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
[Docs] Update the image masking and FaceID examples.
A new function, compute_dream_and_update_latents, has been added to the training utilities to allow DREAM rectified training in line with the paper https://arxiv.org/abs/2312.00210. It can be enabled with an extra argument in the train_text_to_image.py script. Co-authored-by: Jimmy <39@🇺🇸.com>
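A minimal sketch of how the new utility slots into a training step, using a deliberately tiny UNet so it runs on CPU; the argument order follows the function as added here, but check it against your installed diffusers version.

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DConditionModel
from diffusers.training_utils import compute_dream_and_update_latents

# Deliberately tiny UNet so the sketch runs quickly on CPU.
unet = UNet2DConditionModel(
    sample_size=8, in_channels=4, out_channels=4,
    block_out_channels=(32, 64), cross_attention_dim=32,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

latents = torch.randn(1, 4, 8, 8)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, 1000, (1,))
encoder_hidden_states = torch.randn(1, 77, 32)
noisy_latents = scheduler.add_noise(latents, noise, timesteps)
target = noise  # epsilon-prediction objective

# DREAM adjusts the noisy latents and target using a no-grad UNet pass.
noisy_latents, target = compute_dream_and_update_latents(
    unet, scheduler, timesteps, noise, noisy_latents, target,
    encoder_hidden_states, dream_detail_preservation=1.0,
)
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), target.float())
```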
Introduce a sigma schedule and implement it for EDMDPMSolverMultistepScheduler; update docstrings and address review feedback. Co-authored-by: Suraj Patil <[email protected]>
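A quick hedged look at the new option; treating "exponential" as the alternative to the default "karras" schedule is an assumption based on this entry, so check the scheduler's docstring.

```python
from diffusers import EDMDPMSolverMultistepScheduler

# Pick the sigma schedule at construction time; "karras" is the default and
# "exponential" is assumed here to be the newly wired-in alternative.
scheduler = EDMDPMSolverMultistepScheduler(sigma_schedule="exponential")
scheduler.set_timesteps(num_inference_steps=25)
print(scheduler.sigmas[:5])  # inspect the resulting noise levels
```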
Enable controlling the IP-Adapter per transformer block on the fly. Co-authored-by: sayakpaul <[email protected]> Co-authored-by: ResearcherXman <[email protected]> Co-authored-by: YiYi Xu <[email protected]>
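A hedged sketch of what per-block control looks like from the user side; the scale-dict layout mirrors the InstantStyle-era docs, and the checkpoint names are assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter_sdxl.bin"
)
# Zero the adapter everywhere except chosen transformer blocks, switching
# the injection on a per-block basis at runtime.
pipe.set_ip_adapter_scale({"up": {"block_0": [0.0, 1.0, 0.0]}})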
Check for user-provided latents before calling prepare_latents in the SDXL img2img pipeline, and add the same check to all img2img pipelines.
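The guard itself is pipeline-internal; the toy function below is a hypothetical stand-in that only illustrates the precedence the fix establishes.

```python
from typing import Optional

import torch

def prepare_or_reuse_latents(
    image_latents: torch.Tensor, latents: Optional[torch.Tensor]
) -> torch.Tensor:
    # Before the fix, latents were always recomputed from the image; now
    # caller-provided latents take precedence.
    return latents if latents is not None else image_latents

user_latents = torch.randn(1, 4, 64, 64)
out = prepare_or_reuse_latents(torch.zeros(1, 4, 64, 64), user_latents)
assert out is user_latents
```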
Add a debug workflow. Co-authored-by: Sayak Paul <[email protected]>
…7786): swap the order of the do_classifier_free_guidance concat and repeat. Co-authored-by: Sayak Paul <[email protected]> Co-authored-by: Dhruv Nair <[email protected]>
Update.
…ipeline (#7288): add TextualInversionLoaderMixin to the controlnet_inpaint_sd_xl pipeline. Co-authored-by: YiYi Xu <[email protected]>
Add a get_velocity function to EulerDiscreteScheduler with a "Copied from" statement, and fix whitespace on blank lines. Co-authored-by: Ruining Li <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
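A minimal sketch of the new method, assuming it keeps the signature of the DDPMScheduler implementation it is copied from.

```python
import torch
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(num_train_timesteps=1000)
sample = torch.randn(1, 4, 8, 8)
noise = torch.randn_like(sample)
timesteps = torch.randint(0, 1000, (1,))
# v-prediction target: sqrt(alpha_bar) * noise - sqrt(1 - alpha_bar) * sample
velocity = scheduler.get_velocity(sample, noise, timesteps)
print(velocity.shape)  # torch.Size([1, 4, 8, 8])
```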
…7500): FlaxStableDiffusionSafetyChecker now sets main_input_name to "clip_input", making it consistent with StableDiffusionSafetyChecker. Co-authored-by: Sayak Paul <[email protected]> Co-authored-by: YiYi Xu <[email protected]>
chore: reduce the model size for the DDIM fast pipeline. Co-authored-by: Sayak Paul <[email protected]>
chore: reduce the UNet size for faster tests; apply review suggestions. Co-authored-by: Sayak Paul <[email protected]>
chore: progressively shrink model sizes for the img2img and inpaint pipeline tests. Co-authored-by: Sayak Paul <[email protected]>
Introduce _no_split_modules: switch to _determine_device_map, add blocks with residual connections (e.g. CrossAttnUpBlock2D) to the no-split list, add disk-offload tests with and without safetensors, a model-parallelism test, and a utility for checking multi-GPU requirements.
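A hedged sketch of what _no_split_modules enables: letting accelerate shard a model across available devices. It assumes accelerate is installed, a diffusers version where ModelMixin.from_pretrained accepts device_map, and an hf_device_map attribute mirrored from transformers.

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="unet",
    torch_dtype=torch.float16,
    device_map="auto",  # blocks in _no_split_modules stay on one device
)
print(unet.hf_device_map)  # per-module device assignment (assumed attribute)
```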
) Add B-LoRA training support to the advanced diffusion training example: add a --lora_unet_blocks argument, document B-LoRA and the configurable UNet blocks in the README, update inference defaults and LoRA paths, and drop the need to clone the B-LoRA repo. Co-authored-by: Sayak Paul <[email protected]>
Add a debug workflow and update it.
Fix CPU offload.
Community pipelines: address feedback and consolidate.
Update.
fix: device module tests; remove the patch file.
Add comments to the tests and fix DiT. Co-authored-by: Sayak Paul <[email protected]>
#7820) update prepare_ip_adapter_ for pix2pix
chore: initial size reduction of models
…t_checkpoint (#7680): fix a key error when keys appear in a different order. Co-authored-by: yunseong <[email protected]> Co-authored-by: Dhruv Nair <[email protected]>
update Co-authored-by: Sayak Paul <[email protected]>
update Co-authored-by: Sayak Paul <[email protected]>
Move slow and nightly tests to the new GPU runners.
update Co-authored-by: Sayak Paul <[email protected]>
reducing model size
Deprecate resume_download and align the docstring with transformers. Co-authored-by: Sayak Paul <[email protected]>
…bug when using DeepSpeed (#7816): add Ascend NPU support for SDXL fine-tuning and fix the model-saving bug when using DeepSpeed; decouple NPU flash attention into an independent module; add docs and unit tests for NPU flash attention. Co-authored-by: mhh001 <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
LCM and LCM-LoRA docs: fix the hfoption blocks and make edits.
Combine and edit.
Return the layer weight if the key is not found; improve the lookup and its test; add a key example and fix a typo.
…ow gated (#7880): adjust the documentation to use the naruto dataset, since the pokemon dataset is now gated; replace the remaining pokemon references in the docs, including the Japanese translation. Co-authored-by: bghira <[email protected]>
Edit the PAG implementation. Co-authored-by: yiyixuxu <[email protected]>
Fix the image's upcasting before `vae.encode()` when using `fp16`. Co-authored-by: YiYi Xu <[email protected]>
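A hedged sketch of the underlying pattern: run an fp16 VAE in fp32 for encoding and cast the image to match, since SDXL's VAE is known to overflow in half precision. The checkpoint id is illustrative.

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="vae",
    torch_dtype=torch.float16,
)
image = torch.randn(1, 3, 512, 512, dtype=torch.float16)  # stand-in image tensor

vae.to(torch.float32)  # temporarily upcast to avoid fp16 overflow
with torch.no_grad():
    latents = vae.encode(image.float()).latent_dist.sample()
latents = latents * vae.config.scaling_factor
vae.to(torch.float16)  # restore the original dtype
```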
) `model_output.shape` may only have rank 1. There are warnings related to the use of random keys:
```
tests/schedulers/test_scheduler_flax.py: 13 warnings
  /Users/phillypham/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py:268: FutureWarning: normal accepts a single key, but was given a key array of shape (1, 2) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)

tests/schedulers/test_scheduler_flax.py::FlaxDDPMSchedulerTest::test_betas
  /Users/phillypham/virtualenv/diffusers/lib/python3.9/site-packages/jax/_src/random.py:731: FutureWarning: uniform accepts a single key, but was given a key array of shape (1,) != (). Use jax.vmap for batching. In a future JAX version, this will be an error.
    u = uniform(key, shape, dtype, lo, hi)  # type: ignore[arg-type]
```
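A minimal reproduction of the complaint in those warnings: the jax.random samplers want a single PRNG key, not a key array.

```python
import jax

key = jax.random.PRNGKey(0)
keys = jax.random.split(key, num=1)  # shape (1, 2): passing this warns
single_key = keys[0]                 # shape (2,): a proper single key
noise = jax.random.normal(single_key, shape=(4, 4))  # no FutureWarning
```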
Add AnimateDiff SDXL: update the conversion script to handle SDXL motion-adapter checkpoints, handle addition_embed_type, add the pipeline with FreeInit support and the latest IP-Adapter implementation, use StableDiffusionMixin instead of separate helper methods, hardcode flip_sin_to_cos and freq_shift, fix the progress-bar iterations, and add docs, an example, and tests (including IPAdapterTesterMixin). Co-authored-by: Dhruv Nair <[email protected]>
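A hedged usage sketch for the new pipeline; the motion-adapter and base-model checkpoint ids follow the docs of that era and may have moved.

```python
import torch
from diffusers import AnimateDiffSDXLPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
frames = pipe(
    prompt="a panda surfing", num_frames=16, num_inference_steps=20
).frames[0]
export_to_gif(frames, "panda.gif")
```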
fix Co-authored-by: Dhruv Nair <[email protected]>
SDXL LoRA weights for the text encoders should be decoupled on save: the method checks that at least one of the unet, text_encoder, and text_encoder_2 LoRA weights is passed, which was not reflected in the implementation. Co-authored-by: Sayak Paul <[email protected]> Co-authored-by: YiYi Xu <[email protected]>
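A hedged sketch of the now-consistent behavior: any one of the three state dicts is enough. The stand-in dict below is purely illustrative; in practice the layers come from your training loop.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Purely illustrative stand-in; real layers come from the trained model.
unet_lora_layers = {"dummy.lora_A.weight": torch.zeros(4, 4)}

StableDiffusionXLPipeline.save_lora_weights(
    save_directory="my-lora",
    unet_lora_layers=unet_lora_layers,
    text_encoder_lora_layers=None,    # now genuinely optional
    text_encoder_2_lora_layers=None,  # now genuinely optional
)
```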
Remove dead code and fix a Pylance reportGeneralTypeIssues error: strings nested within an f-string cannot use the same quote character as the f-string prior to Python 3.12.
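The pitfall in question, for reference:

```python
d = {"key": "value"}
# print(f"value: {d["key"]}")  # SyntaxError before Python 3.12
print(f"value: {d['key']}")    # portable: use different inner quotes
```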
Fix imports