forked from huggingface/diffusers
Merge changes #113
Merged
Conversation
* fix model offload bug when key isn't present * make style
* [Import] Don't force transformers to be installed * make style
…ing local config files (#5019) finish config_files implementation
* [Import] Add missing settings * up * up * up
* Initial commit P2P
* Replaced CrossAttention, added test skeleton
* bug fixes
* Updated docstring
* Removed unused function
* Created tests
* improved tests - made fast inference tests faster - corrected image shape assertions
* Corrected expected output shape in tests
* small fix: test inputs
* Update tests - used conditional unet2d - set expected image slices - edit_kwargs are now not popped, so pipe can be run multiple times
* Fixed bug in int tests
* Fixed tests
* Linting
* Create prompt2prompt.md
* Added to docs toc
* Ran make fix-copies
* Fixed code blocks in docs
* Using same interface as StableDiffusionPipeline
* Fixed small test bug
* Added all options SDPipeline.__call__ has
* Fixed docstring; made __call__ like in SD
* Linting
* Added test for multiple prompts
* Improved docs
* Incorporated feedback
* Reverted formatting on unrelated files
* Moved prompt2prompt to community - Moved prompt2prompt pipeline from main to community - Deleted tests - Moved documentation to community and shortened it
* Update src/diffusers/utils/dummy_torch_and_transformers_objects.py
Co-authored-by: Patrick von Platen <[email protected]>
---------
Co-authored-by: Patrick von Platen <[email protected]>
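For context, a minimal usage sketch of the community Prompt-to-Prompt pipeline referenced above, assuming the `pipeline_prompt2prompt` custom pipeline id and the edit arguments described in the community README; the checkpoint, prompts, and kwargs are illustrative, not taken from this PR.

```python
import torch
from diffusers import DiffusionPipeline

# Load the community Prompt-to-Prompt pipeline (custom_pipeline id assumed).
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="pipeline_prompt2prompt",
    torch_dtype=torch.float16,
).to("cuda")

# Two prompts: the original and the edited one. Edit behaviour is passed via
# cross_attention_kwargs (keys assumed from the community docs).
prompts = ["a cat sitting on a bench", "a dog sitting on a bench"]
outputs = pipe(
    prompt=prompts,
    num_inference_steps=50,
    cross_attention_kwargs={
        "edit_type": "replace",
        "cross_replace_steps": 0.4,
        "self_replace_steps": 0.4,
    },
)
image = outputs.images[0]
```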
* [Release 0.21] Bump version * fix & remove * fix more * fix all, upload
Co-authored-by: yiyixuxu <yixu310@gmail,com>
* Add files via upload Co-authored-by: Shahmatov Arseniy <[email protected]> Co-authored-by: yiyixuxu <yixu310@gmail,com> Co-authored-by: Sayak Paul <[email protected]> Co-authored-by: Steven Liu <[email protected]>
* add refiner only tests * make style
* Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm. * Add docstring for attn_norm_num_groups in UNet2DModel. * Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32. * Add test for attn_norm_num_groups to UNet2DModelTests. * Fix expected slices for slow tests. * Also fix tolerances for slow tests. --------- Co-authored-by: Sayak Paul <[email protected]>
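A small illustration (not from the PR) of the new argument on a toy UNet config; every other value here is a placeholder chosen only so the model builds.

```python
from diffusers import UNet2DModel

# Toy config showing attn_norm_num_groups, which controls the GroupNorm used
# inside the mid-block's Attention layer. Placeholder values throughout.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    resnet_time_scale_shift="scale_shift",
    attn_norm_num_groups=32,
)
```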
* convert tensorrt controlnet
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix number controlnet condition
* Add convert SD XL to onnx
* Add convert SD XL to tensorrt
* Add convert SD XL to tensorrt
* Add examples in comments
* Add examples in comments
* Add test onnx controlnet
* Add tensorrt test
* Remove copied
* Move file test to examples/community
* Remove script
* Remove script
* Remove text
* Fix import
---------
Co-authored-by: dotieuthien <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
* [SDXL, Docs] Textual inversion * Update docs/source/en/using-diffusers/sdxl.md * finish * Apply suggestions from code review Co-authored-by: Steven Liu <[email protected]> --------- Co-authored-by: Steven Liu <[email protected]>
…h compile reliably succeeds (#4982) * Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds * Convert logging statement to a comment for future archaeologists * Update src/diffusers/models/unet_2d_condition.py Co-authored-by: Patrick von Platen <[email protected]> --------- Co-authored-by: bghira <[email protected]> Co-authored-by: Patrick von Platen <[email protected]>
* fix typos in docs * fix for issue #5023
* [LoRA] fix typo in attention_processor.py fixes #5062 * make style * make fix-copies, logger commented for torch compile
* fix * fix num_images_per_prompt >1 * other pipelines * add fast tests for inpaint pipelines --------- Co-authored-by: yiyixuxu <yixu310@gmail,com>
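A hedged example of the code path the fix above exercises (more than one image per prompt in an inpaint pipeline); the checkpoint and demo image URLs are the usual doc assets, used here only for illustration.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Illustrative checkpoint and demo assets, not taken from this PR.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)

# num_images_per_prompt > 1 is the case the fix above addresses.
images = pipe(
    prompt="a yellow cat sitting on a park bench",
    image=init_image,
    mask_image=mask_image,
    num_images_per_prompt=2,
).images
```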
* add doc around fusing multiple loras. * Apply suggestions from code review Co-authored-by: apolinário <[email protected]> * address poli's comments. --------- Co-authored-by: apolinário <[email protected]>
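A rough sketch of the workflow the doc addition above covers, fusing more than one LoRA into a pipeline; the repo ids and weight file names are placeholders for illustration.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load a LoRA and fuse it into the base weights, then repeat for a second one.
# Repo ids and weight names are placeholders.
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")
pipe.fuse_lora()

pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors")
pipe.fuse_lora()

# pipe.unfuse_lora() restores the original weights if needed.
```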
* Implement `CustomDiffusionAttnProcessor2_0`
* Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1)
* Update attnprocessor.md
* Update attention_processor.py
* Interops for `CustomDiffusionAttnProcessor2_0`.
* Formatted `attention_processor.py`.
* Formatted doc-string in `attention_processor.py`
* Conditional CustomDiffusion2_0 for training example.
* Remove unnecessary reference impl in comments.
* Fix `save_attn_procs`.
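The `*2_0` processors build on PyTorch 2.0's fused attention kernel. A minimal, generic illustration of that primitive (not the Custom Diffusion processor itself):

```python
import torch
import torch.nn.functional as F

# PyTorch 2.0 fused attention, the primitive the AttnProcessor2_0 family
# (including CustomDiffusionAttnProcessor2_0) dispatches to.
query = torch.randn(2, 8, 77, 64)  # (batch, heads, sequence, head_dim)
key = torch.randn(2, 8, 77, 64)
value = torch.randn(2, 8, 77, 64)
out = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([2, 8, 77, 64])
```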
* [LoRA] Centralize LoRA tests * [LoRA] Centralize LoRA tests * [LoRA] Centralize LoRA tests * [LoRA] Centralize LoRA tests * [LoRA] Centralize LoRA tests
* convert tensorrt controlnet
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix number controlnet condition
* Add convert SD XL to onnx
* Add convert SD XL to tensorrt
* Add convert SD XL to tensorrt
* Add examples in comments
* Add examples in comments
* Add test onnx controlnet
* Add tensorrt test
* Remove copied
* Move file test to examples/community
* Remove script
* Remove script
* Remove text
* Fix import
* Fix T2I MultiAdapter
* fix tests
---------
Co-authored-by: dotieuthien <[email protected]>
Co-authored-by: dotieuthien <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: dotieuthien <[email protected]>
remove adapter weights in MultiAdapter constructor
* don't break offloading for incompatible lora ckpts. * debugging * better condition. * fix * fix * fix * fix --------- Co-authored-by: Patrick von Platen <[email protected]>
* add support for clip skip
* fix condition
* fix
* add clip_output_layer_to_default
* expose
* remove the previous functions.
* correct condition.
* apply final layer norm
* address feedback
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <[email protected]>
* refactor clip_skip.
* port to the other pipelines.
* fix copies one more time
---------
Co-authored-by: Patrick von Platen <[email protected]>
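A hedged usage sketch of the `clip_skip` call argument introduced above; the checkpoint and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# clip_skip=2 skips the last CLIP text-encoder layer and uses the output of
# the layer before it (with the final layer norm still applied, per the
# commit above).
image = pipe("an astronaut riding a horse", clip_skip=2).images[0]
```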
…g code examples) (#5089) Fixed a `get_word_inds` mistake/typo in the P2P community pipeline. The function `get_word_inds` takes a string of text and either a word (str) or a word index (int) and returns the indices of the token(s) the word is encoded to. However, there was a typo: the second `if` branch checked whether the word was a `str` **again** instead of an `int`, which caused the [example code from the docs](https://github.com/huggingface/diffusers/tree/main/examples/community#prompt2prompt-pipeline) to fail with an error.
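A simplified, hypothetical sketch of the corrected branching (the real helper additionally maps the word positions onto the tokenizer's sub-word indices):

```python
from typing import List, Union

def get_word_inds_sketch(text: str, word_place: Union[str, int]) -> List[int]:
    """Simplified stand-in for `get_word_inds`: resolve a word (str) or a word
    index (int) to word positions. The buggy version tested for str in both
    branches, so passing an int silently matched nothing."""
    words = text.split(" ")
    if isinstance(word_place, str):
        return [i for i, w in enumerate(words) if w == word_place]
    elif isinstance(word_place, int):  # the branch that previously re-checked str
        return [word_place]
    return []
```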
* [SDXL] Make sure multi batch prompt embeds works * [SDXL] Make sure multi batch prompt embeds works * improve more * improve more * Apply suggestions from code review
* core: add support for clip ckip to SDXL * add clip_skip support to the rest of the pipeline. * Empty-Commit
* fix failing tests * make style
* move slow tests to nightly * move slow tests to nightly
* wip * fix tests
* fix test * initial commit * change test * updates: * fix tests * test fix * test fix * fix tests * make test faster * clean up * fix precision in test * fix precision * Fix tests * Fix logging test * fix test * fix test --------- Co-authored-by: Patrick von Platen <[email protected]>
#5095) Co-authored-by: bghira <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
…5096) * Resolve v_prediction issue for min-SNR gamma weighted loss function * Combine MSE loss calculation of epsilon and velocity, with a note about the application of the epsilon code to sample prediction * style --------- Co-authored-by: bghira <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
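A hedged sketch of the weighting described above: the min-SNR-gamma cap divided by SNR for epsilon prediction, and by SNR + 1 for v-prediction (function name and shapes are illustrative, not the training-script code verbatim).

```python
import torch

def min_snr_gamma_weights(snr: torch.Tensor, snr_gamma: float, prediction_type: str) -> torch.Tensor:
    """Per-timestep MSE loss weights under min-SNR-gamma. `snr` holds the
    signal-to-noise ratio of each sampled timestep."""
    capped = torch.minimum(snr, torch.full_like(snr, snr_gamma))
    if prediction_type == "v_prediction":
        # Velocity objective: the effective weight divides by SNR + 1.
        return capped / (snr + 1.0)
    # Epsilon (and, per the note above, sample) prediction divides by SNR.
    return capped / snr
```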
--------- Co-authored-by: yiyixuxu <yixu310@gmail,com> Co-authored-by: Patrick von Platen <[email protected]>
* better condition. * debugging * how about now? * how about now? * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * support for lycoris. * style * add: lycoris test * fix from_pretrained call. * fix assertion values.
Co-authored-by: bghira <[email protected]>
* Add BLIP Diffusion skeleton
* Add other model components
* Add BLIP2, need to change it for now
* Fix pipeline imports
* Load pretrained ViT
* Make qformer fwd pass same
* Replicate fwd passes
* Fix device bug
* Add accelerate functions
* Remove extra functions from Blip2
* Minor bug
* Integrate initial review changes
* Refactoring
* Refactoring
* Refactor
* Add controlnet
* Refactor
* Update conversion script
* Add image processor
* Shift postprocessing to ImageProcessor
* Refactor
* Fix device
* Add fast tests
* Update conversion script
* Fix checkpoint conversion script
* Integrate review changes
* Integrate review changes
* Remove unused functions from test
* Reuse HF image processor in Cond image
* Create new BlipImageProcessor based on transformers
* Fix image preprocessor
* Minor
* Minor
* Add canny preprocessing
* Fix controlnet preprocessing
* Fix blip diffusion test
* Add controlnet test
* Add initial doc strings
* Integrate review changes
* Refactor
* Update examples
* Remove DDIM comments
* Add copied from for prepare_latents
* Add type annotations
* Add docstrings
* Do black formatting
* Add batch support
* Make tests pass
* Make controlnet tests pass
* Black formatting
* Fix progress bar
* Fix some licensing comments
* Fix imports
* Refactor controlnet
* Make tests faster
* Edit examples
* Black formatting/Ruff
* Add doc
* Minor
Co-authored-by: Patrick von Platen <[email protected]>
* Move controlnet pipeline
* Make tests faster
* Fix imports
* Fix formatting
* Fix make errors
* Fix make errors
* Minor
* Add suggested doc changes
Co-authored-by: Sayak Paul <[email protected]>
* Edit docs
* Fix 16 bit loading
* Update examples
* Edit toctree
* Update docs/source/en/api/pipelines/blip_diffusion.md
Co-authored-by: Sayak Paul <[email protected]>
* Minor
* Add tips
* Edit examples
* Update model paths
---------
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
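A rough usage sketch of the new pipeline, assuming the `Salesforce/blipdiffusion` checkpoint and the argument order used in the docs added here; treat the image URL, prompts, and argument names as assumptions rather than the PR's exact example.

```python
import torch
from diffusers.pipelines import BlipDiffusionPipeline
from diffusers.utils import load_image

pipe = BlipDiffusionPipeline.from_pretrained(
    "Salesforce/blipdiffusion", torch_dtype=torch.float16
).to("cuda")

# A reference image of the source subject (example asset, not from this PR).
reference = load_image(
    "https://huggingface.co/datasets/ayushtues/blipdiffusion_images/resolve/main/dog.jpg"
)

images = pipe(
    "swimming underwater",  # text prompt
    reference,              # reference image of the source subject
    "dog",                  # source subject category
    "dog",                  # target subject category
    guidance_scale=7.5,
    num_inference_steps=25,
    neg_prompt="low quality, blurry, cropped",
).images
```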
* min-SNR gamma for Dreambooth training * Align the mse_loss_weights style with SDXL training example --------- Co-authored-by: bghira <[email protected]> Co-authored-by: Sayak Paul <[email protected]>
fix and add tests Co-authored-by: yiyixuxu <yixu310@gmail,com>
…alDiscreteScheduler` (#5111) --------- Co-authored-by: yiyixuxu <yixu310@gmail,com>
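A generic illustration of using the scheduler touched above by swapping it into an existing pipeline; the checkpoint is a placeholder.

```python
from diffusers import EulerAncestralDiscreteScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Re-create the scheduler from the pipeline's existing scheduler config.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
```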
…5144) Add missing parenthesis in the sample code of BLIP Diffusion