Merge changes #123

Merged: 31 commits, Nov 10, 2023
Changes from 1 commit
Commits (31)
6a89a6c
Update custom diffusion attn processor (#5663)
DN6 Nov 7, 2023
71f56c7
Model tests xformers fixes (#5679)
DN6 Nov 7, 2023
8ca179a
Update free model hooks (#5680)
DN6 Nov 7, 2023
414d7c4
Fix Basic Transformer Block (#5683)
DN6 Nov 7, 2023
97c8199
Explicit torch/flax dependency check (#5673)
DN6 Nov 7, 2023
a8523bf
[PixArt-Alpha] fix `mask_feature` so that precomputed embeddings work…
sayakpaul Nov 7, 2023
84cd9e8
Make sure DDPM and `diffusers` can be used without Transformers (#5668)
sayakpaul Nov 7, 2023
1dc231d
[PixArt-Alpha] Support non-square images (#5672)
sayakpaul Nov 7, 2023
aab6de2
Improve LCMScheduler (#5681)
dg845 Nov 7, 2023
7942bb8
[`Docs`] Fix typos, improve, update at Using Diffusers' Task page (#5…
tolgacangoz Nov 7, 2023
9ae9059
Replacing the nn.Mish activation function with a get_activation funct…
hi-sushanta Nov 8, 2023
6999693
speed up Shap-E fast test (#5686)
yiyixuxu Nov 8, 2023
11c1256
Fix the misaligned pipeline usage in dreamshaper docstrings (#5700)
kirill-fedyanin Nov 8, 2023
d384265
Fixed is_safetensors_compatible() handling of windows path separators…
PhilLab Nov 8, 2023
c803a8f
[LCM] Fix img2img (#5698)
patrickvonplaten Nov 8, 2023
78be400
[PixArt-Alpha] fix mask feature condition. (#5695)
sayakpaul Nov 8, 2023
17528af
Fix styling issues (#5699)
patrickvonplaten Nov 8, 2023
6e68c71
Add adapter fusing + PEFT to the docs (#5662)
apolinario Nov 8, 2023
65ef7a0
Fix prompt bug in AnimateDiff (#5702)
DN6 Nov 8, 2023
6110d7c
[Bugfix] fix error of peft lora when xformers enabled (#5697)
okotaku Nov 8, 2023
43346ad
Install accelerate from PyPI in PR test runner (#5721)
DN6 Nov 9, 2023
2fd4640
consistency decoder (#5694)
williamberman Nov 9, 2023
bf406ea
Correct consist dec (#5722)
patrickvonplaten Nov 9, 2023
3d7eaf8
LCM Add Tests (#5707)
patrickvonplaten Nov 9, 2023
bc2ba00
[LCM] add: locm docs. (#5723)
sayakpaul Nov 9, 2023
db2d8e7
Add LCM Scripts (#5727)
patil-suraj Nov 9, 2023
53a8439
[`Docs`] Fix typos and update files at Optimization Page (#5674)
tolgacangoz Nov 9, 2023
1328aeb
[Docs] Clarify that these are two separate examples (#5734)
up_the_irons Nov 9, 2023
77ba494
[ConsistencyDecoder] fix: doc type (#5745)
sayakpaul Nov 10, 2023
1f87f83
add load_datasete data_dir parameter (#5747)
aihao2000 Nov 10, 2023
1477865
post release v0.23.0 (#5730)
sayakpaul Nov 10, 2023
[Docs] Clarify that these are two separate examples (huggingface#5734)
* [Docs] Running the pipeline twice does not appear to be the intention of these examples

One example uses `cross_attention_kwargs` and the other (the next line) omits it

* [Docs] Clarify that these are two separate examples

One uses `scale` and the other does not
up_the_irons authored Nov 9, 2023
commit 1328aeb274610f492c10a246ffba0bc4de8f689b
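
For context, here is a minimal, self-contained sketch of the two usage patterns this commit separates: passing `cross_attention_kwargs={"scale": ...}` blends the LoRA weights with the base model, while omitting the argument uses the fully finetuned LoRA weights. The base model ID, LoRA path, and prompt are illustrative placeholders, not values taken from this PR; the actual documentation change appears in the diff below.

```py
import torch
from diffusers import StableDiffusionPipeline

# Illustrative placeholders: any Stable Diffusion base checkpoint and a
# directory of LoRA attention-processor weights work the same way.
base_model_id = "runwayml/stable-diffusion-v1-5"
lora_model_path = "./pokemon-lora"

pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
pipe.unet.load_attn_procs(lora_model_path)
pipe.to("cuda")

# Example 1: blend the LoRA and base weights; `scale` controls how much of the
# LoRA contribution is mixed in (0.0 = base model only, 1.0 = full LoRA).
image = pipe(
    "A pokemon with blue eyes.",
    num_inference_steps=25,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.5},
).images[0]

# Example 2 (run *instead of* Example 1, not after it): omit
# `cross_attention_kwargs` to use the fully finetuned LoRA weights.
# image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0]

image.save("blue_pokemon.png")
```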
docs/source/en/training/lora.md: 14 changes (8 additions, 6 deletions)
@@ -113,14 +113,15 @@ Load the LoRA weights from your finetuned model *on top of the base model weight
 ```py
 >>> pipe.unet.load_attn_procs(lora_model_path)
 >>> pipe.to("cuda")
-# use half the weights from the LoRA finetuned model and half the weights from the base model
+
+# use half the weights from the LoRA finetuned model and half the weights from the base model
 >>> image = pipe(
 ... "A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5, cross_attention_kwargs={"scale": 0.5}
 ... ).images[0]
-# use the weights from the fully finetuned LoRA model
->>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
+# OR, use the weights from the fully finetuned LoRA model
+# >>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
 >>> image.save("blue_pokemon.png")
 ```

@@ -225,17 +226,18 @@ Load the LoRA weights from your finetuned DreamBooth model *on top of the base m
 ```py
 >>> pipe.unet.load_attn_procs(lora_model_path)
 >>> pipe.to("cuda")
-# use half the weights from the LoRA finetuned model and half the weights from the base model
+
+# use half the weights from the LoRA finetuned model and half the weights from the base model
 >>> image = pipe(
 ... "A picture of a sks dog in a bucket.",
 ... num_inference_steps=25,
 ... guidance_scale=7.5,
 ... cross_attention_kwargs={"scale": 0.5},
 ... ).images[0]
-# use the weights from the fully finetuned LoRA model
->>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
+# OR, use the weights from the fully finetuned LoRA model
+# >>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
 >>> image.save("bucket-dog.png")
 ```