A text prompt weighting and blending library for transformers-type text embedding systems, by @damian0815.
With a flexible and intuitive syntax, you can re-weight different parts of a prompt string and thus re-weight the different parts of the embedding tensor produced from the string.
Tested and developed against Hugging Face's StableDiffusionPipeline, but it should work with any diffusers-based system that uses a Tokenizer and a Text Encoder of some kind.
Adapted from the InvokeAI prompting code (also by @damian0815). For now, the syntax is fully documented here.
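To give a quick flavour of the syntax (the weights below are arbitrary examples; see the syntax documentation for the full set of operators):

```python
# illustrative prompt strings - weights shown are arbitrary examples
"a cat playing with a ball++ in the forest"          # each '+' upweights the preceding token
"a cat playing with a ball-- in the forest"          # each '-' downweights it
"a cat playing with a (big red ball)1.4 in the forest"    # explicit weight for a phrase
'("a cat playing with a ball", "a forest").blend(0.7, 0.3)'  # blend two prompts
```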
Note that the cross-attention control syntax `.swap()` is currently ignored by Compel, but you can use it by calling `build_conditioning_tensor_for_prompt_object()` yourself and implementing cross-attention control in your diffusion loop.
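As a rough sketch (the `conjunction.prompts[0]` access below is an assumption about the parsed structure, not a documented API):

```python
# parse a prompt containing a .swap() directive, then build conditioning for it;
# cross-attention control for the swap must be applied in your own diffusion loop
conjunction = compel.parse_prompt_string("a cat.swap(dog) playing with a ball in the forest")
prompt_object = conjunction.prompts[0]  # assumes a single-prompt conjunction
conditioning = compel.build_conditioning_tensor_for_prompt_object(prompt_object)
```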
```bash
pip install compel
```
With Hugging Face diffusers >= 0.12:

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

# upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])

# generate image
images = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images
images[0].save("image.jpg")
```
For batched input, use the call interface of the `compel` instance:

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)

prompts = ["a cat playing with a ball++ in the forest", "a dog playing with a ball in the forest"]
prompt_embeds = compel(prompts)
images = pipeline(prompt_embeds=prompt_embeds).images

images[0].save("image0.jpg")
images[1].save("image1.jpg")
```
`Compel.parse_prompt_string()` now returns a `Conjunction`. Any appearances of `withLora(name[, weight])` or `useLora(name[, weight])` anywhere in the prompt string will be parsed to `LoraWeight` instances and returned on the outermost `Conjunction` returned by `parse_prompt_string()` (see the sketch below).
Also fixes the test case for default `.swap()` parameters.
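A hedged sketch of how the parsed LoRA references might be consumed (the `lora_weights`, `model`, and `weight` field names are assumptions for illustration, not the exact API):

```python
# parse a prompt containing a LoRA reference; the reference is stripped from the
# text and surfaced on the returned Conjunction
conjunction = compel.parse_prompt_string("a cat playing with a ball withLora(cat_lora, 0.8)")
for lora_weight in conjunction.lora_weights:  # field name assumed
    print(lora_weight.model, lora_weight.weight)  # hand these to your LoRA loader
```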
1.0.3 - better defaults for .swap (damian0815#8)
1.0.2 - fix padding for non-truncated batched embeddings (damian0815#9)
Downweighting now works by applying an attention mask to remove the downweighted tokens, rather than literally removing them from the sequence. This behaviour is the default, but the old behaviour can be re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of the `Compel` instance.
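For example (a minimal sketch, assuming `DownweightMode` is importable from the top-level `compel` package):

```python
from compel import Compel, DownweightMode

# re-enable the pre-1.0 behaviour of physically removing downweighted tokens
compel = Compel(tokenizer=pipeline.tokenizer,
                text_encoder=pipeline.text_encoder,
                downweight_mode=DownweightMode.REMOVE)
```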
Formerly, downweighting a token worked by both multiplying the weighting of the token's embedding and doing an inverse-weighted blend with a copy of the token sequence that had the downweighted tokens removed. The intuition is that as the weight approaches zero, the downweighted tokens should actually be removed from the sequence. However, removing the tokens shifted the positions of all downstream tokens, so the blend ended up blending a lot more than just the tokens in question.
As of v1.0.0, taking advice from @keturn and @bonlime (damian0815#7), the procedure is different by default. Downweighting still involves a blend, but what is blended is a version of the token sequence with the downweighted tokens masked out rather than removed. This correctly preserves the position embeddings of the other tokens.
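To illustrate the positioning problem with a toy example (this is not Compel's actual implementation):

```python
import torch

# four token embeddings; the token at index 2 ("ball") is downweighted to zero
embeddings = torch.randn(4, 768)
keep = torch.tensor([True, True, False, True])

# masked variant (new default): "ball" contributes nothing, but the token after
# it stays at position 3, so position embeddings still line up for a blend
masked = embeddings * keep.unsqueeze(-1)

# removed variant (old behaviour): every downstream token shifts left by one,
# so blending against the original sequence mixes mismatched tokens
removed = embeddings[keep]
```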
Also a bugfix: fix black images on weight 0 (invoke-ai/InvokeAI#2832)
To enable support for prompts longer than the model's maximum token length, initialize `Compel` with `truncate_long_prompts=False` (the default is `True`). Prompts longer than the model's `max_token_length` will be chunked and padded out to an integer multiple of `max_token_length`.
Note that even if you don't use a negative prompt, you'll need to build a conditioning tensor for a negative prompt of at least `""`, and use `compel.pad_conditioning_tensors_to_same_length()`; otherwise you'll get an error about mismatched conditioning tensor lengths:
```python
compel = Compel(..., truncate_long_prompts=False)

prompt = "a cat playing with a ball++ in the forest, amazing, exquisite, stunning, masterpiece, skilled, powerful, incredible, amazing, trending on gregstation, greg, greggy, greggs greggson, greggy mcgregface, ..." # very long prompt
conditioning = compel.build_conditioning_tensor(prompt)
negative_prompt = "" # it's necessary to create an empty prompt - it can also be very long, if you want
negative_conditioning = compel.build_conditioning_tensor(negative_prompt)
[conditioning, negative_conditioning] = compel.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])
```
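The padded tensors can then be passed to the pipeline together (assuming a diffusers pipeline that accepts `negative_prompt_embeds`):

```python
images = pipeline(prompt_embeds=conditioning,
                  negative_prompt_embeds=negative_conditioning,
                  num_inference_steps=20).images
```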