Inference of Text-Driven Stylized Synthesis #13

Open
Podidiving opened this issue Oct 19, 2024 · 1 comment

@Podidiving

👋
Thanks for the great job 👏
I have 2 questions:

  1. It's not quite clear from the paper what Text-Driven Stylized Synthesis means. As far as I understand, it's essentially the same as applying an IP-Adapter to the style blocks of the UNet (the only difference being that you have your own adapter model, trained specifically for this task). Am I right or not? ControlNet is not used at all in this case.
  2. In the notebook, I assume the cat example (cell 15) is Text-Driven Stylized Synthesis. I'm confused, though: why do you still use a content image to generate it (even with a very small controlnet_conditioning_scale)? Moreover, why does the CSGO.generate method take pil_content_image as a required parameter? The idea is to condition only on the style image and the text, with no content image provided. If that's not the case, then why do we need a content image at all?
@xingp-ng
Member


Thanks.
(1) Text-driven stylized synthesis produces a result driven by the text and the style image; the content is controlled by the text.
(2) In the example given, controlnet_conditioning_scale is set to a very small value, which effectively prevents ControlNet from having any effect. In theory, you could discard ControlNet entirely when performing Text-Driven Stylized Synthesis.
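To illustrate why a tiny controlnet_conditioning_scale effectively disables the content branch: in ControlNet-style pipelines, the control branch's residuals are multiplied by this scale before being added back into the UNet features, so a near-zero scale leaves the UNet (and the style/text conditioning) essentially unchanged. This is a minimal conceptual sketch with toy numbers; the function and variable names here are illustrative, not the actual CSGO or diffusers API.

```python
# Illustrative sketch (hypothetical names, not the CSGO API): shows how
# controlnet_conditioning_scale gates the content image's influence.

def merge_controlnet(unet_features, controlnet_residuals, conditioning_scale):
    """ControlNet residuals are scaled before being added to the UNet
    features; a scale near 0 removes the content image's contribution."""
    return [f + conditioning_scale * r
            for f, r in zip(unet_features, controlnet_residuals)]

unet_features = [0.5, -1.2, 0.8]       # toy per-block UNet outputs
content_residuals = [2.0, 3.0, -1.0]   # toy ControlNet (content) residuals

# With a very small scale, as in the notebook, the result stays close
# to the UNet features alone, so the content image barely matters.
out = merge_controlnet(unet_features, content_residuals, 0.01)
print(out)
```

With conditioning_scale=0.0 the content image has no effect at all, which is why, in theory, ControlNet (and the content image) could be dropped entirely for text-driven synthesis.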
