
Update lora_conversion_utils.py #9980

Open · wants to merge 2 commits into main
Conversation

@zhaowendao30 commented Nov 21, 2024

x-flux single-blocks lora load

What does this PR do?

Fixes # (issue)

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

x-flux single-blocks lora load
@zhaowendao30
Author

I checked the x-flux code; the single-block LoRA should be loaded onto the qkv attention projection, not the norm. The two just happen to have the same shape.
(screenshots of the relevant x-flux code)
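
For reference, a minimal sketch of the intended remapping (key names and shapes are assumed here; this is not the actual `lora_conversion_utils.py` code): the fused `qkv_lora` up matrix is split across `to_q`/`to_k`/`to_v` and the down matrix is shared, whereas the norm linear only matches by shape.

```python
import torch

# Hypothetical sketch of the mapping this PR points at (key names assumed,
# not the actual diffusers implementation): the x-flux single-block
# "qkv_lora" weights target the fused q/k/v projection, so the up (B)
# matrix is split across to_q / to_k / to_v and the down (A) matrix is
# shared. Mapping them onto the norm linear only "works" because the
# shapes coincide.
def convert_single_block_qkv(old_sd, new_sd, block_idx):
    down = old_sd[f"single_blocks.{block_idx}.processor.qkv_lora.down.weight"]  # (rank, hidden)
    up = old_sd[f"single_blocks.{block_idx}.processor.qkv_lora.up.weight"]      # (3 * hidden, rank)
    prefix = f"transformer.single_transformer_blocks.{block_idx}.attn"
    for proj, up_chunk in zip(("to_q", "to_k", "to_v"), torch.chunk(up, 3, dim=0)):
        new_sd[f"{prefix}.{proj}.lora_A.weight"] = down      # shared A matrix
        new_sd[f"{prefix}.{proj}.lora_B.weight"] = up_chunk  # per-projection B slice
```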

@sayakpaul
Member

@raulmosa could you give this a look?

@raulmosa
Contributor

I've checked it and it's right; it shouldn't be norm. Looks good to me, @sayakpaul.
Nice catch @zhaowendao30, thanks! =)

@sayakpaul
Member

@zhaowendao30 thanks for your contributions!

Could you also do a side-by-side comparison of the outputs with and without your changes applied? That would be very much appreciated.
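
Something along these lines would work for the comparison (a rough sketch; the base checkpoint and LoRA path are placeholders, not taken from this PR):

```python
import torch
from diffusers import FluxPipeline

# Rough sketch: generate the same prompt/seed with and without the LoRA.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "a portrait photo"
seed = 0

# Baseline, no LoRA loaded.
baseline = pipe(prompt, generator=torch.manual_seed(seed)).images[0]
baseline.save("no_lora.png")

# Same prompt and seed with the x-flux single-blocks LoRA applied
# (placeholder path).
pipe.load_lora_weights("path/to/xflux_single_blocks_lora.safetensors")
with_lora = pipe(prompt, generator=torch.manual_seed(seed)).images[0]
with_lora.save("with_lora.png")
```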

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@zhaowendao30
Author

> I've checked it and it's right; it shouldn't be norm. Looks good to me, @sayakpaul. Nice catch @zhaowendao30, thanks! =)

No, thank you! =)

@zhaowendao30
Author

zhaowendao30 commented Nov 22, 2024

> @zhaowendao30 thanks for your contributions!
>
> Could you also do a side-by-side comparison of the outputs with and without your changes applied? That would be very much appreciated.

OK. For the single blocks I only trained indices 1, 2, 3, and 4. The first image is with the LoRA loaded into qkv, the second is with it loaded into norm, and the last is without the LoRA.
(comparison images: LoRA loaded into qkv / LoRA loaded into norm / no LoRA)

@sayakpaul
Member

@zhaowendao30 thanks, but please avoid using human subjects in public forums.

I will run the tests today and update the slices as needed because of the change.

@zhaowendao30
Author

> @zhaowendao30 thanks, but please avoid using human subjects in public forums.
>
> I will run the tests today and update the slices as needed because of the change.

OK, I've uploaded them again. However, this LoRA was trained on people, so the panda looks a bit strange. =)

@sayakpaul
Member

Just ran

pytest tests/lora/ -k "test_flux_xlabs"

test_flux_xlabs is passing and test_flux_xlabs_load_lora_with_single_blocks is failing because of a hardware change, which is expected. I will update the slices in https://github.com/huggingface/diffusers/pull/9845/files.

Member

@sayakpaul left a comment


Thanks!
