
Scale swapped face before ....? #33

Open
instant-high opened this issue Jan 11, 2021 · 6 comments


instant-high commented Jan 11, 2021

[image: GUI screenshot]
Great work.
I've installed it on my local machine and have written a GUI for the face swap and also the first order motion model.
For this I had to modify some of the Python scripts to pass all the extra parameters.

My question:
Would it be possible to rescale the swapped face from the source image before it is "merged" into the target video?

Problem:
Depending on the face in the source image, it sometimes does not fit correctly.
For example, the result then has four ears, or the mouth is not at the expected position.

@AliaksandrSiarohin (Owner)

Not sure if I get what you mean. Can you provide an example?

@instant-high (Author)

Thanks for your reply.
Since I have deleted most of my test videos and images, I made a simple graphic to show what I mean.

[image: image1]

Depending on the target video and the source image, I get a result similar to the picture above.
The swapped face (e.g. 5 segments / segments 1, 2, 5) seems to be a bit too small.
(Using 10 or 15 segments gives the same result.)

When manually swapping a face in graphics software, by cutting it out and dragging it over the other face, I can resize the overlay as a whole until it fits best.

Can this be done by changing some parameter in your code?
If so, in which part of your code?

(I'm not familiar with Python; all the changes I've made so far came from a lot of searching the internet.)

Or am I completely wrong in my understanding of how it works?
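
For illustration, this is roughly what I mean, written as a plain OpenCV/NumPy sketch with made-up variable names (not code from this repo): scale the pasted face and its mask about their center before blending.

```python
import cv2
import numpy as np

def paste_rescaled(frame, overlay, mask, scale=1.1):
    """Blend `overlay` onto `frame`, enlarging it by `scale` first.
    `overlay`: swapped-face image aligned to `frame` (same size, uint8).
    `mask`: float mask in [0, 1], same height/width, 1 where the face is."""
    h, w = frame.shape[:2]
    # Center of the face region
    ys, xs = np.nonzero(mask > 0.5)
    cy, cx = float(ys.mean()), float(xs.mean())
    # Scale overlay and mask about that center (no rotation)
    M = cv2.getRotationMatrix2D((cx, cy), 0, scale)
    overlay_s = cv2.warpAffine(overlay, M, (w, h))
    mask_s = cv2.warpAffine(mask.astype(np.float32), M, (w, h))
    # Feather the edge a little, then alpha-blend
    mask_s = cv2.GaussianBlur(mask_s, (15, 15), 0)[..., None]
    out = frame * (1.0 - mask_s) + overlay_s * mask_s
    return out.astype(frame.dtype)
```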

@AliaksandrSiarohin (Owner)

Have you tried supervised segmentation?

@instant-high (Author)

Yes. Same result, or even worse.
A workaround for me is to do the swap manually for one frame and use the result as the source image for part swapping or animation.

@AliaksandrSiarohin (Owner)

I think you can try to use the occlusion mask. Let's say the driving mask is Md and the deformed source mask is Ms. You can try to zero out the pixels in the occlusion mask which correspond to Md / Ms (the blue border in your example).
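
Something like this, as a rough sketch (tensor names and shapes are assumed here, masks of shape (B, 1, H, W) with values in [0, 1]; the actual variables in the code may be named differently):

```python
import torch

def zero_uncovered_occlusion(occlusion_mask, driving_mask, deformed_source_mask, thr=0.5):
    # Md / Ms: pixels inside the driving-face mask that the deformed source
    # mask does not cover -- the blue border in the picture above.
    md = (driving_mask > thr).float()
    ms = (deformed_source_mask > thr).float()
    uncovered = md * (1.0 - ms)
    # Zero the occlusion mask on that border region.
    return occlusion_mask * (1.0 - uncovered)
```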

@instant-high (Author)

OK. Looks like it's not that simple to do.
For now I've added a function that takes a screenshot of the best-matching frame of the target video, to use as the source image for the part swap.
I think the problem occurred because of the bad quality (low contrast) of the driving video.
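
Roughly like this (a plain OpenCV sketch; `frame_index` is just the frame I pick by eye as the best match):

```python
import cv2

def grab_frame(video_path, frame_index, out_path="part_swap_source.png"):
    # Save a single frame of the target video to disk for use as the
    # part-swap / animation source image.
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"Could not read frame {frame_index} from {video_path}")
    cv2.imwrite(out_path, frame)
    return out_path
```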

Thank you so far
