Scale swapped face before ....? #33
Comments
Not sure if I get what you mean. Can you provide an example?
Thanks for your reply. Depending on the target video and the source image, I get a result similar to the picture above. When manually swapping a face in graphics software by cutting it out and dragging it over the other face, I can resize the overlay as a whole until it fits best. Can this be done by changing some parameter in your code? (I'm not familiar with Python; all the changes I've made so far involved a lot of searching the internet.) Or am I completely wrong in my understanding of how it works?
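A minimal sketch of that "resize the overlay as a whole" idea, assuming you already have the swapped-face image and a mask of the swapped region as NumPy arrays; the function and variable names here are illustrative, not part of the repository's scripts:

```python
import cv2
import numpy as np

def paste_scaled_face(target_frame, swapped_face, face_mask, scale=0.9):
    """Scale the swapped-face overlay around the centre of its mask before
    compositing it onto the target frame -- the programmatic equivalent of
    resizing a cut-out face in an image editor.

    Assumed inputs (not the repository's actual interface):
      target_frame, swapped_face: H x W x 3 uint8 images, already aligned
      face_mask: H x W float mask in [0, 1] marking the swapped region
    """
    h, w = face_mask.shape[:2]
    ys, xs = np.nonzero(face_mask > 0.5)
    if len(xs) == 0:
        return target_frame
    cx, cy = float(xs.mean()), float(ys.mean())  # centre of the swapped region

    # Affine transform that scales around (cx, cy) without moving that centre.
    m = cv2.getRotationMatrix2D((cx, cy), 0.0, scale)
    face_scaled = cv2.warpAffine(swapped_face, m, (w, h))
    mask_scaled = cv2.warpAffine(face_mask.astype(np.float32), m, (w, h))

    # Alpha-blend the rescaled overlay onto the target frame.
    alpha = mask_scaled[..., None]
    out = alpha * face_scaled.astype(np.float32) + (1.0 - alpha) * target_frame.astype(np.float32)
    return out.astype(np.uint8)
```

A scale slightly below 1.0 shrinks the pasted face; whether that removes the doubled-ear artefact depends on how well the mask matches the underlying face.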
Have you tried supervised segmentation?
Yes. Same result or even worse.
I think you can try to use the occlusion mask. Let's say the driving mask is Md and the deformed source mask is Ms. You can try to zero out the pixels in the occlusion mask which correspond to Md / Ms (the blue border in your example).
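Reading Md / Ms as the set difference (pixels covered by the driving mask but not by the deformed source mask), a minimal sketch of that zeroing step could look like this; the tensor shapes and the 0.5 threshold are assumptions, not the repository's actual interface:

```python
import torch

def suppress_unmatched_region(occlusion_map, driving_mask, source_mask_deformed, thresh=0.5):
    """Zero out occlusion-map pixels that lie inside the driving mask (Md)
    but outside the deformed source mask (Ms), i.e. the part of Md that Ms
    does not cover (the blue border).

    Assumed shapes: all tensors are (B, 1, H, W) with values in [0, 1].
    """
    md = driving_mask > thresh            # pixels covered by the driving face part
    ms = source_mask_deformed > thresh    # pixels covered by the warped source face part
    unmatched = md & ~ms                  # Md \ Ms: driving region with no source coverage
    return occlusion_map * (~unmatched).float()
```

The idea is that forcing the occlusion map to zero in that region makes the generator fill it in from context instead of stretching the warped source face over it.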
OK. Looks like it's not that simple to do. Thank you so far.
Great work.
I've installed it on my local machine and have written a GUI for the face swap and also for the first order motion model.
For this I had to modify some of the Python scripts to pass all the extra parameters.
My question:
Would it be possible to rescale the swapped face from the source image before it is "merged" into the target video?
Problem:
Depending on the face in the source image, it sometimes does not fit correctly.
For example, the results then have four ears or the mouth is not at the expected position.
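One way to experiment with the rescaling question, independent of the repository's code, is to zoom the source image in or out before running the swap, so the face occupies roughly the same share of the frame as the face in the target video. A minimal sketch; the scale factor is a manual guess, not an existing parameter of the scripts:

```python
import cv2

def rescale_source_image(source_image, scale=1.2):
    """Zoom the source image around its centre while keeping its resolution,
    so the face fills more (scale > 1) or less (scale < 1) of the frame
    before the image is fed to the face-swap pipeline.
    """
    h, w = source_image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 0.0, scale)
    return cv2.warpAffine(source_image, m, (w, h),
                          flags=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_REPLICATE)
```

Saving the rescaled image and passing it as the source should behave roughly like resizing the cut-out face by hand, just applied before the model runs instead of after.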