Hello, I am using uvcgan2 to give the images in set A the style of set B while preserving their original content.
However, I am currently running into a problem: the generated `fake_b.png` completely loses the content of `real_a.png` and instead closely resembles one of the images in the B training set.
What could cause this, and how can it be resolved? Any help is appreciated, thank you!
It is hard to tell without knowing the details of the dataset. My guess is that the following factors may be involved:

1. Insufficient weight on the cycle-consistency loss. To check this, examine the `reco_a` image. If `reco_a` looks like `real_a`, then this part is fine. Otherwise, try increasing the `lambda_a` and `lambda_b` hyperparameters (see the loss sketch after this list).
2. "Unaligned" datasets. If the two domains differ significantly in where the objects are located, this can also contribute. For instance, if one tries to perform Male <-> Female translation, but the Male faces occupy the top half of each image while the Female faces occupy the bottom half, the resulting translation will fail to match Male <-> Female faces and will produce essentially random outputs. If your dataset has similar properties, you would need to add data augmentations to compensate (see the augmentation sketch below).