Code for the Transfer-Learning competition.
We explore metrics in the re-labelling folder, then relabel the FFHQ dataset (in the data subfolder), and finally perform a metrics analysis.
Here are statistics on the first 1000 images of the FFHQ dataset:
We calculate continuous deformations of the images to make the desired transformations:
- first, for each keypoint, we define the desired translation
- then, we interpolate between the keypoints
- finally, we smooth out the translation map
(Inverting this map gives the opposite transformation)
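The three steps above (per-keypoint translations, interpolation between keypoints, smoothing) can be sketched as follows. This is a minimal illustration, not the competition code: the keypoint positions, the border anchoring, and the smoothing `sigma` are all assumptions, and SciPy is used for the interpolation and warping.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter, map_coordinates

def build_flow(shape, keypoints, displacements, sigma=15):
    """Interpolate sparse keypoint displacements into a dense, smooth flow.

    keypoints: list of (y, x); displacements: matching list of (dy, dx).
    """
    h, w = shape
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # anchor the image corners with zero displacement so the flow fades out
    border = [(0.0, 0.0), (0.0, w - 1.0), (h - 1.0, 0.0), (h - 1.0, w - 1.0)]
    pts = np.array(list(keypoints) + border, dtype=float)
    disp = np.array(list(displacements) + [(0.0, 0.0)] * len(border), dtype=float)
    dy = griddata(pts, disp[:, 0], (grid_y, grid_x), method="linear", fill_value=0.0)
    dx = griddata(pts, disp[:, 1], (grid_y, grid_x), method="linear", fill_value=0.0)
    # smooth out the translation map
    return gaussian_filter(dy, sigma), gaussian_filter(dx, sigma)

def warp(img, dy, dx):
    """Backward-warp the image: sample the source at (y + dy, x + dx)."""
    h, w = img.shape[:2]
    gy, gx = np.mgrid[0:h, 0:w]
    coords = [gy + dy, gx + dx]
    return np.stack(
        [map_coordinates(img[..., c], coords, order=1, mode="nearest")
         for c in range(img.shape[2])],
        axis=-1,
    )
```

Negating `dy` and `dx` before warping approximates the inverse map mentioned above (exact inversion would require a fixed-point iteration).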
For the nose, we have 4 possible transformations:
We use a similar technique to make large and small lips:
we found that when the mouth is open, this usually does not work as well.
And again to make round or narrow eyes:
To change the skin tone, we create a mask of the skin (using RGB conditions):
note that we used RGB conditions, but HSV conditions might work better...
using this mask, we estimate the person's skin tone; we then apply the corresponding transform in the HSV colorspace:
note that we only transform the V and S values; this gives better results, and can be interpreted as "the H value is a characteristic of the person, while the V & S values correspond to their skin tone"
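A minimal sketch of this idea follows. It is not the competition code: the RGB thresholds are one common skin-detection heuristic (not necessarily the ones used here), the `s_scale`/`v_scale` factors are illustrative, and matplotlib's colorspace helpers stand in for whatever conversion the repo actually uses. Note how H is left untouched while S and V are scaled inside the mask.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def skin_mask(img_rgb):
    """Rough skin mask from simple RGB conditions (a common heuristic)."""
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (np.abs(r - g) > 15)

def shift_skin_tone(img_uint8, s_scale=1.2, v_scale=0.9):
    """Scale S and V on skin pixels only; H (the 'identity' channel) is kept."""
    mask = skin_mask(img_uint8)
    hsv = rgb_to_hsv(img_uint8 / 255.0)
    hsv[..., 1] = np.where(mask, np.clip(hsv[..., 1] * s_scale, 0.0, 1.0), hsv[..., 1])
    hsv[..., 2] = np.where(mask, np.clip(hsv[..., 2] * v_scale, 0.0, 1.0), hsv[..., 2])
    return np.round(hsv_to_rgb(hsv) * 255).astype(np.uint8)
```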
To create bags under eyes, we just darken the region under the eyes:
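One way to sketch this darkening, assuming an elliptical patch below each eye with a soft falloff so the edit blends in (the centre, axes, and strength values below are hypothetical, not taken from the repo):

```python
import numpy as np

def darken_under_eye(img, center, axes, strength=0.35):
    """Darken an elliptical region with a smooth falloff.

    center=(cy, cx) and axes=(ry, rx) would come from eye landmarks;
    strength is the maximum fraction of brightness removed at the centre.
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # normalised squared distance to the ellipse centre
    d = ((yy - center[0]) / axes[0]) ** 2 + ((xx - center[1]) / axes[1]) ** 2
    # weight is 1 at the centre, fades to 0 at the ellipse boundary
    weight = np.clip(1.0 - d, 0.0, 1.0)
    out = img.astype(np.float32) * (1.0 - strength * weight[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)
```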