How to use cat-dataset #4
Comments
I resolved it. Closing the issue.
Sorry for the late reply, I am glad you solved it though :)
Thank you very much for your immediate reply. I'm sorry, I thought I had solved the problem, but now I can't figure out how to do the unsupervised learning of the target domain ("Learning the target domain network" in chapter 3).
Hi, you don't need the original annotations at all; indeed, that's the point of the method being unsupervised! The target domain network is the one you will learn in that unsupervised way, with the cats in this case. The original domain network is the one attached to this repo, which was trained for human pose estimation. You only need raw images to train the network on the target domain (cats in this case). I hope this helps :)
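As a minimal sketch of "raw images only" for the target domain (not the repo's actual code), something like the following loader is enough: it yields images with no landmark annotations whatsoever. The directory layout, the `.jpg` extension, and the use of a PyTorch-style `Dataset` are assumptions for illustration.

```python
# Hypothetical unsupervised target-domain loader: raw images only, no annotations.
# Assumes the cropped cat head images sit in a flat directory of .jpg files.
import glob
import os

from PIL import Image
from torch.utils.data import Dataset


class RawImageFolder(Dataset):
    """Yields raw images only; the target domain is trained without labels."""

    def __init__(self, root, transform=None):
        self.paths = sorted(glob.glob(os.path.join(root, "*.jpg")))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        image = Image.open(self.paths[idx]).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image
```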
Thank you for your reply. I thought I had to prepare a file like "list_landmarks_align_celeba.txt" corresponding to the cat dataset; am I wrong about that?
Yes, the CelebA landmarks are only used to crop the images without running a face detector, since the images might not already be cropped. For the cat dataset, the heads are already cropped as far as I remember, so you don't need to do that.
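To illustrate the idea of cropping from landmark coordinates instead of running a face detector, here is a small sketch. The margin factor and the landmark layout (an N x 2 array of x, y coordinates) are assumptions, not the repo's actual preprocessing.

```python
# Sketch: crop a square region around the landmark bounding box, padded by `margin`.
import numpy as np
from PIL import Image


def crop_around_landmarks(image, landmarks, margin=0.4):
    """Crop around the landmarks' bounding box; `landmarks` is an N x 2 array of (x, y)."""
    lm = np.asarray(landmarks, dtype=np.float32)
    x_min, y_min = lm.min(axis=0)
    x_max, y_max = lm.max(axis=0)
    size = max(x_max - x_min, y_max - y_min) * (1.0 + 2.0 * margin)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    left, top = int(cx - size / 2.0), int(cy - size / 2.0)
    return image.crop((left, top, int(left + size), int(top + size)))
```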
Sorry, I was trying a few things and I'm late in replying. I think I need to define the points as well as the filename of the image for the database (in databases.py, collect() returns image, points),
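If collect() is indeed expected to return (image, points) pairs as described in the comment above, one way to satisfy that interface for an unlabeled dataset is to yield placeholder points. The following is a hypothetical sketch only; the class name, directory layout, and number of points are assumptions and not the actual databases.py API.

```python
# Hypothetical cats entry for databases.py. Since target-domain training is
# unsupervised, the points are zero-filled placeholders kept only so that the
# rest of the pipeline receives the (image, points) shape it expects.
import glob
import os

import numpy as np
from PIL import Image


class CatsDB:
    def __init__(self, root, n_points=5):
        self.paths = sorted(glob.glob(os.path.join(root, "*.jpg")))
        self.n_points = n_points

    def collect(self):
        for path in self.paths:
            image = np.asarray(Image.open(path).convert("RGB"))
            points = np.zeros((self.n_points, 2), dtype=np.float32)  # unused placeholders
            yield image, points
```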
Hello,
I was able to train a model with CelebA and got good results,
but I wasn't sure how to train a model using the cat dataset from your paper.
I have already downloaded the dataset.
Could you tell me how to use it?