Fine-tuning the motion-cosegmentation Model #38
Hi @AliaksandrSiarohin,
This is very interesting work.
I wanted to fine-tune the model with a few more new videos. Though I could do the video-processing part, I am unable to run the training code; I am getting the error below. How should I fine-tune the model? I have followed the steps in the repository. Please help me out!

```
!python train.py --config "config/Retrain_15segments.yaml" --checkpoint "models/vox-cpk.pth.tar"
```
Checkpoint and config should have the same number of segments (keypoints); in your case they are 10 and 15 respectively. Either use a different checkpoint or a different config.
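A quick way to check for this mismatch is to compare the segment count declared in the config with what the checkpoint actually stores. This is only a sketch; the paths are placeholders, and the exact key layout (`model_params`/`common_params`/`num_segments`, the names of the stored state dicts) is an assumption that may differ in your files:

```python
import torch
import yaml

# Placeholder paths: point these at your own config and checkpoint.
with open('config/Retrain_10segments.yaml') as f:
    config = yaml.safe_load(f)
ckpt = torch.load('models/vox-cpk.pth.tar', map_location='cpu')

# Segment count declared by the config (key path is an assumption).
print('config:', config['model_params']['common_params']['num_segments'])

# List the state dicts stored in the checkpoint to see what it was trained with.
for name, value in ckpt.items():
    if hasattr(value, 'keys'):
        print(name, '->', list(value.keys())[:5], '...')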
Thank you for the reply. Yes, I have already tried a different number of segments; it worked for 10 segments, and the checkpoint is saved in the log folder. But when trying to infer with the newly fine-tuned checkpoint weights, I get a missing-key error for "blend_downsample.weight" (in image). Please guide me. The error is thrown when I execute this block during inference:

```
reconstruction_module, segmentation_module = load_checkpoints(config='config/Retrain_10segments.yaml',
```

The configuration file I have adapted is "vox-256-sem-10segments.yaml" with slight modifications. Below is my config file (uploading as a txt file). Thank you
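One way to narrow down a missing-key error like this is to list what the fine-tuned checkpoint actually contains before calling `load_checkpoints`. A minimal sketch, assuming the checkpoint stores a `reconstruction_module` state dict the way the pretrained ones do (the path and key names here are placeholders):

```python
import torch

# Placeholder path: use your fine-tuned checkpoint from the log folder.
ckpt = torch.load('log/my-run/checkpoint.pth.tar', map_location='cpu')
print(ckpt.keys())  # top-level entries: module state dicts, optimizer states, ...

# Look for the blend layers in the reconstruction module, if present.
rec = ckpt.get('reconstruction_module', {})
print([k for k in rec if 'blend' in k])
```

If the list comes back empty, the fine-tuned model was trained without the blend branch, so the key is genuinely absent from the checkpoint rather than lost on save; in that case the config used for training and the one passed to `load_checkpoints` need to enable the same options.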
You are using …
My goal is to swap faces in the video. So which one should I fine-tune? There isn't any --supervised flag in the config file. What are you referring to?
Do you mean I need to fine-tune fomm? To be more specific, I am aiming at this: https://github.com/AliaksandrSiarohin/first-order-model#face-swap
Yes, you need to fine-tune fomm then.
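For reference, fine-tuning fomm goes through that repo's own training entry point rather than this one. A sketch in the same Colab style as above (config and checkpoint paths are placeholders; flag names follow the first-order-model README, where `--mode` defaults to training):

```
!python run.py --config config/vox-256.yaml --checkpoint models/vox-cpk.pth.tar --device_ids 0
```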
Hi @AliaksandrSiarohin, thanks for the amazing work! I was trying to fine-tune the motion-cosegmentation model with my dataset, but the same problem occurred. At first I tried to fine-tune the fomm model with the 10-segments config file; the training went without problems, but when I load the models it gives me the same error. Then I tried to fine-tune the motion co-part checkpoint with 15 segments. Again the training went without problems, but when trying to test it I got the same error as above. My goal is to fine-tune the motion co-part checkpoint with my dataset and then run it with the supervised option, without fomm and using face parsing.
A quick update on my problem: this error occurs when I run with the --supervised flag. But when I fine-tune that model, or train it from scratch with the vox-256-sem-15segments.yaml config, it only works without the --supervised flag. So my question is: why does your checkpoint work with the supervised flag but mine does not?
Supervised mode uses the original fomm; you should not fine-tune.
But I can use supervised mode with your vox-15segments.pth.tar, and I guess that's a motion co-part model, not fomm.
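For context, both modes discussed here go through the same script. A sketch of the two invocations, with paths and swap indices as placeholders and flag names as given in the motion-cosegmentation README (verify against part_swap.py):

```
# Unsupervised part swap with a segmentation checkpoint:
!python part_swap.py --config config/vox-256-sem-15segments.yaml \
    --checkpoint log/my-finetuned.pth.tar \
    --source_image source.png --target_video target.mp4 --swap_index 1,2,5

# Supervised swap (face parsing); per the reply above, use the original
# checkpoint rather than a fine-tuned one:
!python part_swap.py --config config/vox-256-sem-15segments.yaml \
    --checkpoint models/vox-15segments.pth.tar \
    --source_image source.png --target_video target.mp4 \
    --swap_index 1,2,5 --supervised
```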