Dimension mismatch while loading model from checkpoint #5
Thanks for sharing this great work!
I am currently hitting an issue while running the evaluation for the PointGroup detector using the checkpoint file you shared.
python scripts/eval.py --folder <output_folder> --task detection
Output:
Traceback (most recent call last):
  File "scripts/eval.py", line 522, in <module>
    model = init_model(cfg, dataset)
  File "scripts/eval.py", line 121, in init_model
    model.load_state_dict(checkpoint["state_dict"], strict=False)
  File "/home/rajrup/miniconda3/envs/d3net-original/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1406, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PipelineNet:
    size mismatch for embeddings: copying a param with shape torch.Size([3441, 300]) from checkpoint, the shape in current model is torch.Size([3535, 300]).
    size mismatch for speaker.caption.embeddings: copying a param with shape torch.Size([3441, 300]) from checkpoint, the shape in current model is torch.Size([3535, 300]).
    size mismatch for speaker.caption.classifier.2.weight: copying a param with shape torch.Size([3441, 512]) from checkpoint, the shape in current model is torch.Size([3535, 512]).
    size mismatch for speaker.caption.classifier.2.bias: copying a param with shape torch.Size([3441]) from checkpoint, the shape in current model is torch.Size([3535]).
The dimensions of the tensors in the checkpoint don't match what the current model expects. Before the model-load step, the val split and the vocabulary load fine. I might be missing something here. Can you please help me resolve this issue?
Thanks!
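
For anyone debugging the same mismatch, here is a minimal diagnostic sketch (a hypothetical helper, not part of the repository) that compares every tensor shape in the checkpoint against the freshly initialized model before calling load_state_dict, assuming a standard PyTorch checkpoint with a "state_dict" entry as in the traceback above:

import torch

def report_shape_mismatches(model, ckpt_path):
    # Load the checkpoint on CPU and pull out its state dict
    # (the "state_dict" key matches the usage in scripts/eval.py above).
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    ckpt_state = checkpoint["state_dict"]
    model_state = model.state_dict()
    # Report every parameter whose shape differs between checkpoint and model.
    for name, tensor in ckpt_state.items():
        if name in model_state and model_state[name].shape != tensor.shape:
            print(f"{name}: checkpoint {tuple(tensor.shape)} "
                  f"vs model {tuple(model_state[name].shape)}")

# Example usage before load_state_dict (checkpoint path is illustrative):
# report_shape_mismatches(model, "<path_to_checkpoint>")

Given the pattern above (3441 vs. 3535 rows in every vocabulary-sized tensor), the likely culprit is the vocabulary: the embedding and classifier output layers are sized from the locally generated vocabulary, so if it differs from the one the checkpoint was trained with, every vocabulary-sized parameter will disagree.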
Comments

I am hitting a very similar error while running the evaluation for the PointGroup captioning. Here is the error: […]

Delete Line 453 in b505e98

@CurryYuan Thank you very much for your help! I have fixed the code as you stated. But when I run the evaluation, I get the output: Could not import cythonized box intersection. Consider compiling box_intersection.pyx for faster training. I would appreciate it if you could help me. Thank you very much!

@daveredrum, any suggestions would be helpful.
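
On the "Could not import cythonized box intersection" message in the comment above: the wording ("Consider compiling box_intersection.pyx for faster training") suggests a performance warning with a slower fallback path rather than a hard error. A common way to build such a Cython module in place, assuming Cython and a C compiler are installed (the location of box_intersection.pyx depends on the repository layout):

pip install cython
cythonize -i <path_to>/box_intersection.pyx

After a successful build, the compiled extension sits next to the .pyx file and the warning should disappear on the next run.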