Training tensor not match error #37
Running python test.py also gives the same error. Can anyone help me? |
I changed the torch version and it works! |
I have the same issue at feature_loss = torch.sum(torch.mul(feature_loss_mat, mask_ap)) / sum_value. After downgrading PyTorch to version 1.0.1, I instead got a "CUDA error: no kernel image is available for execution on the device" error. |
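(Side note on that second error: "no kernel image is available" usually means the installed PyTorch binary was not compiled for the GPU's compute capability; the old 1.0.1 wheels predate recent GPUs. A quick way to check, using introspection calls from recent PyTorch — torch.cuda.get_arch_list() does not exist in 1.0.1:)

```python
import torch

# "no kernel image is available for execution on the device" usually means
# the installed PyTorch wheel was not built for this GPU's compute capability.
if torch.cuda.is_available():
    print(torch.version.cuda)                   # CUDA toolkit the wheel was built with
    print(torch.cuda.get_device_capability(0))  # e.g. (8, 6) for an RTX 30xx card
    print(torch.cuda.get_arch_list())           # architectures in this build (recent PyTorch only)
```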
I'm also having the same issue. Did anyone fix it? Please enlighten me. |
If anyone knows how to fix it, please inform me. |
It is because of nn.TripletMarginLoss(). If the input tensor is (B, C, H, W): in PyTorch 1.0.1, the version this work used, this loss function takes the second dim as C, and it works. But in some recent versions, this loss function takes the last dim as C, so you need to transpose the input tensor to (B, H, W, C). |
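A minimal runnable sketch of that workaround (the margin, reduction='none', and the tensor shapes are illustrative assumptions, not taken from this repo):

```python
import torch
import torch.nn as nn

# reduction='none' keeps a per-location loss map rather than a scalar
triplet = nn.TripletMarginLoss(margin=1.0, reduction='none')

b, c, h, w = 2, 16, 32, 32
anchor, positive, negative = (torch.randn(b, c, h, w) for _ in range(3))

# PyTorch 1.0.1 reduced the distance inside the loss over dim 1 (C), so
# (B, C, H, W) inputs worked directly. Recent versions reduce over the
# LAST dim, so move C there first:
loss_map = triplet(
    anchor.permute(0, 2, 3, 1),    # (B, H, W, C)
    positive.permute(0, 2, 3, 1),
    negative.permute(0, 2, 3, 1),
)
print(loss_map.shape)  # torch.Size([2, 32, 32]) == (B, H, W)
```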
Thank you very much!
|
Thanks, all of you! |
Hi, I face some errors when training with python train.py --gpus 2 --cpus 8 --lr 0.0001 --batch_size 32:
Oneline-DLTv1/resnet.py", line 288, in forward
feature_loss = torch.sum(torch.mul(feature_loss_mat, mask_ap)) / sum_value
RuntimeError: The size of tensor a (315) must match the size of tensor b (560) at non-singleton dimension 3
Could you please share how to solve this? Thanks! |
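For what it's worth, this mismatch is consistent with the TripletMarginLoss behavior change described in the earlier comment. A sketch that reproduces the shape problem (mask_ap, reduction='none', and the exact shapes are assumptions, not the repo's actual code):

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=1.0, reduction='none')
feat_a, feat_p, feat_n = (torch.randn(1, 8, 315, 560) for _ in range(3))  # (B, C, H, W)
mask_ap = torch.ones(1, 315, 560)  # assumed per-pixel mask, (B, H, W)

# Recent PyTorch reduces the triplet distance over the LAST dim, so the
# loss map comes out as (B, C, H) = (1, 8, 315) instead of (B, H, W):
feature_loss_mat = triplet(feat_a, feat_p, feat_n)
print(feature_loss_mat.shape)  # torch.Size([1, 8, 315])

# torch.mul(feature_loss_mat, mask_ap) then raises a size-mismatch
# RuntimeError like the one above. Permuting the inputs to (B, H, W, C)
# before the loss, as described in the earlier comment, restores the
# (1, 315, 560) loss map and the multiplication works.
```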