I tried to run a test that trains on only a small subset (20) of VGGFace2's training data.
I used Inception-ResNet v2 as the network architecture, but it fails with the following error:
RuntimeError: Given input size: (1536x5x5). Calculated output size: (1536x0x0). Output size is too small
Please let me know if you have any guesses about the cause or which part of my setup might be wrong.
Hi JinhaSong,
I had a similar issue and managed to fix it by setting the AvgPool2d kernel_size to 5. Since the VGGFace2 images are smaller, the network at that layer ends up with a 1536x5x5 tensor instead of 1536x8x8. Applying pooling with kernel_size 8, which is larger than what the previous layer provides, causes this error. Hope that helps!
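A minimal sketch of the failure and the fix, assuming a PyTorch setup where the feature map entering the final average pooling layer is the 1536x5x5 tensor reported in the error (the layer names and shapes here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

# Feature map shape reported in the RuntimeError: (batch, 1536, 5, 5)
features = torch.randn(1, 1536, 5, 5)

# kernel_size=8 is larger than the 5x5 spatial size, so no output window fits:
#   pool_bad = nn.AvgPool2d(kernel_size=8)
#   pool_bad(features)   # RuntimeError: Output size is too small

# kernel_size=5 matches the 5x5 feature map and pools it down to 1x1
pool_ok = nn.AvgPool2d(kernel_size=5)
print(pool_ok(features).shape)   # torch.Size([1, 1536, 1, 1])

# An adaptive pool is an alternative that works for both 5x5 and 8x8 inputs
pool_any = nn.AdaptiveAvgPool2d(1)
print(pool_any(features).shape)  # torch.Size([1, 1536, 1, 1])
```

Using AdaptiveAvgPool2d(1) instead of a fixed kernel size is one way to make the network tolerant of different input resolutions, but simply setting kernel_size=5 as described above should be enough for VGGFace2-sized images.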