
If I want to use Mobilevit in Cifar10, how should I set the parameters? #9

Open
learningelectric opened this issue Jan 5, 2022 · 4 comments

@learningelectric

learningelectric commented Jan 5, 2022

If I want to use Mobilevit in Cifar10, how should I set the parameters?

Because I changed the input size, but the parameters don't match.

For example:

EinopsError: Shape mismatch, can't divide axis of length 2 in chunks of 4

During handling of the above exception, another exception occurred:

EinopsError Traceback (most recent call last)
C:\Users\ADMINI~1\AppData\Local\Temp/ipykernel_16468/2105769576.py in <module>
44 torch.cuda.empty_cache()
45 if __name__ == "__main__":
---> 46 main()

C:\Users\ADMINI~1\AppData\Local\Temp/ipykernel_16468/2105769576.py in main()
31
32 for epoch in range(1, total_epoch+1):
---> 33 train_one_epoch(model,
34 train_dataloader,
35 criterion,

C:\Users\ADMINI~1\AppData\Local\Temp/ipykernel_16468/4152541084.py in train_one_epoch(model, dataloader, criterion, optimizer, epoch, total_epoch, report_freq)
70 label = data[1].to(device)
71
---> 72 out = model(image)
73 loss = criterion(out, label)
74

D:\anaconda\envs\CoAtnet\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []

E:\code\awesome_lightweight_networks-main\light_cnns\Transformer\mobile_vit.py in forward(self, x)
216
217 x = self.mv2[5](x)
--> 218 x = self.mvit[1](x)
219
220 x = self.mv2[6](x)

D:\anaconda\envs\CoAtnet\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []

E:\code\awesome_lightweight_networks-main\light_cnns\Transformer\mobile_vit.py in forward(self, x)
162 # Global representations
163 _, _, h, w = x.shape
--> 164 x = rearrange(x, 'b d (h ph) (w pw) -> b (ph pw) (h w) d', ph=self.ph, pw=self.pw)
165 x = self.transformer(x)
166 x = rearrange(x, 'b (ph pw) (h w) d -> b d (h ph) (w pw)', h=h // self.ph, w=w // self.pw, ph=self.ph,

D:\anaconda\envs\CoAtnet\lib\site-packages\einops\einops.py in rearrange(tensor, pattern, **axes_lengths)
450 raise TypeError("Rearrange can't be applied to an empty list")
451 tensor = get_backend(tensor[0]).stack_on_zeroth_dimension(tensor)
--> 452 return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
453
454

D:\anaconda\envs\CoAtnet\lib\site-packages\einops\einops.py in reduce(tensor, pattern, reduction, **axes_lengths)
388 message += '\n Input is list. '
389 message += 'Additional info: {}.'.format(axes_lengths)
--> 390 raise EinopsError(message + '\n {}'.format(e))
391
392

EinopsError: Error while processing rearrange-reduction pattern "b d (h ph) (w pw) -> b (ph pw) (h w) d".
Input tensor shape: torch.Size([200, 80, 2, 2]). Additional info: {'ph': 4, 'pw': 4}.
Shape mismatch, can't divide axis of length 2 in chunks of 4

@Yiming-M

Hi @learningelectric,

MobileNetV2 blocks reduce the height & width of the input by 2. It seems like you only need to resize the input images so that these two dimensions can still be divided evenly by the patch size when they reach the MobileViT blocks.
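To make the problem concrete, here is a minimal sketch of that divisibility requirement (the failing shape and the ph = pw = 4 patch size are taken from your traceback; the 8x8 feature map is just a hypothetical larger example):

    import torch
    from einops import rearrange

    # The MobileViT block splits the feature map into ph x pw patches, so its
    # height and width must be divisible by the patch size (ph = pw = 4 here).
    feat = torch.randn(200, 80, 2, 2)   # shape reported in the error message
    try:
        rearrange(feat, 'b d (h ph) (w pw) -> b (ph pw) (h w) d', ph=4, pw=4)
    except Exception as e:
        print(e)  # ... can't divide axis of length 2 in chunks of 4

    # A larger input image leaves a larger feature map at this stage, e.g. 8x8,
    # which splits cleanly into 4x4 patches.
    feat = torch.randn(200, 80, 8, 8)
    out = rearrange(feat, 'b d (h ph) (w pw) -> b (ph pw) (h w) d', ph=4, pw=4)
    print(out.shape)  # torch.Size([200, 16, 4, 80])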

Best wishes,
Yiming

@learningelectric

Thank you very much! I will try it in the near future.

@learningelectric

learningelectric commented Jan 13, 2022

Hi @Yiming-M,
I am sorry that I am not familiar with MobileNet. Could you tell me in more detail how to modify it?

I know how to modify the input image size and input channels, but I am not clear on how MobileNetV2 blocks reduce the height & width of the input by 2.

Actually, my input data size is (4, 32, 32); it is a custom dataset.

I really appreciate your patience and generosity!

    import torch.nn as nn

    class MV2Block(nn.Module):
        def __init__(self, inp, oup, stride=1, expansion=4):
            super().__init__()
            self.stride = stride
            assert stride in [1, 2]
            hidden_dim = int(inp * expansion)
            self.use_res_connect = self.stride == 1 and inp == oup
            if expansion == 1:
                self.conv = nn.Sequential(
                    # dw
                    nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
                    nn.BatchNorm2d(hidden_dim),
                    nn.SiLU(),
                    # pw-linear
                    nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                    nn.BatchNorm2d(oup),
                )
            else:
                self.conv = nn.Sequential(
                    # pw
                    nn.Conv2d(inp, hidden_dim, 1, 1, 0, bias=False),
                    nn.BatchNorm2d(hidden_dim),
                    nn.SiLU(),
                    # dw
                    nn.Conv2d(hidden_dim, hidden_dim, 3, stride, 1, groups=hidden_dim, bias=False),
                    nn.BatchNorm2d(hidden_dim),
                    nn.SiLU(),
                    # pw-linear
                    nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False),
                    nn.BatchNorm2d(oup),
                )

        def forward(self, x):
            if self.use_res_connect:
                return x + self.conv(x)
            else:
                return self.conv(x)

@Yiming-M

Hi @learningelectric,

It's my pleasure to help.

I guess you are following the PyTorch tutorial Training a Classifier. If so, you only need to add torchvision.transforms.Resize to the original image transforms, as in the sketch below. There is no need to modify the MobileNetV2 block.
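Something along these lines should work. This is only a rough sketch based on that tutorial; the 256x256 target size is an assumption, so use whatever image size your MobileViT was configured for:

    import torch
    import torchvision
    import torchvision.transforms as transforms

    # Resize CIFAR-10's 32x32 images up to the size the model expects
    # (256x256 is assumed here), then convert and normalize as in the tutorial.
    transform = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])

    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    train_dataloader = torch.utils.data.DataLoader(trainset, batch_size=200,
                                                   shuffle=True, num_workers=2)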

If you don't want to resize the images, you can also change the stride of one MV2Block from 2 to 1. But this is more complex, and you may have to modify the parameters of the following layers.
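For reference, here is a tiny sketch (reusing the MV2Block class you posted above) showing what the stride controls: stride=2 halves the spatial size, while stride=1 keeps it, so skipping one downsampling this way means the later layers see a different resolution than they were designed for:

    import torch

    # MV2Block is the class posted earlier in this thread.
    x = torch.randn(1, 64, 8, 8)
    print(MV2Block(64, 80, stride=2)(x).shape)  # torch.Size([1, 80, 4, 4])
    print(MV2Block(64, 80, stride=1)(x).shape)  # torch.Size([1, 80, 8, 8])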

Best wishes,
Yiming
