
GroupLinearLayer should add "device" parameter #7

Open
ildefons opened this issue Jan 8, 2021 · 2 comments

ildefons commented Jan 8, 2021

Hi RIM dev team,

The code fails when device = 'cuda'.
It could easily be fixed by adding an extra "device" parameter to all the "Group" classes.

Thank you for the great RIM implementation,
Ildefons
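
For illustration, the kind of change I mean would look roughly like this (a sketch only; the actual constructor and weight initialization in RIM.py may differ):

```python
import torch
import torch.nn as nn

class GroupLinearLayer(nn.Module):
    # Sketch of the proposed fix: accept a "device" argument and create the
    # per-block weight directly on that device, so that torch.bmm in
    # forward() sees tensors on the same device.
    def __init__(self, din, dout, num_blocks, device='cpu'):
        super().__init__()
        self.w = 0.01 * torch.randn(num_blocks, din, dout, device=device)

    def forward(self, x):
        x = x.permute(1, 0, 2)        # (batch, blocks, din) -> (blocks, batch, din)
        x = torch.bmm(x, self.w)      # per-block matrix multiply with the weight
        return x.permute(1, 0, 2)     # back to (batch, blocks, dout)
```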

dido1998 commented Jan 8, 2021

Hi @ildefons, could you share the error message you get with device='cuda'?

ildefons commented Jan 8, 2021

Error message:

RuntimeError Traceback (most recent call last)
in
1 for x in xs:
2 print(1)
----> 3 hs, cs = rim_model(x, hs, cs)

~\anaconda3\envs\eg2\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)

~\OneDrive\Documentos\YK\eg\Recurrent-Independent-Mechanisms\RIM.py in forward(self, x, hs, cs)
249
250 # Compute input attention
--> 251 inputs, mask = self.input_attention_mask(x, hs)
252 h_old = hs * 1.0
253 if cs is not None:

~\OneDrive\Documentos\YK\eg\Recurrent-Independent-Mechanisms\RIM.py in input_attention_mask(self, x, h)
177 key_layer = self.key(x)
178 value_layer = self.value(x)
--> 179 query_layer = self.query(h)
180
181 key_layer = self.transpose_for_scores(key_layer, self.num_input_heads, self.input_key_size)

~\anaconda3\envs\eg2\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)

~\OneDrive\Documentos\YK\eg\Recurrent-Independent-Mechanisms\RIM.py in forward(self, x)
31 x = x.permute(1,0,2)
32
---> 33 x = torch.bmm(x,self.w)
34 return x.permute(1,0,2)
35

RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat2' in call to _th_bmm
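
From the error, the second operand of torch.bmm (self.w in GroupLinearLayer) is still on the CPU while the input x is on cuda, i.e. the weight is not moved by rim_model.to('cuda'). Besides an explicit "device" argument, registering the weight as an nn.Parameter would also solve this, since .to('cuda') would then move it together with the rest of the model. A sketch, assuming the weight is currently created as a plain tensor:

```python
import torch
import torch.nn as nn

class GroupLinearLayer(nn.Module):
    # Alternative sketch: wrapping the weight in nn.Parameter registers it
    # with the module, so rim_model.to('cuda') moves it automatically and no
    # explicit device argument is needed.
    def __init__(self, din, dout, num_blocks):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(num_blocks, din, dout))

    def forward(self, x):
        x = x.permute(1, 0, 2)        # (batch, blocks, din) -> (blocks, batch, din)
        x = torch.bmm(x, self.w)      # per-block matrix multiply
        return x.permute(1, 0, 2)     # back to (batch, blocks, dout)
```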
