Addition of parametrized gates missing in TensorflowMatrices
#1208
TensorflowMatrices + addition of parametrized gates
Why did you go back to the previous implementation where you define the gates again? Do I also have to go back to that implementation in the PyTorch backend?
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff            @@
##           master    #1208    +/-   ##
========================================
  Coverage   99.96%   99.96%
========================================
  Files          70       70
  Lines       10414    10452    +38
========================================
+ Hits        10410    10448    +38
  Misses          4        4
```
Thanks @renatomello.
Regarding what you tried in 15c707d, I think there should be a way to simplify TensorflowMatrices to just inherit the gates from NumpyMatrices, so that we don't have to redefine everything while still not breaking backprop. One simple idea is to modify NumpyMatrices as follows:
```python
class NumpyMatrices:
    def _cast(self, x):
        return self.np.array(x, dtype=self.dtype)

    def U3(self, theta, phi, lam):
        cost = self.np.cos(theta / 2)
        sint = self.np.sin(theta / 2)
        eplus = self.np.exp(1j * (phi + lam) / 2.0)
        eminus = self.np.exp(1j * (phi - lam) / 2.0)
        return self._cast(
            [
                [self.np.conj(eplus) * cost, -self.np.conj(eminus) * sint],
                [eminus * sint, eplus * cost],
            ],
        )
```
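For illustration only, another parametrized gate could be added to the NumpyMatrices sketch above in the same style (a sketch; the exact definitions used in the codebase may differ). The `+ 0j` keeps all entries complex so that the final cast is uniform:

```python
    def RX(self, theta):
        # RX(theta) = [[cos(theta/2), -i*sin(theta/2)],
        #              [-i*sin(theta/2), cos(theta/2)]]
        cost = self.np.cos(theta / 2) + 0j  # force a complex entry for a uniform cast
        isin = -1j * self.np.sin(theta / 2)
        return self._cast([[cost, isin], [isin, cost]])
```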
and similarly for the other parametrized gates. Then TensorflowMatrices would just be:
```python
class TensorflowMatrices(NumpyMatrices):
    def __init__(self, dtype):
        super().__init__(dtype)
        import tensorflow as tf  # pylint: disable=import-error
        import tensorflow.experimental.numpy as tnp  # pylint: disable=import-error

        self.tf = tf
        self.np = tnp

    def _cast(self, x):
        return self.tf.cast(x, dtype=self.dtype)
```
without having to redefine U3 and the other gates. I have not tested if this fully works, though.
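As a rough way to check that backprop survives the casting, something along these lines could be tried (a sketch, assuming the two classes exactly as sketched above and the standard TensorFlow gradient-tape API; enabling tnp's numpy behavior is assumed so that float and complex tensors can be mixed inside `U3`):

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

tnp.experimental_enable_numpy_behavior()  # numpy-style dtype promotion inside U3

matrices = TensorflowMatrices("complex128")  # class sketched above; constructor signature assumed

theta = tf.Variable(0.3, dtype="float64")
with tf.GradientTape() as tape:
    u3 = matrices.U3(theta, 0.1, 0.2)  # goes through NumpyMatrices.U3 + tf.cast
    loss = tf.math.real(u3[0, 0])      # any real scalar that depends on theta
grad = tape.gradient(loss, theta)
print(grad)  # should be a nonzero float tensor (not None) if gradients flow through the cast
```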
I am not sure that having different matrix classes for each backend is even necessary; most likely there is a way to get around this with just functions. Until then, though, the proposal above may be a doable simplification.
I will try this and see if it works.
I was thinking that maybe the parametrized gates should be defined to return a list instead of an array, so that each backend would just need to cast it, but I think your proposal above is cleaner.
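For comparison, that list-based idea might look roughly like this (a sketch with a hypothetical helper name, not code from the repository):

```python
import numpy as np


def u3_elements(theta, phi, lam, xp=np):
    """Build the U3 matrix as a plain nested list of scalars (hypothetical helper)."""
    cost = xp.cos(theta / 2)
    sint = xp.sin(theta / 2)
    eplus = xp.exp(1j * (phi + lam) / 2.0)
    eminus = xp.exp(1j * (phi - lam) / 2.0)
    return [
        [xp.conj(eplus) * cost, -xp.conj(eminus) * sint],
        [eminus * sint, eplus * cost],
    ]


# Each backend would then only have to cast the list, e.g.
# numpy:      np.array(u3_elements(t, p, l), dtype="complex128")
# tensorflow: tf.cast(u3_elements(t, p, l, xp=tnp), dtype="complex128")
```

The downside is that every backend would still need to supply a compatible math module for the element-wise functions, which is essentially what the `self.np` attribute in the class-based proposal already provides.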
This follows a discussion in #1202
Checklist: