
Addition of parametrized gates missing in TensorflowMatrices #1208

Merged: 6 commits merged into master from tensorflow on Feb 12, 2024

Conversation

@renatomello (Contributor) commented on Feb 10, 2024

This follows a discussion in #1202.

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

@renatomello added this to the Qibo 0.2.5 milestone on Feb 10, 2024
@renatomello self-assigned this on Feb 10, 2024
@renatomello changed the title from "Concise implementation of TensorflowMatrices + addition of parametrized gates" to "Addition of parametrized gates missing in TensorflowMatrices" on Feb 10, 2024
@Simone-Bordoni (Contributor) commented

Why did you go back to the previous implementation where you define the gates again? Do I also have to go back to that implementation on the PyTorch backend?

@renatomello (Contributor, Author) commented

> Why did you go back to the previous implementation where you define the gates again? Do I also have to go back to that implementation on the PyTorch backend?

TensorFlow loses track of the variable if it passes through a NumPy function (in this case, `np.array`) and then back to TensorFlow, which breaks backpropagation. I don't know if the same thing happens in PyTorch.
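
A minimal sketch of the failure mode, assuming eager execution (`theta` and the 2x2 matrix here are illustrative, not qibo code):

```python
import numpy as np
import tensorflow as tf

theta = tf.Variable(0.5)
with tf.GradientTape() as tape:
    cost = tf.cos(theta / 2)
    # Passing the tensor through np.array converts it to a plain NumPy
    # value, so the dependency on theta drops off the gradient tape:
    matrix = tf.convert_to_tensor(np.array([[cost, 0.0], [0.0, cost]]))
    trace = matrix[0, 0] + matrix[1, 1]

print(tape.gradient(trace, theta))  # None: the gradient is lost
```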

@renatomello added the enhancement (New feature or request) label on Feb 11, 2024
@codecov-commenter commented on Feb 11, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (45e4e22) 99.96% compared to head (85f26bc) 99.96%.

Additional details and impacted files
```diff
@@           Coverage Diff           @@
##           master    #1208   +/-   ##
=======================================
  Coverage   99.96%   99.96%
=======================================
  Files          70       70
  Lines       10414    10452   +38
=======================================
+ Hits        10410    10448   +38
  Misses          4        4
```
| Flag | Coverage Δ |
|------|------------|
| unittests | 99.96% <100.00%> (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.


@stavros11 (Member) left a comment


Thanks @renatomello.

Regarding what you tried in 15c707d, I think there should be a way to simplify `TensorflowMatrices` to just inherit the gates from `NumpyMatrices`, so that we don't have to redefine everything while still not breaking backprop.

One simple idea is to modify `NumpyMatrices` as follows:

```python
class NumpyMatrices:

    def _cast(self, x):
        return self.np.array(x, dtype=self.dtype)

    def U3(self, theta, phi, lam):
        cost = self.np.cos(theta / 2)
        sint = self.np.sin(theta / 2)
        eplus = self.np.exp(1j * (phi + lam) / 2.0)
        eminus = self.np.exp(1j * (phi - lam) / 2.0)
        return self._cast(
            [
                [self.np.conj(eplus) * cost, -self.np.conj(eminus) * sint],
                [eminus * sint, eplus * cost],
            ],
        )
```
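
For reference, this builds the standard $U_3$ matrix:

$$
U_3(\theta, \phi, \lambda) =
\begin{pmatrix}
e^{-i(\phi + \lambda)/2} \cos(\theta/2) & -e^{-i(\phi - \lambda)/2} \sin(\theta/2) \\
e^{i(\phi - \lambda)/2} \sin(\theta/2) & e^{i(\phi + \lambda)/2} \cos(\theta/2)
\end{pmatrix}
$$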

and similarly for the other parametrized gates, and then `TensorflowMatrices` would just be

```python
class TensorflowMatrices(NumpyMatrices):
    def __init__(self, dtype):
        super().__init__(dtype)
        import tensorflow as tf  # pylint: disable=import-error
        import tensorflow.experimental.numpy as tnp  # pylint: disable=import-error

        self.tf = tf
        self.np = tnp

    def _cast(self, x):
        return self.tf.cast(x, dtype=self.dtype)
```

without having to redefine `U3` and the other gates. I have not tested if this fully works, though.
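
A quick sanity check along these lines (a sketch, assuming eager execution; not code from this PR) would be to confirm that `tf.cast` keeps the tape connected, since converting a list of tensors auto-packs them with differentiable TF ops rather than going through `np.array`:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

theta = tf.Variable(0.5)
with tf.GradientTape() as tape:
    cost = tnp.cos(theta / 2)  # tnp dispatches to TF ops, so tracking survives
    # tf.cast packs the nested list with TF ops instead of np.array:
    matrix = tf.cast([[cost, 0.0], [0.0, cost]], dtype=tf.float64)
    trace = matrix[0, 0] + matrix[1, 1]

print(tape.gradient(trace, theta))  # a finite tensor, not None
```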

I am not sure that having different matrix classes for each backend is even necessary; most likely there is a way to get around this with plain functions, but until then, the proposal above may be a doable simplification.

@renatomello (Contributor, Author) commented

> Regarding what you tried in 15c707d, I think there should be a way to simplify `TensorflowMatrices` to just inherit the gates from `NumpyMatrices`, so that we don't have to redefine everything while still not breaking backprop. [...] I have not tested if this fully works, though.

I will try this and see if it works.

> I am not sure that having different matrix classes for each backend is even necessary; most likely there is a way to get around this with plain functions, but until then, the proposal above may be a doable simplification.

I was thinking that maybe the parametrized gates should be defined as returning a list instead of an array and then each backend would just need to cast it, but I think your proposal above is cleaner.
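
For illustration, that alternative would look something like this (a sketch only, reusing the `U3` definition above):

```python
def U3(self, theta, phi, lam):
    # Same math as above, but return a plain nested list and let each
    # backend apply its own cast (np.array, tf.cast, torch.tensor, ...).
    cost = self.np.cos(theta / 2)
    sint = self.np.sin(theta / 2)
    eplus = self.np.exp(1j * (phi + lam) / 2.0)
    eminus = self.np.exp(1j * (phi - lam) / 2.0)
    return [
        [self.np.conj(eplus) * cost, -self.np.conj(eminus) * sint],
        [eminus * sint, eplus * cost],
    ]
```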

@renatomello (Contributor, Author) commented on Feb 12, 2024

I will merge as is and try the proposed solution in a new branch.

@renatomello added this pull request to the merge queue on Feb 12, 2024
Merged via the queue into master with commit 2a493f6 on Feb 12, 2024; 21 checks passed
@renatomello deleted the tensorflow branch on Feb 12, 2024
@renatomello restored the tensorflow branch on Feb 13, 2024