Enabling PSR, analytical diff with pytorch and moving interfaces to dedicated module #42

Status: Open. Wants to merge 39 commits into base: main.

Showing changes from 17 of 39 commits.
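Background for the review: the parameter-shift rule (PSR) that this PR enables computes gradients of expectation values from pairs of shifted circuit executions, so it works even on backends (or hardware) without automatic differentiation. A minimal self-contained sketch of the rule itself, for illustration only (this is not qiboml code):

# Illustrative parameter-shift rule, not qiboml's implementation.
# For a gate U(theta) = exp(-i * theta * P / 2) with P a Pauli operator,
# the derivative of an expectation value f(theta) = <O>(theta) is exact:
#     df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin s)
from math import cos, pi, sin


def psr_gradient(f, theta: float, s: float = pi / 2) -> float:
    """Exact derivative of f at theta from two shifted evaluations."""
    return (f(theta + s) - f(theta - s)) / (2 * sin(s))


# Example: f(theta) = <0| RY(theta)^dag Z RY(theta) |0> = cos(theta),
# so the gradient at 0.3 should be -sin(0.3), about -0.2955.
print(psr_gradient(cos, 0.3))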
Commits (all by MatteoRobbiati):

bed5146  Oct 22, 2024  feat: psr draft
309a26a  Oct 22, 2024  refactor: use nshots value to choose if use samples or not
5871804  Oct 22, 2024  fix: expectation decoding test
c1534e1  Oct 25, 2024  working on an example using PSR
687e843  Oct 28, 2024  feat: PSR working but x is padded
e015b86  Oct 28, 2024  reminder about diffrule
fa83ef2  Oct 28, 2024  docs: tutorial on models
ea51b91  Nov 1, 2024   Merge branch 'model' into diffrules
a412b5e  Nov 1, 2024   fixing PSR and moving interfaces to qiboml.interfaces
bb7d7a0  Nov 4, 2024   test: add diffrules test
5880001  Nov 4, 2024   refactor: moving interfaces to proper module
6110339  Nov 4, 2024   chore: add qibojit to test deps
c5af099  Nov 4, 2024   chore: tf and torch are not optional
1b75cae  Nov 4, 2024   tests: rm tf import
bd228bc  Nov 4, 2024   fix: adapting PSR padding to expval shape
a9e2481  Nov 4, 2024   test: enable PSR test in test_models_interface
46de4e9  Nov 4, 2024   test: fix seed in diffrules test
69b8f12  Nov 5, 2024   feat: analytic as property
52ec4cd  Nov 7, 2024   chore: set optional == True for tf and torch in our deps
2e341f1  Nov 7, 2024   fix: rm unused __init__ method from PSR class
9bdf627  Nov 7, 2024   fix: adapt padding to x shape
16df115  Nov 7, 2024   fix: remove useless passed qubits list
d279f13  Nov 7, 2024   tests: fix diffrules tests grad calculation
1842a1d  Nov 7, 2024   Update tests/test_models_interfaces.py
5ae1ade  Nov 7, 2024   tests: adapt interfaces tests to analytical as property
7552520  Nov 7, 2024   Merge branch 'diffrules' of github.com:qiboteam/qiboml into diffrules
ae04a26  Nov 7, 2024   chore: restoring tf and torch non-optional because of lint complainings
921485f  Nov 7, 2024   tests: increase nshots in diffrules test
590cf5c  Nov 7, 2024   fix random seed in diffrules tests
45b2063  Nov 12, 2024  Update src/qiboml/models/decoding.py
55f7a06  Nov 12, 2024  chore: updating lock after merging main to solve conflicts
208ea10  Nov 13, 2024  feat: add gates_encoding_feature method
eac535e  Nov 13, 2024  feat: drafting gradient wrt data
6e3870d  Nov 13, 2024  fix: working derivative wrt x
3a8e66f  Nov 14, 2024  refactor: moving gradient wrt data into a method of PSR class
1c0c3c8  Nov 14, 2024  fix: rm gates_encoding_feature from abstractmethods
f8d944a  Nov 15, 2024  Update src/qiboml/operations/differentiation.py
73dd6c7  Nov 15, 2024  test: reducing decimals in PSR test
824613f  Nov 15, 2024  fix: wrt --> from in psr
100 changes: 0 additions & 100 deletions exercise.py

This file was deleted.

315 changes: 191 additions & 124 deletions poetry.lock

Large diffs are not rendered by default.

7 changes: 3 additions & 4 deletions pyproject.toml

@@ -14,10 +14,10 @@ packages = [{ include = "qiboml", from = "src" }]
 python = ">=3.9,<3.13"
 numpy = "^1.26.4"
 keras = { version = "^3.0.0", optional = true }
-tensorflow = { version = "^2.16.1", markers = "sys_platform == 'linux' or sys_platform == 'darwin'", optional = true }
+tensorflow = { version = "^2.16.1", markers = "sys_platform == 'linux' or sys_platform == 'darwin'"}
Review thread on this line:

Contributor: I would leave this optional.

BrunoLiegiBastonLiegi (Contributor, Nov 20, 2024): @MatteoRobbiati I would keep this optional.

MatteoRobbiati (Author): Reminder: add it to test deps.
 # TODO: the marker is a temporary solution due to the lack of the tensorflow-io 0.32.0's wheels for Windows, this package is one of
 # the tensorflow requirements
-torch = { version = "^2.3.1", optional = true }
+torch = { version = "^2.3.1"}
Contributor: same here.
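For reference, if the optional flags come back as both reviewers request, one possible layout would keep the frameworks optional for users while pinning them in the tests group so CI still installs them (a sketch only: the extras table is an assumption, not part of this PR):

# Sketch under the reviewers' request, not the pyproject.toml of this PR.
[tool.poetry.dependencies]
tensorflow = { version = "^2.16.1", markers = "sys_platform == 'linux' or sys_platform == 'darwin'", optional = true }
torch = { version = "^2.3.1", optional = true }

# Hypothetical extras so users could `pip install qiboml[torch]`:
[tool.poetry.extras]
torch = ["torch"]
tensorflow = ["tensorflow"]

[tool.poetry.group.tests.dependencies]
torch = "^2.3.1"
tensorflow = { version = "^2.16.1", markers = "sys_platform == 'linux'" }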

 qibo = {git="https://github.com/qiboteam/qibo"}
 jax = "^0.4.25"
 jaxlib = "^0.4.25"
@@ -33,11 +33,10 @@ pdbpp = "^0.10.3"
 optional = true

 [tool.poetry.group.tests.dependencies]
-torch = "^2.3.1"
-tensorflow = { version = "^2.16.1", markers = "sys_platform == 'linux'" }
 pytest = "^7.2.1"
 pylint = "3.1.0"
 pytest-cov = "4.0.0"
+qibojit = "^0.1.7"

 [tool.poetry.group.benchmark.dependencies]
 pytest-benchmark = { version = "^4.0.0", extras = ["histogram"] }
4 changes: 2 additions & 2 deletions src/qiboml/__init__.py

@@ -12,7 +12,7 @@
 try:
     from tensorflow import Tensor as tf_tensor

-    from qiboml.models import keras
+    from qiboml.interfaces import keras

(BrunoLiegiBastonLiegi marked this conversation as resolved.)

     ndarray = Union[ndarray, tf_tensor]
 except ImportError:  # pragma: no cover
@@ -21,7 +21,7 @@
 try:
     from torch import Tensor as pt_tensor

-    from qiboml.models import pytorch
+    from qiboml.interfaces import pytorch

     ndarray = Union[ndarray, pt_tensor]
 except ImportError:  # pragma: no cover
File renamed without changes.
35 changes: 10 additions & 25 deletions src/qiboml/models/pytorch.py → src/qiboml/interfaces/pytorch.py

@@ -2,21 +2,13 @@

 from dataclasses import dataclass

-import numpy as np
 import torch
 from qibo import Circuit
-from qibo.backends import Backend, _check_backend
-from qibo.config import raise_error
+from qibo.backends import Backend

 from qiboml.models.decoding import QuantumDecoding
 from qiboml.models.encoding import QuantumEncoding
-from qiboml.operations import differentiation as Diff
-
-BACKEND_2_DIFFERENTIATION = {
-    "pytorch": None,
-    "qibolab": "PSR",
-    "jax": "Jax",
-}
+from qiboml.operations.differentiation import DifferentiationRule


@dataclass(eq=False)
@@ -25,30 +17,23 @@ class QuantumModel(torch.nn.Module):

     encoding: QuantumEncoding
     circuit: Circuit
     decoding: QuantumDecoding
-    differentiation: str = "auto"
+    differentiation_rule: DifferentiationRule = None
Contributor: I am not a fan of calling these differentiation_rule and DifferentiationRule, as this would cover the Jax case as well, which is not really a rule but rather a way to perform differentiation. I would thus drop the _rule suffix and just call it generically differentiation.
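For context, after this refactor the rule is passed explicitly instead of being looked up in the removed BACKEND_2_DIFFERENTIATION table. Roughly (a sketch: the PSR class is referenced by the commits and the old getattr(Diff, "PSR")() call, but its constructor and the encoding object are assumed here):

# Hypothetical usage after this PR; the encoding construction is assumed,
# everything else is named as in this diff.
from qiboml.interfaces.pytorch import QuantumModel
from qiboml.models.ansatze import ReuploadingCircuit
from qiboml.models.decoding import Expectation
from qiboml.operations import differentiation as Diff

nqubits = 2
model = QuantumModel(
    encoding=my_encoding,  # a qiboml QuantumEncoding instance (not shown in this diff)
    circuit=ReuploadingCircuit(nqubits, nlayers=1),
    decoding=Expectation(nqubits=nqubits),
    differentiation_rule=Diff.PSR(),  # explicit choice; None -> pytorch autograd path
)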


     def __post_init__(
         self,
     ):
         super().__init__()

         circuit = self.encoding.circuit
         params = [p for param in self.circuit.get_parameters() for p in param]
-        params = torch.as_tensor(self.backend.to_numpy(params)).ravel()
+        params = torch.as_tensor(self.backend.to_numpy(x=params)).ravel()
         params.requires_grad = True
         self.circuit_parameters = torch.nn.Parameter(params)

-        if self.differentiation == "auto":
-            self.differentiation = BACKEND_2_DIFFERENTIATION.get(
-                self.backend.name, "PSR"
-            )
-
-        if self.differentiation is not None:
-            self.differentiation = getattr(Diff, self.differentiation)()
Review comment on deleted lines 40 to 46 (Contributor): what happens now if the backend is not pytorch, say numba, and differentiation_rule=None? I believe it will crash, because it will try to use the QuantumModelAutoGrad with no differentiation rule.

     def forward(self, x: torch.Tensor):
         if (
             self.backend.name != "pytorch"
-            or self.differentiation is not None
+            or self.differentiation_rule is not None
             or not self.decoding.analytic
         ):
             x = QuantumModelAutoGrad.apply(
@@ -57,7 +42,7 @@ def forward(self, x: torch.Tensor):
                 self.circuit,
                 self.decoding,
                 self.backend,
-                self.differentiation,
+                self.differentiation_rule,
                 *list(self.parameters())[0],
             )
         else:
@@ -93,15 +78,15 @@ def forward(
         circuit: Circuit,
         decoding: QuantumDecoding,
         backend,
-        differentiation,
+        differentiation_rule,
         *parameters: list[torch.nn.Parameter],
     ):
         ctx.save_for_backward(x, *parameters)
         ctx.encoding = encoding
         ctx.circuit = circuit
         ctx.decoding = decoding
         ctx.backend = backend
-        ctx.differentiation = differentiation
+        ctx.differentiation_rule = differentiation_rule
         x_clone = x.clone().detach().cpu().numpy()
         x_clone = backend.cast(x_clone, dtype=x_clone.dtype)
         params = [
@@ -127,7 +112,7 @@ def backward(ctx, grad_output: torch.Tensor):
         ]
         grad_input, *gradients = (
             torch.as_tensor(ctx.backend.to_numpy(grad).tolist())
-            for grad in ctx.differentiation.evaluate(
+            for grad in ctx.differentiation_rule.evaluate(
                 x_clone, ctx.encoding, ctx.circuit, ctx.decoding, ctx.backend, *params
             )
         )
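Note that the backward pass above only assumes the rule object exposes evaluate(x, encoding, circuit, decoding, backend, *params) and returns the gradient with respect to x followed by one gradient per parameter. Any object honouring that contract plugs in. For instance, a central finite-difference stand-in (a sketch: how the encoding composes with the circuit is assumed here, and the real PSR/Jax rules live in qiboml.operations.differentiation):

import numpy as np


class FiniteDifference:
    """Illustrative rule matching the interface used by backward():
    evaluate(...) -> [d<O>/dx, d<O>/dp_0, d<O>/dp_1, ...]."""

    def __init__(self, epsilon: float = 1e-7):
        self.epsilon = epsilon

    def evaluate(self, x, encoding, circuit, decoding, backend, *params):
        def expectation(values):
            circuit.set_parameters(values)
            # Assumed composition: encoding(x) yields a circuit that is
            # concatenated with the trainable circuit before decoding.
            return backend.to_numpy(decoding(encoding(x) + circuit))

        # Gradient with respect to the input x is omitted in this sketch.
        gradients = [np.zeros_like(backend.to_numpy(x))]
        values = list(params)
        for i in range(len(values)):
            values[i] = params[i] + self.epsilon
            plus = expectation(values)
            values[i] = params[i] - self.epsilon
            minus = expectation(values)
            values[i] = params[i]
            gradients.append((plus - minus) / (2 * self.epsilon))
        return gradients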
19 changes: 12 additions & 7 deletions src/qiboml/models/ansatze.py

@@ -4,15 +4,20 @@
 from qibo import Circuit, gates


-def ReuploadingCircuit(nqubits: int, qubits: list[int] = None) -> Circuit:
+def ReuploadingCircuit(
+    nqubits: int, qubits: list[int] = None, nlayers: int = 1
+) -> Circuit:
     if qubits is None:
         qubits = list(range(nqubits))

     circuit = Circuit(nqubits)
-    for q in qubits:
-        circuit.add(gates.RY(q, theta=random.random() * np.pi, trainable=True))
-        circuit.add(gates.RZ(q, theta=random.random() * np.pi, trainable=True))
-    for i, q in enumerate(qubits[:-2]):
-        circuit.add(gates.CNOT(q0=q, q1=qubits[i + 1]))
-    circuit.add(gates.CNOT(q0=qubits[-1], q1=qubits[0]))
+    for _ in range(nlayers):
+        for q in qubits:
+            circuit.add(gates.RY(q, theta=random.random() * np.pi, trainable=True))
+            circuit.add(gates.RZ(q, theta=random.random() * np.pi, trainable=True))
+        for i, q in enumerate(qubits[:-2]):
+            circuit.add(gates.CNOT(q0=q, q1=qubits[i + 1]))
+        circuit.add(gates.CNOT(q0=qubits[-1], q1=qubits[0]))

     return circuit
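A quick usage sketch of the new nlayers argument:

from qiboml.models.ansatze import ReuploadingCircuit

# 3 qubits, 2 layers: each layer adds one RY and one RZ per qubit
# (12 trainable angles in total here) plus its CNOT entanglers.
circuit = ReuploadingCircuit(3, nlayers=2)
print(circuit.draw())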
5 changes: 2 additions & 3 deletions src/qiboml/models/decoding.py

@@ -14,7 +14,7 @@ class QuantumDecoding:

     nqubits: int
     qubits: list[int] = None
-    nshots: int = 1000
+    nshots: int = None
     analytic: bool = True
     backend: Backend = None
     _circuit: Circuit = None
@@ -58,7 +58,6 @@ def output_shape(self):
 class Expectation(QuantumDecoding):

     observable: Union[ndarray, Hamiltonian] = None
-    analytic: bool = False

(MatteoRobbiati marked this conversation as resolved.)

     def __post_init__(self):
         if self.observable is None:
@@ -69,7 +68,7 @@ def __post_init__(self):
super().__post_init__()

     def __call__(self, x: Circuit) -> ndarray:
-        if self.analytic:
+        if self.nshots is None:

Contributor suggestion:
-        if self.nshots is None:
+        if self.analytic:
"better to use the property now that we have it"

             return self.observable.expectation(
                 super().__call__(x).state(),
             ).reshape(1, 1)
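The property referenced in that suggestion comes from a later commit in this PR (69b8f12, "feat: analytic as property"), whose body is not visible in this 17-commit view; presumably something along these lines, with nshots as the single source of truth:

# Assumed implementation of the `analytic` property; not shown in this diff.
@property
def analytic(self) -> bool:
    # Exact (state-vector) expectation values when no shots are
    # requested, sampled estimation otherwise.
    return self.nshots is None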