Updating upload.yml to run Julia tests (#340)
Updates the `upload.yml` file so that, before running the tests, it sets up
Julia (version `1.9.3`).
This setup now closely matches that in the other `.yml` files, in particular
`builds.yml`.
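The steps added to `upload.yml` (reproduced verbatim from the diff below) take this shape:

```yaml
- name: Setup Julia
  uses: julia-actions/setup-julia@v1
  with:
    version: 1.9.3

- name: Setup Julia part 2
  run: julia --project="julia_pkg" -e "using Pkg; Pkg.instantiate()"
```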

---------

Co-authored-by: Sebastián Duque Mesa <[email protected]>
Co-authored-by: JacobHast <[email protected]>
Co-authored-by: elib20 <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: Luke Helt <[email protected]>
Co-authored-by: zeyueN <[email protected]>
Co-authored-by: Robbe De Prins <[email protected]>
Co-authored-by: Robbe De Prins (UGent-imec) <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Ryk <[email protected]>
Co-authored-by: Gabriele Gullì <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: heltluke <[email protected]>
Co-authored-by: Tanner Rogalsky <[email protected]>
Co-authored-by: Jan Provazník <[email protected]>
18 people authored Feb 8, 2024
1 parent f4898e0 commit e235882
Showing 9 changed files with 62 additions and 27 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/builds.yml
@@ -53,4 +53,4 @@ jobs:
run: julia --project="julia_pkg" -e "using Pkg; Pkg.instantiate()"

- name: Run tests
- run: python -m pytest tests -p no:warnings --tb=native
+ run: python -m pytest tests -p no:warnings --tb=native --backend=tensorflow
10 changes: 9 additions & 1 deletion .github/workflows/upload.yml
@@ -33,8 +33,16 @@ jobs:
- name: Install only test dependencies
run: poetry install --no-root --extras "ray" --with dev

+ - name: Setup Julia
+   uses: julia-actions/setup-julia@v1
+   with:
+     version: 1.9.3
+
+ - name: Setup Julia part 2
+   run: julia --project="julia_pkg" -e "using Pkg; Pkg.instantiate()"

- name: Run tests
- run: python -m pytest tests -p no:warnings --tb=native
+ run: python -m pytest tests -p no:warnings --tb=native --backend=tensorflow

- name: Publish
uses: pypa/gh-action-pypi-publish@release/v1
26 changes: 18 additions & 8 deletions README.md
@@ -5,30 +5,30 @@
[![Actions Status](https://github.com/XanaduAI/MrMustard/workflows/Tests/badge.svg)](https://github.com/XanaduAI/MrMustard/actions)
[![Python version](https://img.shields.io/pypi/pyversions/mrmustard.svg?style=popout-square)](https://pypi.org/project/MrMustard/)

- Mr Mustard is a differentiable simulator with a sophisticated built-in optimizer that operates seamlessly across phase space and Fock space. It is built on top of an agnostic autodiff interface to allow for plug-and-play backends (TensorFlow by default).
+ Mr Mustard is a differentiable simulator with a sophisticated built-in optimizer that operates seamlessly across phase space and Fock space. It is built on top of an agnostic autodiff interface to allow for plug-and-play backends (`numpy` by default).

Mr Mustard supports:
- Phase space representation of Gaussian states and Gaussian channels on an arbitrary number of modes
- Exact Fock representation of any Gaussian circuit and any Gaussian state up to an arbitrary cutoff
- Riemannian optimization on the symplectic group (for Gaussian transformations) and on the unitary group (for interferometers)
- Adam optimizer for euclidean parameters
- - single-mode gates (parallelizable):
+ - Single-mode gates (parallelizable):
- squeezing, displacement, phase rotation, attenuator, amplifier, additive noise, phase noise
- - two-mode gates:
+ - Two-mode gates:
- beam splitter, Mach-Zehnder interferometer, two-mode squeezing, CX, CZ, CPHASE
- N-mode gates (with dedicated Riemannian optimization):
- Interferometer (unitary), RealInterferometer (orthogonal), Gaussian transformation (symplectic)
- - single-mode states (parallelizable):
+ - Single-mode states (parallelizable):
- Vacuum, Coherent, SqueezedVacuum, Thermal, Fock
- - two-mode states:
+ - Two-mode states:
- TMSV (two-mode squeezed vacuum)
- N-mode states:
- Gaussian
- Photon number moments and entropic measures
- PNR detectors and Threshold detectors with trainable quantum efficiency and dark counts
- Homodyne, Heterodyne and Generaldyne measurements
- Composable circuits
- - Plug-and-play backends (TensorFlow as default)
+ - Plug-and-play backends (`numpy` as default)
- An abstraction layer `XPTensor` for seamless symplectic algebra (experimental)

# Increased numerical stability using Julia [optional]
@@ -65,7 +65,7 @@ fock4 = Fock(4) # fock state |4>

D = Dgate(x=1.0, y=-0.4) # Displacement by 1.0 along x and -0.4 along y
S = Sgate(r=0.5) # Squeezer with r=0.5
- R = Rgate(phi=0.3) # Phase rotation by 0.3
+ R = Rgate(angle=0.3) # Phase rotation by 0.3
A = Amplifier(gain=2.0) # noisy amplifier with 200% gain
L = Attenuator(0.5) # pure loss channel with 50% transmissivity
N = AdditiveNoise(noise=0.1) # additive noise with noise level 0.1
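
# A hedged usage sketch (the diff truncates this example here): gates apply to
# states via the right-shift operator, as in the README's optimization example
# further below, e.g.
#   psi = Vacuum(1) >> S >> D  # squeeze the vacuum, then displace it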
@@ -233,25 +233,35 @@ The physics module contains a growing number of functions that we can apply to s

# The math module
The math module is the backbone of Mr Mustard. Mr Mustard comes with plug-and-play backends through a math interface. You can use it as a drop-in replacement for tensorflow or numpy, and your code will be plug-and-play too!

Here's an example where the `numpy` backend is used.
```python
import mrmustard.math as math

math.cos(0.1) # numpy
```

In a different session, we can change the backend to ``tensorflow``.
```python
import mrmustard.math as math
math.change_backend("tensorflow")

math.cos(0.1) # tensorflow
```

### Optimization
- The `Optimizer` (available in `mrmustard.training`) uses Adam under the hood for Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, and a unitary/orthogonal optimizer for interferometers.
+ The `mrmustard.training.Optimizer` uses Adam under the hood for the optimization of Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, and a unitary/orthogonal optimizer for interferometers.

We can turn any simulation in Mr Mustard into an optimization by marking which parameters we wish to be trainable. Let's take a simple example: synthesizing a displaced squeezed state.

```python
from mrmustard import math
from mrmustard.lab import Dgate, Ggate, Attenuator, Vacuum, Coherent, DisplacedSqueezed
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")

D = Dgate(x=0.1, y=-0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)
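
# A hedged sketch of how this example typically concludes (the diff truncates
# it here); the cost function and the `minimize` call follow the pattern in
# the full README, and the parameter values are illustrative:
def cost_fn():
    state_out = Vacuum(1) >> D >> L
    return 1 - fidelity(state_out, DisplacedSqueezed(r=0.3, phi=1.1, x=0.4, y=-0.2))

opt = Optimizer(euclidean_lr=0.001)
opt.minimize(cost_fn, by_optimizing=[D], max_steps=100)
```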

16 changes: 13 additions & 3 deletions doc/introduction/basic_reference.md
@@ -180,26 +180,36 @@ The physics module contains a growing number of functions that we can apply to s
### The math module
The math module is the backbone of Mr Mustard; it consists of the [Math](https://github.com/XanaduAI/MrMustard/blob/main/mrmustard/math/math_interface.py) interface.
Mr Mustard comes with plug-and-play backends through this interface. You can use it as a drop-in replacement for tensorflow or numpy, and your code will be plug-and-play too!
Here's an example where the ``numpy`` backend is used.

```python
- from mrmustard import math
+ import mrmustard.math as math

math.cos(0.1) # numpy
```

In a different session, we can change the backend to ``tensorflow``.
```python
import mrmustard.math as math
math.change_backend("tensorflow")

- math.change_backend("numpy")
math.cos(0.1) # tensorflow
```

### Optimization
- The `Optimizer` (available in `mrmustard.training`) uses Adam under the hood for Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, a unitary optimizer for interferometers, and an orthogonal optimizer for real interferometers.
+ The `mrmustard.training.Optimizer` uses Adam under the hood for the optimization of Euclidean parameters, a custom symplectic optimizer for Gaussian gates and states, and a unitary/orthogonal optimizer for interferometers.

We can turn any simulation in Mr Mustard into an optimization by marking which parameters we wish to be trainable. Let's take a simple example: synthesizing a
displaced squeezed state.

```python
from mrmustard import math
from mrmustard.lab import Dgate, Ggate, Attenuator, Vacuum, Coherent, DisplacedSqueezed
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")

D = Dgate(x=0.1, y=-0.5, x_trainable=True, y_trainable=True)
L = Attenuator(transmissivity=0.5)

```
11 changes: 6 additions & 5 deletions mrmustard/lab/abstract/state.py
@@ -761,9 +761,10 @@ def mikkel_plot(
plt.subplots_adjust(wspace=0.05, hspace=0.05)

# Wigner function

ax[1][0].contourf(X, P, W, 120, cmap=plot_args["cmap"], vmin=-abs(W).max(), vmax=abs(W).max())
- ax[1][0].set_xlabel("$x$", fontsize=12)
- ax[1][0].set_ylabel("$p$", fontsize=12)
+ ax[1][0].set_xlabel("x", fontsize=12)
+ ax[1][0].set_ylabel("p", fontsize=12)
ax[1][0].get_xaxis().set_ticks(plot_args["xticks"])
ax[1][0].xaxis.set_ticklabels(plot_args["xtick_labels"])
ax[1][0].get_yaxis().set_ticks(plot_args["yticks"])
@@ -780,7 +781,7 @@ def mikkel_plot(
ax[0][0].xaxis.set_ticklabels([])
ax[0][0].get_yaxis().set_ticks([])
ax[0][0].tick_params(direction="in")
ax[0][0].set_ylabel("Prob($x$)", fontsize=12)
ax[0][0].set_ylabel("Prob(x)", fontsize=12)
ax[0][0].set_xlim(xbounds)
ax[0][0].set_ylim([0, 1.1 * max(ProbX)])
ax[0][0].grid(plot_args["grid"])
@@ -792,14 +793,14 @@
ax[1][1].get_yaxis().set_ticks(plot_args["yticks"])
ax[1][1].yaxis.set_ticklabels([])
ax[1][1].tick_params(direction="in")
ax[1][1].set_xlabel("Prob($p$)", fontsize=12)
ax[1][1].set_xlabel("Prob(p)", fontsize=12)
ax[1][1].set_xlim([0, 1.1 * max(ProbP)])
ax[1][1].set_ylim(ybounds)
ax[1][1].grid(plot_args["grid"])

# Density matrix
ax[0][1].matshow(abs(rho), cmap=plot_args["cmap"], vmin=-abs(rho).max(), vmax=abs(rho).max())
ax[0][1].set_title(r"abs($\rho$)", fontsize=12)
ax[0][1].set_title("abs(ρ)", fontsize=12)
ax[0][1].tick_params(direction="in")
ax[0][1].get_xaxis().set_ticks([])
ax[0][1].get_yaxis().set_ticks([])
5 changes: 2 additions & 3 deletions mrmustard/math/backend_manager.py
@@ -1187,9 +1187,8 @@ def Categorical(self, probs: Tensor, name: str):
"""Categorical distribution over integers.
Args:
- probs (Tensor): tensor representing the probabilities of a set of Categorical
-     distributions.
- name (str): name prefixed to operations created by this class
+ probs: The unnormalized probabilities of a set of Categorical distributions.
+ name: The name prefixed to operations created by this class.
Returns:
tfp.distributions.Categorical: instance of ``tfp.distributions.Categorical`` class
4 changes: 2 additions & 2 deletions mrmustard/math/backend_numpy.py
@@ -367,8 +367,8 @@ def __init__(self, probs):
self._probs = probs

def sample(self):
- array = np.random.multinomial(1, pvals=probs)
- return np.where(array == 1)[0][0]
+ idx = [i for i, _ in enumerate(probs)]
+ return np.random.choice(idx, p=probs / sum(probs))

return Generator(probs)
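
The new `sample` draws a category index with `np.random.choice`, normalizing the weights explicitly so that unnormalized probabilities are accepted. A minimal standalone sketch of the same technique in plain NumPy (the helper name `sample_categorical` is ours, for illustration only):

```python
import numpy as np

def sample_categorical(weights):
    """Draw one index with probability proportional to the (possibly unnormalized) weights."""
    weights = np.asarray(weights, dtype=float)
    return np.random.choice(len(weights), p=weights / weights.sum())

# With 300 equal weights, repeated draws should hit many distinct indices,
# which is what the new ``test_categorical`` below checks.
samples = [sample_categorical([1e-6] * 300) for _ in range(100)]
assert len(set(samples)) > 1
```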

7 changes: 3 additions & 4 deletions tests/test_lab/test_detectors.py
@@ -14,7 +14,6 @@

import numpy as np
import pytest
- import tensorflow as tf
from hypothesis import given
from hypothesis import strategies as st
from hypothesis.extra.numpy import arrays
@@ -297,16 +296,16 @@ def test_homodyne_on_2mode_squeezed_vacuum_with_displacement(self, s, X, d):
],
)
@pytest.mark.parametrize("gaussian_state", [True, False])
@pytest.mark.parametrize("normalization", [1, 1 / 3])
def test_sampling_mean_and_var(
- self, state, kwargs, mean_expected, var_expected, gaussian_state
+ self, state, kwargs, mean_expected, var_expected, gaussian_state, normalization
):
"""Tests that the mean and variance estimates of many homodyne
measurements are in agreement with the expected values for the states"""
state = state(**kwargs)

- tf.random.set_seed(123)
if not gaussian_state:
- state = State(dm=state.dm(cutoffs=[40]))
+ state = State(dm=state.dm(cutoffs=[40]) * normalization)
detector = Homodyne(0.0)

results = np.zeros((self.N_MEAS, 2))
8 changes: 8 additions & 0 deletions tests/test_math/test_backend_manager.py
@@ -587,3 +587,11 @@ def test_sum(self):
arr = 4 * np.eye(3)
res = math.asnumpy(math.sum(arr))
assert np.allclose(res, 12)

def test_categorical(self):
r"""
Tests the ``Categorical`` method.
"""
probs = np.array([1e-6 for _ in range(300)])
results = [math.Categorical(probs, "") for _ in range(100)]
assert len(set(results)) > 1
