Release v0.7.0 #336

Merged: 120 commits from release-0.7 into main, Feb 6, 2024
Conversation

SamFerracin
Contributor

New features

  • Added a new interface for backends, as well as a numpy backend (which is now the default). Users can run
    all the functions in utils, math, physics, and lab with both backends, while training
    requires the tensorflow backend. The numpy backend provides significant improvements in both import
    time and runtime. (#301)

  • Added the classes and methods to create, contract, and draw tensor networks with mrmustard.math.
    (#284)

  • Added functions in physics.bargmann to join and contract (A,b,c) triples.
    (#295)

  • Added an Ansatz abstract class and PolyExpAnsatz concrete implementation. This is used in the Bargmann representation.
    (#295)

  • Added complex_gaussian_integral and real_gaussian_integral methods.
    (#295)

  • Added the Bargmann representation (parametrized by Abc). It supports all algebraic operations and the exact CV inner product.
    (#296)

Breaking changes

  • Removed circular dependencies by:

    • Removing graphics.py; ProgressBar moved to training and mikkel_plot to lab.
    • Moving circuit_drawer and wigner to physics.
    • Moving xptensor to math.
      (#289)
  • Created settings.py file to host Settings.
    (#289)

  • Moved settings.py, logger.py, and typing.py to utils.
    (#289)

  • Removed the Math class. To use the mathematical backend, replace
    from mrmustard.math import Math ; math = Math() with import mrmustard.math as math
    in your scripts.
    (#301)

  • The numpy backend is now the default. To switch to the tensorflow
    backend, add the line math.change_backend("tensorflow") to your scripts (see the sketch after this list).
    (#301)
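
A minimal before/after sketch of the two migration steps above, using only the paths and names given in these notes:

```python
# Before v0.7.0:
# from mrmustard.math import Math
# math = Math()

# From v0.7.0 on, the math backend is a module:
import mrmustard.math as math

# The numpy backend is the default; switch to tensorflow explicitly (e.g. for training):
math.change_backend("tensorflow")
```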

Improvements

  • Calculating Fock representations and their gradients is now more numerically stable (i.e. numerical blowups that
    result from repeatedly applying the recurrence relation are postponed to higher cutoff values).
    This holds for both the "vanilla strategy" (#274) and for the
    "diagonal strategy" and "single leftover mode strategy" (#288).
    This is done by representing Fock amplitudes with a higher precision than complex128 (countering floating-point errors).
    We run Julia code via PyJulia (where Numba was used before) to keep the code fast.
    The precision is controlled by setting settings.PRECISION_BITS_HERMITE_POLY (see the sketch after this list). The default value is 128,
    which uses the old Numba code. When set to a higher value, the new Julia code is run.

  • Replaced parameters in training with Constant and Variable classes.
    (#298)

  • Improved how states, transformations, and detectors deal with parameters by replacing the Parametrized class with ParameterSet.
    (#298)

  • Included the Julia dependencies in the Python packaging, for downstream installation reproducibility.
    Removed the dependency on tomli for loading pyproject.toml version info; importlib.metadata is used instead.
    (#303)
    (#304)

  • Improved the algorithms implemented in vanilla and vanilla_vjp to achieve a speedup.
    Specifically, the improved algorithms work on flattened arrays (which are reshaped before being returned) as opposed to multi-dimensional arrays.
    (#312)
    (#318)

  • Added the functions hermite_renormalized_batch and hermite_renormalized_diagonal_batch to speed up calculating
    Hermite polynomials over a batch of B vectors.
    (#308)

  • Added a suite to filter undesired warnings, and used it to filter tensorflow's ComplexWarnings.
    (#332)
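
A short sketch of the precision switch described in the first item of this list; the value 256 is only an illustrative higher precision (the notes state only that the default 128 uses the Numba code and that higher values run the Julia code):

```python
from mrmustard import settings

# Default is 128, which keeps the Numba code path.
settings.PRECISION_BITS_HERMITE_POLY = 256  # illustrative higher value: switches to the Julia code run via PyJulia
```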

Bug fixes

  • Added the missing shape input parameter to all U methods in the gates.py file.
    (#291)
  • Fixed inconsistent use of atol in purity evaluation for Gaussian states.
    (#294)
  • Fixed the documentation of the loss_XYd and amp_XYd functions for Gaussian channels.
    (#305)
  • Replaced all instances of np.empty with np.zeros to fix instabilities.
    (#309)

sduquemesa and others added 30 commits October 14, 2022 15:19
**Context:**
Now the default branch for Mr Mustard repository is develop.

**Description of the Change:**
This PR updates the CI checks so that
- Every PR pointing to develop is checked using python 3.8
- Every PR pointing to develop is built and checked using python 3.8, 3.9, 3.10 so that code is surely releasable
- adds a CODEOWNERS file
- adds a `.coveragerc` file to avoid some nonsensical codecov issues

**Benefits:**
CI checks will now be in sync with MrMustard development policies; this also reduces usage of CI checks (meaning lower bills 💸 and less CO2 ☘️). Also, codecov should be less annoying from now on.

**Possible Drawbacks:**
We'll have to wait for a full release cycle to check that all workflows are working as expected
**Context:**
Currently homodyne and heterodyne measurements on Mr Mustard require
user-specified outcome values.

**Description of the Change:**
This PR implements sampling for homodyne and heterodyne measurements:
when no measurement outcome value is specified (`result=None` for
homodyne and `x, y = None, None` for heterodyne), the outcome is sampled
from the measurement probability distribution and the conditional state
(conditional on the outcome) on the remaining modes is generated.

```python
    import numpy as np
    from mrmustard.lab import Homodyne, Heterodyne, TMSV, SqueezedVacuum

    # conditional state from measurement
    conditional_state1 = TMSV(r=0.5, phi=np.pi)[0, 1] >> Homodyne(quadrature_angle=np.pi/2, result=None)[1]
    conditional_state2 = TMSV(r=0.5, phi=np.pi)[0, 1] >> Heterodyne(x=None, y=None)[1]

    # outcome probability
    outcome_prob1 = SqueezedVacuum(r=0.5) >> Homodyne(result=None)
    outcome_prob2 = SqueezedVacuum(r=0.5) >> Heterodyne(x=None, y=None)
```

To do so:
- `tensorflow-probability==0.17.0` is added to Mr Mustard's
dependencies. [TensorFlow
Probability](https://www.tensorflow.org/probability/overview) provides
integration of probabilistic methods with automatic differentiation and
hardware acceleration (GPUs). Pytorch also provides similar
functionality through the
[`torch.distributions`](https://pytorch.org/docs/stable/distributions.html)
module.
- For gaussian states 𝛠, samples are drawn from the gaussian PDF
generated by a highly squeezed state ξ and the state of interest, i.e.,
`PDF=Tr[𝛠 ξ]`. Sampling from the distribution is implemented using
TensorFlow's multivariate normal distribution.
- For Homodyne only: To calculate the pdf for Fock states the
q-quadrature eigenfunctions `|x><x|` are used: `PDF=Tr[𝛠 |x><x|]`.
Sampling from the distribution uses TensorFlow's categorical
distribution. This case is inspired by the [implementation on
strawberryfields](https://github.com/XanaduAI/strawberryfields/blob/9a9a352b5b8cf7b2915e45d1538b51d7d306cfc8/strawberryfields/backends/tfbackend/circuit.py#L818-L926).

**Benefits:**
Sampling from homodyne measurements! 🎲

**Possible Drawbacks:**
Sampling for Fock states can be improved to reduce overheads on
execution: currently the pdf is recalculated every time a measurement is
performed leading to unnecessary slow-downs. To improve this, one can
use sampling algorithms that do not require the complete calculation of
the pdf, for example: using the cumulative distribution (however this is
highly dependent on normalization), rejection sampling,
Metropolis-Hasting, etc.

Co-authored-by: JacobHast <[email protected]>
Co-authored-by: elib20 <[email protected]>
**Context:**
The rotation gate unitary is currently being calculated using the Choi
isomorphism, which generates `nan` for angles of value zero, where the
identity matrix should be returned.

**Description of the Change:**
Since the rotation gate is diagonal in the Fock basis, it is easy to implement
with better speed performance. This PR implements the `U` method of the
rotation gate, overriding the default way of calculating the unitaries of
Gaussian transforms.

**Benefits:**
Fast computation of the rotation gate Fock representation which avoids
invalid numerical outcomes.

**Possible Drawbacks:**
None
**Context:**
[The Walrus PR #351](XanaduAI/thewalrus#351)
implemented a more stable calculation of the displacement unitary.

**Description of the Change:**
The `Dgate` now uses The Walrus to calculate the unitary and gradients
of the displacement gate.

Before:

![before](https://user-images.githubusercontent.com/675763/193073762-42a82fe8-7bcf-405b-ade0-e59e4fcdf270.png)

After:

![after](https://user-images.githubusercontent.com/675763/193073791-e573381f-3e97-4dc1-97e5-161897b34f0d.png)




**Benefits:**
This provides better numerical stability for larger cutoff and
displacement values.

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
------------------------------------------------------------------------------------------------------------

**Context:**
Math interface

**Description of the Change:**
Adds an `eye_like` function, which returns an identity matrix matching the
shape and dtype of the argument (see the sketch below).

**Benefits:**
Simpler to get the identity of the right size and dtype

**Possible Drawbacks:**
Not standard

**Related GitHub Issues:**
None
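
A minimal usage sketch, assuming the module-level math interface that the v0.7.0 notes describe (at the time of this PR the same function lived on the `Math` class):

```python
import numpy as np
import mrmustard.math as math

cov = 0.5 * np.identity(4, dtype=np.float64)
eye = math.eye_like(cov)  # identity matrix with the same shape and dtype as `cov`
assert eye.shape == cov.shape and eye.dtype == cov.dtype
```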
**Context:**
Calculation of the Wigner function is already included in Mr Mustard but
hidden in the graphics module.

**Description of the Change:**
This PR moves the Wigner function calculation to its own module and
numbifies it.

**Benefits:**
Now you can calculate the Wigner function using
```python
from mrmustard.utils.wigner import wigner_discretized

wigner_discretized(dm, q, p) # dm is a density matrix
```

It should be faster as well because it uses numba jit.

**Possible Drawbacks:**
None
**Context:**
There was a bug in the gradient computation of the Gate

**Description of the Change:**
Bug is fixed (there was a numpy array at some point that was breaking
the chain rule)

**Benefits:**
It works

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
**Context:**
Settings are not easy to discover.

**Description of the Change:**
The settings object now has a nice repr.

**Benefits:**
Users can see all the settings at once.

**Possible Drawbacks:**
None I can think of

**Related GitHub Issues:**
None

<img width="513" alt="Screen Shot 2022-11-08 at 9 55 08 AM"
src="https://user-images.githubusercontent.com/8944955/200597522-66ebfa0f-dd87-4109-982f-c8d3231823f8.png">

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
PR #143 introduced sampling for homodyne measurements and with it a
bunch of functionality in the `utils/homodyne.py` module. Most of these
functions are related to sampling in the Fock representation and are not
only useful for homodyne sampling — take for example the calculation of
the quadrature distribution.

**Description of the Change:**
This PR refactors the homodyne module into:

- `physics.fock` — functions related to the fock representation were
moved into this module. Some of them are even split and written as
separated functions such that they are available for use in other
contexts (for example `quadrature_distribution` and
`oscillator_eigenstate`).
- `math.caching` — the cache decorator used for the Hermite polys is
refactored into this new module. This decorator is not only applicable
to the Hermite polys but also to any function taking a 1D tensor + int
parameters, so it can now be used generically by any function with this
same signature. The idea of this module is to contain caching functions
of this kind.

Also

- Hermite polynomials have the modified flag removed (as per
[this](#143 (review))
comment) and now only the regular polynomials are used.

_Note:_ This PR only refactors code, meaning where the code is placed
and _not_ what it does nor how it is done; there is no change to
the logic whatsoever.

**Benefits:**
Pieces of the code that were there already are now reusable. The code
for the sampling logic is more readable now.

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
**Context:**
Currently marginals are calculated from the Wigner function. In cases in
which not all features of the state are captured by the Wigner function,
the marginals contain negativities and hence do not represent a true
probability density.

**Description of the Change:**
This PR makes `mikkel_plot` 
- calculate marginals independently from the Wigner function thus
ensuring that the marginals are physical even though the Wigner function
might not contain all the features of the state within the defined
window,
- expose the `ticks`, `tick_labels` and `grid` arguments to customize
the visualization — useful for example when checking that your state has
peaks in those dreaded multiples of √(π)
- return the figure and axes for further processing if needed

**Benefits:**

- Marginals in the visualization will always represent physical states

_before_: note how this lion is not fully displayed in the Wigner
function leading to negativities in the marginal distributions

<img width="399" alt="image"
src="https://user-images.githubusercontent.com/675763/202817119-1824ecc0-139b-42ef-8461-8eba3ebf5405.png">

_after_: although the big kitty is not displayed in its full glory, its
marginals show the true probability distribution

<img width="390" alt="image"
src="https://user-images.githubusercontent.com/675763/202817226-b06560e6-d481-4530-af0f-010cfafb6885.png">

- More configurability of the visualization

```python
import numpy as np
from matplotlib import cm

# `graphics` is the MrMustard plotting module that provided mikkel_plot at the time of this PR;
# `dm` is a density matrix computed beforehand
ticks = [-2 * np.sqrt(2), 2 * np.sqrt(2)]
labels = [r"$-2\sqrt{\hbar}$", r"$2\sqrt{\hbar}$"]
graphics.mikkel_plot(dm, xticks=ticks, yticks=ticks, xtick_labels=labels, ytick_labels=labels, grid=True, cmap=cm.PuOr)
```


![image](https://user-images.githubusercontent.com/675763/203838559-06b81520-5abf-4f90-87f5-5f9e9947237c.png)

- Users can take the figure and axes for post-processing or storing

**Possible Drawbacks:**
This will _marginally_ increase the computation time for the
visualization but shouldn't be too impactful.

**Related GitHub Issues:**
None

Co-authored-by: ziofil <[email protected]>
Co-authored-by: Luke Helt <[email protected]>
**Context:**
Optimizer is opaque

**Description of the Change:**
In the minimize function we can now pass a callback that is
executed at the end of each step (with `trainable_parameters` as
argument), and its return value is stored in `self.callback_history`. See the sketch below.

**Benefits:**
Can do lots of things, e.g.

![test](https://user-images.githubusercontent.com/8944955/200894110-a34b8f5d-caa7-40fc-9716-1d67a8667ca6.gif)


**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
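
A sketch of the callback mechanism described above. The gates, states, `fidelity`, and `Optimizer` are standard MrMustard pieces, but the exact keyword for passing the callback to `minimize` is an assumption here; the PR text only says a callback can be passed and that its return values end up in `callback_history`.

```python
import mrmustard.math as math
from mrmustard.lab import Coherent, Dgate, Vacuum
from mrmustard.physics import fidelity
from mrmustard.training import Optimizer

math.change_backend("tensorflow")  # as of v0.7.0, training requires the tensorflow backend

D = Dgate(x=0.1, y=-0.2, x_trainable=True, y_trainable=True)

def cost_fn():
    # minimize the infidelity between the displaced vacuum and a target coherent state
    return 1 - fidelity(Vacuum(1) >> D, Coherent(x=1.0, y=0.0))

def record(trainable_parameters):
    # whatever is returned here is appended to opt.callback_history (per the PR description)
    return float(cost_fn())

opt = Optimizer(euclidean_lr=0.05)
# `callback=` is an assumed keyword name, used here only for illustration
opt.minimize(cost_fn, by_optimizing=[D], max_steps=100, callback=record)
print(opt.callback_history[:3])
```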
**Context:**
The `Rgate` and the `Dgate` don't work correctly in Fock representation
in some cases.

**Description of the Change:**
1. Fixed the `Rgate` and `Dgate` by removing the parser method and
simplifying the code.
2. Added two functions in the fock module for explicitly applying an
operator to a ket or to a dm (sandwich) which avoid constructing
unitaries as large as the whole circuit. Now they are used in the
`transform_fock` method of the `Transformation` class.
3. fixed a bug in the trace function
4. minor improvements

**Benefits:**

**Possible Drawbacks:**

**Related GitHub Issues:**

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Sometimes it's useful to compute the Fock representation using different
cutoffs for the input and output indices of the same mode.

**Description of the Change:**
Transformations (gates, circuits) now also accept a list of double length (or
quadruple length for Choi operators), which specifies the cutoffs per index (rather
than per mode); see the sketch below.

**Benefits:**
Saves runtime if one needs, e.g., to input Fock states into a Gaussian
circuit.

**Possible Drawbacks:**
none

**Related GitHub Issues:**
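
A sketch of one possible reading of the per-index cutoffs described above, assuming they are passed to a gate's `U` method; the call site and the resulting shape are assumptions for illustration, not confirmed by this PR text.

```python
from mrmustard.lab import Sgate

# For a single-mode unitary, a length-2 cutoff list is read per index
# (output index, input index) rather than per mode.
U = Sgate(r=0.5).U(cutoffs=[10, 4])
print(U.shape)  # expected: (10, 4) under this reading
```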
**Context:**
norm is a confusing field in the repr of `State` (it's the sqrt of the
probability if the state is pure, and the probability if it's mixed).

**Description of the Change:**
Replace norm with probability 

**Benefits:**
Consistent meaning and also more practically useful

**Possible Drawbacks:**
some users may miss the good old norm?

**Related GitHub Issues:**
**Context:**
The application of a choi operator to a density matrix was resulting in
a transposed dm

**Description of the Change:**
Fixes the order of the indices in the application of a choi operator to
dm and ket

**Benefits:**
Correct result

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Always forgetting to add entries to the CHANGELOG file?

**Description of the Change:**
Now github will remind you to do so — this PR implements a new CI
workflow checking if an entry has been added to the CHANGELOG file. This
check is not mandatory, meaning it won't block the ability to merge the
PR; however, the check will appear as failed. In case no changelog entry is
needed, one can use the `no changelog` label to disable the check.

This PR also adds the changelog entry for PR #188.

**Benefits:**
No more PRs without CHANGELOG entries

**Possible Drawbacks:**
None
**Context:**
Setting a seed in MrMustard is not trivial, and we often need reproducible
results.

**Description of the Change:**
The `settings` object now supports the `SEED` attribute, which is random
unless it is set by the user. To unset it, just set it equal to `None`. See the sketch below.

**Benefits:**
Easy to get reproducible results without messing with numpy.random.

**Possible Drawbacks:**
None?

**Related GitHub Issues:**

Co-authored-by: Sebastián Duque Mesa <[email protected]>
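
A minimal sketch of the reproducibility workflow described above, reusing the homodyne sampling introduced earlier in this release cycle:

```python
from mrmustard import settings
from mrmustard.lab import Homodyne, SqueezedVacuum

settings.SEED = 42  # fix the seed so the sampled outcome is reproducible
out1 = SqueezedVacuum(r=0.5) >> Homodyne(result=None)

settings.SEED = 42  # resetting the same seed reproduces the same sampled outcome
out2 = SqueezedVacuum(r=0.5) >> Homodyne(result=None)

settings.SEED = None  # unset: back to a random seed
```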
**Context:**
The fock representation of a state is stored internally for future
access.
However, if in the meantime the cutoff is updated, this is not taken into
account; this PR solves the issue.

**Description of the Change:**
The cutoffs are applied to the internal fock representation upon access.

**Benefits:**
More correct implementation.

**Possible Drawbacks:**
None?

**Related GitHub Issues:**
None

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Going forward we need a big refactoring of the methods to transform
between representations.
This PR is the first in this direction.

**Description of the Change:**
Introduced two new modules and refactored various methods

**Benefits:**
Allows for easier extension of representation methods

**Possible Drawbacks:**
Some methods and functions have a new name and/or arguments

**Related GitHub Issues:**
None

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Fixes a few bugs related to transformation of states in Fock
representation

**Description of the Change:**
- adds missing default argument 
- adds a function for convenience
- fixes a bug due to old code not removed in
`Transformation.transform_fock`
- adds 11 tests to avoid same type of bug
- adds 2 parametrized tests (10 in total) to check correct application
of unitaries and channels to kets and dm in Fock representation

**Benefits:**
Correct code

**Possible Drawbacks:**
None

**Related GitHub Issues:**
No issue but FTB simulation not running on develop branch
**Context:**
Fixing a bug related to tensorflow not liking products of complex128 and
float64

**Description of the Change:**
cast where necessary

**Benefits:**
Code that runs

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
**Context:**
new black version, wants to make new changes

**Description of the Change:**
black all the files that need to be blacked

**Benefits:**
to bring the develop branch up to date so that all the PRs that are open
don't have to

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None
**Context:**

Adds `map_trainer` as the interface for distributing optimization
workflows using ray so that things can happen in parallel, replacing
the need for `for` loops (see the sketch below).

_Code here has been personally looked at by Trudeau._

**Description of the Change:**
Demo notebook in recipes.

Documentation page for the `trainer` module is added with some examples,
which is also in the docstring of `map_trainer` so that it can be
conveniently read in Jupyter directly with shift+tab.

I've also had some weird problems with the unit tests (all passing
locally) running on github actions, where sometimes they would just hang.
I've tried removing my new tests one by one and adding them back again
one by one, and it somehow worked at the end... I changed the testing
workflow file to directly `pip install .[ray]` instead of building the wheel
first, which I don't think is the reason. Thanks Trudeau I guess?


**Benefits:**
fast and simple experimentation -> more research done.

**Possible Drawbacks:**
more interface to introduce

**Related GitHub Issues:**
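
A compact sketch in the spirit of the `map_trainer` documentation examples; the module path and keyword names (`device_factory`, `tasks`, `max_steps`) follow those examples and should be treated as assumptions here.

```python
from mrmustard.lab import Dgate, Gaussian, Ggate, Vacuum
from mrmustard.physics import fidelity
from mrmustard.training.trainer import map_trainer  # module path as of v0.7.0 (assumed)

def make_circ(x=0.0):
    # a small trainable device
    return Ggate(num_modes=1, symplectic_trainable=True) >> Dgate(x=x, x_trainable=True)

def cost_fn(circ=make_circ(), y_targ=0.0):
    target = Gaussian(1) >> Dgate(-0.5, y_targ)
    return -fidelity(Vacuum(1) >> circ, target)

# Run 10 independent optimizations in parallel with ray instead of a `for` loop.
results = map_trainer(cost_fn=cost_fn, device_factory=make_circ, tasks=10, max_steps=50)
```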
**Context:**
Circuit building is a bit obscure because there's no visual circuit
representation

**Description of the Change:**
Adds a basic circuit drawer (adapted from pennylane).

**Benefits:**
Circuit visualizations

**Possible Drawbacks:**
Does not yet include initializations and measurements, but for this we
need to refactor the `Circuit` class first.

**Related GitHub Issues:**
None

---------

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Attempt to fix the (random) never-ending tests on github actions.
I have so far rerun the tests 5 times without the issue reoccurring. 🤞

**Description of the Change:**
Forces ray to init with 1 cpu for testing.

**Benefits:**

**Possible Drawbacks:**

**Related GitHub Issues:**

---------

Co-authored-by: Sebastián Duque Mesa <[email protected]>
**Context:**
Tests could be improved in MrMustard.

**Description of the Change:**
Adds several new tests and improves the hypothesis strategies.

**Benefits:**
Better test suite

**Possible Drawbacks:**
More tests to maintain? Nah

**Related GitHub Issues:**
None
**Context:** Make PNR sampling faster for Gaussian circuits when using
density matrices. This is done by applying the recurrence relation in a
selective manner such that useless (off-diagonal) amplitudes from the
Fock representation are not calculated. When all modes are detected,
`math.hermite_renormalized` can be replaced by
`math.hermite_renormalized_diagonal`. In case all but the first mode are
detected, `math.hermite_renormalized_1leftoverMode` can be used. The
complexity of these new methods is equal to performing a pure state
simulation. The methods are differentiable, so that they can be used
for defining cost functions.
  
**Description of the Change:** Adds the function
`math.hermite_renormalized_diagonal` and
`math.hermite_renormalized_1leftoverMode`.

**Benefits:** Faster simulation and optimization of Gaussian circuits
with PNR detectors when using density matrices.

---------

Co-authored-by: Robbe De Prins (UGent-imec) <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: ziofil <[email protected]>
**Context:**
Fixing small bugs/typos before 0.4 release

**Description of the Change:**
- Threshold detector can now be correctly initialized
- changes in settings.HBAR are correctly reflected everywhere
- Interferometer can be placed on any set of modes
- number of decimals are respected in circuit drawer

**Benefits:**
Fewer bugs/typos

**Possible Drawbacks:**
None

**Related GitHub Issues:**
None

codecov bot commented Feb 1, 2024

Codecov Report

Attention: 522 lines in your changes are missing coverage. Please review.

Comparison is base (b62d01d) 71.62% compared to head (43d27ca) 84.14%.
Report is 10 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff             @@
##             main     #336       +/-   ##
===========================================
+ Coverage   71.62%   84.14%   +12.51%     
===========================================
  Files          29       65       +36     
  Lines        2848     4775     +1927     
===========================================
+ Hits         2040     4018     +1978     
+ Misses        808      757       -51     
Files Coverage Δ
mrmustard/__init__.py 100.00% <100.00%> (+3.63%) ⬆️
mrmustard/_version.py 100.00% <100.00%> (ø)
mrmustard/lab/abstract/__init__.py 100.00% <100.00%> (ø)
mrmustard/lab/circuit.py 93.54% <100.00%> (+42.75%) ⬆️
mrmustard/lab/states.py 100.00% <100.00%> (+0.89%) ⬆️
mrmustard/lab/utils.py 100.00% <100.00%> (ø)
mrmustard/math/__init__.py 100.00% <100.00%> (+28.57%) ⬆️
mrmustard/math/autocast.py 100.00% <100.00%> (+3.12%) ⬆️
mrmustard/math/backend_base.py 100.00% <100.00%> (ø)
mrmustard/math/backend_numpy.py 100.00% <100.00%> (ø)
... and 53 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@SamFerracin force-pushed the release-0.7 branch 2 times, most recently from 9cd7586 to ab42599, February 1, 2024 20:17
SamFerracin and others added 18 commits February 1, 2024 17:13
**Context:**
In the latest MrM release, `scipy` was unpinned.
Syncing the `develop` branch with this latest release causes pytest
errors due to improper type handling on `scipy`'s side.

**Description of the Change:**
- Syncing `develop` with latest `main`
- Fixing scipy error by adding a `dtype` parameter to `math.sqrtm` so
that it never returns arrays of an unsupported type (specifically,
`complex256`)
- Updating dependencies
- Modifying the CHANGELOG in preparation for release

---------

Co-authored-by: zy <[email protected]>
@ziofil self-requested a review February 3, 2024 01:24
@ziofil (Collaborator) left a comment:
🚀

SamFerracin and others added 2 commits February 6, 2024 10:03
**Context:**
The previous solution based on `logging` does not appear to work on
every OS
@SamFerracin merged commit f4898e0 into main on Feb 6, 2024 (9 checks passed)
@SamFerracin deleted the release-0.7 branch February 6, 2024 18:45