Merge pull request #1180 from qiboteam/relative_renyi
Add `classical_renyi_relative_entropy` and `renyi_relative_entropy` to `quantum_info.entropies`
renatomello authored Jan 31, 2024
2 parents b4f8a37 + 0c4b75a commit d044de2
Showing 3 changed files with 413 additions and 8 deletions.
26 changes: 19 additions & 7 deletions doc/source/api-reference/qibo.rst
@@ -1271,7 +1271,7 @@ Callbacks
Callbacks provide a way to calculate quantities on the state vector as it
propagates through the circuit. Example of such quantity is the entanglement
entropy, which is currently the only callback implemented in
:class:`qibo.callbacks.EntanglementEntropy`.
The user can create custom callbacks by inheriting the
:class:`qibo.callbacks.Callback` class. The point each callback is
calculated inside the circuit is defined by adding a :class:`qibo.gates.CallbackGate`.
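A minimal sketch of this pattern, assuming the standard qibo circuit API (``Circuit``, ``gates`` and ``callbacks``):

    from qibo import Circuit, gates, callbacks

    # track the entanglement entropy of qubit 0 at three points of the circuit
    entropy = callbacks.EntanglementEntropy([0])

    circuit = Circuit(2)
    circuit.add(gates.CallbackGate(entropy))  # before any gate
    circuit.add(gates.H(0))
    circuit.add(gates.CallbackGate(entropy))  # after the Hadamard
    circuit.add(gates.CNOT(0, 1))
    circuit.add(gates.CallbackGate(entropy))  # after the entangling gate
    circuit()

    print(entropy[:])  # one entropy value per callback point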
@@ -1284,7 +1284,7 @@ This can be added similarly to a standard gate and does not affect the state vec
Entanglement entropy
^^^^^^^^^^^^^^^^^^^^

.. autoclass:: qibo.callbacks.EntanglementEntropy
:members:
:member-order: bysource

@@ -1537,18 +1537,24 @@ Shannon entropy
.. autofunction:: qibo.quantum_info.shannon_entropy


Classical Relative Entropy
Classical relative entropy
""""""""""""""""""""""""""

.. autofunction:: qibo.quantum_info.classical_relative_entropy


Classical Rényi Entropy
Classical Rényi entropy
"""""""""""""""""""""""

.. autofunction:: qibo.quantum_info.classical_renyi_entropy


Classical Rényi relative entropy
""""""""""""""""""""""""""""""""

.. autofunction:: qibo.quantum_info.classical_relative_renyi_entropy


Entropy
"""""""

@@ -1562,7 +1568,7 @@ Entropy
and ``state`` is non-Hermitian, an error will be raised when using `cupy` backend.


Relative Entropy
Relative entropy
""""""""""""""""

.. autofunction:: qibo.quantum_info.relative_entropy
@@ -1576,13 +1582,19 @@ Relative Entropy
an error will be raised when using `cupy` backend.


Rényi Entropy
Rényi entropy
"""""""""""""

.. autofunction:: qibo.quantum_info.renyi_entropy


Entanglement Entropy
Rényi relative entropy
""""""""""""""""""""""

.. autofunction:: qibo.quantum_info.relative_renyi_entropy


Entanglement entropy
""""""""""""""""""""

.. autofunction:: qibo.quantum_info.entanglement_entropy
197 changes: 196 additions & 1 deletion src/qibo/quantum_info/entropies.py
@@ -170,7 +170,7 @@ def classical_renyi_entropy(
Another special case is the limit :math:`\\alpha \\to 0`, where the function is
reduced to :math:`\\log\\left(|\\mathbf{p}|\\right)`, with :math:`|\\mathbf{p}|`
being the support of :math:`\\mathbf{p}`.
This is knows as the `Hartley entropy <https://en.wikipedia.org/wiki/Hartley_function>`
This is known as the `Hartley entropy <https://en.wikipedia.org/wiki/Hartley_function>`_
(also known as *Hartley function* or *max-entropy*).
In the limit :math:`\\alpha \\to \\infty`, the function reduces to
@@ -238,6 +238,106 @@ def classical_renyi_entropy(
return renyi_ent


def classical_relative_renyi_entropy(
prob_dist_p, prob_dist_q, alpha: Union[float, int], base: float = 2, backend=None
):
"""Calculates the classical relative Rényi entropy between two discrete probability distributions.
This function is also known as
`Rényi divergence <https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy#R%C3%A9nyi_divergence>`_.
For :math:`\\alpha \\in (0, \\, 1) \\cup (1, \\, \\infty)` and probability distributions
:math:`\\mathbf{p}` and :math:`\\mathbf{q}`, the classical relative Rényi entropy is defined as
.. math::
H_{\\alpha}(\\mathbf{p} \\, \\| \\, \\mathbf{q}) = \\frac{1}{\\alpha - 1} \\,
\\log\\left( \\sum_{x} \\, \\frac{\\mathbf{p}^{\\alpha}(x)}
{\\mathbf{q}^{\\alpha - 1}(x)} \\right) \\, .
A special case is the limit :math:`\\alpha \\to 1`, in which the classical Rényi divergence
coincides with the :func:`qibo.quantum_info.entropies.classical_relative_entropy`.
Another special case is :math:`\\alpha = 1/2`, where the function reduces to
:math:`-2 \\log\\left(\\sum_{x} \\, \\sqrt{\\mathbf{p}(x) \\, \\mathbf{q}(x)} \\right)`.
The sum inside the :math:`\\log` is known as the
`Bhattacharyya coefficient <https://en.wikipedia.org/wiki/Bhattacharyya_distance>`_.
In the limit :math:`\\alpha \\to \\infty`, the function reduces to
:math:`\\log\\left(\\max_{x}\\left(\\frac{\\mathbf{p}(x)}{\\mathbf{q}(x)}\\right)\\right)`.
Args:
prob_dist_p (ndarray or list): discrete probability distribution :math:`p`.
prob_dist_q (ndarray or list): discrete probability distribution :math:`q`.
alpha (float or int): order of the Rényi entropy.
base (float): the base of the log. Defaults to :math:`2`.
backend (:class:`qibo.backends.abstract.Backend`, optional): backend to be
used in the execution. If ``None``, it uses
:class:`qibo.backends.GlobalBackend`. Defaults to ``None``.
Returns:
float: Classical relative Rényi entropy :math:`H_{\\alpha}(\\mathbf{p} \\, \\| \\, \\mathbf{q})`.
"""
if backend is None: # pragma: no cover
backend = GlobalBackend()

if isinstance(prob_dist_p, list):
# np.float64 is necessary instead of native float because of tensorflow
prob_dist_p = backend.cast(prob_dist_p, dtype=np.float64)
if isinstance(prob_dist_q, list):
# np.float64 is necessary instead of native float because of tensorflow
prob_dist_q = backend.cast(prob_dist_q, dtype=np.float64)

if (len(prob_dist_p.shape) != 1) or (len(prob_dist_q.shape) != 1):
raise_error(
TypeError,
"Probability arrays must have dims (k,) but have "
+ f"dims {prob_dist_p.shape} and {prob_dist_q.shape}.",
)

if (len(prob_dist_p) == 0) or (len(prob_dist_q) == 0):
raise_error(TypeError, "At least one of the arrays is empty.")

if not isinstance(alpha, (float, int)):
raise_error(
TypeError, f"alpha must be type float, but it is type {type(alpha)}."
)

if alpha < 0.0:
raise_error(ValueError, "alpha must a non-negative float.")

if base <= 0:
raise_error(ValueError, "log base must be non-negative.")

if (any(prob_dist_p < 0) or any(prob_dist_p > 1.0)) or (
any(prob_dist_q < 0) or any(prob_dist_q > 1.0)
):
raise_error(
ValueError,
"All elements of the probability array must be between 0. and 1..",
)
if np.abs(np.sum(prob_dist_p) - 1.0) > PRECISION_TOL:
raise_error(ValueError, "First probability array must sum to 1.")

if np.abs(np.sum(prob_dist_q) - 1.0) > PRECISION_TOL:
raise_error(ValueError, "Second probability array must sum to 1.")

if alpha == 0.5:
return -2 * np.log2(np.sum(np.sqrt(prob_dist_p * prob_dist_q))) / np.log2(base)

if alpha == 1.0:
return classical_relative_entropy(
prob_dist_p, prob_dist_q, base=base, backend=backend
)

if alpha == np.inf:
return np.log2(max(prob_dist_p / prob_dist_q)) / np.log2(base)

prob_p = prob_dist_p**alpha
prob_q = prob_dist_q ** (1 - alpha)

return (1 / (alpha - 1)) * np.log2(np.sum(prob_p * prob_q)) / np.log2(base)
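A minimal usage sketch of the new function, assuming qibo is installed with these additions and the default backend is used; it checks the ``alpha = 1/2`` branch against the Bhattacharyya coefficient:

    import numpy as np

    from qibo.quantum_info import classical_relative_renyi_entropy

    prob_p = [0.5, 0.5]
    prob_q = [0.9, 0.1]

    # alpha = 1/2 reduces to -2 * log2 of the Bhattacharyya coefficient
    value = classical_relative_renyi_entropy(prob_p, prob_q, alpha=0.5)
    bhattacharyya = np.sum(np.sqrt(np.array(prob_p) * np.array(prob_q)))
    assert np.isclose(value, -2 * np.log2(bhattacharyya))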


def entropy(
state,
base: float = 2,
@@ -503,6 +603,101 @@ def renyi_entropy(state, alpha: Union[float, int], base: float = 2, backend=None
return (1 / (1 - alpha)) * log / np.log2(base)


def relative_renyi_entropy(
state, target, alpha: Union[float, int], base: float = 2, backend=None
):
if backend is None: # pragma: no cover
backend = GlobalBackend()

if (
(len(state.shape) >= 3)
or (len(state) == 0)
or (len(state.shape) == 2 and state.shape[0] != state.shape[1])
):
raise_error(
TypeError,
f"state must have dims either (k,) or (k,k), but have dims {state.shape}.",
)

if (
(len(target.shape) >= 3)
or (len(target) == 0)
or (len(target.shape) == 2 and target.shape[0] != target.shape[1])
):
raise_error(
TypeError,
f"target must have dims either (k,) or (k,k), but have dims {target.shape}.",
)

if not isinstance(alpha, (float, int)):
raise_error(
TypeError, f"alpha must be type float, but it is type {type(alpha)}."
)

if alpha < 0.0:
raise_error(ValueError, "alpha must a non-negative float.")

if base <= 0.0:
raise_error(ValueError, "log base must be non-negative.")

if (
abs(purity(state) - 1.0) < PRECISION_TOL
and abs(purity(target) - 1) < PRECISION_TOL
):
return 0.0

if alpha == 1.0:
return relative_entropy(state, target, base, backend=backend)

if alpha == np.inf:
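# alpha = inf branch: returns -2 * log(|| sqrt(state) @ sqrt(target) ||_1) in the chosen base;
# the matrix square roots come from an eigendecomposition on GPU backends and from scipy.linalg.sqrtm otherwise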
if backend.__class__.__name__ in ["CupyBackend", "CuQuantumBackend"]:
eigenvalues_state, eigenvectors_state = np.linalg.eigh(state)
new_state = np.zeros_like(state, dtype=complex)
new_state = backend.cast(new_state, dtype=new_state.dtype)
for eigenvalue, eigenstate in zip(
eigenvalues_state, np.transpose(eigenvectors_state)
):
new_state += np.sqrt(eigenvalue) * np.outer(
eigenstate, np.conj(eigenstate)
)

eigenvalues_target, eigenvectors_target = np.linalg.eigh(target)
new_target = np.zeros_like(target, dtype=complex)
new_target = backend.cast(new_target, dtype=new_target.dtype)
for eigenvalue, eigenstate in zip(
eigenvalues_target, np.transpose(eigenvectors_target)
):
new_target += np.sqrt(eigenvalue) * np.outer(
eigenstate, np.conj(eigenstate)
)
else:
from scipy.linalg import sqrtm # pylint: disable=C0415

# astype method needed because of tensorflow
new_state = sqrtm(state).astype("complex128")
new_target = sqrtm(target).astype("complex128")
new_state = backend.cast(new_state, dtype=new_state.dtype)
new_target = backend.cast(new_target, dtype=new_target.dtype)

log = np.log2(
backend.calculate_norm_density_matrix(new_state @ new_target, order=1)
)

return -2 * log / np.log2(base)

if len(state.shape) == 1:
state = np.outer(state, np.conj(state))

if len(target.shape) == 1:
target = np.outer(target, np.conj(target))

log = np.linalg.matrix_power(state, alpha)
log = log @ np.linalg.matrix_power(target, 1 - alpha)
log = np.log2(np.trace(log))

return (1 / (alpha - 1)) * log / np.log2(base)
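A minimal usage sketch of ``relative_renyi_entropy``, assuming qibo is installed with these additions and using ``random_density_matrix``, the random-state generator already available in ``qibo.quantum_info``:

    from qibo.quantum_info import random_density_matrix, relative_renyi_entropy

    state = random_density_matrix(4, seed=10)
    target = random_density_matrix(4, seed=11)

    # an integer alpha > 1 goes through the generic branch based on matrix powers
    value = relative_renyi_entropy(state, target, alpha=2)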


def entanglement_entropy(
state,
bipartition,
