-
Having looked at the paper briefly, this method seems conceptually interesting, but it does not jump out to me as something worthwhile from a more practical point of view. The main issue I have with the paper is that there are huge problems with the "GRAPE" that the method is being compared with:
I'm actually surprised the "GRAPE" used here converges at all, but it certainly invalidates any kind of meaningful comparison. I would think a proper GRAPE would converge within 100 iterations, so that does not bode well for the convergence of PEPR as shown in Fig. 2.

I would also point out that GRAPE provably has no traps (according to all of Rabitz's work), and I think that even extends to saddle points, at least for sufficiently small time steps, and as long as you don't add any constraints. That makes me think that the "barren plateau" shown here is just a result of an improperly used GRAPE (results from VQAs notwithstanding). Actually, it's the other way around: analytical parameterizations like the mode functions used here are implicit constraints, and those usually introduce traps in the optimization landscape.
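To illustrate what I mean, a minimal sketch (the sine basis here is just for illustration, not the paper's actual parameterization):

```julia
# A pulse built from a fixed set of "mode functions" (here, an assumed
# truncated sine basis) can only reach fields inside the span of that
# basis. That is an implicit constraint, in contrast to GRAPE's
# unconstrained piecewise-constant amplitudes.
mode(n, T) = t -> sin(n * π * t / T)
pulse(c, T) = t -> sum(c[n] * mode(n, T)(t) for n in eachindex(c))

ε = pulse([0.1, -0.3, 0.0, 0.2, 0.05], 10.0)  # 5 coefficients, T = 10
ε(2.5)  # field value at t = 2.5
```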
Beyond that, I'm also quite concerned about limitations of the method. Let me go by your summary of the method:

That seems bad. If I understand correctly, this is due to the functional in Eq. (3) being "hard-coded" into the method via Eqs. (7-9). That makes the method directly applicable only to state-to-state transitions. Then, to get a gate, you have to do stochastic sampling of that gate via random initial states. That's got to slow down convergence pretty dramatically! As a reminder, what we would like to have in general is an arbitrary functional of the propagated states.
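To be concrete, something like this (a hand-written sketch in the spirit of the functionals in QuantumControl.jl, not an actual API):

```julia
# Sketch: an arbitrary final-time functional over all propagated states.
# `Ψ_out` are the forward-propagated states and `targets` the target
# states; any scalar function of these should be allowed, instead of a
# hard-coded expectation value as in Eqs. (7-9) of the paper.
function J_T_custom(Ψ_out, targets)
    F = sum(abs2(tgt' * Ψ) for (Ψ, tgt) in zip(Ψ_out, targets))
    return 1 - F / length(targets)
end
```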
Fine
I think you mean the control operator B. By the way, what about non-linear dependencies on the control fields? That would be something we'd very much want to support: at the very least, Eq. (G2) in the Glossary.
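For reference, the kind of structure I mean is a generator of the form (my paraphrase from memory; check the Glossary for the exact Eq. (G2))

$$\hat{H}(t) = \hat{H}_0 + \sum_l a_l(\{\epsilon_{l'}(t)\}, t)\, \hat{H}_l\,,$$

where the control amplitudes $a_l$ may depend non-linearly on the control fields $\epsilon_{l'}(t)$.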
But the other thing is that I don't generally want to propagate density matrices. Of course, I can easily convert a pure state into a density matrix, so that is not a problem per se, but then...

We don't ever propagate operators. If anything, we propagate basis states spanning an operator. If we're talking about a state-to-state optimization of a pure state, that also seems like a pretty significant escalation in the required numerical resources. I'm also finding it hard to square that with the optimization in QuantumControl.jl being defined by a set of "trajectories". If we're doing a state-to-state optimization with PEPR, do we set up a single trajectory for that state? But then how do we communicate the basis for the operators? Conversely, if we specify the logical basis for the operators, how do we specify the initial state?
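For context, this is roughly how a state-to-state problem is set up today (schematic, with Ψ₀, Ψtgt, H, and tlist assumed to be defined; check the QuantumControl.jl docs for the exact signatures):

```julia
using QuantumControl: ControlProblem, Trajectory

# Schematic state-to-state setup; a PEPR backend would additionally
# need a basis spanning the operators, which has no natural slot here.
traj = Trajectory(Ψ₀, H; target_state=Ψtgt)
problem = ControlProblem([traj], tlist; J_T=J_T_custom)
```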
Fine, except I don't like having to promote the target to a density matrix, and of course the "fidelity" should be an arbitrary function.
This might be rather trivial, but if I choose arbitrary mode functions […]

Can we do a line-search in […]?

Also, another point from your notes:
Not the gradient! If it were, this method would just be an alternative way to calculate that exact gradient. As I said, we already have efficient methods for calculating exact gradients, but that at least would make the method interesting as an alternative to the Schirmer gradient.
At this time, though, given the apparent limitations, I wouldn't really spend time implementing it. Of course, I might have misunderstood something, and I might reach out to the authors for some clarification or discussion.
-
Hi, @goerz! Just wanted to have some chat around this recent paper and maybe its inclusion in the QuantumControl.jl ecosystem in the form of PEPR.jl, if feasible? I am excited to know your thoughts on this. Please feel free to respond after the workshop! No hurry.
The paper is titled "Pulse Engineering via Projection of Response Functions" (PEPR): https://arxiv.org/pdf/2404.10462. According to the paper, PEPR has better convergence than GRAPE, as the latter is susceptible to barren plateaus.
I have prepared this summary about PEPR; I hope it decently conveys the idea.
System and Probe

Consider a quantum system described by a Hamiltonian, H. An external influence, represented by a generalized force F(t), interacts with a specific system operator, B. This force serves as a probe to investigate the system's response.

Observable of Interest

We are interested in the effect of this probing force on a particular observable, A, of the system. The goal is to quantify the change in the expectation value of A due to the interaction with operator B, mediated by F(t).
Linear Response Theory

LRT posits that for weak perturbations (small F(t)), the system's response is linearly proportional to the strength of the probing force. This proportionality is captured by the linear susceptibility, χ_AB(t, t'). The susceptibility depends on the time (t) of observation and the time (t') of force application. LRT allows us to calculate the susceptibility using the commutator of the observable (A) and the control operator (B), both expressed in the interaction picture.
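For reference, the standard textbook form of the Kubo formula for this susceptibility (sign conventions vary; this is not copied from the paper):

$$\chi_{AB}(t, t') = \frac{i}{\hbar}\, \theta(t - t')\, \big\langle [A(t),\, B(t')] \big\rangle\,,$$

with A(t) and B(t') in the interaction picture and θ the Heaviside step function.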
Application to Control Optimization

We now connect LRT to the task of optimizing a control system. The objective is to determine the optimal control parameters that manipulate the system (via control operators) to achieve a desired outcome.
Leveraging LRT for Gradient Calculation
We designate a control operator, B_j, as the operator the force interacts with. We randomly select a time point, t_r, within the control time interval. The control operator is propagated to t_r within the interaction picture, and the target observable is likewise propagated to t_r. Subsequently, the commutator of the propagated control operator and the propagated target observable is taken. Finally, this commutator is propagated to the final control time. This step captures the response of the target observable to the control action at t_r.
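A minimal numerical sketch of this step, as I read it from the summary (not code from the paper; U(t0, t1) stands for a hypothetical time-ordered propagator of the full controlled dynamics):

```julia
using LinearAlgebra

function response_at(A, B_j, U, t_r, T)
    B_r = U(0, t_r)' * B_j * U(0, t_r)   # control operator at t_r
    A_r = U(0, t_r)' * A * U(0, t_r)     # target observable at t_r
    C = A_r * B_r - B_r * A_r            # commutator [A(t_r), B_j(t_r)]
    return U(t_r, T)' * C * U(t_r, T)    # propagated to the final time T
end
```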
PEPR Parameter Update Algorithm
The PEPR update algorithm utilizes the LRT-based gradient calculation for iterative control parameter optimization. In each iteration, the response is evaluated at a randomly sampled time point (t_r), and the parameters are updated via the projection of the response function (at t_r). This iterative process ensures convergence towards the optimal control parameters that achieve the desired outcome for the quantum system.
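Schematically, the outer loop as I understand it (all names here are mine, not the paper's; the update is written without a learning rate since the method is described as hyperparameter-free):

```julia
# Hypothetical outer loop; `response_gradient` stands in for the
# LRT-based projection step sketched earlier.
n_modes, max_iter, T = 5, 100, 10.0   # assumed problem sizes
response_gradient(c, t_r) = zero(c)   # placeholder for the actual step
c = zeros(n_modes)                    # mode-function coefficients
for iter in 1:max_iter
    t_r = T * rand()                  # random time point in [0, T]
    c .+= response_gradient(c, t_r)   # parameter update
end
```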
In summary, PEPR leverages the strengths of LRT to obtain an exact gradient for control parameter optimization. This leads to a more efficient and hyperparameter-free method compared to traditional techniques such as GRAPE that rely on approximations.
Source of slides: https://www.youtube.com/watch?v=MofRCwSMG8U&t=967s ("Optimal control and implementation of quantum algorithms", speaker: Ludwig Mathey, University of Hamburg)