We study the behavior of a universal combination of susceptibility and correlation length in the Ising model in two and three dimensions, in the presence of both magnetic and thermal perturbations, in the neighborhood of the critical point. In three dimensions we address the problem using a parametric representation of the equation of state. In two dimensions we make use of the exact integrability of the model along the thermal and the magnetic axes. Our results can be used as a sort of “reference frame” to chart the critical region of the model. While our results apply in principle to any realization of the Ising universality class, we address in particular, as specific examples, three instances of Ising behavior in finite temperature QCD, related in various ways to the deconfinement transition. In the last of these examples we study the critical end point in the finite density, finite temperature phase diagram of QCD. In this finite density framework, due to the well-known sign problem, Monte Carlo simulations are not possible, and thus a direct comparison of experimental results with quantum field theory and statistical mechanics predictions like the one we discuss in this paper may be important. Moreover, in this example it is particularly difficult to disentangle “magnetic-like” from “thermal-like” observables, and thus an explicit charting of the neighborhood of the critical point can be particularly useful.
Despite its apparent simplicity, the Ising model is one of the cornerstones of modern statistical mechanics. Over the years it has become a theoretical laboratory for testing new ideas, ranging from symmetry breaking to conformal field theories. Moreover, thanks to its exact solvability in two dimensions
The corresponding universality class, in the renormalization group sense
From a statistical mechanics point of view it represents the simplest way to describe systems with short-range interactions and a scalar order parameter (density or uniaxial magnetization) which undergo a symmetry breaking phase transition. From a quantum field theory (QFT) point of view it is the simplest example of a unitary conformal field theory (CFT)
Thanks to integrability, conformal perturbation, and bootstrap
The aim of this paper is to partially fill this gap by studying a suitable universal combination of thermodynamic quantities (see below for the precise definition) in the presence of both perturbing operators. In three dimensions we shall address the problem using a parametric representation of the equation of state
The universal combination that we shall study involves the magnetic susceptibility and thus our proposal is particularly effective when the model is characterized by an explicit
Thanks to universality, our results hold not only for the standard nearest neighbor Ising model, but also for any possible realization of the Ising universality class and in fact we shall use the high precision Monte Carlo estimates obtained from an improved version of the Ising model to benchmark and test our results
In particular, in the second part of the paper we shall concentrate on realizations in the context of high energy physics, suggested by the lattice regularization of QCD. We shall discuss three instances of Ising behavior in finite temperature QCD, related in various ways to the deconfinement transition. In the last of these examples we shall address the critical end point of finite density QCD. In this case, due to the well-known sign problem, Monte Carlo simulations are not possible, and thus a direct comparison of experimental results with QFT/statistical mechanics predictions like the one we discuss in this paper may be important.
This paper is organized as follows. Section
The Ising model has a global
From a QFT point of view, the model in the vicinity of the critical point can be written as a perturbed conformal field theory
Thanks to the exact integrability of the model for
In three dimensions there are no exact results but, thanks to the recent progress of the bootstrap approach and improvements in Monte Carlo methods, several universal quantities can be evaluated with very high precision.
The most important realization of this QFT is the spin Ising model on a cubic (in
The partition function of the model is
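For orientation, in the standard conventions for the nearest-neighbor Ising model in a magnetic field (our notation; normalizations may differ from those adopted in the paper), the partition function reads:

```latex
Z(\beta, h) \;=\; \sum_{\{s_i = \pm 1\}} \exp\!\Big(\beta \sum_{\langle ij \rangle} s_i s_j \;+\; h \sum_i s_i\Big)\,,
```

where the first sum in the exponent runs over nearest-neighbor pairs of the (hyper)cubic lattice.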
From the partition function defined above it is easy to obtain all the thermodynamic observables. In particular, following the standard notation we have for the magnetization and the magnetic susceptibility
In several practical applications it is also useful to consider the so-called second-moment correlation length, which is defined through the second moment of the spin-spin correlation function as
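As an illustration, the sketch below (a minimal toy, not the simulation setup of the paper) measures the magnetic susceptibility and a lattice second-moment correlation length in a small 2D Ising model with checkerboard Metropolis updates. It uses the standard lattice estimator xi = sqrt(chi/F - 1)/(2 sin(pi/L)), where F is the susceptibility at the smallest nonzero momentum; the lattice size, coupling, and statistics are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.35                 # high-temperature phase (beta_c ~ 0.4407 in 2D)
s = rng.choice([-1, 1], size=(L, L))

# checkerboard sublattice masks, so each half-sweep updates independent sites
parity = np.add.outer(np.arange(L), np.arange(L)) % 2
masks = [parity == 0, parity == 1]

def sweep(s):
    for m in masks:
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))
        dE = 2 * s * nb                              # energy cost of flipping each spin
        flip = (rng.random((L, L)) < np.exp(-beta * dE)) & m
        s[flip] *= -1

for _ in range(500):                                 # thermalization
    sweep(s)

chi_acc, F_acc, n = 0.0, 0.0, 0
phase = np.exp(1j * (2 * np.pi / L) * np.arange(L)) # smallest nonzero momentum
for _ in range(2000):                                # measurement sweeps
    sweep(s)
    M = s.sum()
    chi_acc += M * M / (L * L)                       # chi = <M^2>/V at h=0, T>Tc
    F_acc += abs((s * phase[:, None]).sum())**2 / (L * L)
    n += 1
chi, F = chi_acc / n, F_acc / n
xi = np.sqrt(max(chi / F - 1.0, 0.0)) / (2 * np.sin(np.pi / L))
print("chi =", chi, " xi =", xi)
```

The same estimators work in three dimensions with the obvious changes in the neighbor sums and the momentum projection.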
In the scaling limit the critical behavior of all thermodynamic quantities is controlled by the two “scaling exponents”
While this is a well studied subject when only a single perturbation is present, its extension to the whole scaling region of the model, where both the
The main goal of this paper is to show that such an extension can be easily obtained making use of a
While the parametric approach is completely general and could be applied in principle to any universal combination of thermodynamic quantities, in this paper we shall study in particular the following ratio,
This choice is motivated by the fact that the two observables entering the ratio are rather easy to evaluate, both in numerical simulations and in experiments: they involve only derivatives or correlations of the order parameter, and they are normalized with respect to their values along the critical line of first order phase transitions, which is easy to identify (again, both numerically and experimentally).
The main drawback of this choice is that it assumes an explicit realization of the
In the scaling region, when both relevant perturbations are present, all thermodynamic observables depend on a single scaling combination. Notice that our definition of
The three limits in which only one of the two perturbations is present (
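For reference, in standard scaling notation (our conventions, which need not match the paper's normalizations), the two observables can be written in terms of a single scaling variable:

```latex
\chi(t,h) = |t|^{-\gamma}\,\mathcal{F}^{\pm}_{\chi}(\eta), \qquad
\xi(t,h) = |t|^{-\nu}\,\mathcal{F}^{\pm}_{\xi}(\eta), \qquad
\eta \equiv \frac{h}{|t|^{\beta\delta}}\,,
```

with the two thermal limits corresponding to eta -> 0 at fixed sign of t (high- and low-temperature phases) and the magnetic limit to t = 0, where chi ~ |h|^(1/delta - 1) and xi ~ |h|^(-nu/(beta delta)).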
These values can be used as benchmarks to test the reliability of our estimates and as “anchors” of the reference frame we are constructing.
It is useful to introduce a parametric representation of the critical equation of state that not only satisfies the scaling hypothesis but also allows a simpler implementation of the analytic properties of the equation of state itself. Following
Calling
The key point of the whole analysis is the determination of the function
A similar parametric representation can be introduced also for the correlation length. Following
The constants
Truncating at the quadratic order Eqs.
Using the above results we may construct a parametric representation of
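A minimal numerical sketch of such a parametric representation is given below. It uses the classic Schofield-type "linear model" ansatz; the normalization constants and the zero of h(theta) are illustrative placeholders (not fitted values), and only the critical exponents are taken from the literature. The check at the end verifies that, by construction, the representation reproduces the critical-isotherm scaling M ~ h^(1/delta).

```python
import numpy as np

# 3D Ising exponents (approximate literature values)
beta, delta = 0.3265, 4.789
theta0_sq = 1.9                 # illustrative zero of h(theta), not a fitted value
m0, h0 = 1.0, 1.0               # non-universal normalizations (conventional)

def mapping(R, theta):
    """Schofield-type parametric representation, linear-model ansatz."""
    t = R * (1.0 - theta**2)
    h = h0 * R**(beta * delta) * theta * (1.0 - theta**2 / theta0_sq)
    M = m0 * R**beta * theta
    return t, h, M

# theta = 0: h = 0, t > 0 (symmetric phase); theta = 1: t = 0 (critical isotherm);
# theta = sqrt(theta0_sq): h = 0, t < 0 (coexistence curve)
for theta, label in [(0.0, "thermal axis t>0"),
                     (1.0, "critical isotherm"),
                     (np.sqrt(theta0_sq), "coexistence curve")]:
    t, h, M = mapping(1.0, theta)
    print(f"{label:18s} t={t:+.3f}  h={h:+.3f}  M={M:.3f}")

# check: along theta = 1 the representation gives M ~ h^(1/delta)
(_, h1, M1), (_, h2, M2) = mapping(1.0, 1.0), mapping(4.0, 1.0)
slope = np.log(M2 / M1) / np.log(h2 / h1)
print("effective exponent:", slope, "  1/delta =", 1 / delta)
```

Any universal combination built from chi and xi can then be evaluated along lines of constant theta, which interpolate smoothly between the three limiting axes.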
There is a maximum value of
Result for
In practical applications one may be interested in the expression of
Using the parametric representation of Eq.
Result for
In three dimensions we have
Inserting these values into Eq.
Using the known values of
This tells us that we can trust the parametric representation of
We find in particular
In two dimensions the situation is not as good as in
From this we find
In
Combining these quantities we obtain a precise estimate for
Result for
The coefficients are reported in Table
Expansion coefficients of
As expected, the three limiting cases
Among the many physical realizations of the Ising universality class, in this paper we decided to focus on three examples taken from high energy physics, in particular from the lattice regularization of QCD at finite temperature.
The most direct realization of the Ising universality class in lattice gauge theories (LGTs) is given by the deconfinement transition of pure gauge theories with a symmetry group The Polyakov loop is mapped to the spin ( The deconfining transition of the gauge model corresponds to the magnetization transition of the Ising model. In particular, the confining phase (low temperature of the gauge model) is mapped to the The Wilson action (i.e., the trace of the ordered product of the gauge field along the links of a plaquette) is mapped to the energy operator Actually it is mapped to the most general The screening mass of the gauge model in the deconfined phase is mapped to the mass of the Ising model in the low temperature phase, while
This mapping has been widely used in the past years to predict the behavior of various physical observables of the gauge model near the deconfinement transition, like for instance the short distance behavior of the Polyakov loop correlator
The situation is different if we study full QCD, i.e., if we include dynamical quarks in the model. In this case the center symmetry is explicitly broken by the Dirac operator and all the above considerations do not hold anymore. For physical values of the quark masses there is no phase transition between the high temperature quark-gluon plasma phase and the low temperature confined phase which are only separated by a smooth crossover
Columbia plot.
The phase diagram reported in the Columbia plot can be studied with standard Monte Carlo simulations, and in the vicinity of the critical lines the results of these simulations could be mapped using our tools to the Ising phase diagram and then compared with the Ising predictions as we discussed above for the simpler case of the pure gauge
It is important to notice that the two critical regions are of a different nature. The one in the top-right corner is a deconfinement transition similar to the one discussed in the previous section: in fact, in the limit of infinitely heavy quarks the model reduces to a pure gauge theory. In this limit the
The critical region in the bottom left portion of the Columbia plot has a completely different origin. It describes the restoration of the chiral symmetry in QCD at finite temperature and small quark masses. In QCD with three massless quark flavors the chiral phase transition is expected to be first order and to remain of first order even for small but nonzero values of the quark masses. As the quark masses increase, the gap in the order parameter decreases and the first order region terminates in a critical line of the Ising type. Above this line chiral symmetry is restored through a smooth crossover. In this case, the
Even if the precise location of the critical line is still debated, it seems that the physical point is not too far from this bottom-left Ising line. If this is the case, then our analysis could be applied, perhaps with some degree of approximation, also to the physical point.
The most interesting application of our results is for QCD at finite baryon density, which is realized by adding a finite chemical potential
In this regime the QCD phase diagram is expected to reveal interesting novel phases
QCD phase diagram at finite chemical potential.
Also for this model, as for the liquid-vapor transition (or the chiral transition discussed above), the
As more and more experimental results become available, it will be possible to test them directly against universal predictions of the Ising model, and a precise charting of the Ising phase diagram will be important to organize these results and drive our understanding of strongly coupled QCD in this regime. Our paper is a first step in this direction. We proposed and studied one particular combination, chosen for its simplicity from a theoretical point of view and its accessibility from an experimental and numerical point of view, but other combinations are possible and could be studied using, as we suggest here, parametric representations in
We thank C. Bonati, M. Hasenbusch, and M. Panero for useful comments and suggestions.
We list below the scaling behavior of the observables used in the main text:
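In case it helps the reader, the standard scaling laws for these observables, in a common convention (amplitude normalizations vary across the literature), are:

```latex
\begin{aligned}
M &\simeq B\,(-t)^{\beta}, & t<0,\ h=0, \\
\chi &\simeq \Gamma^{\pm}\,|t|^{-\gamma}, & h=0, \\
\xi &\simeq f^{\pm}\,|t|^{-\nu}, & h=0, \\
M &\simeq B_c\,|h|^{1/\delta}, & t=0, \\
\xi &\simeq f_c\,|h|^{-\nu_c}, & t=0, \qquad \nu_c = \nu/(\beta\delta).
\end{aligned}
```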
From these definitions it is possible to construct the following universal amplitude ratios:
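Typical universal combinations built from such amplitudes include, as an orientation for the reader (again in a standard convention),

```latex
\frac{\Gamma^{+}}{\Gamma^{-}}\,, \qquad \frac{f^{+}}{f^{-}}\,, \qquad
Q_2 = \Big(\frac{\Gamma^{+}}{\Gamma_c}\Big)\Big(\frac{f_c}{f^{+}}\Big)^{\gamma/\nu}\,,
```

where Gamma_c denotes the critical-isotherm susceptibility amplitude, chi ~ Gamma_c |h|^(1/delta - 1) at t = 0.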
In
In three dimensions there are no exact results but, thanks to the recent improvement of the bootstrap approach
It is also possible to choose realizations of the model in which the amplitude of the first irrelevant operator is tuned toward zero, thus allowing a more efficient approach to the fixed point. This is for instance the idea followed in
The scaling parameter
For instance, in the case of the 3D Ising model defined by Eqs.
From the parametric representation of the critical equation of state [Eq.
We report here the first few terms: in the
Plugging the above expansions into the expression for We report here only the first term for the three expansions to avoid too complex formulas; it is straightforward to obtain the next orders.
Upon substitution
We report here the expansions, in the three regimes of interest, for
Expansion coefficients of the magnetic susceptibility in the three regimes of interest, according to Eq.
Expansion coefficients of the correlation length in the three regimes of interest, according to Eq.
We analyze the parametric structure of twin Higgs (TH) theories and assess the gain in fine tuning which they enable compared to extensions of the standard model with colored top partners. Estimates show that, at least in the simplest realizations of the TH idea, the separation between the mass of new colored particles and the electroweak scale is controlled by the coupling strength of the underlying UV theory, and that a parametric gain is achieved only for strongly-coupled dynamics. Motivated by this consideration we focus on one of these simple realizations, namely composite TH theories, and study how well such constructions can reproduce electroweak precision data. The most important effect of the twin states is found to be the infrared contribution to the Higgs quartic coupling, while direct corrections to electroweak observables are subleading and negligible. We perform a careful fit to the electroweak data including the leading-logarithmic corrections to the Higgs quartic up to three loops. Our analysis shows that agreement with electroweak precision tests can be achieved with only a moderate amount of tuning, in the range 5%–10%, in theories where colored states have mass of order 3–5 TeV and are thus out of reach of the LHC. For these levels of tuning, larger masses are excluded by a perturbativity bound, which makes these theories possibly discoverable, hence falsifiable, at a future 100 TeV collider.
The principle of naturalness offers arguably the main motivation for exploring physics at around the weak scale. According to naturalness, the plausibility of specific parameter choices in quantum field theory must be assessed using symmetries and selection rules. When viewing the standard model (SM) as an effective field theory valid below a physical cutoff scale and considering only the known interactions of the Higgs boson, we expect the following corrections to its mass. We take
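For orientation, the textbook one-loop estimate of these corrections (up to scheme-dependent O(1) factors; not necessarily the normalization used here) is dominated by the top loop,

```latex
\delta m_h^2\big|_{\rm top} \;\sim\; -\,\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2\,,
\qquad
\Delta \;\sim\; \frac{|\delta m_h^2|}{m_h^2}\,,
```

so that an untuned Higgs mass (Delta of order one) requires the cutoff Lambda to lie not far above the weak scale.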
Indeed, for a given
Equation
It is important to recognize that the factor that boosts the mass of the states with SM gauge quantum numbers in Eq. The factor of two difference between the fine tuning
As already observed in the literature
The paper is organized as follows: in Sec.
In this section we outline the essential aspects of the TH mechanism. Up to details and variants which are not crucial for the present discussion, the TH scenario involves an exact duplicate,
Our basic assumption is that the SM and its twin emerge from a more fundamental Notice that the effective Higgs quartic receives approximately equal contributions from
Our estimate of
The ratio
The simplest option is given by models with For this naive estimate we have taken the twin-top contribution equal to the top one, so that the result is just twice the SM one. For a more precise statement see Sec.
It is perhaps too narrow-minded to stick rigidly to such estimates to determine the boost that
Concerning in particular composite TH scenarios one last important model building issue concerns the origin of the Higgs quartic
The second option corresponds to the structurally robust situation where
This option is a clever variant of the previous one, where below the scale
It should be said that in order to realize this scenario one would need to complete
In all the scenarios discussed so far the tuning of the Higgs vacuum expectation value (
In this section and in the remainder of the paper, we will focus on the CH realization of the TH, which belongs to the subhypersoft class of models. In this simple and well-motivated context we shall discuss EWPT, fine tuning, and structural consistency of the model.
Our basic structural assumption is that at a generic UV scale
The elementary and composite sectors are coupled according to the paradigm of partial compositeness
In order to proceed we now consider a specific realization of the composite TH and introduce a concrete simplified effective Lagrangian description of its dynamics. Our model captures the most important features of this class of theories, like the pNGB nature of the Higgs field, and provides at the same time a simple framework for the interactions between the elementary fields and the composite states, vectors and fermions. We make use of this effective model as an example of a specific scenario in which we can compute EW observables, and study the feasibility of the TH idea as a new paradigm for physics at the EW scale.
We write down an effective Lagrangian for the composite TH model using the Callan-Coleman-Wess-Zumino (CCWZ) construction
Before proceeding, we would like to recall the simplified model philosophy of Ref.
We start our analysis of the effective Lagrangian with the bosonic sector. Together with the elementary SM gauge bosons, the Notice that in the Lagrangian
We now introduce the Lagrangian for the fermionic sector. This depends on the choice of quantum numbers for the composite operators in Eq. Notice that in general we should introduce two different singlets in our Lagrangian. One corresponds to a full
The fermionic effective Lagrangian is split into three parts, which have the same meaning as the analogous distinctions we made for the bosonic sector of the theory:
The last term that we need to introduce in the effective Lagrangian describes the interactions between the vector and fermion resonances and originates completely in the composite sector. We have:
We conclude the discussion of our effective Lagrangian by clarifying its two-site model limit
In Sec.
Alternatively, the limit set by perturbativity on the UV interaction strength may also be estimated in the effective theory described by the nonlinear
Requiring that the process
As a third alternative, one could analyze when 1-loop corrections to a given observable become of the same order as its tree-level value. We applied this criterion to our simplified model by considering the
The perturbative limits obtained from Eqs.
As anticipated in the general discussion of Sec. Subleading
Below the scale
The Higgs effective action, including the leading Here
We have computed the IR contribution to the Higgs mass in a combined expansion in
The value of
IR contribution to the Higgs mass as a function of the scale
The plot of Fig.
In this section we compute the contribution of the new states described by our simplified model to the EWPO. Although it neglects the effects of the heavier resonances, our calculation is expected to give a fair assessment of the size of the corrections due to the full strong dynamics, and in particular to reproduce the correlations among different observables.
It is well known that, under the assumption of quark and lepton universality, short-distance corrections to the electroweak observables due to heavy new physics can be expressed in terms of four parameters, We define
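For reference, in the hatted conventions often used in this context (our sketch; the paper's precise definitions may differ), two of these oblique parameters are defined from the transverse vacuum-polarization amplitudes as

```latex
\hat{S} \;=\; \frac{g}{g'}\,\Pi'_{W_3 B}(0)\,, \qquad
\hat{T} \;=\; \frac{\Pi_{W_3 W_3}(0) - \Pi_{W^+ W^-}(0)}{m_W^2}\,,
```

with tree-level exchange of heavy vector resonances of mass m_rho typically contributing as \hat{S} ~ m_W^2/m_rho^2.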
We work at the 1-loop level and at leading order in the electroweak couplings and perform an expansion in inverse powers of the new physics scale. In this limit, the twin states do not affect the EWPO as a consequence of their being neutral under the SM gauge group. Deviations from the SM predictions arise only from heavy states with SM quantum numbers and are parametrically the same as in ordinary CH models with singlet Notice that
An additional contribution could in principle come from loops of the extra three “twin” NGBs contained in the coset
Since the effects from the twin sector can be neglected, the corrections to
The leading contribution to the We neglect for simplicity a contribution from the operator
Besides the UV threshold effects described above,
The total correction to the
Tree-level contributions to the
Further contributions to
The total contribution to the
In the limit of vanishing transferred momentum, tree-level corrections to
At the 1-loop level, corrections to
The second term in Eq. The operators
It is interesting that in our model the fermionic corrections to In this limit one has The existence of a similar sign correlation in the limit of a light (2,2) has been pointed out in the context of
Considering that no additional correction to
We are now ready to translate the prediction for the Higgs mass and the EWPO into bounds on the parameter space of our simplified model and for the composite TH in general. We are interested in quantifying the degree of fine tuning that our construction suffers when requiring the mass scale of the heavy fermions to lie above the ultimate experimental reach of the LHC. As discussed in Sec. Notice indeed that
Let us now describe the various pieces of our analysis. Consider first the Higgs potential, where the dependence on physics at the resonance mass scale is encapsulated in the function
The EWPO and the Higgs mass computed in the previous sections depend on several parameters, in particular on the mass spectrum of resonances (see Appendix
Let us now discuss the numerical bounds on the parameter space of our simplified model. They have been obtained by fixing the top and Higgs masses to their experimental value and performing the numerical fit described in Appendix Notice that because of our choice
Allowed regions in the
In the left panel of Fig.
In the right panel of Fig.
The results of Fig.
Allowed regions in the
In this paper we tried to assess how plausible a scenario yielding no new particles at the LHC can be obtained using the TH construction. We distinguished three possible classes of models: the subhypersoft, the hypersoft and the superhypersoft, with increasing degree of technical complexity and decreasing (technical) fine tuning. We then focused on the CH incarnation of the simplest option, the subhypersoft scenario, where the boost factor for the mass of colored partners [Eq.
Although EWPT work similarly in the CH and composite TH frameworks, the two are crucially different when it comes to contributions to the Higgs quartic. In the CH these are enhanced when
Finally, we comment on the classes of models not covered in this paper: the hypersoft and superhypersoft scenarios. The latter requires combining supersymmetry and compositeness with the TH mechanism, which, while logically possible, does not correspond to any existing construction. Such a construction would need to be rather ingenious, and we currently do not feel compelled to provide it, given the already rather epicyclic nature of the TH scenario. The simpler hypersoft scenario, though also clever, can by contrast be implemented in a straightforward manner, via, e.g., a tumbling
We would like to thank Andrey Katz, Alberto Mariotti, Kin Mimouni, Giuliano Panico, Diego Redigolo, Matteo Salvarezza, and Andrea Wulzer for useful discussions. The Swiss National Science Foundation partially supported the work of D. G. and R. R. under Contracts No. 200020-150060 and No. 200020-169696, the work of R. C. under Contract No. 200021-160190, and the work of R. T. under the Sinergia network CRSII2-160814. The work of R. C. was partly supported by the ERC Advanced Grant No. 267985
In this Appendix we define the generators of the
We start by listing the twenty-eight generators of
The spontaneous breaking of
Given the above symmetry breaking pattern, it is possible to define a LR parity,
The CCWZ variables
In this Appendix, we briefly discuss the mass matrices of the different charged sectors in the composite TH model and the related particle spectrum. We refer to Ref.
We start by considering the fields that do not have the right quantum numbers to mix with the elementary SM and twin quarks and whose mass is therefore independent of the mixing parameters
The remaining sectors have charge
As regards the sector of charge
Finally, we analyze the neutral sector of our model. It comprises eight fields, the twin top and bottom quarks, and six of the composite fermions contained in the
We conclude by noticing that the masses of the particles in different charged sectors are not unrelated to each other, but must be connected according to the action of the twin symmetry. In particular, it is obvious that the two singlets
In this Appendix we describe the computation of the Higgs effective potential and its RG improvement. First of all, the UV threshold correction can be computed with a standard Coleman-Weinberg (CW) procedure, from which we can easily derive the function
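As a toy illustration of the Coleman-Weinberg procedure (a sketch with invented numbers, not the model's actual spectrum), the snippet below builds the one-loop fermionic potential for a top and a twin top whose squared masses go like sin^2(h/f) and cos^2(h/f), and checks numerically that the resulting potential is invariant under the twin Z2 exchange h/f -> pi/2 - h/f:

```python
import numpy as np

Nc, y, f, mu = 3, 1.0, 1000.0, 500.0       # illustrative numbers, not a fit

def m2_top(h):
    return 0.5 * y**2 * f**2 * np.sin(h / f)**2

def m2_twin(h):
    return 0.5 * y**2 * f**2 * np.cos(h / f)**2

def V_cw(m2):
    """One-loop CW term for a Dirac fermion: n = 4*Nc degrees of freedom, minus sign."""
    m2 = np.maximum(m2, 1e-12)             # regulate the m -> 0 endpoint of the log
    return -(4 * Nc) / (64 * np.pi**2) * m2**2 * (np.log(m2 / mu**2) - 1.5)

def V(h):
    return V_cw(m2_top(h)) + V_cw(m2_twin(h))

h = np.linspace(0.01, np.pi * f / 2 - 0.01, 7)
# twin (Z2) symmetry: h/f -> pi/2 - h/f exchanges sin <-> cos and leaves V invariant
print(np.allclose(V(h), V(np.pi * f / 2 - h)))
```

Because the two contributions swap under the exchange, the quadratically sensitive sin^2 + cos^2 combination is h-independent, which is the mechanism protecting the Higgs potential in this class of models.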
The IR contribution to the Higgs mass can be organized using a joint expansion in
We present now a procedure for computing the aforementioned contributions to the Higgs mass based on the approach of Ref.
In order to compute the improved potential to
On integrating out the heavy degrees of freedom, the composite twin Higgs model at the scale
The relevant light degrees of freedom are the SM fermion doublet,
Since the background field calculation at leading-log order requires corrections to the fermion masses and Higgs wave function at only one loop, we can neglect in the effective Lagrangian at the scale In any case, these do not contribute to the effective potential at this order.
The Wilson coefficients at the scale
We now expand the effective Lagrangian in the background of the Higgs field
We can now compute the running masses of the top and twin top in the background:
We can now substitute Eqs.
A numerical determination of the IR contribution
In this Appendix we discuss the UV threshold contribution to
An adjoint of
In order to analyze such effect, we make use of an operator approach. We classify the operators that can be generated at the scale
The effective operators contributing to Additional structures constructed in terms of
In this Appendix we report the results of our calculation of the electroweak precision observables, in particular we collect here the explicit expression of the coefficients
Let us start considering the
The derivation of
Diagrams with a loop of fermions and NGBs contributing to the
The diagram contributing to
Finally, we report the contribution to
For our analysis of the electroweak observables we make use of the fit to the parameters
This Appendix contains details on the derivation of the perturbative limits discussed in Sec.
The perturbative limits are obtained by first expressing the scattering amplitudes in terms of components with definite One has
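As a reminder of the standard machinery (our conventions; the paper may normalize differently), the partial-wave amplitudes for a 2 -> 2 process and the perturbative-unitarity condition read

```latex
a_J(s) \;=\; \frac{1}{32\pi}\int_{-1}^{1} d\cos\theta\; P_J(\cos\theta)\,\mathcal{M}(s,\theta)\,,
\qquad
|\mathrm{Re}\, a_J| \;\le\; \frac{1}{2}\,.
```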
Let us consider first the case
We analyze now the constraint from the scattering in the antisymmetric representation,
We analyze the parametric structure of twin Higgs (TH) theories and assess the gain in fine tuning which they enable compared to extensions of the standard model with colored top partners. Estimates show that, at least in the simplest realizations of the TH idea, the separation between the mass of new colored particles and the electroweak scale is controlled by the coupling strength of the underlying UV theory, and that a parametric gain is achieved only for strongly-coupled dynamics. Motivated by this consideration we focus on one of these simple realizations, namely composite TH theories, and study how well such constructions can reproduce electroweak precision data. The most important effect of the twin states is found to be the infrared contribution to the Higgs quartic coupling, while direct corrections to electroweak observables are subleading and negligible. We perform a careful fit to the electroweak data including the leading-logarithmic corrections to the Higgs quartic up to three loops. Our analysis shows that agreement with electroweak precision tests can be achieved with only a moderate amount of tuning, in the range 5%–10%, in theories where colored states have mass of order 3–5 TeV and are thus out of reach of the LHC. For these levels of tuning, larger masses are excluded by a perturbativity bound, which makes these theories possibly discoverable, hence falsifiable, at a future 100 TeV collider.
The principle of naturalness offers arguably the main motivation for exploring physics at around the weak scale. According to naturalness, the plausibility of specific parameter choices in quantum field theory must be assessed using symmetries and selection rules. When viewing the standard model (SM) as an effective field theory valid below a physical cutoff scale and considering only the known interactions of the Higgs boson, we expect the following corrections to its mass. We take
Indeed, for a given
Equation
It is important to recognize that the factor that boosts the mass of the states with SM gauge quantum numbers in Eq. The factor of two difference between the fine tuning
As already observed in the literature
The paper is organized as follows: in Sec.
In this section we outline the essential aspects of the TH mechanism. Up to details and variants which are not crucial for the present discussion, the TH scenario involves an exact duplicate,
Our basic assumption is that the SM and its twin emerge from a more fundamental Notice that the effective Higgs quartic receives approximately equal contributions from
Our estimate of
The ratio
The simplest option is given by models with For this naive estimate we have taken the twin-top contribution equal to the top one, so that the result is just twice the SM one. For a more precise statement see Sec.
It is perhaps too narrow-minded to stick rigidly to such estimates to determine the boost that
Concerning in particular composite TH scenarios one last important model building issue concerns the origin of the Higgs quartic
The second option corresponds to the structurally robust situation where
This option is a clever variant of the previous one, where below the scale
It should be said that in order to realize this scenario one would need to complete
In all the scenarios discussed so far the tuning of the Higgs vacuum expectation value (
In this section and in the remainder of the paper, we will focus on the CH realization of the TH, which belongs to the subhypersoft class of models. In this simple and well-motivated context we shall discuss EWPT, fine tuning, and structural consistency of the model.
Our basic structural assumption is that at a generic UV scale
The elementary and composite sectors are coupled according to the paradigm of partial compositeness
In order to proceed we now consider a specific realization of the composite TH and introduce a concrete simplified effective Lagrangian description of its dynamics. Our model captures the most important features of this class of theories, like the pNGB nature of the Higgs field, and provides at the same time a simple framework for the interactions between the elementary fields and the composite states, vectors and fermions. We make use of this effective model as an example of a specific scenario in which we can compute EW observables, and study the feasibility of the TH idea as a new paradigm for physics at the EW scale.
We write down an effective Lagrangian for the composite TH model using the Callan-Coleman-Wess-Zumino (CCWZ) construction
Before proceeding, we would like to recall the simplified model philosophy of Ref.
We start our analysis of the effective Lagrangian with the bosonic sector. Together with the elementary SM gauge bosons, the Notice that in the Lagrangian
We now introduce the Lagrangian for the fermionic sector. This depends on the choice of quantum numbers for the composite operators in Eq. Notice that in general we should introduce two different singlets in our Lagrangian. One corresponds to a full
The fermionic effective Lagrangian is split into three parts, which have the same meaning as the analogous distinctions we made for the bosonic sector of the theory:
The last term that we need to introduce in the effective Lagrangian describes the interactions between the vector and fermion resonances and originates completely in the composite sector. We have:
We conclude the discussion of our effective Lagrangian by clarifying its two-site model limit
In Sec.
Alternatively, the limit set by perturbativity on the UV interaction strength may also be estimated in the effective theory described by the nonlinear
Requiring that the process
As a third alternative, one could analyze when 1-loop corrections to a given observable become of the same order as its tree-level value. We applied this criterion to our simplified model by considering the
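The loop-versus-tree criterion described here is essentially a naive dimensional analysis (NDA) estimate. As a schematic sketch (our own illustration with a standard NDA normalization, not the paper's precise computation; the function names are ours):

```python
import math

def loop_to_tree_ratio(g, n_species=1):
    """Naive one-loop / tree-level ratio for a coupling g with n_species
    states running in the loop: epsilon = N g^2 / (16 pi^2)."""
    return n_species * g**2 / (16 * math.pi**2)

def g_perturbative_max(n_species=1):
    """Coupling strength at which the estimated one-loop correction
    becomes as large as the tree-level value (epsilon = 1)."""
    return 4 * math.pi / math.sqrt(n_species)
```

For a single species this reproduces the familiar NDA bound g ≲ 4π; a larger multiplicity of resonances lowers the maximal coupling accordingly.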
The perturbative limits obtained from Eqs.
As anticipated in the general discussion of Sec.
Subleading
Below the scale
The Higgs effective action, including the leading
Here
We have computed the IR contribution to the Higgs mass in a combined expansion in
The value of
IR contribution to the Higgs mass as a function of the scale
The plot of Fig.
In this section we compute the contribution of the new states described by our simplified model to the EWPO. Although it neglects the effects of the heavier resonances, our calculation is expected to give a fair assessment of the size of the corrections due to the full strong dynamics, and in particular to reproduce the correlations among different observables.
It is well known that, under the assumption of quark and lepton universality, short-distance corrections to the electroweak observables due to heavy new physics can be expressed in terms of four parameters. We define
We work at the 1-loop level and at leading order in the electroweak couplings and perform an expansion in inverse powers of the new physics scale. In this limit, the twin states do not affect the EWPO as a consequence of their being neutral under the SM gauge group. Deviations from the SM predictions arise only from heavy states with SM quantum numbers and are parametrically the same as in ordinary CH models with singlet
Notice that
An additional contribution could in principle come from loops of the extra three “twin” NGBs contained in the coset
Since the effects from the twin sector can be neglected, the corrections to
The leading contribution to the
We neglect for simplicity a contribution from the operator
Besides the UV threshold effects described above,
The total correction to the
Tree-level contributions to the
Further contributions to
The total contribution to the
In the limit of vanishing transferred momentum, tree-level corrections to
At the 1-loop level, corrections to
The second term in Eq.
The operators
It is interesting that in our model the fermionic corrections to
In this limit one has
The existence of a similar sign correlation in the limit of a light (2,2) has been pointed out in the context of
Considering that no additional correction to
We are now ready to translate the prediction for the Higgs mass and the EWPO into bounds on the parameter space of our simplified model and for the composite TH in general. We are interested in quantifying the degree of fine tuning that our construction suffers when requiring the mass scale of the heavy fermions to lie above the ultimate experimental reach of the LHC. As discussed in Sec.
Notice indeed that
Let us now describe the various pieces of our analysis. Consider first the Higgs potential, where the dependence on physics at the resonance mass scale is encapsulated in the function
The EWPO and the Higgs mass computed in the previous sections depend on several parameters, in particular on the mass spectrum of resonances (see Appendix
Let us now discuss the numerical bounds on the parameter space of our simplified model. They have been obtained by fixing the top and Higgs masses to their experimental value and performing the numerical fit described in Appendix
Notice that because of our choice
Allowed regions in the
In the left panel of Fig.
In the right panel of Fig.
The results of Fig.
Allowed regions in the
In this paper we tried to assess how plausibly a scenario yielding no new particles at the LHC can be obtained using the TH construction. We distinguished three possible classes of models: the subhypersoft, the hypersoft and the superhypersoft, with increasing degree of technical complexity and decreasing (technical) fine tuning. We then focused on the CH incarnation of the simplest option, the subhypersoft scenario, where the boost factor for the mass of colored partners [Eq.
Although EWPT work similarly in the CH and composite TH frameworks, the two are crucially different when it comes to contributions to the Higgs quartic. In the CH these are enhanced when
Finally, we comment on the classes of models not covered in this paper: the hypersoft and superhypersoft scenarios. The latter requires combining supersymmetry and compositeness with the TH mechanism, which, while logically possible, does not correspond to any existing construction. Such a construction would need to be rather ingenious, and we currently do not feel compelled to provide it, given the already rather epicyclic nature of the TH scenario. The simpler hypersoft scenario, though also clever, can by contrast be implemented in a straightforward manner, via, e.g., a tumbling
We would like to thank Andrey Katz, Alberto Mariotti, Kin Mimouni, Giuliano Panico, Diego Redigolo, Matteo Salvarezza, and Andrea Wulzer for useful discussions. The Swiss National Science Foundation partially supported the work of D. G. and R. R. under Contracts No. 200020-150060 and No. 200020-169696, the work of R. C. under Contract No. 200021-160190, and the work of R. T. under the Sinergia network CRSII2-160814. The work of R. C. was partly supported by the ERC Advanced Grant No. 267985
In this Appendix we define the generators of the
We start by listing the twenty-eight generators of
The spontaneous breaking of
Given the above symmetry breaking pattern, it is possible to define a LR parity,
The CCWZ variables
In this Appendix, we briefly discuss the mass matrices of the different charged sectors in the composite TH model and the related particle spectrum. We refer to Ref.
We start by considering the fields that do not have the right quantum numbers to mix with the elementary SM and twin quarks and whose mass is therefore independent of the mixing parameters
The remaining sectors have charge
As regards the sector of charge
Finally, we analyze the neutral sector of our model. It comprises eight fields, the twin top and bottom quarks, and six of the composite fermions contained in the
We conclude by noticing that the masses of the particles in different charged sectors are not unrelated to each other, but must be connected according to the action of the twin symmetry. In particular, it is obvious that the two singlets
In this Appendix we describe the computation of the Higgs effective potential and its RG improvement. First of all, the UV threshold correction can be computed with a standard Coleman-Weinberg (CW) procedure, from which we can easily derive the function
The IR contribution to the Higgs mass can be organized using a joint expansion in
We present now a procedure for computing the aforementioned contributions to the Higgs mass based on the approach of Ref.
In order to compute the improved potential to
On integrating out the heavy degrees of freedom, the composite twin Higgs model at the scale
The relevant light degrees of freedom are the SM fermion doublet,
Since the background field calculation at leading-log order requires corrections to the fermion masses and Higgs wave function at only one loop, we can neglect in the effective Lagrangian at the scale
In any case, these do not contribute to the effective potential at this order.
The Wilson coefficients at the scale
We now expand the effective Lagrangian in the background of the Higgs field
We can now compute the running masses of the top and twin top in the background:
We can now substitute Eqs.
A numerical determination of the IR contribution
In this Appendix we discuss the UV threshold contribution to
An adjoint of
In order to analyze such effect, we make use of an operator approach. We classify the operators that can be generated at the scale
The effective operators contributing to Additional structures constructed in terms of
In this Appendix we report the results of our calculation of the electroweak precision observables, in particular we collect here the explicit expression of the coefficients
Let us start considering the
The derivation of
Diagrams with a loop of fermions and NGBs contributing to the
The diagram contributing to
Finally, we report the contribution to
For our analysis of the electroweak observables we make use of the fit to the parameters
This Appendix contains details on the derivation of the perturbative limits discussed in Sec.
The perturbative limits are obtained by first expressing the scattering amplitudes in terms of components with definite
One has
Let us consider first the case
We analyze now the constraint from the scattering in the antisymmetric representation,
f.meinert@physik.uni-stuttgart.de
Laser cooling of single atoms in optical tweezers is a prerequisite for neutral atom quantum computing and simulation. Resolved sideband cooling is a well-established method for efficient motional ground-state preparation, but typically requires careful cancellation of light shifts in so-called magic traps. Here, we study a novel laser cooling scheme which overcomes such constraints and applies when the ground state of a narrow cooling transition is trapped more strongly than the excited state. We demonstrate our scheme, which exploits sequential addressing of red sideband transitions via frequency chirping of the cooling light, at the example of
Quantum control of individual atoms trapped in optical tweezers has seen very rapid development in recent years
Large motional ground-state occupation is typically achieved using well-established sideband-resolved cooling protocols
In this paper, we demonstrate a method for motional ground-state cooling using the example of single trapped
We consider an atom with two internal electronic states
Chirp-cooling in state-dependent optical tweezers. (a) Radial tweezer potential for the
Such persistent cooling conditions no longer hold when the trapping potential is state-dependent (
Before we turn to the experimental results, we briefly analyze the chirp-cooling approach numerically. To this end, we compute the quantum dynamics of a harmonically confined and laser-coupled (Rabi frequency
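The full quantum simulation referenced above is beyond a short snippet, but the essential mechanism of sequential sideband addressing can be illustrated with a toy rate model (entirely our construction, not the authors' numerics; the transfer efficiency and level count are illustrative):

```python
import numpy as np

def chirp_cool(p, transfer=0.9, sweeps=1):
    """Toy rate model of chirp cooling: each downward frequency sweep
    crosses the red-sideband resonances n -> n-1 one after another
    (highest n first), and each crossing transfers a fraction `transfer`
    of that level's population one quantum down.  Recoil heating and
    off-resonant carrier excitation are ignored."""
    p = np.array(p, float)
    for _ in range(sweeps):
        for n in range(len(p) - 1, 0, -1):
            moved = transfer * p[n]
            p[n] -= moved
            p[n - 1] += moved
    return p

# Thermal initial distribution with mean occupation nbar = 3
nmax, nbar = 40, 3.0
n = np.arange(nmax)
p0 = (nbar / (1 + nbar)) ** n
p0 /= p0.sum()
p_final = chirp_cool(p0, transfer=0.9, sweeps=5)
```

Even this crude model shows the characteristic accumulation of population in the motional ground state after a few chirps, since every resonance crossing removes one quantum with high probability.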
Our experiments start with loading a single optical tweezer with wavelength
We start investigating the cooling dynamics on the transition to
The temperature after the ramp is measured via the release-and-recapture technique
Thermometry after chirp-cooling via release-and-recapture. (a) Atom-survival probability as a function of the release time
To extract a classical temperature
The minimal measured temperature is found close to the temperature-equivalent of the zero-point motion energy of the radial tweezer ground state
While the above results already provide evidence that the chirp-cooling method yields a large ground-state population, we complement the thermometry by resolved sideband spectroscopy on the
(a) Scheme for sideband thermometry via shelving into metastable states. The atom in the ground state
Extracting the ground-state population from our data thus requires fitting with a full numerical simulation of the spectroscopy sequence. To this end, we compute the dynamics of the trapped two-level atom density matrix with an initial thermal trap population as above. The population in
Finally, we note that our data analysis neglects possible shifts and broadening of the carrier signal due to axial temperature orthogonal to the probe beam direction. Such shifts can be present when probing at nonzero AC-Stark shift, due to the dependence of the radial carrier transition frequency on the axial motional state, and they allow us to infer an upper-limit estimate for the axial temperature. Indeed, since the observed linewidth is well compatible with our 1D analysis, we conclude that the axial temperature cannot be significantly higher than the measured radial temperature. This provides evidence that cooling acts simultaneously in the axial trap direction, revealing additional information that is not accessible from release-and-recapture.
Next, we study the dynamics of the atom number population in the tweezer during cooling. Most importantly, we find that chirp-cooling to low temperatures in the trap also causes light-induced losses which reliably remove pairs of atoms from the trap
Light-induced losses and parity projection during cooling. (a) Histograms of photon counts before (yellow) and after (blue) cooling on
Finally, we demonstrate the possibility to apply our chirp-cooling scheme to an array of multiple tweezers. To this end, we generate a one-dimensional line of ten equally spaced (
Chirp-cooling in a one-dimensional tweezer array. (a) Averaged fluorescence image of a line of ten tweezers (10
Exemplary release-and-recapture thermometry data for the two outermost and a central tweezer are shown in Figs.
In conclusion, we have demonstrated a novel, broadly applicable ground-state cooling method for trapped atoms in optical tweezer arrays, which substantially relaxes the constraints on magic trapping conditions for future experiments. This opens new routes for tweezer-based quantum technologies requiring trapping wavelengths that have so far been incompatible with efficient in-trap cooling, specifically in view of the rapid developments with alkaline-earth(like) atoms
We thank Johannes Zeiher, Jacob Covey, and the QRydDemo team for fruitful discussions. We acknowledge funding from the Federal Ministry of Education and Research (BMBF) under the grants CiRQus and QRydDemo, the Carl Zeiss Foundation via IQST, and the Vector Foundation. MSS acknowledges support by ONR Grant No. N00014-20-1-2513. This research was supported in part through the use of University of Delaware HPC Caviness and DARWIN computing systems.
Tweezer loading starts with the preparation of a six-beam blue magneto-optical trap (MOT) of
For generating tweezers, we employ a frequency-doubled fiber laser system providing 10 W output power at 540 nm (TOPTICA Photonics). We send the trapping light through a 2D acousto-optical deflector (AA Opto-Electronic DTSXY-400) before focusing into the MOT region with a high-NA (0.5) microscope objective (Mitutoyo G Plan Apo 50X) to a waist of
For single-atom detection, we induce fluorescence on the
The images are classified into three categories depending on the number
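Such a threshold classification of fluorescence images can be sketched as follows (the thresholds and structure here are illustrative placeholders, not the actual analysis parameters):

```python
def classify_counts(photon_counts, thr_empty, thr_single):
    """Three-way classification of fluorescence images by integrated
    photon counts: below thr_empty -> 0 atoms, between the thresholds
    -> 1 atom, above thr_single -> more than one atom.  In practice the
    threshold values would be read off the minima between the peaks of
    the photon-count histogram."""
    labels = []
    for c in photon_counts:
        if c < thr_empty:
            labels.append(0)      # empty trap
        elif c < thr_single:
            labels.append(1)      # single atom
        else:
            labels.append(2)      # more than one atom
    return labels
```

The reliability of each boundary is set by the overlap of the neighboring histogram peaks, which is why distinguishing one from several atoms is harder than distinguishing zero from one.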
To extract the survival probability in Fig.
Finally, we note that the accuracy in distinguishing between one and multiple atoms in the trap is much lower than that between zero and one atom, since the corresponding signals in the histograms [Fig.
In Fig.
More specifically, we consider the time-independent AC-Stark interaction Hamiltonian with an optical field
We evaluated the dynamic polarizabilities by solving the inhomogeneous equation in valence space
The wave functions are computed using the relativistic high-precision hybrid method that combines configuration interaction and coupled-cluster approaches (
The polarizability is given by
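The displayed formula was lost in extraction; a standard sum-over-states expression for the dynamic polarizability of a state |v⟩ (our reconstruction of the conventional formula; normalization conventions for the reduced matrix elements may differ from the authors') reads

```latex
\alpha_v(\omega) \;=\; \frac{2}{3(2J_v+1)} \sum_{k}
\frac{\langle k \| D \| v \rangle^{2}\,(E_k - E_v)}{(E_k - E_v)^{2} - \omega^{2}} ,
```

where D is the electric-dipole operator and the sum runs over intermediate states k.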
In this section, we provide details on the classical and quantum mechanical analysis of the release-and-recapture data shown in Fig.
This analysis does not capture the zero-point motion energy of the trapped atom, and the fitted classical temperature
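The classical release-and-recapture analysis can be sketched with a short Monte Carlo simulation (a minimal 1D model of our own; the atomic species, trap frequency, and depth below are illustrative assumptions, not the experimental parameters):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant (J/K)

def recapture_probability(T, t_release, omega_r, trap_depth, mass,
                          n_mc=20000, seed=0):
    """Classical 1D Monte Carlo sketch of release-and-recapture
    thermometry: sample thermal positions and velocities in a harmonic
    trap, propagate freely for t_release (gravity ignored), and count
    the atom as recaptured if its energy in the restored trap is below
    the trap depth."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(kB * T / mass) / omega_r, n_mc)
    v = rng.normal(0.0, np.sqrt(kB * T / mass), n_mc)
    x_after = x + v * t_release
    energy = 0.5 * mass * v**2 + 0.5 * mass * omega_r**2 * x_after**2
    return float(np.mean(energy < trap_depth))

# Illustrative numbers: an 88Sr atom at 20 uK in a trap with
# 100 kHz radial frequency and 1 mK depth.
m_sr = 87.906 * 1.66054e-27
p_0us = recapture_probability(20e-6, 0.0, 2 * np.pi * 100e3, kB * 1e-3, m_sr)
p_30us = recapture_probability(20e-6, 30e-6, 2 * np.pi * 100e3, kB * 1e-3, m_sr)
```

Fitting curves of this kind to the measured survival probability versus release time yields the classical temperature estimate; as noted in the text, zero-point motion is not captured by this purely classical model.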
In this section, we provide details of our method for describing the chirp-cooling and for modeling the sideband spectroscopy data. Our method is similar to the approach taken in Refs.
To derive the electronic Hamiltonian
The interaction
To incorporate the decay of the excited state with rate
To illustrate the chirp-cooling approach [Fig.
For modeling the sideband spectroscopy data, we extend our model by introducing an effective dark state. The sideband spectroscopy is a two-step process. First, we apply a probe pulse with Rabi frequency
Measurement of the probe Rabi frequency
Finally, we discuss briefly the independent measurement of
The study of non-Abelian Majorana zero modes advances our understanding of the fundamental physics in quantum matter and pushes the potential applications of such exotic states to topological quantum computation. It has been shown that in two-dimensional (2D) and 1D chiral superconductors, the isolated Majorana fermions obey non-Abelian statistics. However, Majorana modes in a
Majorana fermions, whose existence in solid-state materials appears to have left discernible footprints in recent experiments, are strange particles in a number of ways. First, they are their own antiparticles. Second, when in isolation from each other, they obey an interparticle rule different from that of ordinary fermions, called “non-Abelian statistics.” For example, swapping two isolated Majorana fermions in space transforms one quantum state of the solid system that hosts them into another, while the system remains rather robust against quantum noise. This second property actually lies at the heart of “topological quantum computing.”
A question of fundamental interest can be asked: What interparticle rule (statistics) do pairs of bound Majorana fermions obey? This fundamental question is motivated by what has been learned about topological superconductors—a new type of quantum matter. Topological superconductors with time-reversal symmetry are predicted to be able to host pairs of bound Majorana fermions, but the statistics of such bound pairs is far from clear. In fact, since two Majorana fermions can form a usual Dirac fermion, for a long time it was thought that such bound Majorana pairs should obey the usual statistics of normal fermions. In this theoretical paper, we show, for the first time, that a new type of non-Abelian statistics for Majorana-fermion pairs emerges in systems that satisfy certain symmetry conditions.
The time-reversal-invariant topological superconductors can be realized with heterostructure devices formed by quantum nanowires and conventional (
The search for exotic non-Abelian quasiparticles has been a focus of both theoretical and experimental studies in condensed matter physics, driven both by the exploration of fundamental physics and by the promising applications of such modes as building blocks for a fault-tolerant topological quantum computer
Recently, a new class of topological superconductors with time-reversal symmetry, referred to as a DIII-symmetry-class superconductor and classified by the
Given its practicability of realization, a fundamental question is: can the DIII-class topological superconductor be applied to topological quantum computation? The puzzle arises from the fact that braiding the end states in a DIII-class 1D superconductor always exchanges Majorana Kramers pairs rather than isolated Majorana modes. While braiding two pairs of Majoranas in chiral topological superconductors yields Abelian operations, in this work we show, interestingly, that braiding Majorana end states in DIII-class topological superconductors is non-Abelian due to the protection of time-reversal symmetry. We further unveil an intriguing phenomenon in the Josephson effect, namely that the periodicity of Josephson currents depends on the fermion parity of the superconducting state, which provides direct measurements of all topological qubit states in the DIII-class 1D superconductors.
The article is organized as follows. In Sec.
Several interesting proposals have been considered to realize DIII-class 1D topological superconductors, including the use of proximity effects of
The DIII-class 1D topological superconductor realized by depositing a conducting nanowire (NW) on the top of a noncentrosymmetric superconductor (SC). Both
A single-channel 1D conducting quantum wire, being put along the
The induced superconductivity in the wire can be obtained by integrating out the degrees of freedom of the superconductor substrate. We perform the integration in two steps. First, for the uniform noncentrosymmetric superconductor, we can determine its Green’s function
The logarithmic plot of the spectral function for the nanowire or noncentrosymmetric superconductor heterostructure. The dotted yellow curves show the bulk band structure of the nanowire system. The solid red areas (in the upper and lower positions of each panel) represent the bulk states of the substrate superconductor. (a) Topological regime with the chemical potential
(a) Two Majorana bound modes exist at each end of the nanowire in the topological regime with
It is interesting that the phase diagram in the nanowire does not depend on parameter details of the couplings between the nanowire and the substrate superconductor, and for
We note that the time-reversal symmetry is essential for the existence of the Majorana doublets in the topological phase. If time-reversal symmetry is broken, e.g., by introducing a Zeeman term
In this section, we show in detail that Majorana Kramers doublets obey non-Abelian statistics due to the protection of time-reversal symmetry. In the previous section, we have demonstrated that for the topological phase, at each end of the
Fermion parity measures whether the number of fermions in a quantum system is even or odd. Note that the non-Abelian statistics are properties of the ground-state subspace of a topological superconductor. We only need to consider the superconductor at zero temperature, in which case no thermally excited quasiparticles exist and the superconductor is characterized as a condensate. The fermion number of a ground state can only vary by pairs due to the presence of a pairing gap, which leads to fermion-parity conservation for the condensate. For the DIII-class 1D topological superconductor, we prove here a central result: by grouping all the fermionic modes into two sectors that are time-reversed partners of each other, the fermion parity is conserved for each sector of the condensate, not only for the entire system. It is straightforward to see that this result holds if the DIII-class topological superconductor is composed of two decoupled copies (e.g., corresponding to the spin-up and spin-down, respectively) of 1D chiral
According to the previous section, the proximity effect induces
It can be found that the coefficients
Adiabatic condition and fermion-parity conservation for each sector of the superconductor. (a),(b) The couplings
It is noteworthy that for a realistic system at finite temperature, quasiparticle poisoning may occur, which can change the fermion parity and lead to decoherence of the Majorana qubit states. At low temperature, the dominant effect in quasiparticle poisoning comes from single-electron tunneling between the nanowire and the substrate superconductor
To ensure that the decoherence effect induced by quasiparticle poisoning does not lead to serious problems, one requires that the adiabatic manipulation time for Majorana modes be much less than the decoherence time. For the DIII-class Majorana nanowires, the adiabatic time depends on two characteristic time scales. One is determined by the bulk gap
On the other hand, when a weak time-reversal-breaking term is present, e.g., in the presence of Zeeman couplings induced by a stray field, the decoherence effect may also result due to the couplings between qubit states with the same total fermion parity. Note that Majorana doublets in a DIII-class Majorana wire are of Ising type in the spin degree of freedom; thus, the time-reversal-breaking couplings can be induced by a stray field only along specific directions, depending on the concrete setup used in the experimental realization
Note that braiding Majorana end modes is not well defined for a single 1D nanowire and, as first recognized by Alicea
(a)–(d) Braiding Majorana end modes through gating a
The fermion-parity conservation for each sector shown in the above subsection implies that the exchange of Majorana end modes in a DIII-class topological superconductor generically reduces to two independent processes of braiding Majoranas of the two different sectors, respectively. The reason is that, first of all, adiabatically braiding the Majorana pairs, e.g.,
We consider two DIII-class wire segments with eight Majorana modes
Symmetry-protected non-Abelian statistics in a DIII-class 1D topological superconductor. (a) Majorana end modes
From the above discussion, we find that in the braiding, the Majorana modes
It is worthwhile to note that realizing a DIII-class superconductor requires no external magnetic field, which might be helpful for constructing a realistic Majorana network to implement braiding operations. In comparison, for the chiral topological superconductor observed in a spin-orbit-coupled semiconductor nanowire using the
It is important to study how to detect the topological qubit states in a DIII-class Majorana quantum wire. The ground states of a single DIII-class Majorana quantum wire include two even- (
We consider a Josephson junction illustrated in Fig.
Josephson measurement of the topological qubit states in a DIII-class 1D topological superconductor. (a) The sketch of a Josephson junction with phase difference
Redefining the Majorana bases by
It is remarkable that the currents for odd-parity states are of
In summary, we have shown that Majorana doublets obtained in the DIII-class 1D topological superconductors obey non-Abelian statistics, due to the protection of time-reversal symmetry. The key results are that the fermion parity is conserved for each copy of the
We appreciate the very helpful discussions with P. A. Lee, J. Alicea, L. Fu, Z.-X. Liu, A. Potter, Z.-C. Gu, M. Cheng, C. Wang, and X. G. Wen. The authors thank HKRGC for support through DAG12SC01, Grants No. 602813, No. 605512, and No. HKUST3/CRF/13G.
We consider a single Majorana quantum wire, which hosts four Majorana end modes denoted by
Note that the coupling between the Majorana modes localized at the same end of the nanowire
With the proximity-induced
In the topological regime, at each end of the wire, we obtain two Majorana zero modes that are transformed to each other by a time-reversal operator. In terms of the electron operators, these bound modes take the form
The wave functions of Majorana bound modes decay exponentially as a function of the distance from the end of the nanowire, multiplied by an oscillatory function with an oscillation period equal to the Fermi wavelength in the nanowire. This property implies that
Adiabatic condition and fermion-parity conservation for each sector of time-reversal partners in the presence of disorder scattering. The energy splitting
It is worthwhile to note that for a fixed parameter
Now, we study how to measure the topological qubit states with the Josephson effect. It has been predicted that in the chiral 1D topological superconductor, the Josephson current has
We consider a Josephson junction formed by two Majorana nanowire ends with a phase difference
Josephson effect in a DIII-class 1D topological superconductor with the inclusion of random disorder scattering. (a) The sketch of a Josephson junction with phase difference
On the other hand, for
To this end, we combine
The Hamiltonian
From Eq.
The
With the above results, we can have different strategies in the experiment to distinguish
Almheiri, Dong, and Harlow [
A deep link might exist between two seemingly disparate but far-reaching ideas in physics: quantum error correction and the holographic principle. Quantum error correction concerns using redundant encoding to protect quantum information from damage, and it has been studied extensively because of its relevance to reliable operation of noisy quantum computers. The holographic principle, meanwhile, states that all information about a volume of space can be encoded on the surface area of that volume, much as a hologram encodes a 3D image on a 2D surface. Recent research suggests that how our physical space is structured may also correspond to redundantly represented information. We further explore this connection between redundant information and geometry.
Specifically, we look at connections between quantum error-correcting codes and the “holographic correspondence,” which asserts that a suitably chosen quantum theory, without gravity, can be precisely equivalent to a theory of quantum gravity in a negatively curved spacetime. We analyze the properties of holographic quantum codes, quantum error-correcting codes that capture the essential features of the holographic correspondence. These codes provide an information-theoretic interpretation for physical notions such as points in space, black holes, and spacetime curvature.
Our work provides a new paradigm for designing quantum error-correction schemes and secret sharing codes, from which we expect many new constructions, and it also clarifies the information-theoretic foundations of the holographic correspondence.
Quantum error correction and the holographic principle are two of the most far-reaching ideas in contemporary physics. Quantum error correction provides a basis for believing that scalable quantum computers can be built and operated in the foreseeable future. The
The
To model holography faithfully, the quantum error-correcting code must have special properties that invite a geometrical interpretation. Code constructions that realize the ideas in Ref.
Our goal in this paper is to develop these ideas further. Our motivation is twofold. On one hand, holographic codes have opened a new avenue in quantum coding theory, and it is worthwhile to explore more deeply how geometric insights can provide new methods for deriving code properties. On the other hand, holographic codes provide a useful tool for sharpening the connections between holographic duality and quantum information theory. Specifically, as emphasized in Ref.
We view our work here as a step along the road toward answering a fundamental question about quantum gravity and holography: What is the bulk? The
In Sec.
In Sec.
In this section, we briefly review the principles of OAQEC
Since we formulate quantum error correction in an operator algebra framework, we begin by reviewing the structure of finite-dimensional von Neumann algebras. For a finite-dimensional complex Hilbert space
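The structure theorem invoked here can be written explicitly (standard notation, ours): in a suitable basis, every finite-dimensional von Neumann algebra decomposes into a direct sum of full matrix blocks tensored with identities,

```latex
\mathcal{H} \;\cong\; \bigoplus_{a} \, \mathbb{C}^{d_a} \otimes \mathbb{C}^{n_a},
\qquad
\mathcal{A} \;\cong\; \bigoplus_{a} \, M_{d_a}(\mathbb{C}) \otimes I_{n_a},
```

with commutant \(\mathcal{A}' \cong \bigoplus_a I_{d_a} \otimes M_{n_a}(\mathbb{C})\); the index a labels the superselection sectors discussed below.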
A nontrivial von Neumann algebra (with more than one summand) describes a quantum system with superselection sectors. We may regard
In OAQEC, we consider
The more general setting, with a nontrivial sum over
Another reason the OAQEC formalism is convenient in discussions of holography is that we can formulate the notion of complementary recovery
Quantum error correction is a way of protecting properly encoded quantum states from the potentially damaging effects of noise, provided the noise has suitable properties. The noise can be described by a completely positive trace-preserving map (CPTP map), also called a quantum channel. A channel is a linear map that takes density operators to density operators; saying that the channel is “completely” positive means that positivity is preserved even when the channel acts on a system that is entangled with other systems.
A channel
Equation
We consider a quantum system with Hilbert space
In the Heisenberg picture language, we may consider an algebra of logical operators that act on the code space. We denote the set of linear operators mapping
Sometimes, we are only interested in how a logical operator
Now we can formulate the notion of error correction in the Heisenberg picture.
(correctability). The noise channel
This means that the operator
In an important series of works
(criterion for correctability). Given code subspace
If
A noise channel of particular interest is the erasure channel. To define the erasure channel, we consider a decomposition of the Hilbert space
As for any noise channel, we say that the erasure channel (correctable subsystem). Given a code subspace
Whether
For the special case of erasure, the criterion for correctability in Theorem 1 simplifies. We may choose (criterion for correctability of a subsystem). Given code subspace
Thus, erasure of
If erasure of
To see why this reconstruction is possible, we may consider the dual
To understand Eq. (reconstruction). Given code subspace
In the standard theory of quantum error correction, we consider the physical Hilbert space
The distance
As in the standard theory, we assume the physical Hilbert space is uniformly factorizable, (distance). Given code subspace
If
For a given code (price). Given code subspace
As already noted, if region (complementarity). Given code subspace Consider a region
We may anticipate that if a region (no free lunch). Given code subspace Consider two logical operators
If
For a traditional subspace code, we may define the price of the code as the price of its complete logical algebra, just as we define the code’s distance to be the distance of its complete logical algebra. The price and distance of a code are constrained by an inequality, which can be derived from the subadditivity of von Neumann entropy. This constraint on price is a corollary to the following theorem. (constraint on correctable regions). Consider a code subspace Let To proceed, we use properties of the entropy (strong quantum Singleton bound). Consider a code subspace In Eq. (quantum Singleton bound). Consider a code subspace Combine Corollary 1 and Lemma 3. □
Because of its resemblance to the Singleton bound
This strong quantum Singleton bound constrains the distance and price of a traditional subspace code, and it is natural to wonder what we can say about similar constraints on the distance and price of a logical subalgebra. In Sec.
The
A puzzling feature of the correspondence is that a single bulk operator can be faithfully represented by a boundary operator in multiple ways. In a very insightful paper, Almheiri, Dong, and Harlow
In Ref.
From the perspective of quantum coding theory, holographic codes are a family of quantum codes in which logical degrees of freedom have a pleasing geometrical interpretation, and as emphasized in Ref.
The precise sense in which the low-energy sector of a CFT realizes a quantum code remains rather murky. But loosely speaking, the logical operators are CFT operators that map low-energy states to other low-energy states. Operators that are logically equivalent act on the low-energy states in the same way, but they act differently on the high-energy states that are outside the code space. The algebra of logical operators needs to be truncated because acting on a state with a product of too many logical operators may raise the energy too high, and thus, the resulting state leaves the code space.
-From the bulk point of view, there is a logical algebra
The holographic dictionary determines how the logical operator subalgebra supported on a region in the bulk (a set of logical bulk sites) can be mapped to an operator algebra supported on a corresponding region on the boundary (a set of physical boundary sites). The geometrical interpretation of this relation between the bulk and boundary operator algebras will be elaborated in the following subsections.
-For holographic codes, whether a specified subsystem of the physical Hilbert space
The entanglement wedge hypothesis can be formulated for dynamical spacetimes, but for our purposes, it will suffice to consider a special case. We consider a smooth Riemannian manifold (Minimal surface). Given a Riemannian manifold
Geometric notions of minimal surface and entanglement wedge. In each diagram, we highlight a boundary region
For the most part, we assume that the minimal surface
Now, we can define the entanglement wedge. (entanglement wedge). Given a boundary region
Note that under the uniqueness assumption for the minimal surface (geometric complementarity). Given a region
As we will see, this geometric statement, which holds for a generic manifold
For a holographic code, the entanglement wedge hypothesis states a sufficient condition for a boundary region to be correctable with respect to the logical subalgebra supported at a site in the bulk. Because of Lemma 2, this condition also informs us that the logical subalgebra can be reconstructed on the complementary boundary region. Evoking the continuum limit of the regulated bulk theory, we will sometimes refer to a bulk site as a point in the bulk, though it will be implicit that associated logical subalgebra is finite dimensional and slightly smeared in space. (entanglement wedge hypothesis). If the bulk point
This connection between holographic duality and operator algebra quantum error correction has many implications worth exploring.
-For a holographic code corresponding to a regulated boundary theory, there are a finite number of boundary sites, each describing a finite-dimensional subsystem. Thus, we can speak of the length
The entanglement wedge hypothesis has notable consequences for the logical subalgebra
Now consider the price of
Geometric complementarity says that (price equals distance for a point). For a holographic code, let
Thus, in a holographic code, the bound
We can extend this reasoning to a bulk region
We emphasize again that these properties apply not only to AdS bulk geometry but also to other quantum code constructions satisfying geometric complementarity and the entanglement wedge hypothesis. Such codes were constructed in Ref.
In quantum gravity, there is an upper limit on the dimension of the Hilbert space that can be encoded in a physical region known as the Bousso bound
This feature of bulk quantum gravity can be captured by holographic codes, rather crudely, if we allow punctures in the bulk. A subsystem of the code space
In the continuum limit, we associate the holographic code with a Riemannian bulk manifold
As we take the continuum limit
A holographic code without punctures obeys the celebrated Ryu-Takayanagi formula
One way to visualize the purifying reference system
The left diagram illustrates the thermofield double construction, in which a bulk manifold
The Ryu-Takayanagi formula relating entanglement entropy
Up until now, we have implicitly assumed that the holographic code provides an isometric embedding of the logical system
Necessary condition
For a holographic code with punctures, we may consider the logical subalgebra associated with a bulk region
For bulk region
The geometrical interpretations for price and distance of
Next, we discuss a general property of holographic codes defined on bulk manifolds with asymptotically uniform negative curvature, which we call uberholography. The essence of uberholography is that both the distance and price of a logical subalgebra scale sublinearly with the length
Though uberholography applies more generally, to be concrete, we consider the bulk to have a two-dimensional hyperbolic geometry with radius of curvature
For some bulk region
Let us try punching a hole in
Two possible geometries for the entanglement wedge
If we choose
Now we repeat this construction recursively. In each round of the procedure, we start with a disconnected region
The procedure halts when the connected components are reduced in size to the lattice spacing
We can also consider codes with punctures in the bulk. To be specific, suppose
The scaling
This figure illustrates uberholography for the case of a two-dimensional hyperbolic bulk geometry. The inner logical boundary is contained inside the entanglement wedge, shaded in blue, of a boundary region
It is interesting to compare this universal exponent
For a holographic code, consider (as in Sec.
The quantum Markov condition provides a criterion for local correctability
In fact, Eq.
To apply the Markov condition to our holographic setting, consider a holographic code with no punctures, where the state of the physical boundary is pure. We choose
Using the Ryu-Takayanagi formula, this statement Eq.
We may also consider the case of a manifold with punctures, where the logical boundary
It is also notable that if the Markov condition Eq.
While holographic codes based on tensor networks can successfully reproduce the Ryu-Takayanagi relation satisfied by the von Neumann entanglement entropy of the boundary theory
The criterion for local correctability is satisfied by generic negatively curved bulk geometries but not by bulk geometries that are flat or positively curved. Consider, for example, a Euclidean two-dimensional disk with unit radius. For an interval
The failure of local correctability for holographic codes associated with flat and positively curved bulk manifolds suggests that, in these cases, the physics of the boundary system is highly nonlocal; in particular, the boundary state is not likely to be the ground state of a local Hamiltonian. This conclusion is reinforced by the observation that, according to the Ryu-Takayanagi formula, the entanglement entropy of a small connected region on the boundary of a flat ball scales linearly with the boundary volume of the region; this strong violation of the entanglement area law would not be expected in the ground state if the Hamiltonian is local.
-That flat bulk geometry implies nonlocal boundary physics also teaches us a valuable lesson about
This nonlocality of boundary physics is even more pronounced for holographic codes defined on positively curved manifolds. Consider the extreme case of a two-dimensional hemisphere
For
If we regulate bulk and boundary by introducing a lattice spacing
As noted in Sec.
In Sec.
We consider the case of a holographic code with punctures in the bulk; hence, there is a physical boundary
We consider a region
In what follows, for the sake of clarity, we denote the minimal surface associated with boundary region
Let
We may consider gradually “growing” a boundary region from
Now recall that
Now we can use the property that two complementary boundary regions share the same minimal bulk surface (where by the “complement” we mean the boundary complement rather than the physical complement; that is, we are simultaneously taking the complement with respect to the logical and physical boundaries). Let us denote by (holographic strong quantum Singleton bound). Consider a holographic code with logical boundary
It is intriguing that we used strong subadditivity of entropy in this holographic proof, which applies to logical subalgebras, while the proof of Corollary 1, which applies to the price and distance of a traditional code subspace, used only subadditivity. We have not found a proof of the strong quantum Singleton bound that applies to logical subalgebras and that does not use holographic reasoning; it is an open question whether Eq.
Our studies of holographic codes have only scratched the surface of this subject. More in-depth studies are needed, including searches, guided by geometrical intuition, for codes with improved parameters and investigations of the efficiency of decoding.
-Regarding the implications of holographic codes for quantum gravity, we have uncovered several hints that may help steer future research. We have seen that positive curvature of the bulk manifold can improve properties such as the code distance but at a cost—increasing distance is accompanied by enhanced nonlocality of the boundary system. The observation that the logical algebra of a bulk point has price equal to distance is a step toward characterizing bulk geometry using algebraic ideas, and we anticipate further advances in that direction. Uberholography, in bulk spacetimes with asymptotically negative curvature, illustrates how notions from quantum coding can elucidate the emergence of bulk geometry beyond the appearance of just one extra spatial dimension.
-Following Refs.
We are encouraged by recent progress connecting quantum error correction and quantum gravity, but much remains unclear. Most obviously, our discussion of the entanglement wedge and bulk reconstruction applies only to static spacetimes or very special spatial slices through dynamical spacetimes. Applying the principles of quantum coding to more general dynamical spacetimes is an important goal, which poses serious unresolved challenges.
-F. P. would like to thank Nicolas Delfosse, Henrik Wilming, and Jens Eisert for helpful discussions and comments. F. P. gratefully acknowledges funding provided by the Institute for Quantum Information and Matter, a NSF Physics Frontiers Center, with support from the Gordon and Betty Moore Foundation, as well as the Simons Foundation through the It from Qubit program and the FUB through the ERC project (TAQ). This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1125915.
-In Sec.
By a concatenated code, we mean a recursive hierarchy of codes within codes; these can be constructed in many ways. In the simplest case, we consider an
Suppose that
A more complicated recursive encoding scheme, based on an
A recursive coding network to which Eqs.
We may also consider the price and distance of holographic tensor network codes, which capture some of the features of full-blown
A nontrivial scaling exponent for the distance
A deep link might exist between two seemingly disparate but far-reaching ideas in physics: quantum error correction and the holographic principle. Quantum error correction uses redundant encoding to protect quantum information from damage, and it has been studied extensively because of its relevance to the reliable operation of noisy quantum computers. The holographic principle, meanwhile, states that all the information in a volume of space can be encoded on the boundary surface of that volume, much as a hologram encodes a 3D image on a 2D surface. Recent research suggests that the structure of physical space itself may likewise correspond to redundantly represented information. We further explore this connection between redundant information and geometry.
-Specifically, we look at connections between quantum error-correcting codes and the “holographic correspondence,” which asserts that a suitably chosen quantum theory, without gravity, can be precisely equivalent to a theory of quantum gravity in a negatively curved spacetime. We analyze the properties of holographic quantum codes, quantum error-correcting codes that capture the essential features of the holographic correspondence. These codes provide an information-theoretic interpretation for physical notions such as points in space, black holes, and spacetime curvature.
-Our work provides a new paradigm for designing quantum error-correction schemes and secret sharing codes, from which we expect many new constructions, and it also clarifies the information-theoretic foundations of the holographic correspondence.
-Quantum error correction and the holographic principle are two of the most far-reaching ideas in contemporary physics. Quantum error correction provides a basis for believing that scalable quantum computers can be built and operated in the foreseeable future. The
The
To model holography faithfully, the quantum error-correcting code must have special properties that invite a geometrical interpretation. Code constructions that realize the ideas in Ref.
Our goal in this paper is to develop these ideas further. Our motivation is twofold. On one hand, holographic codes have opened a new avenue in quantum coding theory, and it is worthwhile to explore more deeply how geometric insights can provide new methods for deriving code properties. On the other hand, holographic codes provide a useful tool for sharpening the connections between holographic duality and quantum information theory. Specifically, as emphasized in Ref.
We view our work here as a step along the road toward answering a fundamental question about quantum gravity and holography: What is the bulk? The
In Sec.
In Sec.
In this section, we briefly review the principles of OAQEC
Since we formulate quantum error correction in an operator algebra framework, we begin by reviewing the structure of finite-dimensional von Neumann algebras. For a finite-dimensional complex Hilbert space
A nontrivial von Neumann algebra (with more than one summand) describes a quantum system with superselection sectors. We may regard
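The direct-sum structure just described can be made concrete in a small numerical sketch. The following toy example (ours, not from the text) represents a finite-dimensional von Neumann algebra as block-diagonal matrices and checks two defining features: closure under products, and the existence of central elements that label superselection sectors.

```python
import numpy as np

# Toy model: a finite-dimensional von Neumann algebra as a direct sum of
# two full matrix blocks, A = M_2 (+) M_3, acting on C^5.
def embed(blocks):
    """Embed a list of square matrices as one block-diagonal operator."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n), dtype=complex)
    i = 0
    for b in blocks:
        d = b.shape[0]
        out[i:i+d, i:i+d] = b
        i += d
    return out

rng = np.random.default_rng(0)
x = embed([rng.standard_normal((2, 2)), rng.standard_normal((3, 3))])
y = embed([rng.standard_normal((2, 2)), rng.standard_normal((3, 3))])

# Closure: products of block-diagonal operators stay block-diagonal,
# so the off-diagonal corner blocks of x @ y vanish identically.
assert np.allclose((x @ y)[:2, 2:], 0) and np.allclose((x @ y)[2:, :2], 0)

# A central element (a superselection "charge") is a scalar on each block;
# it commutes with every element of the algebra.
z = embed([2.0 * np.eye(2), 5.0 * np.eye(3)])
assert np.allclose(z @ x, x @ z)
```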
In OAQEC, we consider
The more general setting, with a nontrivial sum over
Another reason the OAQEC formalism is convenient in discussions of holography is that we can formulate the notion of complementary recovery
Quantum error correction is a way of protecting properly encoded quantum states from the potentially damaging effects of noise with suitable properties. The noise can be described by a completely positive trace-preserving map (CPTP map), also called a quantum channel. A channel is a linear map that takes density operators to density operators; saying that the channel is “completely” positive means that the positivity of the density operator is preserved even when the channel acts on a system that is entangled with other systems.
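As a minimal illustration of these definitions, the sketch below (standard textbook material, not specific to this paper) builds a depolarizing channel from Kraus operators, checks trace preservation, and applies the extended channel to half of an entangled pair to exhibit complete positivity.

```python
import numpy as np

# Depolarizing channel with strength p, given by four Kraus operators.
p = 0.3
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
kraus = [np.sqrt(1 - 3*p/4) * I2] + [np.sqrt(p/4) * P for P in (X, Y, Z)]

# Trace preservation: sum_k K_k^dag K_k = I.
assert np.allclose(sum(K.conj().T @ K for K in kraus), I2)

# Complete positivity in action: extend the channel as (id (x) N) acting on
# one half of a Bell state entangled with a reference system.
bell = np.zeros(4, dtype=complex); bell[0] = bell[3] = 1/np.sqrt(2)
rho = np.outer(bell, bell.conj())
out = sum(np.kron(I2, K) @ rho @ np.kron(I2, K).conj().T for K in kraus)

assert np.isclose(np.trace(out).real, 1.0)        # trace preserved
assert np.all(np.linalg.eigvalsh(out) > -1e-12)   # output remains positive
```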
A channel
Equation
We consider a quantum system with Hilbert space
In the Heisenberg picture language, we may consider an algebra of logical operators that act on the code space. We denote the set of linear operators mapping
Sometimes, we are only interested in how a logical operator
Now we can formulate the notion of error correction in the Heisenberg picture.

(correctability). The noise channel
This means that the operator
In an important series of works

(criterion for correctability). Given code subspace
If
A noise channel of particular interest is the erasure channel. To define the erasure channel, we consider a decomposition of the Hilbert space
As for any noise channel, we say that the erasure channel

(correctable subsystem). Given a code subspace
Whether
For the special case of erasure, the criterion for correctability in Theorem 1 simplifies. We may choose

(criterion for correctability of a subsystem). Given code subspace
Thus, erasure of
If erasure of
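For a concrete check of erasure correctability, the following sketch applies the Knill-Laflamme criterion to the [[4,2,2]] code (our illustrative choice; neither the specific code nor the numerical check is taken from the text): erasure of a known qubit is correctable exactly when every operator supported on the erased site acts as a multiple of the identity on the code space.

```python
import numpy as np
from itertools import product

def ket(bits):
    v = np.zeros(16); v[int(bits, 2)] = 1.0
    return v

# Logical basis of the [[4,2,2]] code.
codewords = [
    (ket('0000') + ket('1111')) / np.sqrt(2),
    (ket('0011') + ket('1100')) / np.sqrt(2),
    (ket('0101') + ket('1010')) / np.sqrt(2),
    (ket('0110') + ket('1001')) / np.sqrt(2),
]

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

# Erasure of qubit 0 is correctable iff <i| E^dag F |j> = c_EF * delta_ij
# for all single-qubit operators E, F supported on the erased site.
for E, F in product(paulis, repeat=2):
    op = np.kron(E.conj().T @ F, np.eye(8))   # acts on qubit 0 only
    M = np.array([[ci.conj() @ (op @ cj) for cj in codewords]
                  for ci in codewords])
    assert np.allclose(M, M[0, 0] * np.eye(4)), "KL condition violated"

print("erasure of one qubit is correctable for the [[4,2,2]] code")
```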
To see why this reconstruction is possible, we may consider the dual
To understand Eq.

(reconstruction). Given code subspace
In the standard theory of quantum error correction, we consider the physical Hilbert space
The distance
As in the standard theory, we assume the physical Hilbert space is uniformly factorizable,

(distance). Given code subspace
If
For a given code

(price). Given code subspace
As already noted, if region

(complementarity). Given code subspace

Consider a region
We may anticipate that if a region

(no free lunch). Given code subspace

Consider two logical operators
If
For a traditional subspace code, we may define the price of the code as the price of its complete logical algebra, just as we define the code’s distance to be the distance of its complete logical algebra. The price and distance of a code are constrained by an inequality, which can be derived from the subadditivity of von Neumann entropy. This constraint on price is a corollary to the following theorem.

(constraint on correctable regions). Consider a code subspace

Let

To proceed, we use properties of the entropy

(strong quantum Singleton bound). Consider a code subspace

In Eq.

(quantum Singleton bound). Consider a code subspace

Combine Corollary 1 and Lemma 3. □
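The bound can be sanity checked numerically. The snippet below (using parameters of well-known codes; the examples are ours) verifies k ≤ n − 2(d − 1) and notes which codes saturate it.

```python
# Quick numerical sanity check of the quantum Singleton bound
# k <= n - 2(d - 1), using parameters of well-known codes.
known_codes = {            # name: (n, k, d)
    "[[5,1,3]]": (5, 1, 3),   # five-qubit code
    "[[7,1,3]]": (7, 1, 3),   # Steane code
    "[[4,2,2]]": (4, 2, 2),   # smallest error-detecting code
}
for name, (n, k, d) in known_codes.items():
    assert k <= n - 2 * (d - 1), name

# The five-qubit and [[4,2,2]] codes saturate the bound:
assert known_codes["[[5,1,3]]"][1] == 5 - 2 * (3 - 1)
assert known_codes["[[4,2,2]]"][1] == 4 - 2 * (2 - 1)
```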
Because of its resemblance to the Singleton bound
This strong quantum Singleton bound constrains the distance and price of a traditional subspace code, and it is natural to wonder what we can say about similar constraints on the distance and price of a logical subalgebra. In Sec.
The
A puzzling feature of the correspondence is that a single bulk operator can be faithfully represented by a boundary operator in multiple ways. In a very insightful paper, Almheiri, Dong, and Harlow
In Ref.
From the perspective of quantum coding theory, holographic codes are a family of quantum codes in which logical degrees of freedom have a pleasing geometrical interpretation, and as emphasized in Ref.
The precise sense in which the low-energy sector of a CFT realizes a quantum code remains rather murky. But loosely speaking, the logical operators are CFT operators that map low-energy states to other low-energy states. Operators that are logically equivalent act on the low-energy states in the same way, but they act differently on the high-energy states that are outside the code space. The algebra of logical operators needs to be truncated because acting on a state with a product of too many logical operators may raise the energy too high, and thus, the resulting state leaves the code space.
From the bulk point of view, there is a logical algebra
The holographic dictionary determines how the logical operator subalgebra supported on a region in the bulk (a set of logical bulk sites) can be mapped to an operator algebra supported on a corresponding region on the boundary (a set of physical boundary sites). The geometrical interpretation of this relation between the bulk and boundary operator algebras will be elaborated in the following subsections.
For holographic codes, whether a specified subsystem of the physical Hilbert space
The entanglement wedge hypothesis can be formulated for dynamical spacetimes, but for our purposes, it will suffice to consider a special case. We consider a smooth Riemannian manifold

(Minimal surface). Given a Riemannian manifold
Geometric notions of minimal surface and entanglement wedge. In each diagram, we highlight a boundary region
For the most part, we assume that the minimal surface
Now, we can define the entanglement wedge.

(entanglement wedge). Given a boundary region
Note that under the uniqueness assumption for the minimal surface

(geometric complementarity). Given a region
As we will see, this geometric statement, which holds for a generic manifold
For a holographic code, the entanglement wedge hypothesis states a sufficient condition for a boundary region to be correctable with respect to the logical subalgebra supported at a site in the bulk. Because of Lemma 2, this condition also informs us that the logical subalgebra can be reconstructed on the complementary boundary region. Evoking the continuum limit of the regulated bulk theory, we will sometimes refer to a bulk site as a point in the bulk, though it will be implicit that the associated logical subalgebra is finite dimensional and slightly smeared in space.

(entanglement wedge hypothesis). If the bulk point
This connection between holographic duality and operator algebra quantum error correction has many implications worth exploring.
For a holographic code corresponding to a regulated boundary theory, there are a finite number of boundary sites, each describing a finite-dimensional subsystem. Thus, we can speak of the length
The entanglement wedge hypothesis has notable consequences for the logical subalgebra
Now consider the price of
Geometric complementarity says that

(price equals distance for a point). For a holographic code, let
Thus, in a holographic code, the bound
We can extend this reasoning to a bulk region
We emphasize again that these properties apply not only to AdS bulk geometry but also to other quantum code constructions satisfying geometric complementarity and the entanglement wedge hypothesis. Such codes were constructed in Ref.
In quantum gravity, there is an upper limit on the dimension of the Hilbert space that can be encoded in a physical region known as the Bousso bound
This feature of bulk quantum gravity can be captured by holographic codes, rather crudely, if we allow punctures in the bulk. A subsystem of the code space
In the continuum limit, we associate the holographic code with a Riemannian bulk manifold
As we take the continuum limit
A holographic code without punctures obeys the celebrated Ryu-Takayanagi formula
One way to visualize the purifying reference system
The left diagram illustrates the thermofield double construction, in which a bulk manifold
The Ryu-Takayanagi formula relating entanglement entropy
Up until now, we have implicitly assumed that the holographic code provides an isometric embedding of the logical system
Necessary condition
For a holographic code with punctures, we may consider the logical subalgebra associated with a bulk region
For bulk region
The geometrical interpretations for price and distance of
Next, we discuss a general property of holographic codes defined on bulk manifolds with asymptotically uniform negative curvature, which we call uberholography. The essence of uberholography is that both the distance and price of a logical subalgebra scale sublinearly with the length
Though uberholography applies more generally, to be concrete, we consider the bulk to have a two-dimensional hyperbolic geometry with radius of curvature
For some bulk region
Let us try punching a hole in
Two possible geometries for the entanglement wedge
If we choose
Now we repeat this construction recursively. In each round of the procedure, we start with a disconnected region
The procedure halts when the connected components are reduced in size to the lattice spacing
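The recursion can be simulated directly. In the sketch below we assume, purely for illustration, that each round replaces a connected component of size ℓ by two components of size ρℓ with ρ = √2 − 1, the marginal ratio that follows from logarithmic geodesic lengths in the hyperbolic plane; under this assumption the surviving boundary length scales as ℓ^α with α = log 2 / log(1 + √2) ≈ 0.786.

```python
import math

# Assumed parametrization: each round keeps two components of size rho * L.
rho = math.sqrt(2) - 1

def surviving_length(L, cutoff):
    """Total boundary length remaining when components reach the cutoff."""
    pieces, size = 1, L
    while size * rho >= cutoff:
        pieces, size = 2 * pieces, rho * size
    return pieces * size

# Estimate the scaling exponent alpha from two system sizes.
a = 1.0
L1, L2 = 1e6, 1e9
alpha_fit = (math.log(surviving_length(L2, a)) - math.log(surviving_length(L1, a))) \
            / (math.log(L2) - math.log(L1))
alpha = math.log(2) / math.log(1 + math.sqrt(2))   # about 0.786

assert abs(alpha_fit - alpha) < 0.02   # sublinear scaling of the remaining region
```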
We can also consider codes with punctures in the bulk. To be specific, suppose
The scaling
This figure illustrates uberholography for the case of a two-dimensional hyperbolic bulk geometry. The inner logical boundary is contained inside the entanglement wedge, shaded in blue, of a boundary region
It is interesting to compare this universal exponent
For a holographic code, consider (as in Sec.
The quantum Markov condition provides a criterion for local correctability
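A minimal way to see the Markov condition at work is to embed a classical Markov chain A → B → C as a diagonal density operator; for such states the conditional mutual information I(A:C|B) vanishes exactly. The toy calculation below (our example, not from the text) checks this.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array."""
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

# Classical Markov chain on bits: p(a, b, c) = p(a) p(b|a) p(c|b).
pa = np.array([0.6, 0.4])
pb_a = np.array([[0.7, 0.3], [0.2, 0.8]])   # rows: a, cols: b
pc_b = np.array([[0.9, 0.1], [0.5, 0.5]])   # rows: b, cols: c
p = np.einsum('a,ab,bc->abc', pa, pb_a, pc_b)

# I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC); zero for a Markov chain.
S = entropy
I_A_C_given_B = (S(p.sum(axis=2).ravel()) + S(p.sum(axis=0).ravel())
                 - S(p.sum(axis=(0, 2)).ravel()) - S(p.ravel()))
assert abs(I_A_C_given_B) < 1e-12
```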
In fact, Eq.
To apply the Markov condition to our holographic setting, consider a holographic code with no punctures, where the state of the physical boundary is pure. We choose
Using the Ryu-Takayanagi formula, this statement Eq.
We may also consider the case of a manifold with punctures, where the logical boundary
It is also notable that if the Markov condition Eq.
While holographic codes based on tensor networks can successfully reproduce the Ryu-Takayanagi relation satisfied by the von Neumann entanglement entropy of the boundary theory
The criterion for local correctability is satisfied by generic negatively curved bulk geometries but not by bulk geometries that are flat or positively curved. Consider, for example, a Euclidean two-dimensional disk with unit radius. For an interval
The failure of local correctability for holographic codes associated with flat and positively curved bulk manifolds suggests that, in these cases, the physics of the boundary system is highly nonlocal; in particular, the boundary state is not likely to be the ground state of a local Hamiltonian. This conclusion is reinforced by the observation that, according to the Ryu-Takayanagi formula, the entanglement entropy of a small connected region on the boundary of a flat ball scales linearly with the boundary volume of the region; this strong violation of the entanglement area law would not be expected in the ground state if the Hamiltonian is local.
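The contrast between the two geometries can be seen in a two-line calculation (our toy model, with the curvature radius and cutoff set to convenient units): for an interval of angular size θ on the boundary of a unit Euclidean disk, the minimal surface is the chord, of length 2 sin(θ/2), which grows linearly with the interval, while the regulated geodesic in the hyperbolic disk grows only logarithmically.

```python
import math

eps = 1e-4   # boundary cutoff (assumed regulator)

def flat_surface(theta):
    # Unit Euclidean disk: the minimal surface is the chord.
    return 2 * math.sin(theta / 2)

def hyperbolic_surface(theta):
    # Hyperbolic disk (unit curvature radius): regulated geodesic length
    # grows only logarithmically with the interval size.
    return 2 * math.log(2 * math.sin(theta / 2) / eps)

t = 0.01
# Flat case: doubling the interval doubles the surface, a volume-law scaling.
assert abs(flat_surface(2 * t) / flat_surface(t) - 2) < 1e-3
# Hyperbolic case: doubling the interval adds only about 2*log(2).
assert abs(hyperbolic_surface(2 * t) - hyperbolic_surface(t) - 2 * math.log(2)) < 1e-3
```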
That flat bulk geometry implies nonlocal boundary physics also teaches us a valuable lesson about
This nonlocality of boundary physics is even more pronounced for holographic codes defined on positively curved manifolds. Consider the extreme case of a two-dimensional hemisphere
For
If we regulate bulk and boundary by introducing a lattice spacing
As noted in Sec.
In Sec.
We consider the case of a holographic code with punctures in the bulk; hence, there is a physical boundary
We consider a region
In what follows, for the sake of clarity, we denote the minimal surface associated with boundary region
Let
We may consider gradually “growing” a boundary region from
Now recall that
Now we can use the property that two complementary boundary regions share the same minimal bulk surface (where by the “complement” we mean the boundary complement rather than the physical complement; that is, we are simultaneously taking the complement with respect to the logical and physical boundaries). Let us denote by

(holographic strong quantum Singleton bound). Consider a holographic code with logical boundary
It is intriguing that we used strong subadditivity of entropy in this holographic proof, which applies to logical subalgebras, while the proof of Corollary 1, which applies to the price and distance of a traditional code subspace, used only subadditivity. We have not found a proof of the strong quantum Singleton bound that applies to logical subalgebras and that does not use holographic reasoning; it is an open question whether Eq.
Our studies of holographic codes have only scratched the surface of this subject. More in-depth studies are needed, including searches, guided by geometrical intuition, for codes with improved parameters and investigations of the efficiency of decoding.
Regarding the implications of holographic codes for quantum gravity, we have uncovered several hints that may help steer future research. We have seen that positive curvature of the bulk manifold can improve properties such as the code distance but at a cost: increasing distance is accompanied by enhanced nonlocality of the boundary system. The observation that the logical algebra of a bulk point has price equal to distance is a step toward characterizing bulk geometry using algebraic ideas, and we anticipate further advances in that direction. Uberholography, in bulk spacetimes with asymptotically negative curvature, illustrates how notions from quantum coding can elucidate the emergence of bulk geometry beyond the appearance of just one extra spatial dimension.
Following Refs.
We are encouraged by recent progress connecting quantum error correction and quantum gravity, but much remains unclear. Most obviously, our discussion of the entanglement wedge and bulk reconstruction applies only to static spacetimes or very special spatial slices through dynamical spacetimes. Applying the principles of quantum coding to more general dynamical spacetimes is an important goal, which poses serious unresolved challenges.
F. P. would like to thank Nicolas Delfosse, Henrik Wilming, and Jens Eisert for helpful discussions and comments. F. P. gratefully acknowledges funding provided by the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center, with support from the Gordon and Betty Moore Foundation, as well as the Simons Foundation through the It from Qubit program and the FUB through the ERC project (TAQ). This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1125915.
In Sec.
By a concatenated code, we mean a recursive hierarchy of codes within codes; these can be constructed in many ways. In the simplest case, we consider an
Suppose that
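The simplest recursive case can be illustrated with a classical toy analogue (our construction, not the scheme discussed in the text): concatenating the 3-bit repetition code with itself m times yields a code whose length and distance are both 3^m.

```python
def encode(bit, levels):
    """Recursively encode one bit with the 3-bit repetition code."""
    if levels == 0:
        return [bit]
    inner = encode(bit, levels - 1)
    return inner * 3   # three copies of the inner codeword

def distance(levels):
    """Hamming distance between the two codewords of the concatenated code."""
    c0, c1 = encode(0, levels), encode(1, levels)
    return sum(a != b for a, b in zip(c0, c1))

# Concatenation multiplies both length and distance: n = d = 3^m here.
for m in range(4):
    assert len(encode(0, m)) == 3 ** m
    assert distance(m) == 3 ** m
```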
A more complicated recursive encoding scheme, based on an
A recursive coding network to which Eqs.
We may also consider the price and distance of holographic tensor network codes, which capture some of the features of full-blown
A nontrivial scaling exponent for the distance
Jarryd Pla is a quantum engineer at the University of New South Wales, Sydney. He works on problems related to quantum information processing and more broadly to quantum technologies. Pla was instrumental in demonstrating the first quantum bits made from the electron and nucleus of a single impurity atom inside a silicon chip. His current research interests span spin-based quantum computation, superconducting quantum circuits, and hybrid quantum technologies. He is focused on developing new quantum technologies to aid the scaling of quantum computers and to advance capabilities in spectroscopy and sensing.
A new quantum random-access memory device reads and writes information using a chirped electromagnetic pulse and a superconducting resonator, making it significantly more hardware-efficient than previous devices.
Researchers have developed a RAM device from a superconducting circuit resonator and a silicon chip embedded with bismuth atoms. Chirped microwave pulses transfer quantum information back and forth between the resonator and the bismuth atoms, where the information is stored in the atoms’ spin states.
Random-access memory (or RAM) is an integral part of a computer, acting as a short-term memory bank from which information can be quickly recalled. Applications on your phone or computer use RAM so that you can switch between tasks in the blink of an eye. Researchers working on building future quantum computers hope that such systems might one day operate with analogous quantum RAM elements, which they envision could speed up the execution of a quantum algorithm [
Like quantum computers themselves, quantum memory devices are in the early days of experimental demonstration. One leading chip-based platform for quantum computation uses circuits made from superconducting metals. In this system, the central processing is done with superconducting qubits, which send and receive information via microwave photons. At present, however, there exists no quantum memory device that can reliably store these photons for long times. Luckily, scientists have a few ideas.
One of those ideas is to use the spins of impurity atoms embedded in the superconducting circuit’s chip. Spin is one of the fundamental quantum properties of an atom. It acts like an internal compass needle, aligning with or against an applied magnetic field. These two alignments are analogous to the 0 and 1 of a classical bit and can be used to store quantum information [
For atomic spins, the information-storage times can be orders of magnitude longer than those of superconducting qubits. Researchers have shown, for example, that bismuth atoms placed inside silicon chips can store quantum information for times longer than a second [
O’Sullivan and his colleagues offer an elegant solution to microwave photon information storage and retrieval that uses a hardware-efficient approach. The team’s device consists of a superconducting circuit resonator that sits on a silicon chip embedded with bismuth atoms (Fig.
O’Sullivan and colleagues show that their memory device is able to simultaneously store multiple pieces of photonic information in the form of four weak microwave pulses. Importantly, they also demonstrate that the information can be read back in any order, making their device a true RAM.
In this first demonstration, the team reports a 3% efficiency, indicating that most of the information is lost by the memory. Thus, their device is still some way from the faithful storage and retrieval required for a future quantum computer. However, an analysis of the potential sources of this low efficiency indicates that it does not come from the transfer process but instead arises from potentially resolvable limitations of the device. The team thinks that by increasing the number of spins they could significantly improve the device’s efficiency.
As well as storing information, quantum RAM elements could help in increasing the density of qubits in a quantum processor. In September, IBM introduced Project Goldeneye, a large dilution refrigerator [