Adiabatic Quantum Computer

Quotes and formulae are taken from the D-Wave Documentation [M11] unless specified otherwise.


Principle

  • Combinatorial optimization problems can be reformulated as binary optimization problems. These binary optimization problems can in turn be mapped to the Hamiltonian of a physical quantum system.

  • Quantum annealing is based on the controlled evolution [36] of the Hamiltonian describing the quantum computer’s hardware. It starts with an initial Hamiltonian whose ground state is known and easy to prepare, and slowly evolves towards the final Hamiltonian that describes the optimization problem.

    \[{\cal H}_{ising} = \underbrace{- \frac{A({s})}{2} \left(\sum_i {\hat\sigma_{x}^{(i)}}\right)}_\text{Initial Hamiltonian} + \underbrace{\frac{B({s})}{2} \left(\sum_{i} h_i {\hat\sigma_{z}^{(i)}} + \sum_{i>j} J_{i,j} {\hat\sigma_{z}^{(i)}} {\hat\sigma_{z}^{(j)}}\right)}_\text{Final Hamiltonian}\]
  • “An annealing process that experiences no interference from outside energy sources and evolves the Hamiltonian slowly enough is called an adiabatic process, and this is where the name adiabatic quantum computing comes from.”

  • The minimum gap problem: During the annealing process, while the Hamiltonian slowly evolves, the first excited state may come very close to the ground state in terms of the energy gap between them. If the anneal proceeds too fast, the system may jump to this excited state and thereby corrupt the solution of the optimization problem. This is why the evolution is required to be slow. One major theoretical shortcoming of quantum annealing is the lack of a general statement about how slow this process has to be. Nevertheless, strategies have been proposed to increase the energy gap 1. A minimal numerical illustration of the gap follows below.
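
    The following is a minimal numerical sketch of this gap, not D-Wave’s actual schedule: a toy two-qubit system with illustrative values for \(h_i\) and \(J_{1,2}\) and an assumed linear schedule \(A(s) = 1 - s\), \(B(s) = s\). It diagonalizes \({\cal H}(s)\) along the anneal and reports the minimum gap between the ground state and the first excited state.

    # Toy two-qubit anneal: track the gap between the two lowest eigenvalues.
    # All h, J values and the linear schedule are illustrative assumptions.
    import numpy as np

    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2 = np.eye(2)

    # Two-qubit operators via Kronecker products
    sx1, sx2 = np.kron(sx, I2), np.kron(I2, sx)
    sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)

    h1, h2, J12 = 0.1, -0.3, -1.0            # illustrative problem parameters
    H_final = h1 * sz1 + h2 * sz2 + J12 * (sz1 @ sz2)

    gaps = []
    for s in np.linspace(0.0, 1.0, 201):
        H = -0.5 * (1 - s) * (sx1 + sx2) + 0.5 * s * H_final
        evals = np.linalg.eigvalsh(H)        # eigenvalues in ascending order
        gaps.append(evals[1] - evals[0])

    print(f"minimum gap along the anneal: {min(gaps):.4f}")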

Optimization Problems

  • The kinds of optimization problems that can be easily mapped onto Hamiltonians are Binary Quadratic Models 2. The common variants for describing the objective functions are the Ising Model and QUBO (Quadratic Unconstrained Binary Optimization). A minimal sketch follows after this list.

    “In summary, to solve a problem on quantum samplers, you formulate the problem as an objective function, usually in Ising or QUBO format. Low energy states of the objective function represent good solutions to the problem.”

  • The Ising Model, one of the two common variants used to formulate optimization problems, is also an immediate representation of the hardware (!).

    \[\text{E}_{Ising}(\boldsymbol{s}) = \sum_{i=1}^N h_i s_i + \sum_{i=1}^N \sum_{j=i+1}^N J_{i,j} s_i s_j\]
  • Problems need to be reformulated in terms of a binary quadratic model. See the section Reformulating Problems 3.
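
As a concrete illustration of the above, here is a minimal sketch using the open-source dimod package; the biases are illustrative assumptions, not values from the text. It formulates a tiny Ising objective and enumerates all states exactly, so the lowest-energy sample is the solution.

    # Minimal Ising-model objective with dimod; the h and J values are
    # illustrative assumptions.
    import dimod

    h = {'s1': 1.0, 's2': -0.5}        # linear biases h_i
    J = {('s1', 's2'): -1.0}           # quadratic coupling J_{1,2}

    bqm = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)

    # ExactSolver enumerates all 2^N spin configurations (fine for tiny N);
    # on real hardware one would submit the model to a quantum sampler.
    sampleset = dimod.ExactSolver().sample(bqm)
    print(sampleset.first)             # lowest-energy state = best solution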

Hardware

  • Adiabatic quantum computers are made of a lattice of interconnected qubits 4. The qubits are not fully connected, which limits the Hamiltonians that can be represented directly. The topology, i.e. the connection pattern, is therefore of great importance (a small sketch comparing the topologies follows at the end of this section).
  • The original D-Wave architecture is described by Harris, 2010 [53]. One reason why quantum annealers differ from circuit-based quantum computers, and why the D-Wave quantum annealer has significantly more qubits than current gate-based quantum computers, is the following:

    “[…] one can provide uniform [flux] \(\Phi_{ccjj}^x\) to multiple qubits using a single global current bias line, as opposed to one bias line per qubit. […] In doing so, one substantially reduces the number of external biases that must be applied to the processor, thus significantly improving the scaling of this particular architecture.”

  • The design described above is problematically sensitive to noise because the “qubit physical layer [is] not separated from [the] classical control layer” [M20]. As a consequence, current quantum annealers do not operate in a fully coherent mode, as described together with “more-coherent” architectures in the section State of the art below.
  • To map any Binary Quadratic Model, the specifics of the hardware topology must be taken into account. To make the best use of them, a strategy called minor embedding 5 optimizes this mapping.

    “The process of mapping variables in the problem formulation to qubits on the QPU is known as minor embedding.”

  • See also an alternative, still prototypical, coherent realization with Rydberg atoms [46].
  • See also the “quantum annealing architecture with all-to-all connectivity from local interactions” (LHZ architecture) [76] implemented by ParityQC.

For more details about the architecture and control see the notes below 6 7.
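
As a small illustration of the topologies mentioned above, the following sketch (assuming the open-source dwave_networkx package) builds the Chimera and Pegasus topology graphs and compares their size and connectivity; the generation sizes are chosen for illustration.

    # Compare two D-Wave hardware topologies; sizes are illustrative.
    import dwave_networkx as dnx

    chimera = dnx.chimera_graph(16)    # Chimera C16 (2000Q generation)
    pegasus = dnx.pegasus_graph(16)    # Pegasus P16 (Advantage generation)

    for name, g in (("Chimera", chimera), ("Pegasus", pegasus)):
        degrees = [d for _, d in g.degree()]
        print(f"{name}: {g.number_of_nodes()} qubits, "
              f"max degree {max(degrees)}")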

Embedding

The discrete optimization problem must be mapped onto the QUBO formalism, and the QUBO itself must then be mapped to the hardware, which is referred to as “embedding”: the main obstacle is the limited connectivity of the hardware, so the problem’s connectivity graph has to be “embedded” in the hardware graph. This makes it necessary to group several physical qubits together to form one logical qubit.

In the case of a fully connected QUBO, the number of logical qubits that can be mapped onto a specific hardware may be significantly smaller than the number of physical qubits.

The required connectivity is nicely represented by the “QUBO matrix”.

Let’s take the Example: Traveling Salesman as stated in the Combinatorial Optimization section. We have seen that the QUBO formulation scales with \(N^2\), where \(N\) is the number of places to visit. The natural index for the variables was given by the pair \(v,i\), but we may express it by a single index \(\xi\) in the range of \(N^2\). The QUBO matrix is the representation of the coefficients in the quadratic expression of the Hamiltonian: the terms \(Q_{\xi, \eta}\) of the matrix are the coefficients in front of the terms \(x_{\xi} x_{\eta}\) (see the sketch below).
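
Here is a minimal sketch of this flattening under illustrative assumptions (a random distance matrix, a hypothetical penalty weight P, and only one of the two TSP penalty terms): it builds the \(N^2 \times N^2\) QUBO matrix described above.

    # Build the TSP QUBO matrix described above. The distance matrix and
    # penalty weight are illustrative assumptions; only the "each city
    # exactly once" penalty is included, the step penalty is analogous.
    import numpy as np

    N = 4                                  # number of places to visit
    rng = np.random.default_rng(0)
    dist = rng.uniform(1, 10, (N, N))      # placeholder distances
    P = 10 * dist.max()                    # penalty weight (assumption)

    def flat(v, i):
        """Map the variable pair (v, i) to a single index xi in range(N*N)."""
        return v * N + i

    Q = np.zeros((N * N, N * N))

    # Edge-weight terms: city v at step i followed by city w at step i+1
    for v in range(N):
        for w in range(N):
            if v == w:
                continue
            for i in range(N):
                Q[flat(v, i), flat(w, (i + 1) % N)] += dist[v, w]

    # Penalty P * (sum_i x_{v,i} - 1)^2: using x^2 = x it expands into
    # -P on the diagonal and +2P between all step pairs of the same city,
    # plus a constant offset P that is dropped here.
    for v in range(N):
        for i in range(N):
            Q[flat(v, i), flat(v, i)] += -P
            for j in range(i + 1, N):
                Q[flat(v, i), flat(v, j)] += 2 * P

    print("QUBO matrix shape:", Q.shape)   # (N**2, N**2), as stated above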

The qubits in the Quantum Annealer hardware form an Ising model. The one-to-one correspondence between the QUBO and Ising model formulations is nicely described in the myQML documentation’s section about formulating combinatorial problems. The strength of the coupling between two qubits \(\xi\) and \(\eta\) is directly proportional to the term \(Q_{\xi, \eta}\) of the QUBO.
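
For reference, the correspondence is the standard change of variables between QUBO variables \(x_i \in \{0, 1\}\) and Ising spins \(s_i \in \{-1, +1\}\):

    \[s_i = 2 x_i - 1 \quad\Longleftrightarrow\quad x_i = \frac{s_i + 1}{2}\]

Substituting this into \(\text{E}_{Ising}\) yields a quadratic expression in the \(x_i\), i.e. a QUBO, up to a constant energy offset that does not affect which state is optimal.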

We can see from the TSP Hamiltonian that the connectivity is not only due to the terms related to the edge weights: the penalty terms introduce full connectivity because of the square of the sum of all variables. For that reason the TSP scales much worse than \(N^2\) on real quantum hardware, as it requires many physical qubits for just a few logical ones.

Lucas [83] tells us that only a few classes can be easily embedded:

Another simple class of mappings from NP problems to Ising models: “covering” and “packing” problems. These are, by far, the most popular class of problems discussed in the AQO literature. As we mentioned in the introduction, this is because this is the only class of NP problems (discussed in this paper) for which it is easy to embed the problem via a graph which is not complete (or close to complete)

The embedding procedure as required on D-Wave systems is referred to as minor-embedding. The limitation described above is also mentioned by Weinberg [133] (see also the sketch after the quote):

One should recognize that the 5000-qubit processor cannot handle a problem of 5000 binary variables. The embedding requires multiple hardware qubits to be programmed as a logical node to represent each logical variable. For a fully-connected logical problem, one in which every binary variable interacts with all the others, one can only embed such a fully-connected problem of approximately 180 logical binary variables. Many problems of practical interest are not fully-connected logically, so larger problem sizes of hundreds of binary variables can often be embedded.
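
The overhead Weinberg describes can be reproduced in miniature with the open-source minorminer package (the graph sizes here are illustrative): embedding a small fully connected problem into a Chimera fragment already requires chains of several physical qubits per logical variable.

    # Embed a fully connected 8-variable problem into a small Chimera
    # fragment; graph sizes are illustrative.
    import networkx as nx
    import dwave_networkx as dnx
    import minorminer

    logical = nx.complete_graph(8)     # K8: every variable couples to all
    target = dnx.chimera_graph(4)      # 128-qubit Chimera fragment

    # Returns {logical variable: chain of physical qubits}, or {} on failure
    embedding = minorminer.find_embedding(logical.edges, target.edges,
                                          random_seed=1)

    n_physical = sum(len(chain) for chain in embedding.values())
    print(f"8 logical variables -> {n_physical} physical qubits")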

State of the art

  • A review: Adiabatic Quantum Computation [R1]

    • AQC is equivalent to the circuit model (and of course requires full coherence)

    • “We finally devote considerable space to Stoquastic AQC, the setting of most AQC work to date, where we discuss obstructions to success and their possible resolutions.” (de facto D-Wave hardware (?))

  • When can Quantum Annealing win?, 2016 [31]

    “During the last two years, the Google Quantum AI team has made progress in understanding the physics governing quantum annealers. […] We found that for problem instances involving nearly 1000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than \(10^8\) times faster than simulated annealing running on a single core. […] While these results are intriguing and very encouraging, there is more work ahead to turn quantum enhanced optimization into a practical technology. The design of next generation annealers must facilitate the embedding of problems of practical relevance. For instance, we would like to increase the density and control precision of the connections between the qubits as well as their coherence.”

  • “Demonstration of a scaling advantage for a quantum annealer over simulated annealing”, 2018 [3]:

    “[We] establish the first example of a scaling advantage for an experimental quantum annealer over classical simulated annealing. […] However, we do not find evidence for a quantum speedup: SQA exhibits the best scaling for annealing algorithms by a significant margin. This is a finding of independent interest, since we associate SQA’s advantage with its ability to traverse energy barriers in the semiclassical energy landscape by mimicking tunneling.”

  • Perdomo et al., Readiness of Quantum Optimization Machines for Industrial Applications, 2019 [103]
    • “In this work we present a comprehensive benchmarking study on a concrete application, namely the diagnosis of faults in digital circuits, referred in the main text as CCFD.” […] “More specifically, we provide insights on the performance of QA in the context of the CCFD instances by performing an asymptotic scaling analysis involving five different approaches: QA experiments on the DW2X compared to three classical (SA, PTICM, and a CCFD-tailored SAT-based solver), and extensive QMC simulations.”

    • “We show that both, SQA and the DW2X have a limited quantum speedup by showing a scaling advantage over SA.” […] “These results confirm the presence of quantum tunneling in the DW2X; a quantum speedup restricted to sequential algorithms similar to the Google Inc. study on the weak-strong clusters instance.”

  • Yarkoni et al., Quantum annealing for industry applications: introduction and review, 2022 [139]
    • A highly recommendable introduction to Quantum Annealing: theoretical foundations, hardware, QUBO, embedding, etc.

    • Early applications are still to be seen as proofs of concept.

    • For more details see Optimization in Industry.

  • Some more details about possible applications: SAT-Problem [36], Quantum Chemistry [R9]
  • IARPA Quantum Enhanced Optimization, 2021 summary [M20] [29]

See also general section about State of the Art of Quantum Computing.


Notes:

1 Handbook > Energy Gap [M11]

2 Getting Started > Solving Problems with Quantum Samplers [M11]

3
4 Getting Started > QPU Architecture: Topologies [M11]

5
6 QPU Solver Datasheet [M11]

7 General considerations about the coupling [R11]:

“Longitudinal coupling is an important type of interaction, because it can generate entanglement without energy exchange. Moreover, it is found a necessary ingredient in the application of quantum annealing, where certain hard combinatorial optimization problems can be modeled by the Ising Hamiltonian […] and finding its ground state would solve this problem.”

“In some applications, such as for quantum annealing, both longitudinal and transverse couplings are desired (\(\sigma_z \sigma_z\) coupling for mapping the problem and \(\sigma_x \sigma_x\) coupling for enhancing the annealing performance) and require independent control.”