Introduction
Quantum computers perform computation based on quantum bits (qubits), exploiting phenomena such as quantum superposition, quantum entanglement, and quantum tunneling. A qubit exists in a superposition of 0 and 1, and collapses to a definite 0 or 1 when measured. In contrast, classical computers perform computation based on bits, each of which is either 0 or 1. Quantum computers have attracted much research attention because they can offer computing power that classical computers can never offer, which is called quantum supremacy [1].
Some quantum computers, such as IBM Q [2] and Google Sycamore [3], are universal, whereas others, such as D-Wave Advantage [4], are non-universal. On the one hand, universal quantum computers rely on quantum gates that form quantum circuits to solve general problems. On the other hand, non-universal quantum computers employ mechanisms other than quantum gates to solve special problems. For example, D-Wave Advantage, a quantum annealer, uses the quantum annealing mechanism to leverage quantum tunneling to traverse the energy landscape and find the globally lowest energy [5] for solving optimization problems.
Quantum annealers hold the promise of outperforming classical computers in solving intricate combinatorial optimization problems arising in different application areas, such as quantum chemistry [6], quantum machine learning [7], [8], [9], quantum deep learning [10], [11], quantum variational autoencoders [12], [13], fault detection and diagnosis [14], [15], online fraud detection [16], financial portfolio optimization [17], [18], operational planning [19], [20], data processing in high energy physics [21], [22], material microstructure equilibration [23], [24], and Monte Carlo sampling [25]. However, as mentioned in [26], there are ongoing scientific debates [27], [28], [29], [30] about whether quantum annealers can provide better problem-solving capabilities than classical computers. It thus remains unclear whether quantum annealing algorithms (QAAs) running on quantum annealers have computational advantages over classical algorithms (CAs) running on classical computers. We are thus motivated to write this paper to clarify the question by classifying and benchmarking QAAs that use quadratic unconstrained binary optimization (QUBO) formulas to solve NP-hard problems.
This paper classifies QUBO formulas into four classes, as previously demonstrated in [31]. It also exemplifies each class with two QUBO formulas used by two QAAs for solving specific NP-hard problems. The solved problems are the subset sum problem (SSP), maximum cut problem (MCP), vertex cover problem (VCP), 0/1 knapsack problem (0/1 KP), graph coloring problem (GCP), Hamiltonian cycle problem (HCP), traveling salesperson problem (TSP), and job shop scheduling problem (JSP). It is believed that no CA can solve an NP-hard problem with a polynomial time complexity in the worst case. That is to say, for an NP-hard problem, the best CA to solve it needs a super-polynomial (e.g., exponential) time complexity in the worst case. The QAAs based on QUBO formulas to solve the above-mentioned NP-hard problems are implemented and run on a D-Wave quantum annealer. They are compared with related CAs for benchmarking in terms of the quality of the solution and the time to the solution. The benchmarking results reveal which classes of QUBO formulas are likely to provide advantages to QAAs over CAs. Based on the benchmarking results, observations and suggestions are given for each QUBO formula class, so that proper actions can be adopted to improve the performance of QAAs. Compared with CAs, QAAs using QUBO formulas are competitive in the current NISQ era [32], in which quantum computers have only a moderate number of error-prone qubits with low fidelity. It is believed that many QAAs will have superior performance to CAs in the future, when we go beyond the NISQ era and have quantum computers with thousands or more high-fidelity qubits.
The contribution of this paper is fourfold. First, it proposes a four-class classification of QUBO formulas used by QAAs for solving NP-hard problems. It also exemplifies each class by QUBO formulas solving specific NP-hard problems. Second, QAAs are implemented and run on a D-Wave quantum annealer to solve the specific NP-hard problems. They are compared with related CAs for benchmarking in terms of the quality of the solution and the time to the solution. Third, this paper identifies which classes of QUBO formulas are likely to provide advantages to QAAs over CAs. Fourth, observations and suggestions are given for each QUBO formula class, so that appropriate actions can be taken to improve the performance of QAAs. Generally, the significance of this paper is (i) for people to determine if a QAA using a QUBO formula to solve a certain NP-hard problem is likely to outperform related CAs by checking the QUBO formula class, and (ii) for people to take proper actions to further improve the QAA performance.
The remainder of this paper is organized as follows. Section II introduces some preliminaries. QAAs using QUBO formulas to solve specific NP-hard problems are presented in Section III. The QUBO formula classification, with those QUBO formulas as examples of the classes, is described in Section IV. For the purpose of benchmarking, the QAAs are compared with related CAs in terms of different performance metrics in Section V. Observations and suggestions based on the QAA benchmarks are given in Section VI. Finally, Section VII concludes this paper.
Preliminaries
A. NP-hard Problems
In this subsection, we introduce the concept of NP-hard problems. As mentioned earlier, it is well believed that no CA can solve NP-hard problems with a polynomial time complexity in the worst case. Below, we first introduce the concepts of deterministic algorithms and nondeterministic algorithms [33] for understanding NP-hard problems.
A deterministic algorithm is a normal CA that can run on current classical general-purpose computers to find solutions to problems. On the contrary, according to [34], a nondeterministic algorithm cannot run on any current computer; it is conceptual and intended only for theoretical discussions. A nondeterministic algorithm to solve a given problem has two phases: choosing and checking. The choosing phase selects one out of the given options. The checking phase checks whether the selected option leads to a correct solution to the problem. If so, it returns “success”, meaning that a correct solution can be found; otherwise, it returns “failure”, meaning that no correct solution can be found. The nondeterministic algorithm carries a strong assumption: if there exist proper options leading to correct solutions, then the choosing phase always selects one of the proper options, so that the algorithm returns “success”.
On the one hand, problems that can be solved by the deterministic algorithm with a polynomial time complexity constitute a problem set called P; they are called P problems. On the other hand, problems that can be solved by the nondeterministic algorithm with a polynomial time complexity constitute a problem set called NP; they are called NP problems. It is believed, but has not yet been proven, that P is a proper subset of NP.
Cook proved that every NP problem is “polynomially reducible to” the satisfiability (SAT) problem [34]. A problem X is said to be “polynomially reducible to” a problem Y if and only if X can be solved by (i) converting X’s input instance into Y’s input instance with a polynomial time complexity, (ii) running an algorithm that solves the problem Y with the converted input instance to output Y’s solution, and (iii) converting Y’s solution into X’s solution with a polynomial time complexity. According to Cook’s proof, if the SAT problem belongs to P, then all NP problems also belong to P. A problem is called an NP-hard problem if every NP problem is polynomially reducible to it. Therefore, the SAT problem is an NP-hard problem. Many problems have been shown to be NP-hard. For example, Karp introduced 21 NP-hard problems, such as the SSP, MCP, VCP, 0/1 KP, HCP, and TSP [35]. Up to now, no NP-hard problem has been solved by any deterministic algorithm with a polynomial time complexity in the worst case, and it is believed that no such algorithm exists. Thus, we may well say that NP-hard problems are very hard to solve.
B. Quantum Annealing and QUBO Formulas
Quantum annealing (QA) is a mechanism [36] that leverages quantum tunneling to solve optimization problems whose objective functions range over very large solution spaces. Quantum tunneling [37] is a quantum mechanics phenomenon concerning energy barriers: when a quantum particle encounters an energy barrier, it may pass through the barrier even if its energy is lower than the barrier height.
For an objective function, QA first prepares states associated with the function. It then initiates an adiabatic process starting from a quantum superposition of all possible candidate states of the solution space with equal probability amplitudes. Afterwards, the amplitudes of all states keep changing at the same time, which is like traversing all states in parallel. If the rate of change is slow enough, the adiabatic process ends in a state close to the ground state (the lowest total system energy) of the Hamiltonian associated with the objective function, which corresponds to the optimal solution. In summary, with the quantum tunneling phenomenon, the QA mechanism behaves as if it traverses the whole solution space in parallel to find the globally optimal solution to the objective function. The solution found does not get stuck in a local minimum, but reaches the global minimum, as shown in Figure 1.
Figure 1. Illustration of quantum annealing that leverages quantum tunneling to find the global minimum in the solution space.
Based on the QA mechanism, a QAA formulates an optimization problem as an objective function of a quadratic unconstrained binary optimization (QUBO) formula
\begin{equation*} f(x)=x^{T}Q x=\sum _{i} Q_{i,i} x_{i}^{2} + \sum _{i< j} Q_{i,j} x_{i} x_{j}, \tag{1}\end{equation*}
where $x$ is a vector of binary variables and $Q$ is an upper-triangular matrix of real coefficients. Since each $x_{i}$ is binary, $x_{i}^{2}=x_{i}$, so Equation (1) simplifies to \begin{equation*} f(x)=\sum _{i} Q_{i,i} x_{i} + \sum _{i< j} Q_{i,j} x_{i} x_{j}. \tag{2}\end{equation*}
A QAA seeks a binary vector $x$ that minimizes $f(x)$.
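For illustration, a small QUBO instance can be evaluated and minimized by brute force. The sketch below is ours, not part of the original formulation; it assumes $Q$ is stored as an upper-triangular list of lists.

```python
from itertools import product

def qubo_value(Q, x):
    """Evaluate Equation (2): f(x) = sum_i Q[i][i]*x_i + sum_{i<j} Q[i][j]*x_i*x_j,
    with Q given as an upper-triangular matrix (x_i^2 = x_i for binary x_i)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def brute_force_minimum(Q):
    """Exhaustively search all 2^n binary vectors; feasible only for small n."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_value(Q, x))

# Toy QUBO: f(x) = -x0 - x1 + 2*x0*x1, minimized by setting exactly one variable to 1.
Q = [[-1, 2],
     [0, -1]]
best = brute_force_minimum(Q)
```

A quantum annealer replaces the exhaustive search with the annealing process described below, but the objective it minimizes is exactly this function.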
A QUBO formula should have no constraint term, as suggested by its name. However, some optimization problems have constraints on feasible solutions. Any equality constraint $Rx=t$ can be folded into the objective as a quadratic penalty term, yielding \begin{equation*} f(x)=\alpha (Rx-t)^{2} + x^{T}Q x, \tag{3}\end{equation*}
where $\alpha$ is the penalty weight.
Equation (3) is sometimes extended to the more general formula shown in Equation (4) below.\begin{equation*} f(x)=\alpha (Rx-t)^{2} + \beta x^{T} Q x, \tag{4}\end{equation*}
where $\beta$ is the optimization weight that scales the optimization term.
The values of the penalty weight $\alpha$ and the optimization weight $\beta$ greatly affect whether the annealing process finds feasible, high-quality solutions, so they must be set carefully.
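The effect of the penalty weight can be seen on a toy example (ours): when $\alpha$ is large enough, the unconstrained minimizer of the penalized objective satisfies the original equality constraint.

```python
from itertools import product

def penalized_value(alpha, R, t, cost, x):
    """Equation (3) as code: f(x) = alpha*(R.x - t)^2 + c(x), where the
    equality constraint R.x = t is folded in as a quadratic penalty."""
    residual = sum(r * xi for r, xi in zip(R, x)) - t
    return alpha * residual * residual + cost(x)

# Toy problem (ours): constraint x0 + x1 = 1, cost c(x) = -2*x0 - x1.
R, t = [1, 1], 1
cost = lambda x: -2 * x[0] - x[1]

# With a sufficiently large alpha, the minimizer respects the constraint.
best = min(product((0, 1), repeat=2),
           key=lambda x: penalized_value(5.0, R, t, cost, x))
```

Here the cost alone would prefer $x_0=x_1=1$, but the penalty steers the minimizer to the feasible point $(1,0)$.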
C. Setting QUBO Penalty Weights
In this subsection, we elaborate weight setting methods (WSMs) [38], [39] to properly set the penalty weight $\alpha$ in \begin{equation*} f(x)=\alpha (Rx-t)^{2} + x^{T}Q x =\alpha g(x) + c(x). \tag{5}\end{equation*}
In Equation (5), $g(x)$ denotes the constraint (penalty) function and $c(x)$ denotes the optimization (cost) function. Let $y$ be an optimal feasible solution, and let $S$ be the set of infeasible solutions, i.e., those with $g(x)>0$. A penalty weight $\alpha$ is valid if the penalized value of every infeasible solution exceeds the optimal feasible cost:\begin{equation*} c(y)< \alpha g(x) + c(x), \text {for every }x\in S. \tag{6}\end{equation*}
According to Equation (6), a valid penalty weight $\alpha$ must satisfy \begin{equation*} \alpha > \max _{x\in S} \left ({\frac {c(y)-c(x)}{g(x)}}\right). \tag{7}\end{equation*}
Based on Equation (7), different WSMs [38], [39], such as MOC, MOMC, VLM, MQC, and UB, have been proposed to set the penalty weight $\alpha$.
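For tiny instances, the bound of Equation (7) can simply be evaluated exhaustively. The following sketch (ours; the function names are assumptions) computes it for the toy constrained problem used above.

```python
from itertools import product

def alpha_bound(cost, penalty, n):
    """Brute-force evaluation of Equation (7): a valid penalty weight must
    exceed max over infeasible x of (c(y) - c(x)) / g(x), where y is the
    best feasible solution.  Exhaustive, so only usable for small n."""
    states = list(product((0, 1), repeat=n))
    feasible = [x for x in states if penalty(x) == 0]
    infeasible = [x for x in states if penalty(x) > 0]
    c_y = min(cost(x) for x in feasible)          # optimal feasible cost
    return max((c_y - cost(x)) / penalty(x) for x in infeasible)

# Toy problem (ours): g(x) = (x0 + x1 - 1)^2, c(x) = -2*x0 - x1.
g = lambda x: (x[0] + x[1] - 1) ** 2
c = lambda x: -2 * x[0] - x[1]
bound = alpha_bound(c, g, 2)   # any alpha strictly above this bound is valid
```

The practical WSMs approximate this bound cheaply, since the exact maximum is as hard to compute as the problem itself.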
Table 1 shows the penalty weights associated with the above-mentioned WSMs.
D. Running a QAA on a Quantum Annealer
Figure 2 shows the five major steps to design and run a QAA for solving an optimization problem on a quantum annealer, such as the D-Wave Advantage quantum computer. The five steps are elaborated below.
Step 1. Problem formulation (or definition): The optimization problem is formulated as the QUBO formula of a QAA. Note that some studies formulate the problem as the Ising model [40]. For example, the paper [41] formulates 21 NP-hard problems as Ising models. It has been shown in [42] that the QUBO formula and the Ising model are equivalent, and can be transformed to each other easily. Since this paper focuses on the QUBO formula, an optimization problem is formulated as a QUBO formula in the following context. The formula is in turn transformed into a graph in which a node (or vertex) stands for a binary variable, and the weight of an edge between two nodes represents the coupling strength between the two variables associated with the two nodes.
Step 2. Minor embedding: The graph corresponding to the QUBO formula is embedded in the quantum processor or quantum processing unit (QPU) of the quantum annealer, with qubits and couplers representing graph nodes and edges, respectively, as shown in Figure 2. Ideally, a qubit should be directly connected to every qubit that has a coupling relationship with it. However, due to hardware limitations on qubit connectivity, a qubit can be directly connected to only a certain number of qubits. For example, a qubit can be directly connected to 6 and 15 other qubits in the D-Wave Chimera and Pegasus architectures, respectively.
In order to maintain the coupling relationships of nodes in the graph, multiple qubits may be used to represent a single node, and strong couplers are set between these qubits, forming a chain, so that they maintain the same value. Note that a doubled line between two nodes of some graphs in Figure 2 represents a chain.
When the required number of qubits exceeds the upper limit of the device, the graph corresponding to the original problem should be decomposed into subgraphs to be properly embedded into the QPU. Currently known graph decomposition (or problem decomposition) methods include the iterative centrality halo method [43], which prioritizes nodes that have significant impacts on the global solution, and the DBK (Decomposition, Bounds, K-core) method, which recursively decomposes a graph into subgraphs of specific sizes [44]. Certainly, the performance of quantum annealing is greatly affected by graph decomposition methods [43], [44].
Step 3. Initialization: This step sets the initial Hamiltonian $H_{i}$, whose ground state is easy to prepare, and the final Hamiltonian $H_{f}$, whose ground state corresponds to the optimal solution of the QUBO formula. The system Hamiltonian at time $t$ during annealing is \begin{equation*} H(t)=A(t)H_{i} + B(t)H_{f}, \tag{8}\end{equation*}
where $A(t)$ and $B(t)$ are energy scaling functions, with $A(t)$ decreasing to zero and $B(t)$ increasing as the annealing proceeds.
Step 4. Annealing: In this step, the annealing process is performed to obtain the optimal (minimum) value of the objective function. The system starts from the ground state of the initial Hamiltonian, where every qubit is in a superposition state. During annealing, the contribution of the initial Hamiltonian decreases gradually, whereas that of the final Hamiltonian increases gradually. At the end of the annealing, the effect of the initial Hamiltonian drops to zero, and the system is in the lowest energy state of the final Hamiltonian associated with the QUBO formula of the objective function. Each qubit collapses from the superposition state to 0 or 1, which corresponds to the binary variable value achieving the globally optimal objective function value.
The lower right part of Figure 2 shows the change in the energy scaling functions $A(t)$ and $B(t)$ during the annealing process.
Step 5. Reading: After the annealing process, the value (0 or 1) of each qubit is read out for post-processing. According to the corresponding relationship between the qubit and the graph node, a solution to the original problem can be derived. If the values of qubits representing the same node are inconsistent, post-processing mechanisms, such as the majority vote, are employed to determine what the value of the node is. Note that Steps 4 and 5 are repeated many times (shots), which is called the resampling process, to ensure that the optimal solution to the problem can be found with high probability.
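The majority-vote post-processing of Step 5 can be sketched as follows. This is our illustration; the data layout (a mapping from logical variables to tuples of physical-qubit read-outs) is an assumption, not the D-Wave API.

```python
from collections import Counter

def unembed_majority(chain_reads):
    """Step-5 post-processing sketch: each logical variable is represented by a
    chain of physical qubits, and a broken chain (qubits that disagree) is
    resolved by majority vote over the chain's read-out values."""
    return {var: Counter(bits).most_common(1)[0][0]
            for var, bits in chain_reads.items()}

# Hypothetical read-out: variable 'a' uses a 3-qubit chain that broke as (1, 1, 0).
reads = {'a': (1, 1, 0), 'b': (0, 0, 0)}
resolved = unembed_majority(reads)
```

Production tools (e.g., the Ocean SDK) provide this resolution automatically; the sketch only shows the idea.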
QAAs Solving NP-Hard Problems
This section presents QAAs using QUBO formulas to solve specific NP-hard problems, namely the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The QAAs are elaborated one by one in the following subsections.
A. The QAA solving the SSP
The subset sum problem (SSP) is defined as follows:
Given a set $S=\{s_{1}, s_{2}, \ldots, s_{n}\}$ of $n$ integers and a target integer $T$, decide whether there exists a subset of $S$ whose elements sum to exactly $T$.
The QAA solving the SSP utilizes the following QUBO formula:\begin{equation*} H(x)=\left ({\sum _{i=1}^{n}{s_{i} x_{i}}-T}\right)^{2} \tag{9}\end{equation*}
In Equation (9), $x_{i}$ is a binary variable that is 1 if $s_{i}$ is included in the subset and 0 otherwise. $H(x)=0$ if and only if the selected subset sums to $T$, so minimizing $H(x)$ solves the SSP.
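Equation (9) can be checked on a toy instance (ours, not one of the paper's benchmark instances) by brute force:

```python
from itertools import product

def ssp_energy(s, T, x):
    """Equation (9): H(x) = (sum_i s_i*x_i - T)^2.  H(x) = 0 exactly when
    the subset selected by x sums to the target T."""
    return (sum(si * xi for si, xi in zip(s, x)) - T) ** 2

# Toy instance (ours): {3, 5, 7} sums to the target 15.
s, T = [3, 5, 7, 11], 15
best = min(product((0, 1), repeat=len(s)), key=lambda x: ssp_energy(s, T, x))
subset = [si for si, xi in zip(s, best) if xi]   # subset encoded by the minimizer
```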
B. The QAA solving the MCP
The maximum cut problem (MCP) is defined as follows:
Given an undirected graph $G=(V,E)$ with a weight $w_{uv}$ for each edge $(u,v)\in E$, partition $V$ into two disjoint subsets such that the total weight of the edges with endpoints in different subsets (the cut) is maximized.
The QAA solving the MCP utilizes the following QUBO formula:\begin{equation*} H(x)=\sum _{(u,v)\in E} w_{uv}(-x_{u}-x_{v} + 2x_{u} x_{v}) \tag{10}\end{equation*}
In Equation (10), $x_{u}$ is a binary variable indicating which of the two subsets vertex $u$ belongs to. The term $-x_{u}-x_{v}+2x_{u}x_{v}$ equals $-1$ exactly when $x_{u}\neq x_{v}$, so each cut edge contributes $-w_{uv}$ to $H(x)$, and minimizing $H(x)$ maximizes the total cut weight.
C. The QAA solving the VCP
The vertex cover problem (VCP) is defined as follows:
Given an undirected graph $G=(V,E)$, find a subset $C\subseteq V$ of the minimum size such that every edge in $E$ has at least one endpoint in $C$.
The QAA solving the VCP utilizes the following QUBO formula:\begin{equation*} H(x)=A\sum _{(u,v)\in E}(1-x_{u})(1-x_{v})+B\sum _{v\in V}x_{v} \tag{11}\end{equation*}
In Equation (11), $x_{v}$ is a binary variable that is 1 if vertex $v$ is included in the cover $C$. The first term, scaled by the penalty weight $A$, penalizes edges with no endpoint in the cover, and the second term, scaled by the optimization weight $B$, minimizes the cover size, where $A>B>0$.
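A toy check of Equation (11) (our instance; the weights $A=2$, $B=1$ are ours and satisfy $A>B$): for a star graph, the minimizer is the cover consisting of the center alone.

```python
from itertools import product

def vcp_energy(edges, x, A=2.0, B=1.0):
    """Equation (11): the A-term penalizes edges with both endpoints outside
    the cover; the B-term counts the cover size; A > B keeps covers feasible."""
    uncovered = sum((1 - x[u]) * (1 - x[v]) for u, v in edges)
    return A * uncovered + B * sum(x)

# Toy instance (ours): a star with center 0; the minimum vertex cover is {0}.
edges = [(0, 1), (0, 2), (0, 3)]
best = min(product((0, 1), repeat=4), key=lambda x: vcp_energy(edges, x))
```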
D. The QAA solving the 0/1 KP
The 0/1 knapsack problem (0/1 KP) is defined as follows:
Consider a knapsack with capacity $W$ and $n$ objects, where object $i$ has weight $w_{i}$ and value $c_{i}$. The 0/1 KP is to select a subset of the objects that maximizes the total value while keeping the total weight no greater than $W$.
The QAA solving the 0/1 KP utilizes the following QUBO formula:\begin{align*} H(x)&=A\left ({1-\sum _{j=1}^{W} y_{j}}\right)^{2} \\ &\quad +A\left ({\sum _{j=1}^{W} jy_{j}-\sum _{i=1}^{n} w_{i}x_{i}}\right)^{2} \\ &\quad -B\sum _{i=1}^{n} c_{i} x_{i} \tag{12}\end{align*}
In Equation (12), $x_{i}$ is a binary variable that is 1 if object $i$ is selected, and $y_{j}$ ($1\le j\le W$) is an auxiliary binary variable that is 1 if the total weight of the selected objects is exactly $j$. The first two terms, scaled by the penalty weight $A$, enforce that exactly one $y_{j}$ is 1 and that the marked weight matches the total weight of the selected objects; the last term, scaled by the optimization weight $B$, maximizes the total value, where $A>B>0$.
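A toy check of Equation (12) (our instance and weights, with $A>B$): with weights $[2,3]$, values $[3,4]$, and capacity 4, the minimizer selects object 2 alone.

```python
from itertools import product

def kp_energy(w, c, x, y, A=10.0, B=1.0):
    """Equation (12): the first A-term forces exactly one auxiliary y_j to
    be 1, the second A-term forces the marked total weight j to match the
    weight of the selected objects, and the -B term rewards total value."""
    one_hot = (1 - sum(y)) ** 2
    marked_w = sum(j * yj for j, yj in enumerate(y, start=1))
    total_w = sum(wi * xi for wi, xi in zip(w, x))
    value = sum(ci * xi for ci, xi in zip(c, x))
    return A * one_hot + A * (marked_w - total_w) ** 2 - B * value

# Toy instance (ours): weights [2, 3], values [3, 4], capacity W = 4.
w, c, W = [2, 3], [3, 4], 4
n = len(w)
best = min(product((0, 1), repeat=n + W),
           key=lambda bits: kp_energy(w, c, bits[:n], bits[n:]))
chosen = best[:n]
```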
E. The QAA solving the GCP
The graph coloring problem (GCP) is defined as follows:
Given an undirected graph $G=(V,E)$ and a number $n$ of available colors, decide whether each vertex can be assigned one of the $n$ colors such that no two adjacent vertices receive the same color.
The QAA solving the GCP utilizes the following QUBO formula:\begin{align*} H(x)&=\sum _{v\in V}\left ({1-\sum _{i=1}^{n}x_{v,i}}\right)^{2} \\ &\quad +\sum _{(u,v)\in E}\sum _{i=1}^{n}{x_{u,i}x_{v,i} } \tag{13}\end{align*}
In Equation (13), $x_{v,i}$ is a binary variable that is 1 if vertex $v$ is assigned color $i$. The first term enforces that each vertex receives exactly one color, and the second term penalizes adjacent vertices sharing a color. $H(x)=0$ if and only if $x$ encodes a proper coloring with at most $n$ colors.
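A toy check of Equation (13) (our instance): a triangle is 3-colorable, so the minimum energy is 0.

```python
from itertools import product

def gcp_energy(edges, nv, nc, x):
    """Equation (13): x[v][i] = 1 iff vertex v receives color i.  The first
    term forces exactly one color per vertex; the second penalizes adjacent
    vertices sharing a color.  H(x) = 0 iff x encodes a proper coloring."""
    one_color = sum((1 - sum(x[v])) ** 2 for v in range(nv))
    conflicts = sum(x[u][i] * x[v][i] for u, v in edges for i in range(nc))
    return one_color + conflicts

# Toy instance (ours): a triangle with 3 colors.
edges, nv, nc = [(0, 1), (1, 2), (0, 2)], 3, 3
best_bits = min(product((0, 1), repeat=nv * nc),
                key=lambda b: gcp_energy(edges, nv, nc,
                                         [b[v*nc:(v+1)*nc] for v in range(nv)]))
coloring = [best_bits[v*nc:(v+1)*nc] for v in range(nv)]
```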
F. The QAA solving the HCP
The Hamiltonian cycle problem (HCP) is defined as follows:
Given an undirected graph $G=(V,E)$ with $n$ vertices, decide whether $G$ contains a Hamiltonian cycle, i.e., a cycle that visits every vertex exactly once.
The QAA solving the HCP utilizes the following QUBO formula:\begin{align*} H(x)&=\sum _{v=1}^{n}\left ({1-\sum _{j=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +\sum _{j=1}^{n}\left ({1-\sum _{v=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +\sum _{(u, v) \notin E}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \tag{14}\end{align*}
In Equation (14), $x_{v,j}$ is a binary variable that is 1 if vertex $v$ is the $j$-th vertex visited in the cycle, where the position index wraps around so that $j=n+1$ denotes $j=1$. The first two terms enforce that every vertex occupies exactly one position and every position holds exactly one vertex, and the third term penalizes placing non-adjacent vertices in consecutive positions. $H(x)=0$ if and only if $x$ encodes a Hamiltonian cycle.
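A toy check of Equation (14) (our instance; positions are 0-based in the code, with position $j+1$ wrapping to 0):

```python
def hcp_energy(edges, n, x):
    """Equation (14): x[v][j] = 1 iff vertex v is at position j of the cycle.
    The first two sums enforce a permutation; the third penalizes consecutive
    positions held by non-adjacent vertices.  H = 0 iff x is a Hamiltonian cycle."""
    E = {frozenset(e) for e in edges}
    rows = sum((1 - sum(x[v])) ** 2 for v in range(n))
    cols = sum((1 - sum(x[v][j] for v in range(n))) ** 2 for j in range(n))
    bad_steps = sum(x[u][j] * x[v][(j + 1) % n]
                    for u in range(n) for v in range(n)
                    if u != v and frozenset((u, v)) not in E
                    for j in range(n))
    return rows + cols + bad_steps

def encode(order, n):
    """Build the one-hot position matrix for a visiting order of n vertices."""
    x = [[0] * n for _ in range(n)]
    for j, v in enumerate(order):
        x[v][j] = 1
    return x

# Toy instance (ours): the 4-cycle 0-1-2-3-0.
edges, n = [(0, 1), (1, 2), (2, 3), (0, 3)], 4
good = hcp_energy(edges, n, encode((0, 1, 2, 3), n))   # a Hamiltonian cycle
bad = hcp_energy(edges, n, encode((0, 2, 1, 3), n))    # uses two non-edges
```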
G. The QAA solving the TSP
The travelling salesman problem (TSP) is defined as follows:
Given a directed graph $G=(V,E)$ with $n$ vertices and a weight $W_{u,v}$ for each edge $(u,v)\in E$, find a Hamiltonian cycle with the minimum total edge weight.
The QAA solving the TSP utilizes the following QUBO formula:\begin{align*} H(x)&=A\sum _{v=1}^{n}\left ({1-\sum _{j=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +A\sum _{j=1}^{n}\left ({1-\sum _{v=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +A\sum _{(u,v)\notin E}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \\ &\quad +B\sum _{(u,v)\in E}W_{u,v}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \tag{15}\end{align*}
In Equation (15), $x_{v,j}$ is defined as in Equation (14), and the first three terms, scaled by the penalty weight $A$, are the Hamiltonian cycle constraints of Equation (14). The last term, scaled by the optimization weight $B$, minimizes the total weight of the cycle, where $A$ is set sufficiently larger than $B$.
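Since the $A$-terms of Equation (15) only enforce that $x$ encodes a permutation, the $B$-term can be checked directly by scanning permutations of a tiny complete graph (our toy instance; the distances are arbitrary):

```python
from itertools import permutations

def tour_weight(Wm, order):
    """The optimization (B) term of Equation (15): the total weight of the
    edges joining consecutive cities, wrapping from the last back to the first."""
    n = len(order)
    return sum(Wm[order[j]][order[(j + 1) % n]] for j in range(n))

# Toy symmetric 4-city instance (ours).
Wm = [[0, 1, 9, 4],
      [1, 0, 2, 9],
      [9, 2, 0, 3],
      [4, 9, 3, 0]]
best = min(permutations(range(4)), key=lambda o: tour_weight(Wm, o))
```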
H. The QAA solving the JSP
The job shop scheduling problem (JSP) is defined as follows:
Consider a set of jobs and a set of $m$ machines, where each job consists of a sequence of operations that must be processed in the given order, and each operation must run on a designated machine for a given processing time. The JSP is to schedule all operations such that each machine processes at most one operation at a time, the operation order within each job is respected, and the makespan (the completion time of the last operation) is minimized.
The QAA solving the JSP utilizes the following QUBO formula:\begin{align*} H(x)&=A\, h_{1}(x)+B\, h_{2}(x)+C\, h_{3}(x)+D\, h_{o}(x), \text {where} \\ h_{1}(x)&=\sum _{c=1}^{n}\left ({\sum _{\substack {k_{c-1}< i< k_{c}\\ t+p_{i}>t^{\prime} }} x_{i,t} x_{i+1,t^{\prime} }}\right) \\ h_{2}(x)&=\sum _{d=1}^{m}\left ({\sum _{(i,t,i^{\prime},t^{\prime}) \in R_{d}} x_{i,t} x_{i^{\prime},t^{\prime} }}\right) \\ h_{3}(x)&=\sum _{i=0}^{k_{n}}\left ({\sum _{t=1}^{T} x_{i,t} - 1}\right)^{2} \\ h_{o}(x)&=\sum _{i=0}^{k_{n}}\sum _{t=0}^{T}\left ({x_{i,t}\left ({k_{n}+1}\right)}\right)^{t+p_{i}} \tag{16}\end{align*}
In Equation (16), $x_{i,t}$ is a binary variable that is 1 if operation $i$ starts at time slot $t$, $p_{i}$ is the processing time of operation $i$, $T$ is the scheduling time horizon, the operations of job $c$ have indices between $k_{c-1}$ and $k_{c}$, and $R_{d}$ is the set of index tuples $(i,t,i^{\prime},t^{\prime})$ whose simultaneous assignment would make two operations overlap on machine $d$.
The terms $h_{1}(x)$, $h_{2}(x)$, and $h_{3}(x)$ are constraint terms: $h_{1}(x)$ penalizes an operation starting before its preceding operation in the same job finishes, $h_{2}(x)$ penalizes two operations overlapping on the same machine, and $h_{3}(x)$ enforces that each operation starts exactly once. The term $h_{o}(x)$ is the optimization term that drives the schedule toward a small makespan, and the weights $A$, $B$, $C$, and $D$ balance the four terms.
The number of QUBO formula variables is of $O(k_{n}T)$, i.e., the total number of operations times the number of time slots; hence, it does not scale linearly with the problem input size.
The QUBO Formula Classification
As just shown in the last section, QUBO formulas are used by QAAs to solve the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The QUBO formulas can be classified into four classes according to the following two classification criteria.
Classification criterion 1 (CC1): Does the number of QUBO variables have a linear relationship with the problem input size?
Some QUBO formulas meet CC1, maintaining a linear relationship between the number of QUBO variables and the problem input size, whereas others do not. For example, the QUBO formulas used by the QAAs to solve the SSP, MCP, VCP, and 0/1 KP meet CC1, whereas those used to solve the GCP, HCP, TSP, and JSP do not.
Classification criterion 2 (CC2): Does the QUBO formula have both the constraint term and the optimization term?
Some QUBO formulas do not meet CC2; they have either the constraint term or the optimization term, but not both. Others meet CC2 and have both terms. For example, the QUBO formulas used by the QAAs to solve the SSP, MCP, GCP, and HCP have either the constraint term or the optimization term, whereas those used to solve the VCP, 0/1 KP, TSP, and JSP have both.
As shown in Table 2, QUBO formulas can be classified into four classes according to the two criteria CC1 and CC2. Table 2 also shows examples for each class. Specifically, QUBO formulas used by QAAs to solve the SSP and the MCP belong to Class-1. QUBO formulas used by QAAs to solve the VCP and the 0/1 KP belong to Class-2. QUBO formulas used by QAAs to solve the GCP and the HCP belong to Class-3. QUBO formulas used by QAAs to solve the TSP and the JSP belong to Class-4.
Benchmarks of QAAs Solving NP-Hard Problems
This section benchmarks the above-mentioned QAAs against CAs that produce the best known solutions (i.e., the best quality of the solution). It presents the performance comparisons of the QAAs and related CAs for solving the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The performance comparison experiments are conducted by accessing the D-Wave Advantage quantum annealer through the Amazon Braket service for running the QAAs. The performance information of the CAs solving the same problems is derived from research papers in the literature. Certainly, the QAAs and the CAs are applied to the same problem instances for the sake of fair comparisons. The hardware and software specifications that significantly influence the CA execution time differ across problems, though; we still include the CA execution time derived from existing papers for reference in benchmarking.
A. Benchmarking the QAA Solving the SSP
The public problem instances p01, p02, p03, p04, p05, p06, and p07, derived from [46], are used for benchmarking algorithms solving the SSP.
As shown in Table 3, the CA can find correct solutions (indicated by “Y”) to all problem instances, whereas the QAA cannot find the correct solution to the problem instance p03.
B. Benchmarking the QAA solving the MCP
The problem instances, g22, g23, g24, g25, g27, g32, g33, g35, g36, and g37, derived from a publicly available database [47] are used for benchmarking algorithms solving the MCP. The instances include cyclic graphs, planar graphs, and random graphs with edge weights of 1, 0, and -1. The number of graph nodes in the instances ranges from 800 to 3000. The CA to be compared is the optimization software provided by Meta-Analytics [48].
As shown in Table 4, both the QAA and the CA can find good solutions to the MCP instances, and the QAA even finds better solutions to problem instances g33 and g36, improving the maximum cut weight sum from 1380 to 1382 and from 7674 to 7675, respectively. Due to the limited number of qubits, large problem instances must first be decomposed by a tool running on a classical computer before being embedded in the QPU for quantum annealing. Therefore, the QAA has a longer execution time than the CA, as shown in Table 4. However, the QPU time of the QAA is shorter than the execution time of the CA. Note that EnergyImpactDecomposer, a problem decomposition tool provided in the D-Wave Ocean library [49], is used to decompose the problem into smaller subproblems based on the energy contributions of variables.
C. Benchmarking the QAA solving the VCP
The public problem instances p-hat300-1, keller4, brock400-2, keller5, DSJC500.5, C1000.9, and keller6, derived from the DIMACS (Discrete Mathematics and Theoretical Computer Science) challenge [50], are used for benchmarking algorithms solving the VCP.
There are two QAAs, QAA1 and QAA2, for benchmarking. The QAA1 sets the optimization weight to 1 and sets the penalty weight with the applicable WSMs, i.e., MOC, MOMC, VLM, MQC, and UB.
The penalty weights of the QAA2 are set by running an evolutionary algorithm, the genetic algorithm (GA) provided in [52], for many iterations. The GA encodes penalty weights as bit vectors to go through the population initialization, fitness evaluation, elitism selection, crossover, and mutation processes. The GA details are described as follows. The initial population size is 10. The QAA is used to evaluate the fitness value. The elite number of the roulette-wheel elitism selection process is 4. The crossover rate is 0.9, and the one-point crossover is adopted for the crossover process. The mutation rate is 0.2, and the one-point mutation is employed to flip the bit at the mutation point. The maximum number of iterations to run the GA is 50.
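The GA described above can be sketched as follows. This is our illustration: the fitness function stands in for running the QAA and scoring its solution, and the roulette-wheel elitism selection is simplified to truncation elitism.

```python
import random

def ga_tune(fitness, n_bits=8, pop_size=10, elites=4,
            cx_rate=0.9, mut_rate=0.2, iters=50, seed=0):
    """Weight-tuning GA sketch: candidate penalty weights are encoded as bit
    vectors and evolved with elitism, one-point crossover, and one-point
    mutation, matching the rates described in the text."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=fitness, reverse=True)
        nxt = [list(ind) for ind in pop[:elites]]          # carry elites over
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:elites], 2)
            child = list(p1)
            if rng.random() < cx_rate:                     # one-point crossover
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            if rng.random() < mut_rate:                    # one-point mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness (ours): decode the bits as an integer weight, prefer values near 10.
decode = lambda bits: int("".join(map(str, bits)), 2)
best = ga_tune(lambda b: -abs(decode(b) - 10))
```

Because the elites are copied unchanged into each next generation, the best fitness in the population never decreases.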
As shown in Table 5, the QAA1 uses the UB method to find solutions to the three problem instances keller4, keller5, and keller6, whereas it uses the MOC method for the other problem instances. The QAA1 finds solutions as good as the CA's for three problem instances, p-hat300-1, keller4, and DSJC500.5, and worse solutions for the other problem instances. The QAA2 finds equally good solutions for five problem instances, and worse solutions than the CA for two problem instances, keller5 and keller6. Furthermore, it can be observed that the QAAs may be faster or slower than the CA for the VCP instances.
D. Benchmarking the QAA solving the 0/1 KP
The public problem instances L3, L4, L6, L7, L11, and L14, derived from [47], are used for benchmarking algorithms solving the 0/1 KP. The number of objects of the 0/1 KP instances is between 4 and 45, and the knapsack capacity is between 20 and 1000. The comparative CA is Google OR-Tools [53], a free and open-source toolkit that uses linear programming, mixed-integer programming, constraint programming, and vehicle routing algorithms to solve optimization problems.
There are two QAAs, QAA1 and QAA2, for benchmarking. The QAA1 sets the optimization weight to 1 and sets the penalty weight with applicable WSMs, whereas the QAA2 sets the weights with the GA, as described for the VCP.
As shown in Table 6, the QAA2 and the CA have the same solutions to most problem instances; however, the QAA2 has worse solutions to problem instances L11 and L14 than the CA. Furthermore, it can be observed that the CA has a shorter computation time than the QAAs.
E. Benchmarking the QAA solving the GCP
The public problem instances R125.1, DSJC125.1, DSJC125.5, R250.1, DSJC250.1, DSJC250.5, DSJC500.1, and le450_15d, derived from [54], are used for benchmarking algorithms solving the GCP.
As shown in Table 7, the QAA cannot find solutions to the GCP instances DSJC125.5, DSJC250.5, DSJC500.1, and le450_15d, whereas the CA can find solutions to all problem instances. Furthermore, the QAA may be faster or slower than the CA for some problem instances.
F. Benchmarking the QAA solving the HCP
The public problem instances 4_H, 6_H-1, 6_H-2, 8_H-1, 8_H-2, and alb1000, derived from [56], are used for benchmarking algorithms solving the HCP.
As shown in Table 8, the QAA cannot find a solution to the problem instance alb1000 (indicated by “NF”), whereas the CA can find solutions to all problem instances. The CA execution time is 0.023 seconds for the alb1000 problem instance; for the other problem instances, the CA execution time cannot be found in research papers and is indicated by “N/A” (i.e., “not available”) in Table 8. The QAA takes several seconds to run, with QPU time less than 0.202 seconds. However, the QAA cannot run properly for the alb1000 instance; it gets a “Kernel died” error message (indicated by “X” in Table 8), and no solution is returned. The reason is that the problem size is too large, causing too many QUBO variables.
G. Benchmarking the QAA solving the TSP
The public problem instances, br17, ftv33, ftv35, gr17, gr21, p43, ry48p, and kro124p, derived from TSPLIB [59] are used for benchmarking algorithms solving the TSP. The number of vertices of the TSP instances is between 17 and 124. The comparative CA is the algorithm provided by Google OR-Tools [53].
There are three QAAs, QAA1, QAA2, and QAA3, for benchmarking. Again, the QAA1 sets the optimization weight to 1 and sets the penalty weight with applicable WSMs.
As shown in Table 9, the QAA1, QAA2, QAA3, and the CA find an equally good solution to the problem instance br17, whereas the QAA1, QAA2, and QAA3 have worse solutions to all other problem instances than the CA. Furthermore, the QAA1, QAA2, and QAA3 are slower than the CA for all problem instances in terms of the total execution time. However, the QPU time of the QAA1, QAA2, and QAA3 is smaller than the execution time of the CA for some problem instances.
H. Benchmarking the QAA solving the JSP
The public problem instances, ft02, ft03, ft04, ft06, la03, and ft10, provided by OR-Library [60] are used for benchmarking algorithms solving the JSP. The comparative CA is the algorithm provided by Google OR-Tools [53].
The QUBO formula used by the QAA has three constraint terms and one optimization term, each of which has a different weight. So, none of MOC, MOMC, VLM, MQC, and UB is applicable to tune the penalty weights. Instead, the GA is adopted as the WSM to tune the weights $A$, $B$, $C$, and $D$ in Equation (16).
As shown in Table 10, the QAA has the same solutions as the CA to the problem instances ft02, ft03, ft04, and ft06. However, the QAA cannot find any feasible solution (indicated by “NF”) to the problem instances la03 and ft10. Furthermore, both the execution time and the QPU time of the QAA are longer than the execution time of the CA.
Observations from QAA Benchmarks
By the benchmarks of QAAs and CAs for solving different NP-hard problems, we have the following observations and suggestions.
The QAAs using Class-1 QUBO formulas are likely to have performance better than or similar to that of related CAs. This is because the number of QUBO variables of a Class-1 QUBO formula scales linearly with the problem input size. Furthermore, a Class-1 QUBO formula has either the constraint term or the optimization term, but not both. Thus, a Class-1 QUBO formula usually has fewer QUBO variables and simpler coupling relationships between variables than formulas of other classes. The graph associated with a Class-1 QUBO formula is thus easier to embed into the QPU properly, leading to short computation and annealing times, along with good solutions.
A Class-2 QUBO formula has both the constraint term and the optimization term, but the number of QUBO variables of a Class-2 QUBO formula scales linearly with the problem input size. Thus, a Class-2 QUBO formula usually has fewer QUBO variables than formulas of other classes, although it may have more complex coupling relationships. In contrast, a Class-3 QUBO formula has either the constraint term or the optimization term, leading to simpler coupling relationships; however, the number of QUBO variables of a Class-3 QUBO formula does not scale linearly with the problem input size, which implies more QUBO variables. By observing the benchmarks of QAAs and related CAs shown earlier, QAAs using Class-2 or Class-3 QUBO formulas have performance similar to or slightly worse than that of related CAs.
A Class-4 QUBO formula has both the constraint term and the optimization term, and the number of QUBO variables of a Class-4 QUBO formula does not scale linearly with the problem input size. Thus, a Class-4 QUBO formula has more QUBO variables and more complex coupling relationships than formulas in other classes. Consequently, the graph associated with a Class-4 QUBO formula is hard to embed into the QPU properly, leading to long computation and annealing times, along with unfavorable solutions.
For QAAs using Class-2 or Class-4 QUBO formulas, which contain both constraint terms and optimization terms, we need to set the penalty weight and the optimization weight before the quantum annealing process starts. It is suggested to set the optimization weight to 1 and then set the penalty weight by employing the applicable WSMs in the order MOC, MOMC, VLM, MQC, and UB, making the penalty weight larger and larger until feasible solutions are found. It is observed that small penalty weights may lead to infeasible solutions, whereas large penalty weights tend to yield feasible solutions. However, excessively large penalty weights may lead to feasible solutions of low quality. This is why it is suggested to try the applicable WSMs in the order MOC, MOMC, VLM, MQC, and UB.
It is also suggested to adopt the GA for setting the optimization weight and the penalty weight properly. The GA encodes the weights as bit vectors that go through the population initialization, fitness evaluation, elitism selection, crossover, and mutation processes. The GA runs iteration by iteration until the fitness value converges or the maximum number of iterations is reached. By the QAA benchmarks shown earlier, the GA is promising for setting the penalty weight and the optimization weight properly so that QAAs find good solutions. However, the GA consumes much computation time, which is not included in the QAA benchmarks, to set the weights. It is suggested to use the GA to set the weights for a problem instance PI for the first time; when PI is modified slightly, or for problem instances similar to PI, the same weights can still be applied. Therefore, the GA computation time can be regarded as a one-time pre-consumption of time.
Conclusion
In this paper, QUBO formulas are classified into four classes according to two criteria: (i) Does the number of QUBO variables scale linearly with the problem input size? (ii) Does the QUBO formula have both the constraint term and the optimization term? The classification is exemplified by QUBO formulas used by QAAs to solve specific NP-hard problems, including the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP.
We benchmark the QAAs against related CAs. Observations from the QAA benchmarks, along with derived suggestions, are further given for improving QAA performance. CAs and QAAs using Class-2 and Class-3 QUBO formulas provide solutions of similar qualities to NP-hard problems, and the QAAs may have a longer or shorter execution time than the CAs. QAAs using Class-1 (resp., Class-4) QUBO formulas are likely to be better (resp., worse) than CAs in terms of the quality of the solution and the time to the solution. It is believed that many QAAs will have superior performance in the future, when we go beyond the current NISQ era [32], in which quantum devices have only a moderate number of error-prone qubits.
In the future, we plan to investigate more QAAs using different QUBO formulas to solve more problems to exemplify the QUBO formula classification more thoroughly. In addition to the GA, we also plan to study the possibilities of employing various metaheuristic algorithms, such as particle swarm optimization, ant colony optimization, and tabu search, to properly set the weights of constraint and optimization terms for improving QAA performance. Furthermore, we plan to apply QUBO formulas to developing algorithms for different computing machines that are similar to the quantum annealer, such as the digital annealer (DA) [61] and the coherent Ising machine (CIM) [62], to see whether the DA or the CIM can obtain better solutions than the quantum annealer.