
Classifying and Benchmarking Quantum Annealing Algorithms Based on Quadratic Unconstrained Binary Optimization for Solving NP-Hard Problems




Abstract:

Quantum annealing has the potential to outperform classical transistor-based computer technologies in tackling intricate combinatorial optimization problems. However, ongoing scientific debates cast doubts on whether quantum annealing devices (or quantum annealers) can genuinely provide better problem-solving capabilities than classical computers. The question of whether quantum annealing algorithms (QAAs) running on quantum annealers have computational advantages over classical algorithms (CAs) running on classical computers still remains unclear. This paper aims to clarify the question by classifying and benchmarking QAAs that utilize quadratic unconstrained binary optimization (QUBO) formulas to solve NP-hard problems. It proposes a four-class classification of QUBO formulas and exemplifies each class by QUBO formulas used by QAAs for solving specific NP-hard problems, such as the subset sum, maximum cut, vertex cover, 0/1 knapsack, graph coloring, Hamiltonian cycle, traveling salesperson, and job shop scheduling problems. The classification is based on the following two criteria: (i) Does the number of QUBO variables scale linearly with the problem input size? (ii) Does the QUBO formula have both the constraint term and the optimization term? QAAs are implemented and run on a D-Wave quantum annealer for benchmarking. They are benchmarked against related CAs in terms of the quality of the solution and the time to the solution. The benchmarking results reveal which classes of QUBO formulas are likely to provide advantages to QAAs over CAs. Furthermore, based on the benchmarking results, observations and suggestions are given for each class of QUBO formulas, facilitating the adoption of appropriate actions to improve the performance of QAAs.
The four-class classification of QUBO formulas exemplified by quantum annealing algorithms using QUBO formulas to solve different NP-hard problems, including the subset sum, maximum cut, vertex cover, 0/1 knapsack, graph coloring, Hamiltonian cycle, traveling salesperson, and job shop scheduling problems.
Published in: IEEE Access ( Volume: 11)
Page(s): 104165 - 104178
Date of Publication: 22 September 2023
Electronic ISSN: 2169-3536



SECTION I.

Introduction

Quantum computers perform computation based on quantum bits, or qubits, exploiting phenomena such as quantum superposition, quantum entanglement, and quantum tunneling. A qubit can exist in a superposition of both 0 and 1, and collapses to a definite 0 or 1 when measured. In contrast, classical computers perform computation based on bits, each of which is either 0 or 1. Quantum computers have been attracting much research attention because they can offer computing power that classical computers can never match, a capability called quantum supremacy [1].

Some quantum computers, such as IBM Q [2] and Google Sycamore [3], are universal, whereas others, such as D-Wave Advantage [4], are non-universal. On the one hand, universal quantum computers rely on quantum gates composed into quantum circuits to perform computation for solving general problems. On the other hand, non-universal quantum computers employ mechanisms other than quantum gates to perform computation for solving special problems. For example, the quantum annealing mechanism is used by D-Wave Advantage, a quantum annealer, to leverage quantum tunneling to traverse the energy landscape and find the globally lowest energy [5] for solving optimization problems.

Quantum annealers hold the promise of outperforming classical computers in solving intricate combinatorial optimization problems arising in different application areas, such as quantum chemistry [6], quantum machine learning [7], [8], [9], quantum deep learning [10], [11], quantum variational autoencoders [12], [13], fault detection and diagnosis [14], [15], online fraud detection [16], financial portfolio optimization [17], [18], operational planning [19], [20], data processing in high energy physics [21], [22], material microstructure equilibration [23], [24], and Monte Carlo sampling [25]. However, as mentioned in [26], there are ongoing scientific debates [27], [28], [29], [30] about whether quantum annealers can provide better problem-solving capabilities than classical computers. It remains unclear whether quantum annealing algorithms (QAAs) running on quantum annealers have computational advantages over classical algorithms (CAs) running on classical computers. We are thus motivated to clarify the question by classifying and benchmarking QAAs that use quadratic unconstrained binary optimization (QUBO) formulas to solve NP-hard problems.

This paper classifies QUBO formulas into four classes, as previously demonstrated in [31]. It also exemplifies each class with two QUBO formulas used by two QAAs for solving specific NP-hard problems. The solved problems are the subset sum problem (SSP), maximum cut problem (MCP), vertex cover problem (VCP), 0/1 knapsack problem (0/1 KP), graph coloring problem (GCP), Hamiltonian cycle problem (HCP), traveling salesperson problem (TSP), and job shop scheduling problem (JSP). It is believed that no CA can solve an NP-hard problem with a polynomial worst-case time complexity; that is, the best CA for an NP-hard problem needs a super-polynomial (e.g., exponential) time complexity in the worst case. The QUBO-based QAAs for the above-mentioned NP-hard problems are implemented and run on a D-Wave quantum annealer. They are compared with related CAs for benchmarking in terms of the quality of the solution and the time to the solution. The benchmarking results reveal which classes of QUBO formulas are likely to provide advantages to QAAs over CAs. Based on the benchmarking results, observations and suggestions are given for each QUBO formula class, so that proper actions can be taken to improve the performance of QAAs. Compared with CAs, QAAs using QUBO formulas are competitive in the current NISQ era [32], in which quantum computers have only a moderate number of error-prone, low-fidelity qubits. It is believed that many QAAs will outperform CAs in the future, when we move beyond the NISQ era to quantum computers with thousands or more high-fidelity qubits.

The contribution of this paper is fourfold. First, it proposes a four-class classification of QUBO formulas used by QAAs for solving NP-hard problems and exemplifies each class with QUBO formulas solving specific NP-hard problems. Second, QAAs are implemented and run on a D-Wave quantum annealer to solve the specific NP-hard problems, and they are compared with related CAs for benchmarking in terms of the quality of the solution and the time to the solution. Third, this paper identifies which classes of QUBO formulas are likely to provide advantages to QAAs over CAs. Fourth, observations and suggestions are given for each QUBO formula class, so that appropriate actions can be taken to improve the performance of QAAs. In general, the significance of this paper lies in (i) helping readers determine whether a QAA using a QUBO formula to solve a certain NP-hard problem is likely to outperform related CAs by checking the QUBO formula class, and (ii) helping readers take proper actions to further improve QAA performance.

The remainder of this paper is organized as follows. Section II introduces some preliminaries. Section III presents QAAs using QUBO formulas to solve specific NP-hard problems. The QUBO formula classification is described in Section IV, with those QUBO formulas serving as examples of the classification classes. For the purpose of benchmarking, the QAAs are compared with related CAs in terms of different performance metrics in Section V. Observations and suggestions based on the QAA benchmarks are given in Section VI. Finally, Section VII concludes this paper.

SECTION II.

Preliminaries

A. NP-hard Problems

In this subsection, we introduce the concept of NP-hard problems. As mentioned earlier, it is widely believed that no CA can solve an NP-hard problem with a polynomial time complexity for all problem instances. Below, we first introduce the concepts of deterministic algorithms and nondeterministic algorithms [33], which are needed to understand NP-hard problems.

A deterministic algorithm is an ordinary CA that can run on current classical general-purpose computers to find solutions to problems. In contrast, according to [34], a nondeterministic algorithm cannot run on any current computer; it is conceptual and intended only for theoretical discussion. A nondeterministic algorithm for a given problem has two phases: choosing and checking. The choosing phase selects one of the given options. The checking phase checks whether the selected option leads to a correct solution to the problem. If so, it returns “success”, meaning that a correct solution can be found; otherwise, it returns “failure”, meaning that no correct solution can be found. The nondeterministic algorithm rests on a strong assumption: if there exist proper options leading to correct solutions, the choosing phase is assumed to always select one of them, so that the algorithm returns “success”.

On the one hand, problems that can be solved by the deterministic algorithm with a polynomial time complexity constitute a problem set called P; they are called P problems. On the other hand, problems that can be solved by the nondeterministic algorithm with a polynomial time complexity constitute a problem set called NP; they are called NP problems. It is believed, but has not yet been proven, that P is a proper subset of NP.

Cook proved that every NP problem is “polynomially reducible to” the satisfiability (SAT) problem [34]. A problem X is said to be “polynomially reducible to” a problem Y if and only if X can be solved by (i) converting X’s input instance into Y’s input instance with a polynomial time complexity, (ii) using an algorithm that solves problem Y on the converted input instance to output Y’s solution, and (iii) converting Y’s solution into X’s solution with a polynomial time complexity. According to Cook’s proof, if the SAT problem belongs to P, then all NP problems also belong to P. A problem is called an NP-hard problem if every NP problem is polynomially reducible to it; therefore, the SAT problem is NP-hard. Many problems have been shown to be NP-hard. For example, Karp introduced 21 NP-hard problems, such as the SSP, MCP, VCP, 0/1 KP, HCP, and TSP [35]. Up to now, no NP-hard problem has been solved by any deterministic algorithm with a polynomial time complexity in the worst case, and it is believed that no such algorithm exists. Thus, we may well say that NP-hard problems are very hard to solve.

B. Quantum Annealing and QUBO Formulas

Quantum annealing (QA) is a mechanism [36] leveraging quantum tunneling to solve optimization problems whose objective functions have very large solution spaces. Quantum tunneling [37] is a quantum-mechanical phenomenon concerning energy levels: when a quantum particle encounters an energy barrier, it may pass through the barrier even if its energy is less than the barrier potential.

For an objective function, QA first prepares states associated with the function. It then initiates an adiabatic process starting from a quantum superposition of all candidate states of the solution space with equal probability amplitudes. Afterwards, the amplitudes of all states keep changing simultaneously, which resembles traversing all states in parallel. If the rate of change is slow enough, the adiabatic process ends in a state close to the ground state of the Hamiltonian (or total system energy) associated with the objective function, which corresponds to the optimal solution. In summary, with the quantum tunneling phenomenon, the QA mechanism behaves as if it traverses the whole solution space in parallel to find the globally optimal solution to the objective function. The solution found does not get stuck in a local minimum but reaches the global minimum, as shown in Figure 1.

FIGURE 1. Illustration of quantum annealing that leverages quantum tunneling to find the global minimum in the solution space.

Based on the QA mechanism, a QAA formulates an optimization problem as an objective function of a quadratic unconstrained binary optimization (QUBO) formula f of binary variables, as described in the following Equation (1).

\begin{equation*} f(x)=x^{T}Q x=\sum _{i} Q_{i,i} x_{i}^{2} + \sum _{i< j} Q_{i,j} x_{i} x_{j}, \tag{1}\end{equation*}

where Q is an n \times n upper triangular matrix with real-number coefficients, and x = (x_{1}\, x_{2}\, \ldots \, x_{n})^{T} is a column vector of n binary variables, each of value either 0 or 1. The minimum value of the objective function corresponds to the optimal solution to the problem. Note that x_{i} is either 0 or 1, so x_{i}^{2}=x_{i} and Equation (1) can be rewritten as Equation (2) below. Also note that the QUBO formula is equivalent to the well-known Ising model, whose variables take values of either -1 or 1; the two can easily be transformed into each other.

\begin{equation*} f(x)=\sum _{i} Q_{i,i} x_{i} + \sum _{i< j} Q_{i,j} x_{i} x_{j}. \tag{2}\end{equation*}
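To make Equations (1) and (2) concrete, the following minimal Python sketch evaluates a QUBO objective and finds the minimizer of a toy two-variable instance by brute force; the matrix values are illustrative only, and brute-force enumeration is of course only viable for small n.

```python
import numpy as np

def qubo_value(Q, x):
    """Evaluate Equation (1): f(x) = x^T Q x for an upper triangular
    matrix Q and a binary vector x (x_i^2 = x_i, so Equation (2)
    yields the same value)."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Toy instance; quantum annealing targets the large-n case where
# enumerating all 2^n assignments is infeasible.
Q = np.array([[-1.0, 2.0],
              [ 0.0, -1.0]])
best = min(((x0, x1) for x0 in (0, 1) for x1 in (0, 1)),
           key=lambda x: qubo_value(Q, x))
print(best, qubo_value(Q, best))  # -> (0, 1) (tied with (1, 0)), value -1.0
```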

A QUBO formula should have no constraint term, as suggested by its name. However, some optimization problems have constraints on feasible solutions. Any equality constraint Rx = t can be transformed into a constraint term (or penalty term) of the form \alpha (Rx-t)^{2} , where R is a 1 \times n matrix (or row vector) with real-number coefficients, x is a column vector of binary variables, t is a constant, and \alpha is a real-valued constraint weight or penalty weight. By integrating the penalty term \alpha (Rx-t)^{2} and the optimization term x^{T}Q x , the objective function f(x) is extended to the formula shown in Equation (3) below.

\begin{equation*} f(x)=\alpha (Rx-t)^{2} + x^{T}Q x. \tag{3}\end{equation*}

Equation (3) is sometimes extended to a more general formula, as shown in Equation (4) below,

\begin{equation*} f(x)=\alpha (Rx-t)^{2} + \beta x^{T} Q x, \tag{4}\end{equation*}

where \beta is the real-valued optimization weight associated with the optimization term x^{T} Q x .

The values of the weights \alpha and \beta should be set properly so that minimizing the whole QUBO objective function returns an optimal and feasible solution to the problem. Several weight setting methods have been proposed to set the weights properly; they are elaborated in the next subsection.

C. Setting QUBO Penalty Weights

In this subsection, we elaborate on weight setting methods (WSMs) [38], [39] for properly setting the penalty weight \alpha of the QUBO formula shown in Equation (5) below,

\begin{equation*} f(x)=\alpha (Rx-t)^{2} + x^{T}Q x =\alpha g(x) + c(x), \tag{5}\end{equation*}

where \alpha is the real-valued penalty weight, R is a row vector with real-number coefficients, x is a column vector of binary variables of value either 0 or 1, t is a constant, Q is an upper triangular matrix with real-number coefficients, g(x) is the constraint function, and c(x) is the cost function (or optimization function).

In Equation (5), g(x)>0 if x represents an infeasible solution, whereas g(x)=0 if x represents a feasible solution. Let y be the optimal solution minimizing f , and let S be the space of all infeasible solutions. We have the following inequality:

\begin{equation*} c(y)< \alpha g(x) + c(x), \text { for every } x\in S. \tag{6}\end{equation*}

According to Equation (6), a valid penalty weight \alpha must satisfy the following inequality:

\begin{equation*} \alpha > \max _{x\in S} \left ({\frac {c(y)-c(x)}{g(x)}}\right). \tag{7}\end{equation*}

Based on Equation (7), different WSMs [38], [39] have been proposed to set \alpha to various values. The methods are the Verma and Lewis Method (VLM), the Upper Bound (UB) method, the Maximum QUBO Coefficient (MQC) method, the Maximum change in the Objective function divided by the Minimum Constraint function value of infeasible solutions (MOMC), and the Maximum value derived from dividing each change in the Objective function by the corresponding change in the Constraint function (MOC).

Table 1 shows the penalty weights associated with the above-mentioned WSMs. In the table, G and C are n\times n matrices representing g(x) and c(x) , respectively. Moreover, z is a column vector of all 1’s, so \alpha_{UB} is an upper bound of the objective function with all QUBO coefficients taken as positive. \alpha_{MQC} is the maximum QUBO formula coefficient. \alpha_{VLM} is a good estimate of the numerator (i.e., c(y)-c(x) ) of Equation (7) that ignores the denominator (i.e., g(x) ). \alpha_{MOMC} improves on \alpha_{VLM} by estimating g(x) as \gamma , the minimum positive change in the constraint function. \alpha_{MOC} further refines \alpha_{VLM} by considering the possible change in the constraint function caused by each change in the objective function achievable by flipping a single bit from 0 to 1 or vice versa. The readers are referred to [38] and [39] for details of setting the penalty weight \alpha .

TABLE 1. Different weight setting methods (WSMs) and their associated penalty weights.
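As a minimal illustration, the following Python sketch computes the two simplest weights from the table, \alpha_{UB} and \alpha_{MQC}, for a given cost matrix C; the exact computations of \alpha_{VLM}, \alpha_{MOMC}, and \alpha_{MOC} involve per-bit changes in c(x) and g(x) and are left to [38] and [39].

```python
import numpy as np

def alpha_mqc(C):
    """MQC: the maximum magnitude among the QUBO cost coefficients."""
    return float(np.max(np.abs(C)))

def alpha_ub(C):
    """UB: z^T |C| z with z the all-ones vector, i.e., the objective
    value when all coefficients are made positive and every bit is 1."""
    z = np.ones(C.shape[0])
    return float(z @ np.abs(C) @ z)
```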

D. Running a QAA on a Quantum Annealer

Figure 2 shows the five major steps to design and run a QAA for solving an optimization problem on a quantum annealer, such as the D-Wave Advantage quantum computer. The five steps are elaborated below.

FIGURE 2. The workflow of running a QAA on a D-Wave quantum annealer (adapted from [36]).

Step 1. Problem formulation (or definition): The optimization problem is formulated as the QUBO formula of a QAA. Note that some studies formulate the problem as the Ising model [40]; for example, [41] formulates 21 NP-hard problems as Ising models. It has been shown in [42] that the QUBO formula and the Ising model are equivalent and can easily be transformed into each other. Since this paper focuses on the QUBO formula, optimization problems are formulated as QUBO formulas in what follows. The formula is in turn transformed into a graph in which a node (or vertex) stands for a binary variable, and the weight of an edge between two nodes represents the coupling strength between the two associated variables.

Step 2. Minor embedding: The graph corresponding to the QUBO formula is embedded in the quantum processor or quantum processing unit (QPU) of the quantum annealer, with qubits and couplers representing graph nodes and edges, respectively, as shown in Figure 2. Ideally, a qubit should be directly connected to every qubit with which it has a coupling relationship. However, due to hardware limitations on qubit connectivity, a qubit can only be directly connected to a certain number of qubits; for example, a qubit can be directly connected to 6 and 15 other qubits in the D-Wave Chimera and Pegasus architectures, respectively.

In order to maintain the coupling relationships of nodes in the graph, multiple qubits may be used to represent one node, and strong couplers are set between these qubits, forming a chain, so that they maintain the same value. Note that a doubled line between two nodes in some graphs of Figure 2 represents a chain.

When the required number of qubits exceeds the upper limit of the device, the graph corresponding to the original problem must be decomposed into subgraphs that can be properly embedded into the QPU. Currently known graph decomposition (or problem decomposition) methods include the iterative centrality halo method [43], which prioritizes nodes that have significant impacts on the global solution, and the DBK (Decomposition, Bounds, K-core) method, which recursively decomposes a graph into subgraphs of specific sizes [44]. Naturally, the performance of quantum annealing is greatly affected by the graph decomposition method [43], [44].

Step 3. Initialization: This step sets the initial Hamiltonian H_{i} and the final Hamiltonian H_{f} of the entire system. For a particular state of the system, the Hamiltonian represents the total energy of the system in that state. The initial Hamiltonian H_{i} is set to put every qubit in a superposition state. The final Hamiltonian H_{f} is set according to the QUBO formula, so that the minimum final Hamiltonian corresponds to the optimal solution to the problem. The system Hamiltonian H(t) at time t of a quantum annealer can be expressed as Equation (8) below,

\begin{equation*} H(t)=A(t)H_{i} + B(t)H_{f}, \tag{8}\end{equation*}

where A(t) and B(t) are Hamiltonian scaling functions that evolve with the annealing time t : A(t) gradually goes from 1 to close to 0, whereas B(t) gradually goes from 0 to close to 1.

Step 4. Annealing: In this step, the annealing process is performed to obtain the optimal (or minimum) value of the objective function. The system starts in the lowest-energy state of the initial Hamiltonian, where every qubit is in a superposition state. During annealing, the contribution of the initial Hamiltonian decreases gradually, whereas that of the final Hamiltonian increases gradually. At the end of the annealing, the effect of the initial Hamiltonian drops to zero, and the system is in the lowest-energy state of the final Hamiltonian associated with the QUBO formula of the objective function. Each qubit collapses from the superposition state to 0 or 1, yielding the binary variable values that achieve the globally optimal objective function value.

The lower right part of Figure 2 shows the change in the energy scaling functions A(t) and B(t) of Equation (8). During the annealing process, A(t) gradually grows smaller and B(t) gradually grows larger with time t . Thus, the influence of the initial Hamiltonian H_{i} becomes less significant, whereas the influence of the final Hamiltonian H_{f} becomes more significant. At the end of annealing, the system Hamiltonian is almost entirely determined by the final Hamiltonian H_{f} .

Step 5. Reading: After the annealing process, the value (0 or 1) of each qubit is read out for post-processing. According to the correspondence between qubits and graph nodes, a solution to the original problem can be derived. If the values of the qubits representing the same node are inconsistent, post-processing mechanisms, such as majority voting, are employed to determine the node's value. Note that Steps 4 and 5 are repeated many times (shots), which is called the resampling process, to ensure that the optimal solution to the problem is found with high probability. A minimal end-to-end sketch of these steps is given below.
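The following Python sketch runs Steps 1 to 5 with the D-Wave Ocean SDK; the toy QUBO and the number of reads are assumptions of this sketch, solver availability depends on one's D-Wave account, and minor embedding (Step 2) is delegated to EmbeddingComposite.

```python
import dimod
from dwave.system import DWaveSampler, EmbeddingComposite

# Step 1: formulate the problem as a QUBO dictionary {(i, j): coefficient}.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}  # toy objective
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Steps 2-4: the composite handles minor embedding; the annealer runs
# num_reads shots (the resampling process of Steps 4 and 5).
sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample(bqm, num_reads=1000)

# Step 5: read out the lowest-energy sample found.
print(sampleset.first.sample, sampleset.first.energy)
```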

SECTION III.

QAAs Solving NP-Hard Problems

This section presents QAAs using QUBO formulas to solve specific NP-hard problems, including the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The QAAs are elaborated one by one in the following subsections.

A. The QAA solving the SSP

The subset sum problem (SSP) is defined as follows:

Given a set S=\{s_{1},s_{2},\ldots,s_{n}\} with n integers, and a target integer T , the SSP is to find a subset S^{\prime} of S such that the sum of the integers in the subset S^{\prime} is exactly T .

The QAA solving the SSP utilizes the following QUBO formula:

\begin{equation*} H(x)=\left ({\sum _{i=1}^{n}{s_{i} x_{i}}-T}\right)^{2}. \tag{9}\end{equation*}

In Equation (9), s_{i} is an integer in S ; x_{i}=1 indicates that s_{i} is in S^{\prime} , and x_{i}=0 indicates that it is not, where 1\leq i \leq n . The number of QUBO variables is O(n), which scales linearly with the problem size n , and the QUBO formula has only the constraint term.
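As an illustration, the following sketch expands Equation (9) into QUBO coefficients: using x_i^2 = x_i, the diagonal entry for x_i is s_i(s_i - 2T), each pair (i, j) receives 2 s_i s_j, and the constant T^2 is dropped since it only shifts the energy. The instance values are illustrative.

```python
def ssp_qubo(s, T):
    """Build the QUBO dict of Equation (9): H(x) = (sum_i s_i x_i - T)^2,
    with the constant term T^2 dropped."""
    Q = {}
    for i, si in enumerate(s):
        Q[(i, i)] = si * (si - 2 * T)      # from s_i^2 x_i - 2 T s_i x_i
        for j in range(i + 1, len(s)):
            Q[(i, j)] = 2 * si * s[j]      # cross terms 2 s_i s_j x_i x_j
    return Q

# Example: S = {5, 7, 11, 13}, T = 18; minima select {5, 13} or {7, 11}.
Q = ssp_qubo([5, 7, 11, 13], 18)
```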

B. The QAA solving the MCP

The maximum cut problem (MCP) is defined as follows:

Given an undirected graph G=(V, E) with the vertex set V and the edge set E , a cut in G is a subset S \subseteq V . The MCP is to find a cut S such that EWS(S, S^{\prime}) is maximized, where S^{\prime} = V - S , and EWS(S, S^{\prime}) stands for the sum of the weights of the edges between S and S^{\prime} .

The QAA solving the MCP utilizes the following QUBO formula:

\begin{equation*} H(x)=\sum _{(u,v)\in E} w_{uv}(-x_{u}-x_{v} + 2x_{u} x_{v}). \tag{10}\end{equation*}

In Equation (10), x_{u} = 1 (resp., x_{u} = 0 ) indicates that u is (resp., is not) in S , x_{v} = 1 (resp., x_{v} = 0 ) indicates that v is (resp., is not) in S , and w_{uv} is the weight associated with the edge (u, v) . The number of QUBO variables is O(n), where n is the number of vertices, so it scales linearly with the problem size n , and the QUBO formula has only the optimization term.
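The sketch below translates Equation (10) directly into a QUBO dictionary; the weighted edge-list input format is an assumption of this sketch.

```python
def mcp_qubo(edges):
    """Build the QUBO dict of Equation (10) from edges [(u, v, w_uv), ...];
    minimizing H(x) maximizes the total weight of cut edges."""
    Q = {}
    for u, v, w in edges:
        Q[(u, u)] = Q.get((u, u), 0) - w       # -w_uv * x_u
        Q[(v, v)] = Q.get((v, v), 0) - w       # -w_uv * x_v
        key = (min(u, v), max(u, v))
        Q[key] = Q.get(key, 0) + 2 * w         # +2 w_uv * x_u x_v
    return Q
```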

C. The QAA solving the VCP

The vertex cover problem (VCP) is defined as follows:

Given an undirected graph G=(V, E) with the vertex set V and the edge set E , the VCP is to find a minimum-sized subset V^{\prime} of V , such that for every edge (u, v) in E , either u or v is in V^{\prime} .

The QAA solving the VCP utilizes the following QUBO formula:

\begin{equation*} H(x)=A\sum _{(u,v)\in E}(1-x_{u})(1-x_{v})+B\sum _{v\in V}x_{v}. \tag{11}\end{equation*}

In Equation (11), x_{u} = 1 (resp., x_{u} = 0 ) indicates that u is (resp., is not) in V^{\prime} , and x_{v} = 1 (resp., x_{v} = 0 ) indicates that v is (resp., is not) in V^{\prime} . The first term is the constraint term with weight A , whereas the second term is the optimization term with weight B . The number of QUBO variables is O(n), where n is the number of vertices, so it scales linearly with the problem size n , and the QUBO formula has both the constraint term and the optimization term.
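A direct translation of Equation (11) follows; the default values of the weights A and B are placeholders for this sketch and should in practice be set by a WSM as described in Section II-C.

```python
def vcp_qubo(vertices, edges, A=2.0, B=1.0):
    """Build the QUBO dict of Equation (11). Expanding
    (1 - x_u)(1 - x_v) = 1 - x_u - x_v + x_u x_v and dropping the
    per-edge constant gives the contributions below."""
    Q = {(v, v): B for v in vertices}          # optimization term B * x_v
    for u, v in edges:                         # constraint term, weight A
        Q[(u, u)] -= A
        Q[(v, v)] -= A
        key = (min(u, v), max(u, v))
        Q[key] = Q.get(key, 0) + A
    return Q
```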

D. The QAA solving the 0/1 KP

The 0/1 knapsack problem (0/1 KP) is defined as follows:

Consider a knapsack with capacity W and n objects o_{1},\ldots, o_{n} whose weights are w_{1},\ldots, w_{n} and whose costs are c_{1},\ldots, c_{n} . The 0/1 KP is to select objects to form a set S such that \sum _{o_{i} \in S} c_{i} is maximized under the constraint \sum _{o_{i} \in S} w_{i} \leq W (i.e., selected objects can be accommodated by the knapsack and have the maximum total cost).

The QAA solving the 0/1 KP utilizes the following QUBO formula:

\begin{align*} H(x)&=A\left ({1-\sum _{j=1}^{W} y_{j}}\right)^{2} \\ &\quad +A\left ({\sum _{j=1}^{W} jy_{j}-\sum _{i=1}^{n} w_{i}x_{i}}\right)^{2} \\ &\quad -B\sum _{i=1}^{n} c_{i} x_{i} \tag{12}\end{align*}

In Equation (12), x_{i}=1 if o_{i}\in S (i.e., if o_{i} is selected to be put in the knapsack), where 1\leq i\leq n . Furthermore, y_{j}=1 if the total weight of the selected objects in S is j , where 1\leq j\leq W . The number of QUBO variables is O(n+W), which scales linearly with the problem size n plus W , and the QUBO formula has both the constraint term and the optimization term. Note that if we instead take the number b=\log W of bits representing W as the input size, then the number of QUBO variables is O(n+2^{b}), which no longer scales linearly with the problem size. This mirrors the fact that the 0/1 KP admits a pseudo-polynomial-time dynamic programming algorithm with time complexity O(n\times W) [45]. However, we still take the value of W as part of the problem input size and regard the number of QUBO variables, O(n+W), as scaling linearly with the problem size n and W .
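The sketch below builds Equation (12) with a small helper that adds a squared linear penalty weight*(sum_k a_k v_k + const)^2 to a QUBO dictionary; the indexing (x_i at indices 0..n-1, the one-hot y_j at indices n..n+W-1) is a choice made for this sketch.

```python
def add_squared(Q, terms, const, weight):
    """Add weight * (sum_k a_k v_k + const)^2 to Q for binary v_k,
    given terms = [(v_k, a_k), ...] (uses v^2 = v; the constant
    weight * const^2 is dropped)."""
    for idx, (vi, ai) in enumerate(terms):
        Q[(vi, vi)] = Q.get((vi, vi), 0) + weight * (ai * ai + 2 * ai * const)
        for vj, aj in terms[idx + 1:]:
            key = (min(vi, vj), max(vi, vj))
            Q[key] = Q.get(key, 0) + weight * 2 * ai * aj

def kp_qubo(w, c, W, A, B):
    """Build the QUBO dict of Equation (12): x_i at index i selects
    object o_i; y_j at index n + j - 1 asserts the total weight is j."""
    n = len(w)
    y = {j: n + j - 1 for j in range(1, W + 1)}
    Q = {}
    add_squared(Q, [(y[j], 1) for j in y], -1, A)            # one y_j is 1
    add_squared(Q, [(y[j], j) for j in y]
                   + [(i, -w[i]) for i in range(n)], 0, A)   # weights match
    for i in range(n):                                       # -B * c_i * x_i
        Q[(i, i)] = Q.get((i, i), 0) - B * c[i]
    return Q
```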

E. The QAA solving the GCP

The graph coloring problem (GCP) is defined as follows:

Given a chromatic number n , and an undirected graph G=(V, E) with the vertex set V and the edge set E of m edges, the GCP is to decide if it is possible to color all vertices in V such that for every edge (u, v) in E , vertices u and v have different colors.

The QAA solving the GCP utilizes the following QUBO formula:

\begin{align*} H(x)&=\sum _{v\in V}\left ({1-\sum _{i=1}^{n}x_{v,i}}\right)^{2} \\ &\quad +\sum _{(u,v)\in E}\sum _{i=1}^{n}{x_{u,i}x_{v,i} } \tag{13}\end{align*}

In Equation (13), x_{v,i}=1 indicates that vertex v is colored with color i, 1\leq i\leq n . The first term and the second term are both constraint terms. The first constraint term means that every vertex should be colored with exactly one color. The second constraint term means that adjacent vertices should be colored with different colors. The number of QUBO variables is O(m\times n), which does not scale linearly with the problem size m and n , and the QUBO formula has only the constraint term.
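A sketch of Equation (13) follows, with variables keyed by (vertex, color) tuples; expanding the one-hot penalty (1 - sum_i x_{v,i})^2 gives -1 on each diagonal entry and +2 on each same-vertex color pair, with the constant dropped. Vertices are assumed to be comparable values such as integers.

```python
def gcp_qubo(vertices, edges, n_colors):
    """Build the QUBO dict of Equation (13); x[(v, i)] = 1 means
    vertex v is colored with color i."""
    Q = {}
    def add(a, b, val):
        key = (a, b) if a <= b else (b, a)
        Q[key] = Q.get(key, 0) + val
    for v in vertices:                         # exactly one color per vertex
        for i in range(n_colors):
            add((v, i), (v, i), -1)
            for j in range(i + 1, n_colors):
                add((v, i), (v, j), 2)
    for u, v in edges:                         # adjacent vertices differ
        for i in range(n_colors):
            add((u, i), (v, i), 1)
    return Q
```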

F. The QAA solving the HCP

The Hamiltonian cycle problem (HCP) is defined as follows:

Given an undirected graph G=(V, E) with the vertex set V of n vertices denoted by numbers 1,\ldots,n and the edge set E , the HCP is to decide if there exists a Hamiltonian cycle starting at an arbitrary vertex s , visiting every other vertex exactly once, and going back to vertex s .

The QAA solving the HCP utilizes the following QUBO formula:

\begin{align*} H(x)&=\sum _{v=1}^{n}\left ({1-\sum _{j=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +\sum _{j=1}^{n}\left ({1-\sum _{v=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +\sum _{(u, v) \notin E}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \tag{14}\end{align*}

In Equation (14), x_{v,j}=1 represents that vertex v is the j^{th} vertex visited, where 1 \leq v, j\leq n . The three terms in the QUBO formula are all constraint terms. The first term enforces that every vertex is visited exactly once. The second term enforces that exactly one vertex is visited at each position. The third term enforces that if vertex v is visited immediately after vertex u , then there must be an edge (u,v)\in E . The number of QUBO variables is O(n^{2}), which does not scale linearly with the problem size n , and the QUBO formula has only the constraint term.
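The following sketch builds Equation (14); position n+1 is identified with position 1 to close the cycle, a common convention that the formula leaves implicit.

```python
def hcp_qubo(n, edges):
    """Build the QUBO dict of Equation (14); x[(v, j)] = 1 means
    vertex v (numbered 1..n) is the j-th vertex visited."""
    E = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    Q = {}
    def add(a, b, val):
        key = (a, b) if a <= b else (b, a)
        Q[key] = Q.get(key, 0) + val
    def one_hot(vars_):                    # (1 - sum x)^2, constant dropped
        for k, va in enumerate(vars_):
            add(va, va, -1)
            for vb in vars_[k + 1:]:
                add(va, vb, 2)
    for v in range(1, n + 1):              # each vertex visited exactly once
        one_hot([(v, j) for j in range(1, n + 1)])
    for j in range(1, n + 1):              # one vertex per position
        one_hot([(v, j) for v in range(1, n + 1)])
    for u in range(1, n + 1):              # consecutive visits need an edge
        for v in range(1, n + 1):
            if u != v and (u, v) not in E:
                for j in range(1, n + 1):
                    add((u, j), (v, j % n + 1), 1)
    return Q
```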

G. The QAA solving the TSP

The traveling salesperson problem (TSP) is defined as follows:

Given a directed graph G=(V, E) with the vertex set V of n vertices denoted by numbers 1,\ldots,n and the edge set E , the TSP is to find a cycle that first visits an arbitrary vertex s , then sequentially visits every other vertex exactly once, and at last visits vertex s , such that the sum of weights of edges included in the cycle is minimized.

The QAA solving the TSP utilizes the following QUBO formula:

\begin{align*} H(x)&=A\sum _{v=1}^{n}\left ({1-\sum _{j=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +A\sum _{j=1}^{n}\left ({1-\sum _{v=1}^{n}x_{v,j}}\right)^{2} \\ &\quad +A\sum _{(u,v)\notin E}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \\ &\quad +B\sum _{(u,v)\in E}W_{u,v}\sum _{j=1}^{n}x_{u,j}x_{v,j+1} \tag{15}\end{align*}

In Equation (15), x_{v,j}=1 represents that vertex v is the j^{th} vertex to be visited, where 1 \leq v, j\leq n . There are four terms in the QUBO formula. The first three are constraint terms with weight A : the first enforces that each vertex is visited exactly once, the second enforces that exactly one vertex is visited at each position, and the third enforces that if the visit of vertex u is immediately followed by the visit of vertex v , then there must exist an edge (u, v) \in E . The fourth term is the optimization term, in which W_{u,v} is the weight associated with the edge (u,v) . The number of QUBO variables is O(n^{2}), which does not scale linearly with the problem size n , and the QUBO formula has both the constraint term and the optimization term.
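Equation (15) extends the HCP formula with weights, as the sketch below shows; A and B are the constraint and optimization weights (Section V-G discusses how they may be set), and position n+1 again wraps to position 1 to close the tour.

```python
def tsp_qubo(n, weighted_edges, A, B=1.0):
    """Build the QUBO dict of Equation (15) for a directed graph given as
    [(u, v, W_uv), ...]; x[(v, j)] = 1 means vertex v is visited j-th."""
    W = {(u, v): w for u, v, w in weighted_edges}
    Q = {}
    def add(a, b, val):
        key = (a, b) if a <= b else (b, a)
        Q[key] = Q.get(key, 0) + val
    def one_hot(vars_, weight):            # weight * (1 - sum x)^2
        for k, va in enumerate(vars_):
            add(va, va, -weight)
            for vb in vars_[k + 1:]:
                add(va, vb, 2 * weight)
    for v in range(1, n + 1):              # each vertex visited exactly once
        one_hot([(v, j) for j in range(1, n + 1)], A)
    for j in range(1, n + 1):              # one vertex per position
        one_hot([(v, j) for v in range(1, n + 1)], A)
    for u in range(1, n + 1):
        for v in range(1, n + 1):
            if u == v:
                continue
            for j in range(1, n + 1):
                jn = j % n + 1
                if (u, v) in W:            # tour-length optimization term
                    add((u, j), (v, jn), B * W[(u, v)])
                else:                      # penalty for a missing edge
                    add((u, j), (v, jn), A)
    return Q
```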

H. The QAA solving the JSP

The job shop scheduling problem (JSP) is defined as follows:

Consider a set \mathcal M=\{M_{1},\ldots,M_{m}\} of m machines and a set \mathcal J=\{J_{1},\ldots,J_{n}\} of n jobs, where job J_{c} consists of a sequence JO_{c} of operations for 1\leq c\leq n . Let JO_{1}=(o_{1},\ldots,o_{k_{1}}), JO_{2}=(o_{k_{1}+1},\ldots,o_{k_{2}}),\ldots,JO_{n}=(o_{k_{n-1}+1},\ldots,o_{k_{n}}) , where k_{n} is the total number of operations. Note that k_{0} is set as 0 to be used later. Each operation is associated with an index i, 1\leq i\leq k_{n} , and its processing time is denoted as p_{i} . An operation needs to be processed on a specific machine, operations of a job should be processed sequentially, and only one operation of a job can be processed at a given time. The JSP is to find a schedule to assign operations to machines for minimizing the makespan, that is, the total length of the schedule, or the latest finish time of operations.

The QAA solving the JSP utilizes the following QUBO formula:

\begin{align*} H(x)&=A\, h_{1}(x)+B\, h_{2}(x)+C\, h_{3}(x)+D\, h_{o}(x), \text { where} \\ h_{1}(x)&=\sum _{c=1}^{n}\left ({\sum _{\substack {k_{c-1}< i< k_{c}\\ t+p_{i}>t^{\prime} }} x_{i,t} x_{i+1,t^{\prime} }}\right) \\ h_{2}(x)&=\sum _{d=1}^{m}\left ({\sum _{(i,t,i^{\prime},t^{\prime}) \in R_{d}} x_{i,t} x_{i^{\prime},t^{\prime} }}\right) \\ h_{3}(x)&=\sum _{i=1}^{k_{n}}\left ({\sum _{t=0}^{T} x_{i,t} - 1}\right)^{2} \\ h_{o}(x)&=\sum _{i=1}^{k_{n}}\sum _{t=0}^{T}\left ({x_{i,t}\left ({k_{n}+1}\right)}\right)^{t+p_{i}} \tag{16}\end{align*}

In Equation (16), x_{i,t}=1 represents that operation o_{i} starts at time t\leq T , where T is the maximum time. Furthermore, R_{d}=A_{d} \cup B_{d}, A_{d}=\{(i,t,i^{\prime},t^{\prime}): (i,i^{\prime}) \in I_{d} \times I_{d}, i\neq i^{\prime}, 0\leq t, t^{\prime} \leq T, 0< t^{\prime} {-} t < p_{i}\}, B_{d}=\{(i,t,i^{\prime},t^{\prime}): (i,i^{\prime}) \in I_{d} \times I_{d}, i< i^{\prime}, t=t^{\prime}, p_{i}>0, p_{i^{\prime} }>0\} , and I_{d} is the set of indices of all operations that are restricted to be executed on machine M_{d}, 1\leq d\leq m .

The terms h_{1}(x), h_{2}(x) , and h_{3}(x) are constraint terms. The term h_{1}(x) restricts that all operations of a job should be processed sequentially. The term h_{2}(x) restricts that a machine can process only one operation at a time. The term h_{3}(x) restricts that every operation should be processed exactly once. The term h_{o}(x) is the optimization term for minimizing the makespan, since h_{o}(y)< h_{o}(z) , where y is the optimal solution, and z is any non-optimal (but feasible) solution.

The number of QUBO variables is O(k_{n}\times T), which does not scale linearly with the problem size k_{n} and T , and the QUBO formula has both the constraint term and the optimization term.
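A sketch of Equation (16) follows; the input format (jobs as ordered lists of operation indices, plus per-operation machine and processing-time maps) is an assumption of this sketch, and the h_o coefficients (k_n + 1)^(t + p_i) grow extremely fast, so the construction is practical only for tiny instances such as those benchmarked in Section V-H.

```python
def jsp_qubo(jobs, machine_of, p, T, A, B, C, D):
    """Build the QUBO dict of Equation (16); x[(i, t)] = 1 means
    operation i starts at time t. jobs lists each job's operation
    indices in order; machine_of[i] and p[i] give operation i's
    machine and processing time."""
    Q = {}
    def add(a, b, val):
        key = (a, b) if a <= b else (b, a)
        Q[key] = Q.get(key, 0) + val
    ops = [i for job in jobs for i in job]
    k_n = len(ops)
    for job in jobs:                       # h_1: job order is respected
        for i, i2 in zip(job, job[1:]):
            for t in range(T + 1):
                for t2 in range(T + 1):
                    if t + p[i] > t2:
                        add((i, t), (i2, t2), A)
    for i in ops:                          # h_2: one operation per machine
        for i2 in ops:
            if i2 == i or machine_of[i] != machine_of[i2]:
                continue
            for t in range(T + 1):
                for t2 in range(T + 1):
                    if 0 < t2 - t < p[i]:                          # set A_d
                        add((i, t), (i2, t2), B)
                    elif i < i2 and t == t2 and p[i] > 0 and p[i2] > 0:
                        add((i, t), (i2, t2), B)                   # set B_d
    for i in ops:                          # h_3: each operation starts once
        for t in range(T + 1):
            add((i, t), (i, t), -C)
            for t2 in range(t + 1, T + 1):
                add((i, t), (i, t2), 2 * C)
    for i in ops:                          # h_o: makespan minimization
        for t in range(T + 1):
            add((i, t), (i, t), D * (k_n + 1) ** (t + p[i]))
    return Q
```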

SECTION IV.

The QUBO Formula Classification

As just shown in the last section, QUBO formulas are used by QAAs to solve the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The QUBO formulas can be classified into four classes according to the following two classification criteria.

  • Classification criterion 1 (CC1): Does the number of QUBO variables have a linear relationship with the problem input size?

    Some QUBO formulas meet CC1 and maintain a linear relationship between the number of QUBO variables and the problem input size, whereas some QUBO formulas do not. For example, the QUBO formulas used by the QAAs to solve the SSP, MCP, VCP, and 0/1 KP meet CC1. On the contrary, the QUBO formulas used by the QAAs to solve the GCP, HCP, TSP, and JSP do not meet CC1.

  • Classification criterion 2 (CC2): Does the QUBO formula have both the constraint term and the optimization term?

    Some QUBO formulas do not meet CC2 and have either the constraint term or the optimization term. However, some QUBO formulas meet CC2 and have both the constraint term and the optimization term. For example, the QUBO formulas used by the QAAs to solve the SSP, MCP, GCP, and HCP have either the constraint term or the optimization term. On the contrary, the QUBO formulas used by the QAAs to solve the VCP, 0/1 KP, TSP, and JSP have both the constraint term and the optimization term.

As shown in Table 2, QUBO formulas can be classified into four classes according to the two criteria CC1 and CC2. Table 2 also shows examples for each class. Specifically, QUBO formulas used by QAAs to solve the SSP and the MCP belong to Class-1. QUBO formulas used by QAAs to solve the VCP and the 0/1 KP belong to Class-2. QUBO formulas used by QAAs to solve the GCP and the HCP belong to Class-3. QUBO formulas used by QAAs to solve the TSP and the JSP belong to Class-4.

TABLE 2. The four-class classification of QUBO formulas.

SECTION V.

Benchmarks of QAAs Solving NP-Hard Problems

This section benchmarks the above-mentioned QAAs against CAs with the best known solutions (i.e., the best solution quality). It presents performance comparisons of the QAAs and related CAs for solving the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP. The experiments are conducted by accessing the D-Wave Advantage quantum annealer through the Amazon Braket service to run the QAAs. The performance figures of the CAs solving the same problems are taken from research papers in the literature. Certainly, the QAAs and the CAs are applied to the same problem instances for the sake of fair comparison. The hardware and software specifications that significantly influence the CA execution time are not the same across problems, though; we still include the CA execution time reported in existing papers for reference in benchmarking.

A. Benchmarking the QAA Solving the SSP

The public problem instances, p01, p02, p03, p04, p05, p06, and p07, derived from [46] are used for benchmarking algorithms solving the SSP. Every integer element in set S of the SSP instance is between 5 and 30, and the target integer T is between 20 and 200. The comparative CA is a dynamic programming algorithm [46].

As shown in Table 3, the CA finds correct solutions (indicated by “Y”) to all problem instances. However, the QAA cannot find the correct solution to the problem instance p03, in which set S contains many large integers. In other words, the CA finds solutions to all problem instances, and so does the QAA except for one problem instance. Thus, the QAA is almost as good as the CA in terms of the quality of solutions, but the CA is a little better. However, the QAA usually consumes less time, in terms of both the total execution time (ET) and the QPU time (QT). For problem instances p02 and p03, the QAA is faster than the CA by factors of around 30 and 130, respectively. Note that in the tables below, a result in blue with the superscript =, a result in red with the superscript +, and a result in green with the superscript − indicate that the QAA’s solution (Sol.) is equal to, better than, or worse than that of the CA, respectively. Also note that the time unit in the benchmark tables is the second.

TABLE 3. Benchmarks of algorithms solving the SSP.

B. Benchmarking the QAA solving the MCP

The problem instances, g22, g23, g24, g25, g27, g32, g33, g35, g36, and g37, derived from a publicly available database [47] are used for benchmarking algorithms solving the MCP. The instances include cyclic graphs, planar graphs, and random graphs with edge weights of 1, 0, and -1. The number of graph nodes in the instances ranges from 800 to 3000. The CA to be compared is the optimization software provided by Meta-Analytics [48].

As shown in Table 4, both the QAA and the CA find good solutions to the MCP instances, and the QAA even finds better solutions to problem instances g33 and g36, improving the maximum cut weight sum from 1380 to 1382 and from 7674 to 7675, respectively. Due to the limited number of qubits, large problem instances must first be decomposed by a tool running on a classical computer before being embedded in the QPU for quantum annealing. Therefore, the QAA has a longer execution time than the CA, as shown in Table 4. However, the QPU time of the QAA is shorter than the execution time of the CA. Note that EnergyImpactDecomposer, a problem decomposition tool provided in the D-Wave Ocean library [49], is used to decompose the problem into smaller subproblems based on the energy contributions of variables.

TABLE 4. Benchmarks of algorithms solving the MCP.

C. Benchmarking the QAA solving the VCP

The public problem instances, p-hat300-1, keller4, brock400-2, keller5, DSJC500.5, C1000.9, and keller6, derived from the DIMACS (Discrete Mathematics and Theoretical Computer Science) challenge [50] are used for benchmarking algorithms solving the VCP. The number of vertices in vertex set V of the VCP instances is between 300 and 1000. The comparative CA is a branch-and-bound algorithm [51].

There are two QAAs, QAA1 and QAA2, for benchmarking. QAA1 sets the optimization weight B to 1 and then sets the penalty weight A by employing applicable WSMs in the following order: MOC, MOMC, VLM, MQC, and UB, making the penalty weight larger and larger until feasible solutions are found. In our experience, small penalty weights usually lead to infeasible solutions, whereas large penalty weights tend to yield feasible solutions. However, excessively large penalty weights may lead to feasible solutions of low quality. This is why QAA1 applies the WSMs in the order of MOC, MOMC, VLM, MQC, and UB. Note that a WSM may not be applicable to a given QUBO formula; for example, MOC cannot be applied to a QUBO formula whose matrices have different dimensions.

The penalty weights of the QAA2 are set by running an evolutionary algorithm, the genetic algorithm (GA) provided in [52], for many iterations. The GA encodes penalty weights as bit vectors to go through the population initialization, fitness evaluation, elitism selection, crossover, and mutation processes. The GA details are described as follows. The initial population size is 10. The QAA is used to evaluate the fitness value. The elite number of the roulette-wheel elitism selection process is 4. The crossover rate is 0.9, and the one-point crossover is adopted for the crossover process. The mutation rate is 0.2, and the one-point mutation is employed to flip the bit at the mutation point. The maximum number of iterations to run the GA is 50.

As shown in Table 5, QAA1 uses the UB method to find solutions to the three problem instances keller4, keller5, and keller6, and uses the MOC method for the other problem instances. QAA1 matches the CA's solutions on three problem instances, p-hat300-1, keller4, and DSJC500.5, and has worse solutions than the CA on the others. QAA2 matches the CA's solutions on five problem instances and has worse solutions on two, keller5 and keller6. Furthermore, it can be observed that the QAAs may be faster or slower than the CA for the VCP instances.

TABLE 5. Benchmarks of algorithms solving the VCP.

D. Benchmarking the QAA solving the 0/1 KP

The public problem instances, L3, L4, L6, L7, L11, and L14, derived from [47] are used for benchmarking algorithms solving the 0/1 KP. The number of objects in the 0/1 KP instances is between 4 and 45, and the knapsack capacity is between 20 and 1000. The comparative CA is Google OR-Tools [53], a free and open-source toolkit that uses linear programming, mixed-integer programming, constraint programming, and vehicle routing algorithms to solve optimization problems.

There are two QAAs, QAA1 and QAA2, for benchmarking. QAA1 sets the optimization weight B to 1 and then sets the penalty weight A by employing applicable WSMs in the following order: MOC, MOMC, VLM, MQC, and UB, making the penalty weight larger and larger until feasible solutions are found. Unfortunately, none of the WSMs can set the penalty weights properly to find feasible solutions; the acronym “NF” (“Not Found”) indicates such cases in Table 6. The penalty weights of QAA2 are set by running the GA provided in [52]. The GA parameters are set as follows. The number of bits representing the penalty weight as an integer is 14. The initial population size is 10. The elite number of the selection process is 4. The crossover rate is 0.9... The crossover rate is 0.5, and the one-point crossover is adopted for the crossover process. The mutation rate is 0.2, and the one-point mutation is employed to flip the bit at the mutation point. The maximum number of GA iterations is 100.

TABLE 6. Benchmarks of algorithms solving the 0/1 KP.

As shown in Table 6, QAA2 and the CA have the same solutions to most problem instances; however, QAA2 has worse solutions than the CA to problem instances L11 and L14. Furthermore, it can be observed that the CA has a shorter computation time than the QAAs.

E. Benchmarking the QAA solving the GCP

The public problem instances, R125.1, DSJC125.1, DSJC125.5, R250.1, DSJC250.1, DSJC250.5, DSJC500.1, and le450_15d, derived from [54] are used for benchmarking algorithms solving the GCP. The number of vertices in vertex set V of the GCP instances is between 125 and 450. The comparative CA is a memetic algorithm combining the teaching-learning concept and the tabu-search concept [55].

As shown in Table 7, the QAA cannot find solutions to the GCP instances DSJC125.5, DSJC250.5, DSJC500.1, and le450_15d, whereas the CA can find solutions to all problem instances. Furthermore, the QAA may be faster or slower than the CA, depending on the problem instance.

TABLE 7. Benchmarks of algorithms solving the GCP.

F. Benchmarking the QAA solving the HCP

The public problem instances, 4_H, 6_H-1, 6_H-2, 8_H-1, 8_H-2, and alb1000, derived from [56] are used for benchmarking algorithms solving the HCP. The number of vertices in vertex set V of the HCP instances is between 1000 and 5000. The comparative CA is an algorithm using the “snakes and ladders” heuristic [57]. The algorithm is implemented in Python and C++ [58].

As shown in Table 8, the QAA cannot find a solution to the problem instance alb1000 (indicated by “NF”), whereas the CA can find solutions to all problem instances. The CA execution time is 0.023 seconds for the alb1000 instance; for the other instances, it cannot be found in research papers and is indicated by “N/A” (i.e., “not available”) in Table 8. The QAA takes several seconds to run and requires less than 0.202 seconds of QPU time. However, the QAA cannot run properly for the alb1000 instance: it produces a “Kernel died” error (indicated by “X” in Table 8) and returns no solution. The error occurs because the problem size is too large, resulting in too many QUBO variables.

TABLE 8. Benchmarks of algorithms solving the HCP.

G. Benchmarking the QAA solving the TSP

The public problem instances, br17, ftv33, ftv35, gr17, gr21, p43, ry48p, and kro124p, derived from TSPLIB [59] are used for benchmarking algorithms solving the TSP. The number of vertices of the TSP instances is between 17 and 124. The comparative CA is the algorithm provided by Google OR-Tools [53].

There are three QAAs, QAA1, QAA2, and QAA3, for benchmarking. Again, QAA1 sets the optimization weight B to 1 and then sets the penalty weight A by employing applicable WSMs in the following order: MOC, MOMC, VLM, MQC, and UB, making the penalty weight larger and larger until feasible solutions are found. QAA2 uses a method called “total edge weight adjustment (TEWA)” to set the optimization weight B to 1 and the penalty weight A to \frac {n}{m}\sum _{e\in E} w_{e} (i.e., A is the sum of all edge weights multiplied by the number n of vertices and divided by the number m of edges). QAA3 sets the optimization weight B to 1 and uses the GA to set the penalty weight A with the following GA parameters. The number of bits representing the penalty weight is 16. The initial population size is 4. The elite number of the selection process is 2. The crossover rate is 0.9, and the one-point crossover is adopted for the crossover process. The mutation rate is 0.2, and the one-point mutation is employed to flip the bit at the mutation point. The maximum number of GA iterations is 3.

As shown in Table 9, QAA1, QAA2, QAA3, and the CA find an equally good solution to the problem instance br17, whereas QAA1, QAA2, and QAA3 have worse solutions than the CA to all other problem instances. Furthermore, the three QAAs are slower than the CA for all problem instances in terms of the total execution time. However, their QPU time is smaller than the execution time of the CA for some problem instances.

TABLE 9. Benchmarks of algorithms solving the TSP.

H. Benchmarking the QAA solving the JSP

The public problem instances, ft02, ft03, ft04, ft06, la03, and ft10, provided by OR-Library [60] are used for benchmarking algorithms solving the JSP. The comparative CA is the algorithm provided by Google OR-Tools [53].

The QUBO formula used by the QAA has three constraint terms and one optimization term, each with a different weight, so none of MOC, MOMC, VLM, MQC, and UB is applicable for tuning the penalty weights. Instead, the GA is adopted as the WSM to tune the weights A, B, C , and D . The GA parameters are set as follows. The number of bits representing each weight is 5. The initial population size is 10 (for ft02, ft03, ft04, and ft06) or 4 (for la03 and ft10). The elite number of the selection process is 4 (for ft02, ft03, ft04, and ft06) or 2 (for la03 and ft10). The crossover rate is 0.9, and the one-point crossover is adopted for the crossover process. The mutation rate is 0.2, and the one-point mutation is employed to flip the bit at the mutation point. The maximum number of GA iterations is 10 (for ft02, ft03, ft04, and ft06) or 3 (for la03 and ft10).

As shown in Table 10, the QAA has the same solutions as the CA to the problem instances ft02, ft03, ft04, and ft06. However, the QAA cannot find any feasible solution (indicated by “NF”) to the problem instances la03 and ft10. Furthermore, both the execution time and the QPU time of the QAA are longer than the execution time of the CA.

TABLE 10. Benchmarks of algorithms solving the JSP.

SECTION VI.

Observations from QAA Benchmarks

From the benchmarks of the QAAs and CAs for solving different NP-hard problems, we derive the following observations and suggestions.

QAAs using Class-1 QUBO formulas are likely to perform better than or similarly to related CAs. This is because the number of QUBO variables of a Class-1 QUBO formula scales linearly with the problem input size. Furthermore, a Class-1 QUBO formula has either the constraint term or the optimization term, but not both. Thus, a Class-1 QUBO formula usually has fewer QUBO variables and simpler coupling relationships between variables than formulas of the other classes. The graph associated with a Class-1 QUBO formula is therefore easier to embed into the QPU properly, leading to short computation and annealing times, along with good solutions.

A Class-2 QUBO formula has both the constraint term and the optimization term, but the number of its QUBO variables scales linearly with the problem input size. Thus, a Class-2 QUBO formula usually has fewer QUBO variables than formulas of other classes, though it may have more complex coupling relationships. Conversely, a Class-3 QUBO formula has either the constraint term or the optimization term, leading to simpler coupling relationships; however, the number of its QUBO variables does not scale linearly with the problem input size, implying more QUBO variables. As the benchmarks shown earlier indicate, QAAs using Class-2 or Class-3 QUBO formulas perform similarly to, or a little worse than, related CAs.

A Class-4 QUBO formula has both the constraint term and the optimization term, and the number of its QUBO variables does not scale linearly with the problem input size. Thus, a Class-4 QUBO formula has more QUBO variables and more complex coupling relationships than formulas of the other classes. The graph associated with a Class-4 QUBO formula is therefore hard to embed into the QPU properly, leading to long computation and annealing times, along with unfavorable solutions.

For QAAs using Class-2 or Class-4 QUBO formulas, which contain both constraint terms and optimization terms, the penalty weight and the optimization weight must be set before the quantum annealing process starts. It is suggested to set the optimization weight to 1 and then set the penalty weight by employing applicable WSMs in the following order: MOC, MOMC, VLM, MQC, and UB, making the penalty weight larger and larger until feasible solutions are found. It is observed that small penalty weights may lead to infeasible solutions, whereas large penalty weights tend to yield feasible solutions. However, excessively large penalty weights may lead to feasible solutions of low quality. This is why it is suggested to apply the WSMs in the order of MOC, MOMC, VLM, MQC, and UB.

It is also suggested to adopt the GA to set the optimization weight and the penalty weight properly. The GA encodes weights as bit vectors and goes through the population initialization, fitness evaluation, elitism selection, crossover, and mutation processes. It runs iteration by iteration until the fitness value converges or the maximum number of iterations is reached. According to the QAA benchmarks shown earlier, the GA is promising for setting the penalty weight and the optimization weight properly so that QAAs find good solutions. However, the GA consumes considerable computation time, which is not included in the QAA benchmarks, to set the weights. It is suggested to use the GA to set the weights for a problem instance PI the first time it is solved. When PI is modified slightly, the same weights can still be applied to the modified instance; moreover, the same weights can also be applied to problem instances similar to PI. The GA computation time can therefore be regarded as a one-time preprocessing cost. A minimal sketch of such a GA-based weight tuner is given below.
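The sketch captures the spirit of the GA-based weight tuning used in Section V; the encoding width, the truncation-style elitist selection (in place of the roulette-wheel selection of [52]), and the hypothetical evaluate(weight) callback, which runs the QAA once with the decoded weight and returns a fitness value that penalizes infeasibility, are all assumptions of this sketch.

```python
import random

def tune_weight_ga(evaluate, n_bits=14, pop_size=10, elites=4,
                   cx_rate=0.9, mut_rate=0.2, iters=50):
    """Return the best penalty weight found, encoded as an n_bits-bit
    integer; evaluate(weight) must return a fitness (higher is better)."""
    mask = (1 << n_bits) - 1
    pop = [random.getrandbits(n_bits) for _ in range(pop_size)]
    for _ in range(iters):
        ranked = sorted(pop, key=evaluate, reverse=True)
        nxt = ranked[:elites]                      # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(ranked[:elites], 2)
            child = p1
            if random.random() < cx_rate:          # one-point crossover
                point = random.randrange(1, n_bits)
                low = (1 << point) - 1
                child = (p1 & low) | (p2 & ~low & mask)
            if random.random() < mut_rate:         # one-point mutation
                child ^= 1 << random.randrange(n_bits)
            nxt.append(child)
        pop = nxt
    return max(pop, key=evaluate)
```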

SECTION VII.

Conclusion

In this paper, QUBO formulas are classified into four classes according to two criteria: (i) Does the number of QUBO variables scale linearly with the problem input size? (ii) Does the QUBO formula have both the constraint term and the optimization term? The classification is exemplified by QUBO formulas used by QAAs to solve specific NP-hard problems, including the SSP, MCP, VCP, 0/1 KP, GCP, HCP, TSP, and JSP.

We benchmark the QAAs against related CAs and give observations from the benchmarks, along with derived suggestions for improving QAA performance. CAs and QAAs using Class-2 and Class-3 QUBO formulas provide solutions of similar quality to NP-hard problems, and the QAAs may have longer or shorter execution times than the CAs. QAAs using Class-1 (resp., Class-4) QUBO formulas are likely to be better (resp., worse) than CAs in terms of the quality of the solution and the time to the solution. It is believed that many QAAs will achieve superior performance in the future when we move beyond the current NISQ era [32], in which quantum devices have only a moderate number of error-prone qubits.

In the future, we plan to investigate more QAAs using different QUBO formulas to solve more problems, to exemplify the QUBO formula classification more thoroughly. In addition to the GA, we also plan to study the possibility of employing other metaheuristics, such as particle swarm optimization, ant colony optimization, and tabu search, to properly set the weights of constraint and optimization terms for improving QAA performance. Furthermore, we plan to apply the QUBO formulas to developing algorithms for computing machines similar to the quantum annealer, such as the digital annealer (DA) [61] and the coherent Ising machine (CIM) [62], to see whether the DA or the CIM can obtain better solutions than the quantum annealer.

Author image of Jehn-Ruey Jiang
Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan
Jehn-Ruey Jiang (Member, IEEE) received the Ph.D. degree in computer science from National Tsing Hua University, Hsinchu, Taiwan, in 1995. He joined Chung-Yuan Christian University as an Associate Professor, in 1995. Then, he joined Hsuan-Chuang University, in 1998, and became a Full Professor, in 2004. He is currently with the Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan. He also leads the Advanced Computing And Networking (ACAN) Laboratory, which focuses on investigating advanced technologies about computing and networking. His research interests include quantum annealing algorithms, universal quantum algorithms, quantum computing, quantum networking, the Internet of Things, the quantum Internet of Things, machine learning/deep learning, and quantum machine learning/deep learning.
Author image of Chun-Wei Chu
Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan
Chun-Wei Chu received the B.S. degree from the Department of Applied Mathematics, National Chung Hsing University, Taichung, Taiwan, in 2020, and the M.S. degree from the Department of Computer Science and Information Engineering, National Central University, Taoyuan, Taiwan, in 2022. His research interests include quantum annealing algorithms and the Internet of Things (IoT).
