Introduction
Optimization problems are omnipresent in all areas of science and industry. In particular, combinatorial optimization problems, which consist of searching for the global minimum of an objective function over discrete variables within a very large space of possible solutions, appear in many practical applications across industries, including finance, logistics, and manufacturing. For these problems, the size of the solution space usually grows exponentially with the number of variables. Although specialized algorithms can be used to find (often approximate) solutions for specific use cases, most optimization problems are intractable for sufficiently large systems.
In recent years, the state of quantum computing technology has advanced to become practically relevant [1], [2], [3]. Quantum computers exploit quantum behavior (phenomena that occur at small scales and low temperatures) to perform certain calculations that cannot be simulated efficiently on traditional, or classical, computers. For example, D-Wave Systems, an early player in the field, provides access to a particular type of analog quantum computer, referred to as a quantum annealer [4]. Quantum annealers have already been used with great success to solve hard optimization problems in multiple industrial applications, such as large-scale production [3], [5], [6] and research and development [7], [8] in manufacturing, financial investment strategy [9], shipping logistics [10], [11], [12], and mobility services [13], [14]. There is an abundance of excitement around the advances that quantum computing is expected to yield in many areas. In this article, we focus on a challenging application to a particular manufacturing quality-control problem, which is difficult to solve classically; this work is among the first studies applying quantum computing to a manufacturing problem (along with, for example, [5]). The problem is used as a platform to demonstrate how quantum computing can provide an edge in current industrial applications and to show its potential in the future as the technology continues to advance to ever more powerful generations.
In clutch manufacturing, improving the friction behavior of clutches through shape or design optimization plays a key role in large-scale production. Previous works have considered such improvements by analyzing different designs and material combinations [15], [16]. The complementary approach in this article, which we refer to as disk optimization, addresses the same goal with a different view and formulation. More specifically, our approach to improve the friction in a multidisk clutch consists of finding the optimal orientation for each of the disks. For this purpose, we formulate the problem as a quadratic unconstrained binary optimization (QUBO) problem [17] and introduce a quantum optimization algorithm to solve it using the D-Wave quantum annealer. First, we show that the quantum solver is able to match the performance of current classical algorithms for small problems by making direct use of the quantum processing unit (QPU). Second, we demonstrate that a hybrid quantum–classical solver shows an exceptional performance for large-scale problems, greatly outperforming classical benchmarks. Therefore, our application of a quantum optimization algorithm provides an edge over classical solutions for large problem sizes when deployed on a hybrid quantum–classical solver.
The rest of this article is organized as follows. Section II provides details on the optimization problem and its mathematical formulation. Sections III and IV describe the techniques used in this study (quantum solvers and classical benchmarks) and the testing approach, respectively. Results are presented and elaborated on, respectively, in Sections V and VI. Finally, Section VII concludes this article.
Problem Overview
In the ZF Group, one optimization problem arises in the manufacturing of a multidisk clutch (see Fig. 1). The multidisk clutch contains the following two alternating sets of disks.
Friction disks (metal disks padded with many friction elements); see brown paddings on lined clutch disk (label 2) in Fig. 1.
Smooth metal disks; see outer multidisk (label 3) in Fig. 1.
The combinatorial optimization problem we are interested in arises due to manufacturing thickness variations of the disks (f) and the attached friction paddings. During manufacturing, the friction disks (f) can each be stacked in 42 different rotational positions, which are then locked in. Thickness variations of each of the friction disks in a stack (e.g., seven friction disks) add up and can lead to decreased performance of the clutch under load. Such a decreased performance would be rejected in quality control tests after manufacturing.
The optimization question is therefore: in which of the 42 possible rotations should each of the disks with friction paddings be placed to optimize the clutch performance and therefore maximize quality according to quality control?
Two relevant metrics for quantifying the clutch performance are as follows.
M1—The standard deviation of the friction disk stack's thickness deviations from the mean.
M2—The range, defined as the difference between the highest and the lowest total thickness of the friction disk stack.
Both metrics are derived in Section II-A and are given in (3) and (4), respectively.
A. Problem Formulation
The optimization problem focuses specifically on the clutch's set of friction disks (f) (see Fig. 1) and a stack of $N_{D}$ such disks. Each disk $k$ consists of $N_{S}$ discrete friction elements that line up with fixed segments (see Fig. 2), and we denote by $A_{k,i}$ the height of the element of disk $k$ located at segment $i$. The mean element height is
\begin{equation*}
\bar{A} = \frac{1}{N_{D} N_{S}}\sum _{k=0}^{N_{D}-1}\sum _{i=0}^{N_{S}-1}A_{k, i}. \tag{1}
\end{equation*}
Fig. 1. Disk sets in a multidisk clutch contained in the ZF 9HP transmissions. Disk set 2 (lined clutch disk) consists of the friction disks (f), which are padded with brown friction elements (see magnification). Disk set 3 (outer multidisk) comprises smooth metal disks (s).
Fig. 2. Illustration of the problem setup, where the clutch is made up of a stack of rotatable disks [specifically the friction disks (f) in Fig. 1]; this example shows a three-disk, five-segment problem. Disks are made up of discrete elements, which line up with fixed segments. The disks are labeled along the left and the segments along the bottom, with disk shift numbers indicated on the right. Element heights $A_{k,i}$ are indicated for each element.
The rotation of the $k$th disk is described by an integer shift number $s_{k} \in \lbrace 0,\ldots, N_{S}-1\rbrace$. Writing $B_{k,i} = A_{k,i} - \bar{A}$ for the deviation of each element height from the mean, with the segment index understood modulo $N_{S}$, the total height deviation of the stack at segment $i$ is
\begin{equation*}
\delta h_{i} = \sum _{k=0}^{N_{D}-1} B_{k, i+s_{k}}. \tag{2}
\end{equation*}
The goal of this optimization problem is to find the combination of shift numbers that achieves a full-stack height as close to uniform as possible, in line with the performance metrics M1 (standard deviation) and M2 (range). From the vector of segment height variations $\mathbf{\delta h} = (\delta h_{0},\ldots, \delta h_{N_{S}-1})$, these metrics are computed as
\begin{align*}
\sigma (\mathbf {\delta h}) &= \sqrt{ \frac{1}{N_{S}} \sum\nolimits _{i=0}^{N_{S}-1} \left(\delta h_{i}\right)^{2} } \tag{3}\\
\mathrm{range}(\mathbf {\delta h}) &= \max (\mathbf {\delta h}) - \min (\mathbf {\delta h}) . \tag{4}
\end{align*}
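To make the formulation concrete, the following minimal NumPy sketch evaluates (2)–(4) for a given assignment of shift numbers. The function and variable names are ours and are not taken from the implementation used in this study; it assumes, consistent with (1) and (2), that B holds the deviations of the element heights from the global mean and that segment indices are cyclic.

```python
import numpy as np

def stack_metrics(B, shifts):
    """Evaluate (2)-(4) for deviations B[k, i] and shift numbers s_k.

    B      : (N_D, N_S) array of element-height deviations from the mean.
    shifts : length-N_D integer array of rotations (segments are cyclic).
    """
    N_D, N_S = B.shape
    # delta_h[i] = sum_k B[k, (i + s_k) mod N_S], cf. (2)
    delta_h = np.zeros(N_S)
    for k in range(N_D):
        delta_h += np.roll(B[k], -shifts[k])
    sigma = np.sqrt(np.mean(delta_h ** 2))   # metric M1, cf. (3)
    rng = delta_h.max() - delta_h.min()      # metric M2, cf. (4)
    return sigma, rng

# Example: a random three-disk, five-segment instance as in Fig. 2
rng_gen = np.random.default_rng(0)
A = rng_gen.normal(loc=1.0, scale=0.01, size=(3, 5))   # element heights
B = A - A.mean()                                       # deviations from the mean, cf. (1)
print(stack_metrics(B, shifts=np.array([0, 2, 4])))
```

A brute-force search over all shift combinations with such a function quickly becomes infeasible as the stack grows, which motivates the solvers discussed in Section III.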
To form this as a programmable optimization problem, we can consider all segments in tandem, which yields
\begin{equation*}
\min _{s_{0},\ldots, s_{N_{D}-1}} \, \Vert {\mathbf {\delta h}}\Vert _{n}. \tag{5}
\end{equation*}
where the order $n$ of the norm selects the targeted metric: $n=2$ corresponds (up to a constant factor) to the standard deviation (3), while the range (4) is related to the infinity norm. Splitting a vector $f$ into its positive part $f_{+}$ and negative part $f_{-}$ (so that $f = f_{+} - f_{-}$), we have
\begin{align*}
\Vert {f}\Vert _\infty &= \max \left(\Vert {f_+}\Vert _\infty, \, \Vert {f_-}\Vert _\infty \right) \tag{6}\\
\mathrm{range}(f) &= \Vert {f_+}\Vert _\infty + \Vert {f_-}\Vert _\infty. \tag{7}
\end{align*}
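As a quick numerical sanity check of (6) and (7) (illustrative only, not part of the original study), one can verify that for a zero-mean vector the range equals the sum of the infinity norms of its positive and negative parts:

```python
import numpy as np

def range_via_norms(f):
    """range(f) = ||f_+||_inf + ||f_-||_inf, valid when max(f) >= 0 >= min(f), cf. (6)-(7)."""
    f_plus = np.maximum(f, 0.0)    # positive part
    f_minus = np.maximum(-f, 0.0)  # negative part
    return np.max(f_plus) + np.max(f_minus)

delta_h = np.array([0.03, -0.02, 0.01, -0.02])  # zero-mean height variations
assert np.isclose(range_via_norms(delta_h), delta_h.max() - delta_h.min())
```

The condition holds automatically for the height variations considered here, since the deviations sum to zero across the segments.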
B. QUBO Formulation
To make use of the D-Wave solvers (discussed in Section III), it is necessary to formulate the optimization problem as a QUBO [17]. That is, as an objective function with only linear or quadratic dependence on binary problem variables $x_{i} \in \lbrace 0, 1\rbrace$, which can be written in matrix form as
\begin{equation*}
\mathcal {H} = x^{T} Q x \tag{8}
\end{equation*}
where $Q$ is a matrix of real coefficients. For the disk optimization problem, we target the standard deviation (3) by minimizing the squared two-norm of the segment height variations
\begin{equation*}
\Vert {\mathbf {\delta h}}\Vert _{2}^{2} = \sum _{i=0}^{N_{S}-1}\left(\delta h_{i}\right)^{2}. \tag{9}
\end{equation*}
To map the shift numbers $s_{k}$ onto binary variables, we use a unary (one-hot) encoding in which the variable $x_{k,j}$ indicates whether disk $k$ is set to shift number $j$
\begin{align*}
x_{k, j} = {\begin{cases}1 & \mathrm{if}\; j=s_{k} \\
0 & \mathrm{otherwise}. \end{cases}} \tag{10}
\end{align*}
In terms of these binary variables, the segment height variation (2) becomes
\begin{equation*}
\delta h_{i} = \sum _{k=0}^{N_{D}-1}\sum _{j=0}^{N_{S}-1}B_{k,i+j}\,x_{k,j}. \tag{11}
\end{equation*}
Using the unary encoding requires the introduction of $N_{D} N_{S}$ binary variables, together with penalty terms that enforce the selection of exactly one shift number per disk. With penalty strength $\rho > 0$, we add
\begin{equation*}
\rho \left(\sum _{j=0}^{N_{S}-1}x_{k, j}-1\right)^{2}\quad \forall k \in \lbrace 0,\ldots, N_{D}-1\rbrace \tag{12}
\end{equation*}
Combining (9), (11), and (12) yields the cost function
\begin{equation*}
\begin{split} \mathcal {H} &= \sum _{i=0}^{N_{S}-1}\left(\sum _{k=0}^{N_{D}-1}\sum _{j=0}^{N_{S}-1}B_{k, i+j}\, x_{k,j}\right)^{2} \\
& \phantom{xxxxx} + \rho \sum _{k=0}^{N_{D}-1}\left(\sum _{j=0}^{N_{S}-1}x_{k, j}-1\right)^{2} \end{split} \tag{13}
\end{equation*}
One final adjustment can be made by exploiting a global rotational symmetry in the problem whereby rotating all disks by the same shift number leaves the cost function (13) invariant. This operation is equivalent to changing the arbitrary reference point in the indexing (i.e., the choice of which segment is labeled as zero has no actual bearing on the physical clutch). As such, we are free to leave the first disk frozen in its initial configuration (a shift number of zero), reducing the problem to finding the optimal rotation of all disks relative to the first. Then, by setting $s_{0}=0$ and removing the corresponding binary variables $x_{0,j}$, the final cost function reads
\begin{equation*}
\begin{split} \mathcal {H} &= \sum _{i=0}^{N_{S}-1}\left(B_{0, i} + \sum _{k=1}^{N_{D}-1}\sum _{j=0}^{N_{S}-1}B_{k, i+j}\, x_{k,j}\right)^{2} \\
& \phantom{xxxxx} + \rho \sum _{k=1}^{N_{D}-1}\left(\sum _{j=0}^{N_{S}-1}x_{k, j}-1\right)^{2}. \end{split} \tag{14}
\end{equation*}
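For illustration, the following sketch assembles the QUBO matrix corresponding to (14) from a deviation matrix B and a penalty strength ρ. This is our own minimal reconstruction of the mapping, not the code used in the study; the index conventions (disk 0 frozen, variables flattened as (k, j) → (k−1)·N_S + j) are assumptions made for the example, and constant offsets are dropped.

```python
import numpy as np

def build_qubo(B, rho):
    """Assemble the QUBO matrix of (14) from deviations B[k, i] and penalty strength rho.

    Disk 0 is frozen (s_0 = 0); the binary variable x_{k, j} (k = 1..N_D-1) indicates
    that disk k is rotated by j segments, cf. (10).
    """
    N_D, N_S = B.shape
    n_vars = (N_D - 1) * N_S
    idx = lambda k, j: (k - 1) * N_S + j          # flatten (k, j) -> variable index

    # profiles[v, i] = contribution of variable v to segment i, cf. (11)
    profiles = np.zeros((n_vars, N_S))
    for k in range(1, N_D):
        for j in range(N_S):
            profiles[idx(k, j)] = np.roll(B[k], -j)
    c0 = B[0]                                     # fixed contribution of the frozen disk

    Q = np.zeros((n_vars, n_vars))                # upper-triangular QUBO matrix
    for v in range(n_vars):
        Q[v, v] = profiles[v] @ profiles[v] + 2.0 * (c0 @ profiles[v]) - rho
        for w in range(v + 1, n_vars):
            Q[v, w] = 2.0 * (profiles[v] @ profiles[w])
            if v // N_S == w // N_S:              # same disk: one-hot penalty, cf. (12)
                Q[v, w] += 2.0 * rho
    return Q

# Small random instance: three disks, five segments (cf. Fig. 2)
rng = np.random.default_rng(1)
A = rng.normal(1.0, 0.01, size=(3, 5))
Q = build_qubo(A - A.mean(), rho=0.05)
```

A bitstring returned by a sampler is decoded by locating, for each free disk $k$, the single index $j$ with $x_{k,j}=1$, which is that disk's shift number; the resulting configuration can then be scored with the metrics (3) and (4).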
Methods Overview
We used a variety of methods and hardware implementations to address the disk optimization problem, with the goal of ascertaining the quantum solvers' performance against selected benchmarks on a classical computer. These are listed below, and each solver is detailed in the following subsections.
ZF exact classical solver.
ZF approximate classical solver.
D-Wave simulated thermal annealer.
D-Wave quantum annealer.
D-Wave Leap hybrid.
A. ZF Exact Classical Solver
This solver optimizes the range (metric M2), see (4), and is guaranteed to find one of the optimal configurations (there could be multiple) of the discrete shift numbers $s_{k}$. Its runtime, however, grows rapidly with the problem size, since the number of possible disk configurations scales exponentially with the number of disks.
B. ZF Approximate Classical Solver
As a result of the scaling mentioned above, large-scale problems are hard or even impossible to solve with the ZF exact classical solver in a reasonable time. To tackle this challenge, an approximate classical solver is proposed, subdividing the large-scale case into smaller problems, which may be optimized efficiently. In short, the large stack is partitioned into smaller subproblems that are solved exactly, and the partial solutions are then combined into a configuration for the full problem.
C. D-Wave Simulated Annealer
Simulated thermal annealing (typically referred to simply as “simulated annealing”) is a randomized algorithm that seeks the global minimum of an optimization problem while avoiding getting trapped in local minima. By making use of a parameter analogous to temperature, the system is “hot” in the beginning, allowing it to jump to other states in successive iterations and escape local minima by hill climbing, and is progressively “cooled” so that it converges to a minimum (think of a popcorn kernel jumping around a hot kettle with many divots that is gradually cooled). An advantage of this approach is that it allows the system to escape local minima when the temperature is sufficiently high. However, it can struggle when there are many thin but deep minima, and this randomized algorithm is not guaranteed to converge to the optimal solution. Further details on this method can be found, for example, in [22].
In this work, we made use of the D-Wave simulated annealer available in the open-source Ocean software development kit.
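For reference, a minimal sketch of how such a simulated-annealing run can be set up with D-Wave's open-source tools; the tiny QUBO below is a placeholder for the actual problem matrix of (14), and the parameter values reflect the settings listed in Section IV rather than a record of the exact configuration used here.

```python
import dimod
import neal

# Placeholder two-variable QUBO; in practice Q is built from the deviation matrix B, cf. (14).
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

sampler = neal.SimulatedAnnealingSampler()
sampleset = sampler.sample(bqm, num_reads=35, num_sweeps=1500)  # sample counts as in Sec. IV
print(sampleset.first.energy, dict(sampleset.first.sample))
```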
D. D-Wave Quantum Annealer
Quantum annealing is named in analogy to thermal annealing, with the physical process driven by quantum fluctuations (as opposed to thermal fluctuations). The algorithm seeks out the optimal solution as the system is slowly transitioned such that its energy landscape (determined by the Hamiltonian) reflects the cost function to be optimized. Leveraging quantum behavior, the system begins in the lowest energy configuration of a “mixing Hamiltonian” [24], which places the quantum bits (qubits) in a complete superposition of all possible bitstring combinations at once. By slowly transitioning to the “problem Hamiltonian,” the quantum adiabatic theorem guarantees that, for a sufficiently slow transition, the quantum system remains in the lowest energy configuration, ultimately arriving at the configuration corresponding to the optimal solution of the cost function of interest [25]. Unwanted thermal fluctuations in the hardware can introduce noise to the system, so that the optimal solution is not always obtained. Hence, many samples are typically taken in order to increase the likelihood of finding that optimum.
In this work, we made use of the D-Wave Advantage QPU, introduced in 2020. Here, over 5000 physical qubits (superconducting loops held at the extremely low temperature of 15 mK) are laid out in a Pegasus graph architecture [26] (where each qubit is generally connected to 15 other qubits in the system), so that it is possible to embed a fully connected graph with 180 vertices.1 This is a major improvement compared to the previous-generation D-Wave 2000Q processor, based on more than 2000 qubits in a Chimera architecture [27] (connectivity of six per qubit), where the largest fully connected graph embedding is of size 65. In addition, the Advantage QPU is said to introduce less noise than earlier-generation QPUs [28]. In the context of the QPU and with reference to the minor graph embedding, we refer to a logical qubit (not to be confused with an error-corrected qubit [29]) as a single binary variable appearing in the objective function. A physical qubit is a single superconducting loop in the QPU hardware. Following the embedding, many physical qubits may be chained together to represent a single logical qubit.
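As an illustration of how a QUBO is submitted directly to the QPU through D-Wave's Ocean tools (Leap API access required), with the embedding composite handling the mapping of logical qubits onto chains of physical qubits; the toy QUBO and parameter values below are placeholders, not the settings used in this study.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}   # placeholder for the QUBO of (14)

# The composite computes a minor embedding, chaining physical qubits per logical qubit.
sampler = EmbeddingComposite(DWaveSampler(solver={"topology__type": "pegasus"}))
sampleset = sampler.sample_qubo(Q, num_reads=200)
print(sampleset.first)
```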
E. D-Wave Leap Hybrid
The final solver we explored from D-Wave is their Leap hybrid. This uses a hybrid quantum–classical approach, whereby the submitted problem (14) is strategically decomposed into subproblems, which are then passed in parallel to the D-Wave Advantage QPU described in Section III-D, as well as to classical solvers [24]. Following several iterations, Leap hybrid then outputs the best solution obtained within a set run time. As with the simulated annealer and the QPU, the Leap hybrid is a probabilistic solver and is thus not guaranteed to produce the optimal solution, but it shows a high level of performance (as presented in Sections V and VI). For the moment, it can handle problem sizes far larger than those that can be embedded directly on the QPU.
As a proprietary solver, the Leap hybrid is understandably not entirely transparent. This can make it difficult to compare against other solvers in some respects. For example, a single run of this hybrid solver may make limited use of the QPU depending on the length of the queue for that quantum machine. As such, it can be somewhat unclear exactly how much the quantum component contributed to finding a solution. Nevertheless, we show below that the Leap hybrid has exceptional performance for large problem sizes. Alternative hybrid solvers, which provide more transparency, are available, such as the Kerberos solver included in D-Wave's open-source Ocean software.
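Submitting a problem to the Leap hybrid solver follows the same pattern as the other Ocean samplers; a minimal sketch (again with a placeholder QUBO, and assuming Leap access):

```python
import dimod
from dwave.system import LeapHybridSampler

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}   # placeholder for the QUBO of (14)
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

sampler = LeapHybridSampler()
sampleset = sampler.sample(bqm, time_limit=3)   # seconds; the minimum grows with problem size
print(sampleset.first.energy)
```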
Test Approach
A. Problem Instances
To test the various methods, quantum and classical, we generate random matrices corresponding to the disks' element height deviations for a range of problem sizes, that is, for varying numbers of disks $N_{D}$ and segments $N_{S}$.
Relevant for the current manufacturing process in ZF are problem sizes with 42 possible rotational positions per disk and stacks of several friction disks (see Section II). Considerably larger instances were also generated in order to probe how the different solvers scale.
B. Benchmark Testing
Each of the randomized solvers (simulated annealing, quantum annealing, and hybrid) was run multiple times in order to increase the probability of sampling the best solution. These solvers, in contrast to the ZF solvers, aim to minimize the QUBO in (14), which corresponds to optimizing the standard deviation in (3). The simulated annealing method acts as a randomized benchmark for comparison against the quantum solvers, and the ZF solvers provide deterministic classical benchmarks. In contrast to the QUBO formulation of the optimization problem, the ZF solvers focus on minimizing the range in (4). We find, however, that an optimal solution with respect to the standard deviation corresponds to a low (but not necessarily optimal) range, and vice versa. In the following, we detail some of the specifications used in the three randomized solvers.
Simulated Annealer—35 samples were taken for each problem, using 1500 as the number of sweeps.
Quantum Annealer—From 20 up to 2000 samples were taken for each problem, depending on the problem size, and annealing pauses were used to enhance the success probability as in [31] (an illustrative anneal schedule is sketched after this list).2
Hybrid Solver—Three samples were taken for each problem. Note, however, that in a single run of the hybrid solver, the backend runs iteratively over several subproblems and takes many samples directly from the quantum annealer in the process.
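To illustrate the annealing pause mentioned for the quantum annealer above, the Ocean API accepts a piecewise-linear anneal schedule. The pause position and duration below are illustrative placeholders rather than the values used in this study (see [31] for the tuning strategy), and the QUBO is again a stand-in.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}   # placeholder QUBO

# Anneal to s = 0.45, pause there for 100 us, then complete the anneal.
schedule = [(0.0, 0.0), (20.0, 0.45), (120.0, 0.45), (140.0, 1.0)]

sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_qubo(Q, num_reads=2000, anneal_schedule=schedule)
```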
Results
In this section, we present the results obtained by the solvers described in Section III under the test approach of Section IV. For varying problem sizes, the main quantities of interest are the problem metrics standard deviation (3) and range (4) associated with the solution that each solver outputs, as well as the runtimes. All of these results are provided in Fig. 3(a)–(c), respectively. Note that all of the solutions presented in the figure were confirmed to satisfy the constraints in (12). For the quantum annealing solver, in particular, obtaining feasible solutions without sacrificing solution quality (i.e., low values for the metrics M1 and M2) required careful tuning of the penalty strength $\rho$ in (14).
Fig. 3. Comparison of (a) standard deviation [see (3)], (b) range [see (4)], and (c) runtimes for the different solvers across independent random disk optimization problem instances. For added clarity, solvers that optimized the standard deviation (range) are connected by solid (dashed) lines. Each problem corresponds to a fully connected graph with a certain number of logical qubits.
A. Problem Metric Comparisons
To begin with, it is important to recall which quantities have been optimized by the different solvers. For the simulated annealing (pink triangles in Fig. 3), quantum annealing (blue circles), and hybrid (orange crosses) solvers, the minimized quantity is the QUBO cost function (14), which corresponds to optimizing the standard deviation (3); the ZF Exact and ZF Approximate solvers instead optimize the range (4).
It is also important to note that the optimal metric value is instance dependent. Any perceived trends with varying problem sizes (such as oscillations) are mostly coincidental. However, the magnitude of the metrics M1 and M2 tend to increase with the problem size.
In Fig. 3(a), we see that for all problems considered, the hybrid solver yields solutions with the lowest standard deviation compared with all other solvers. While there is no guarantee from this solver that it finds the optimal solution, its consistently strong results across all investigated problems indicate that these solutions are at least close to optimal (especially for the smaller problem sizes where sampling errors are lower). Direct use of the quantum annealing solver yields good solutions, which are close to the hybrid's solutions in standard deviation (although not necessarily optimal), up to problem sizes of 40 logical qubits. Beyond that, sampling errors lead to results that are far above optimal, even for 50 qubits (well below the embedding limit of 180 qubits). We further discuss these limitations of the quantum annealing solver in Section VI. As for the benchmarks, the ZF Exact solver yields solutions that are close to optimal (if not optimal) in the standard deviation, despite its optimization of the range, see (4). The high quality of solutions is consistent for all problem sizes up to its limit around the 340-qubit equivalent [recall that the ZF solvers do not use the QUBO formulation of the problem and hence do not use the binary encoding of the problem variables in (10)]. Beyond this limit, the ZF Approximate solver is applied to the large problems and yields solutions well separated from those obtained with the hybrid solver. Finally, the simulated annealing solver consistently yields suboptimal results (indicating the presence of deep local minima in the cost function). In a practical setting, the results from the simulated annealing could be sufficient to pass quality-control requirements depending on the allowed tolerance, with the advantage that this solver is run locally. However, it is also limited in that it cannot handle large problem sizes at about 400 qubits or greater, similar to the ZF Exact solver. In such instances, the hybrid solver is clearly preferred.
In Fig. 3(b), the observations in the comparison of solutions' ranges are quite similar to the standard deviation results. The main difference here is that the ZF Exact solver yields solutions with the guaranteed optimal range compared with the other solvers. Interestingly, in many instances, the hybrid solver manages to find a solution with the same minimal value of the range, despite that solver's optimization of the standard deviation. There are even instances, such as the one with 50 qubits, where the hybrid solver matches the minimal range and still manages to find a lower standard deviation as compared to the ZF Exact solver.
B. Runtime Comparisons
Fig. 3(c) provides the total runtimes of each solver for the various problem sizes. Note that comparisons of the absolute runtimes (i.e., the levels) are not fundamentally relevant given the different platforms each solver was run on, but are still useful for practical reasons. For example, the simulated annealing solver was run locally on a MacBook Pro laptop, the ZF solvers on a ThinkPad, and the classical components of the hybrid solver on supercomputing clusters. More relevant is the relative scaling of each solver's runtime with the problem size, which does not depend on the underlying hardware (beyond the classical versus quantum qualities). Furthermore, for the randomized solvers (simulated annealing, quantum annealing, and hybrid), a different, often more relevant, measure is the time to solution (which considers the probability of finding the optimal solution relative to a given level of confidence), rather than the total runtime. However, since the simulated and quantum annealing solvers did not find the optimal solution in most instances, we chose to instead present the total runtime for a fixed number of samples.
For both the ZF Exact and ZF Approximate solvers, the runtime appears to be super exponential (increasing slope in the log–log plot), although the runtime of these approaches is very instance dependent. For problem sizes corresponding to about 200 qubits or more (or even earlier, depending on usage requirements), the runtimes in the ZF Exact solver become prohibitive at the order of an hour or more.
At all problem sizes (small, medium, and large), the simulated annealing presents among the longest runtimes and scales quadratically (according to theory). From a runtime perspective in practical applications, this approach becomes infeasible for medium-size problems and beyond.
Direct use of the quantum annealing solver results in very low runtimes, even for a problem size approaching its embedding limit. For problem sizes below 40 qubits, where this solver was able to find close-to-optimal solutions (if not optimal), it had the fastest runtime of all the solvers (even when including internet latency). This is particularly promising given that the quantum computing technology is still rather nascent and the solution quality is only expected to improve. This topic is discussed further in Section VI.
The runtimes for the hybrid solver are fairly consistent, on the order of 10 s, based on the minimum fixed runtime (3 s) for each of the three samples made per problem. Fluctuations seen in the runtime are due to internet latency and other overhead processes. There is a shallow, almost imperceptible, increase in runtime (beyond 1024 variables, the minimum runtime of the solver per sample gradually grows from the baseline 3 s). For small and medium-size problems, the ZF Exact solver is faster and yields similar-quality results as the hybrid. However, for large problem sizes, the hybrid solver is remarkably fast while still yielding high-quality results. For a problem corresponding to 336 qubits, the ZF Exact solver took nearly 3.5 h (11 659 s), while the hybrid solved it in only 29 s. The ZF Exact solver was unable to handle problems larger than this, while the hybrid continued to yield high-quality results with a runtime of only 56 s on the largest problem considered (1176 qubits). The specifications indicate that it can handle problem sizes well beyond the largest instance considered here.
Discussion
Based on the results presented in Section V, the different scales of problems show differences in the performance of the solvers. For small problems, below 50 qubits, all solvers find high-quality solutions, except perhaps the simulated annealing. In terms of runtime, however, the quantum annealing solver is the fastest: several orders of magnitude faster than most solvers and slightly faster than the ZF Exact solver. For small problem sizes, where sampling errors are reduced, taking 20 to 100 samples typically suffices to find (near-)optimal results, so runtimes could be reduced relative to larger problem sizes without sacrificing solution quality.
For medium-size problems, 50 to 350 qubits, the quantum annealing solver no longer performs well. This is due to noise in the machine arising from thermal fluctuations, which leads to sampling errors. In particular, as the number of problem variables increases linearly, embedding the fully connected graph problem onto the Pegasus architecture (connectivity of 15 per qubit) requires that ever more physical qubits be chained together to represent a single logical qubit. This relationship is demonstrated in Fig. 4, which shows in blue the number of physical qubits used for each of the problem sizes considered with this solver (as well as the embedding limit assuming an availability of 5000 physical qubits). A physical qubit chain is formed by imposing a strong ferromagnetic coupling across all qubits in the chain in an attempt to fix them all at the same value (either 0 or 1), which then corresponds to the value of the corresponding logical qubit. However, for large problem sizes, chain breaks are more likely to occur (where the strong coupling fails to ensure that all physical qubits in the chain take equal values, potentially leading to an incorrect assignment of the corresponding logical qubit) because the chains are longer and thus more susceptible to sampling errors. In addition, there is more competition between the chains' couplings and the couplings that encode the optimization problem itself, obfuscating the details of the problem and making it more difficult for the solver to discern the optimal solution.
Fig. 4. Number of physical qubits required for the embedding of fully connected graph problems on the Pegasus architecture (relevant to the Advantage QPU used in this study) and the Chimera architecture (in the previous-generation 2000Q QPU, included for reference). The data points included here correspond to the problem sizes tested with the quantum annealing solver in Fig. 3. Numbers assigned to a subset of the data points indicate the runtime in seconds for calculating the minor embedding.
While there are clear limitations to the performance when making direct use of the quantum annealing solver, we point out again that this technology is still in its early stages and that there are major improvements with each new generation of quantum annealer from D-Wave. To demonstrate this, Figs. 3 and 4 indicate the limits of embedding fully connected graphs onto the previous generation QPU, the 2000Q, compared with the Advantage QPU used in this study. The latter figure shows that the increased connectivity in the Advantage QPU architecture means that fewer physical qubits are required to represent each logical qubit in the same size problem. Had we made use of the 2000Q QPU, we would have only been able to run problem sizes up to about 65 qubits and the quality of the solutions would have been reduced relative to the Advantage QPU results.
As an aside, we also include in Fig. 4 the runtimes (in seconds) for determining the embedding of a subset of the problems considered (using the heuristic minor-embedding tools provided with D-Wave's Ocean software).
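The article does not name the specific embedding routine beyond this. As one openly available option (an illustration, not necessarily the tool used in the study), a fully connected problem can be embedded onto a Pegasus-topology target graph with the minorminer heuristic, which also exposes the resulting chain lengths; the problem size and random seed below are arbitrary choices for the example.

```python
import time
import networkx as nx
import dwave_networkx as dnx
import minorminer

n_logical = 100                                   # number of logical qubits (fully connected)
source = nx.complete_graph(n_logical)
target = dnx.pegasus_graph(16)                    # Advantage-scale Pegasus topology

t0 = time.time()
embedding = minorminer.find_embedding(source.edges, target.edges, random_seed=0)
print(f"embedding found in {time.time() - t0:.1f} s")

chain_lengths = [len(chain) for chain in embedding.values()]
print("physical qubits:", sum(chain_lengths), "longest chain:", max(chain_lengths))
```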
It is in the latter half of this medium-size region (at about 200 qubits and greater) that the hybrid solver becomes the preferred approach. For all problem sizes, it yields consistently high-quality results, and at this scale of problems, it has a faster runtime than the ZF Exact solver. Also in this region, the runtimes of the ZF Exact solver increase rapidly, to a point where its use is prohibitively expensive in both runtime and memory. For large problem sizes, 350 qubits or more, the ZF Approximate solver has reasonable runtimes (up until about 800 qubits), but its solution quality is much poorer than that of the hybrid solver.
To rank the different solvers considered in this work, direct use of the quantum annealing solver is preferred for small problem sizes due to its capacity to find high-quality solutions in very short times. However, hardware limitations in that solver mean that both the hybrid and ZF Exact solvers outperform it for medium-sized problems. In this region, these two algorithms yield similar results in terms of solution quality. In particular, we observe that the hybrid solver finds solutions with slightly better standard deviation, whereas the ZF Exact solver finds solutions with a smaller range. For large-size problems, the hybrid solver shows an impressive performance, clearly outperforming the ZF Approximate solver in terms of solution quality and runtime. In this region, the ZF Exact solver has no data points as it becomes prohibitively expensive in both runtime and memory, highlighting the benefits of the hybrid solver.
Conclusion
We have successfully applied a quantum optimization algorithm to tackle the optimization problem of improving the configuration of friction pads in a clutch, a challenging quality-control problem in manufacturing. As far as we are aware, this is among the first studies applying quantum computing to a manufacturing assembly problem (in league with, for example, [5]). We presented the mathematical formulation of the problem in Section II-A: a minimization of either the range of the stack height variations (used by the deterministic classical benchmark algorithms) or their standard deviation, the latter cast as the QUBO of Section II-B (used by the simulated annealing, quantum annealing, and hybrid solvers).
From the results presented in Section V and elaborated on in Section VI, we found that for small problem sizes (below about 50 qubits), the quantum annealing solver yields high-quality solutions (although not always optimal) with very short runtimes. The performance of this solver is only expected to improve over time with further advances in the underlying technology. For medium-size problems up to about 350 qubits, the hybrid solver and the ZF exact solver were on par in solution quality. However, in terms of runtime, the latter rapidly scales upward as the number of variables is increased. In contrast, the hybrid solver remarkably has only limited scaling in runtime while maintaining high quality of solutions. Using the hybrid solver, we were able to solve large problems of up to 1200 fully connected variables, with relatively short runtime. Furthermore, much larger problem sizes could have been run using the hybrid solver, but without a useful benchmark for comparison we limited our investigations there.
With further advances in the quantum technology (extending, for example, the evolution of the D-Wave quantum annealer's capabilities highlighted in Fig. 4), the benefits of making direct use of the QPU in combinatoric optimization are only expected to improve. In addition to such improvements from technological advances, we believe that the current generation's performance could be made better with techniques available now that were outside the scope of this project. For one, the penalty strength in (14) could be optimized in a better way: an ideal penalty strength would limit the sampling of infeasible solutions picked by the solver, while preserving the details of the energy landscape of feasible solutions. While we tuned the penalty strength empirically in this work, there are ideas available in the literature that suggest more systematic approaches (for example, in [33] and [34]).
From this work, we highlight the strong performance of quantum–classical hybrid approaches, such as D-Wave's Leap hybrid solver used here, which can lead to significantly better results than purely classical or purely quantum approaches. The results presented here demonstrate the competitive advantage that quantum computing can already offer in challenging problems encountered in manufacturing. From our point of view, this study underlines the revolutionary potential of quantum computing in the manufacturing sector and the role it could play in reshaping industry.
In addition to our findings, future work could explore the use of domain-wall encoding to further enhance the efficiency of encoding optimization problems into physical qubits. Domain-wall encoding represents a discrete variable with $N$ possible values using only $N-1$ binary variables, one fewer than the one-hot (unary) encoding employed here, which can reduce the number of required qubits and simplify the constraint structure, potentially improving the embeddability of larger problems.
ACKNOWLEDGMENT
This work is a collaboration between ZF Group and Multiverse Computing. The authors would like to acknowledge and thank Amy Friske, Samuel Palmer, Serkan Sahin, and Devansh Trivedi for their helpful discussions.