
Quantum Topology Optimization via Quantum Annealing



Abstract:

We present a quantum annealing-based solution method for topology optimization (TO). In particular, we consider TO in a more general setting, i.e., applied to structures of continuum domains where designs are represented as distributed functions, referred to as continuum TO problems. According to the problem's properties and structure, we formulate appropriate subproblems that can be solved on an annealing-based quantum computer. The established methodology can effectively tackle continuum TO problems formulated as mixed-integer nonlinear programs. To keep the resulting subproblems small enough to be solved on currently accessible quantum computers with small numbers of qubits and limited connectivity, we further develop a splitting approach that splits the problem into two parts: the first part can be efficiently solved on classical computers, and the second part, with a reduced number of variables, is solved on a quantum computer. In this way, practical continuum TO problems of varying scales can be handled on the D-Wave quantum annealer. More specifically, we consider minimum compliance, a canonical TO problem that seeks an optimal distribution of materials to minimize compliance with a desired material usage. The performance of the developed methodology is assessed and compared with that of state-of-the-art heuristic classical methods, in terms of both solution quality and computational efficiency. The present work hence provides a promising new avenue for applying quantum computing to practical topology design for various applications.
Published in: IEEE Transactions on Quantum Engineering ( Volume: 4)
Article Sequence Number: 3100515
Date of Publication: 11 April 2023
Electronic ISSN: 2689-1808

SECTION I.

Introduction

Topology optimization (TO) is a computational design method that aims to find the optimal distribution of material to improve part performance under the governing physical equilibrium equations [1]. It originated in structural mechanics [2] and has been applied to structures of continuum domains, where designs are represented as distributed functions, and to discrete truss structures, where designs are of finite dimensions and represented via discrete parameters. In this article, we focus on the design problems of distributed parameter systems, a.k.a. continuum problems. Continuum TO has received growing attention in a variety of fields, including multiscale structures [3], fluid mechanics [4], [5], [6], [7], electromagnetics [8], [9], photonics [10], [11], quantum devices [12], and coupled multiphysics problems [13], [14], [15], [16]. Topologically optimized designs often exhibit complex free-form shapes, which additive manufacturing is well suited to produce. The recent wide adoption of additive manufacturing technologies has hence led to further development of TO for a host of applications [17], including imposing additive manufacturability constraints during the optimization process [18], [19].

As TO has expanded from structural mechanics to general multiphysics, it has been accompanied by quickly growing complexity and a vast search space, resulting in grand computational challenges [12], [17], [20]. Quantum computing (QC) is emerging as a new computing paradigm that could be superior to classical computing for a variety of problems, optimization being one of them. In the literature, only a few attempts have been made to apply quantum algorithms to solving TO problems. Maruo et al. [21] considered the TO of electromagnetic devices. For that, a linear optimization problem was formulated, which can be directly cast into a quadratic unconstrained binary optimization (QUBO) formulation through the Biot–Savart law. The formulated QUBO problem was then solved through simulated annealing. Their results indicate that the proposed QUBO formulation can significantly reduce the number of iterations required for the optimization process compared with a commonly used classical method, the normalized Gaussian network [22]. The overall computational cost can thereby be significantly reduced, because far fewer forward evaluations of the objective function, which usually dominate the computational cost, are invoked. However, the resultant QUBO formulation was not implemented on quantum annealers. Thus, the additional overhead due to quantum embedding and the limitations of currently accessible quantum hardware were not addressed in the analysis of computational cost. Also, the advantage of quantum annealing, converging to the optimal or ground state with higher probability than simulated annealing, was not exploited. The work by Sato et al. [23] reported the TO of a simple, discrete truss structure with three edges. The objective functions of all possible configurations composed of three edges were simultaneously evaluated through quantum entanglement. The optimal configuration with the minimum objective function value was determined by applying variational quantum algorithms (VQAs), which are suitable for gate-based quantum processors. The VQAs and forward evaluations of the objective function via finite element analysis (FEA) were integrated when implemented on a quantum computer, eliminating the need for amplitude estimation of the FEA solution. In these two works, the proposed quantum algorithms can only be applied to linear TO problems, where the field variables, such as magnetic flux density and temperature, depend linearly on the design variables. Given that, the TO can be greatly simplified when the objective function is just a linear [23] or quadratic [22] function of the field variables. In addition, the volume or material usage constraint is not strictly imposed by any equality or inequality constraint; as a result, one cannot prescribe a target volume or material usage for the optimization, as is commonly required in practical TO applications. Further, discrete truss structures represent only a small portion of TO applications. More generally, TO considers structures of continuum domains where designs are represented as distributed functions.

In the present work, we address how to apply quantum algorithms to continuum, nonlinear TO, beyond the discrete truss structures or linear TO problems tackled in the literature [22], [23]. In addition, the volume constraint is strictly imposed by an equality or inequality constraint, such that any target volume can be practically achieved. To determine the optimal distribution of materials, continuum TO essentially seeks the numerical solution of a partial differential equation (PDE)-constrained optimization problem. To solve it, the solution domain is first discretized with discrete nodes or elements. The distributed design problem hence involves a large number of binary (0/1) optimization variables, typically one variable per discrete node or element. As the field variables depend nonlinearly on the design variables, this results in a large-scale mixed integer nonlinear programming (MINLP) problem, subject to equality and/or inequality constraints. This type of problem is intrinsically NP-hard [24], i.e., not solvable in polynomial time in general, and hence challenging to handle using classical approaches [25], [26], [27], [28].

There are two main implementations of noisy intermediate-scale quantum (NISQ) devices: quantum annealers and gate-based quantum processors. Quantum annealers implement quantum annealing, whereas gate-based quantum processors provide universal quantum computation capabilities. Due to the noisy nature of quantum gates and the high overhead cost of noise reduction and error mitigation, the quantum circuits that can be reliably executed must be kept short. Thus, hybrid quantum-classical algorithms such as VQAs must be developed for solving optimization problems on gate-based quantum circuits, where only a short parameterized circuit is executed, and classical optimizers are leveraged to train the quantum circuit by optimizing the expectation value over the circuit's parameters [29]. However, in light of the noise-induced barren plateau [30], [31], [32], the training process calls for an exponentially scaling computational cost [30], and the classical optimization problems corresponding to the training of parameters are NP-hard [33]. Therefore, in the present work, we exploit quantum annealing for solving the TO problem of our interest; quantum annealing has demonstrated advantages and speedups in solving a variety of combinatorial optimization problems [34], [35], [36], [37], [38]. For an n-qubit system, it works in the binary discrete space, with the operator defined as a Hamiltonian that can correspond to the objective function of an optimization problem [39], [40]. Quantum annealing can effectively escape local minima by tunneling through barriers when exploring the objective function's landscape and seeking the global minimum [39], [40]. Thus, it can efficiently solve large-scale nonlinear optimization problems over binary discrete spaces. The final state of the n-qubit system at the end of annealing could be the ground state of the Hamiltonian, with each qubit read out as a classical bit of the solution to the binary optimization problem.

The currently accessible quantum computers designed to implement quantum annealing are the D-Wave systems [41]. To embed problems on the D-Wave quantum annealer, they need to be formulated in the form of QUBO. However, the optimization problem corresponding to the TO of our interest cannot be directly cast into a QUBO formulation. Therefore, we first decompose the original problem into a sequence of mixed integer linear programs (MILPs) via the generalized Benders' decomposition (GBD). Next, in each MILP, the continuous variables are represented by a series of auxiliary binary variables, such that each MILP can be mapped onto a binary optimization problem and then, through the penalty method, transformed into the QUBO formulation. Finally, from the solution of quantum annealing, we obtain the solution of each MILP, and in turn the ultimate optimal material layout after the sequence of MILPs is completely solved. If sufficiently many fully connected qubits were available [42], quantum advantages in computational efficiency could be straightforwardly obtained and demonstrated, owing to the efficiency of quantum annealing for solving QUBO. However, currently accessible quantum hardware only offers a small number of qubits with sparse connectivity. To solve practical TO problems on current and near-term quantum annealing devices, we further develop a splitting approach, by which a reduced problem that needs to be solved on a quantum annealer is separated from other subproblems that can be efficiently handled by classical computers. The resultant, greatly reduced QUBO problem is then implemented on a D-Wave Advantage quantum processing unit (QPU). Both the solution quality and the computational cost (including the cost of embedding the problem onto the quantum computer, the quantum annealing time, and the number of optimization iterations) are analyzed and assessed systematically in a series of problems of varying sizes.

SECTION II.

Preliminary

A. Continuum Topology Optimization Problem

Continuum TO optimizes the material layout \rho(\mathbf{x}) within a continuum design domain \Omega. A continuum TO problem is usually constrained by a set of partial differential equations (PDEs) subject to boundary conditions (BCs), which describe the physical laws governing the design, as well as by the target material usage or volume of the material layout. In general, the problem can be stated as
\begin{equation*} \begin{aligned} \min_{\rho} & \quad \int_{\Omega} F[\mathbf{u}(\rho(\mathbf{x}))] \, \mathrm{d}\Omega \\ \text{s.t.} & \quad G_{0}(\rho(\mathbf{x})) = \frac{\int_{\Omega} \rho(\mathbf{x}) \, \mathrm{d}\Omega}{\int_{\Omega} \mathrm{d}\Omega} - V_{T} = 0 \\ & \quad \mathcal{L}(\mathbf{u}(\rho(\mathbf{x}))) + \mathbf{b} = 0 \quad \forall \mathbf{x} \in \Omega \\ & \quad \mathbf{u}(\rho(\mathbf{x})) = \mathbf{u}_\Gamma(\mathbf{x}) \quad \forall \mathbf{x} \in \Gamma_{D} \\ & \quad \mathbf{n} \cdot \nabla \mathbf{u}(\rho(\mathbf{x})) = \mathbf{h}_\Gamma(\mathbf{x}) \quad \forall \mathbf{x} \in \Gamma_{N} \\ & \quad G_{j}(\mathbf{u}(\rho(\mathbf{x})), \rho(\mathbf{x})) = 0, \quad j = 1, \ldots, m \\ & \quad H_{k}(\mathbf{u}(\rho(\mathbf{x})), \rho(\mathbf{x})) \leq 0, \quad k = 1, \ldots, n \end{aligned} \tag{1} \end{equation*}

where \mathbf{u} denotes the field variable defined in the design domain \Omega; \rho is the design variable; \mathcal{L} denotes some differential operator; \mathbf{b} is the source term; and \Gamma = \partial \Omega represents the boundary of the design domain, with \Gamma_{D} denoting the boundary where the Dirichlet BC is imposed and \Gamma_{N} the boundary where the Neumann BC is enforced. In (1), G_{0}(\rho) = 0 serves to constrain the volume of the optimal material layout to the target value V_{T}. The three lines following G_{0}(\rho) = 0 outline the governing PDE and BCs. G_{j}(\mathbf{u}(\rho), \rho) = 0 includes all equality constraints in addition to the volume constraint, and H_{k}(\mathbf{u}(\rho), \rho) \leq 0 comprises all inequality constraints imposed on the optimization problem. In practice, these constraints are usually related to certain manufacturing limitations and/or material property requirements. For simplicity, we neglect them in the following discussion. However, they can be easily included in the proposed approach, following the way we deal with the constraint G_{0}(\rho) = 0. The objective function F is convex with respect to the design variable \rho, and hence, if the differential operator \mathcal{L} is positive definite, the optimization problem defined in (1) is convex. If the TO of interest leads to a nonconvex optimization problem, a sequential approximate programming strategy may be employed such that a series of local, convex problems is solved to update the design variable in sequence [43].

To solve the continuum TO problem as in (1), the design domain \Omega is usually discretized, and the design and field variables are represented in discrete settings. For example, \Omega can be discretized with a uniform mesh, and FEA is used for the numerical discretization of the governing PDEs and BCs. Taking the minimum compliance as an example, the discretized TO problem can be expressed as
\begin{equation*} \begin{aligned} \min_{\mathbf{u}, \boldsymbol{\rho}} & \quad \mathbf{f}^\intercal \mathbf{u} \\ \text{s.t.} & \quad \mathbf{K}(\boldsymbol{\rho}) \mathbf{u} = \mathbf{f} \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V_{T} \\ & \quad \mathbf{u} \in \mathbb{R}^{n_{u}}, \ \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}} \end{aligned} \tag{2} \end{equation*}
where \mathbf{f} is a known external load exerted on the material; the material is subject to linear elasticity; the superscript ^\intercal denotes transpose; \mathbf{u} is the discretized displacement field defined on the nodes of the mesh; and \rho_{i} is the discretized design variable defined on each mesh element i. In (2)
\begin{equation*} \mathbf{K}(\boldsymbol{\rho}) = \mathbf{K}_{0} + \sum_{i=1}^{n_{\rho}} \rho_{i} \mathbf{K}_{i}(E, \nu) \tag{3} \end{equation*}
where \mathbf{K}_{i} \in \mathbb{R}^{n_{u} \times n_{u}}, i = 1, 2, \ldots, n_{\rho}, is the predefined element stiffness matrix; E is the Young's modulus; \nu is the Poisson ratio; and \mathbf{K}_{0} is a symmetric positive definite matrix
\begin{equation*} \mathbf{K}_{0} = \varepsilon \sum_{i=1}^{n_{\rho}} \mathbf{K}_{i} \tag{4} \end{equation*}
with \varepsilon a small number, e.g., \varepsilon = 10^{-9}, chosen such that it does not affect the resulting optimal material layout or the values estimated for the field variable on the solid elements with \rho_{i} = 1. The equality constraint \sum_{i=1}^{n_{\rho}} \rho_{i} / n_{\rho} = V_{T} in (2) is referred to as the volume constraint that drives the resulting material layout toward the target volume or material usage. The inclusion of \mathbf{K}_{0} in \mathbf{K} ensures that for any \boldsymbol{\rho}, \mathbf{K}(\boldsymbol{\rho}) is symmetric positive definite.
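To make the discrete setting concrete, the assembly of \mathbf{K}(\boldsymbol{\rho}) in (3)-(4) can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it assumes a uniform mesh so that all elements share one precomputed 8 \times 8 stiffness matrix (bilinear quadrilaterals with two DOFs per node), and the names Ke and elem_dofs are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def assemble_K(rho, Ke, elem_dofs, n_u, eps=1e-9):
    """Assemble K(rho) = K0 + sum_i rho_i * K_i as in (3)-(4).

    rho       : (n_rho,) 0/1 array, one design variable per element
    Ke        : (8, 8) element stiffness matrix (uniform mesh assumed)
    elem_dofs : (n_rho, 8) global DOF indices of each element
    """
    rows, cols, vals = [], [], []
    for i, dofs in enumerate(elem_dofs):
        # eps * Ke is the K0 contribution that keeps K positive definite
        # even on void elements (rho_i = 0)
        w = eps + rho[i]
        r, c = np.meshgrid(dofs, dofs, indexing="ij")
        rows.append(r.ravel())
        cols.append(c.ravel())
        vals.append((w * Ke).ravel())
    # duplicate (row, col) entries are summed by the sparse constructor
    return sp.csr_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(n_u, n_u),
    )
```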

The challenge in solving the optimization problem defined in (2) lies in the fact that while \mathbf{u} \in \mathbb{R}^{n_{u}} is a continuous variable, \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}} is a binary variable, and the field variable \mathbf{u} nonlinearly depends on the design variable \boldsymbol{\rho}, resulting in an MINLP problem. To tackle this challenge, existing classical approaches in TO fall into one of two categories.

In the first category, the most widely used method is the solid isotropic material with penalization (SIMP) method [25]. Its strategy is to relax the binary variable into a continuous variable such that the MINLP problem can be converted into a continuous optimization problem. Specifically, the SIMP method approximates the binary design variable with a high-order polynomial function of a continuous variable that smooths out the discontinuity of the binary variable [25]. Hence, the density \rho _{i} continuously varies from 0 to 1 for each element. From the design perspective, such an interpolated representation (with “gray” density) does not allow easy imposition of design-dependent loading. To recover the desired binary (black/white) representation of material layout in the optimal design, additional numerical treatment is required [44], which is not always mathematically justifiable. Also, the relaxation from binary to continuous breaks the convexity of the original problem (2) and makes the optimization hard to converge [44]. Further, the resultant continuous optimization problem is not suitable for quantum annealing to solve. Therefore, we do not follow the SIMP method herein, but use its solution as the baseline for comparison with the proposed quantum annealing solution, as discussed in Section IV-C.

The second category of methods separates the binary and continuous variables into different subproblems, iteratively determining one of them with the other fixed in a subproblem. Representative methods include the discrete variable topology optimization via canonical relaxation algorithm (DVTOCRA) [26], the topology optimization of binary structures (TOBS) method [27], and the GBD method [28]. The DVTOCRA method employs a sequential linear/quadratic approximation to separate the binary and continuous variables into different subproblems [26]. The subproblem associated with the binary variable is a constrained, quadratic integer programming problem. Due to its NP-hardness, the canonical relaxation algorithm is applied, by which new continuous variables are introduced to replace the binary variable through an approximation function (different from that used in the SIMP method), and the binary optimization problem is transformed into a convex, continuous optimization problem in terms of the newly introduced continuous variables. As with the SIMP method, the resultant continuous optimization problems are not suitable for quantum annealing. The TOBS method utilizes sequential linear programming to separate the binary and continuous variables into different subproblems [27]. The resultant subproblem involving the binary variable is a linear integer programming problem. With the uniform mesh discretization and the exclusion of nonvolumetric constraints [i.e., G_{j} = 0 and H_{k} \leq 0 in (1)], the corresponding integer programming problem is not NP-hard. Hence, the TOBS method does not serve as an ideal candidate for investigating the potential advantages of quantum annealing in TO.

The GBD method follows a different route. It decomposes the original problem into a sequence of MILPs. In contrast to the other methods, which have no guarantee of convergence, the GBD method provides a deterministic optimality criterion to warrant convergence within a finite number of iterations [24]. It is also anticipated to converge faster than the SIMP or TOBS method, because the GBD formulation permits using all the material layouts generated in previous iterations in each new iteration, while methods like SIMP or TOBS can only take into account the material layout yielded by the last iteration. The GBD method was first introduced to TO by Muñoz et al. [28] for solving discrete TO problems. In the present article, we extend it to address a continuum TO problem, as discussed in Section II-B. In the subproblem related to the binary design variable after decomposition, additional constraints (associated with the PDEs), beyond the volumetric constraint, are introduced and can bring NP-hardness to the optimization problem. Such a subproblem is well suited to acceleration by quantum annealing, and hence we focus on the GBD method in the present article to investigate and demonstrate how QC, particularly quantum annealing, can be leveraged for solving a continuum, nonlinear TO problem as stated in (2).

B. Generalized Benders' Decomposition

The field variable \mathbf{u} and the design variable \bm{\rho} are separated into different subproblems via decomposition and are iteratively updated until convergence.

At the kth iteration, we have a sequence of feasible integer solutions \bm{\rho}^{j} that satisfy the volume constraint with any desired volume V, i.e., \sum_{i=1}^{n_{\rho}} \rho_{i}^{j} / n_{\rho} = V \ \forall j, and have nonsingular stiffness matrices \mathbf{K}(\bm{\rho}^{j}), where j loops over all previous iterations up to the current iteration k. We first form the subproblem with respect to \mathbf{u}, i.e., the so-called primal problem, which is given by
\begin{equation*} \begin{aligned} \min_{\mathbf{u}} & \quad \mathbf{f}^\intercal \mathbf{u} \\ \text{s.t.} & \quad \mathbf{K}(\boldsymbol{\rho}^{k}) \mathbf{u} = \mathbf{f} \\ & \quad \mathbf{u} \in \mathbb{R}^{n_{u}}. \end{aligned} \tag{5} \end{equation*}
This primal problem can be easily solved for \mathbf{u}^{k} by using a linear system solver, as
\begin{equation*} \mathbf{u}^{k} = \mathbf{K}^{-1}(\boldsymbol{\rho}^{k}) \mathbf{f}. \tag{6} \end{equation*}
The primal problem is a restriction of the original problem (2), and any \mathbf{f}^\intercal \mathbf{u}^{j} serves as an upper bound for the problem (2). We denote the lowest upper bound as U such that U = \min_{j} (\mathbf{f}^\intercal \mathbf{u}^{j}), j = 1, \ldots, k.
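For reference, the primal solve (5)-(6) and the upper-bound update are straightforward on a classical computer; a minimal sketch (assuming the sparse \mathbf{K} from the assembly sketched in Section II-A) is as follows.

```python
import scipy.sparse.linalg as spla

def solve_primal(K, f):
    """Solve the primal problem (5), u^k = K(rho^k)^{-1} f, with a sparse
    direct solver; f @ u is the compliance f^T u^k of the current layout."""
    u = spla.spsolve(K.tocsc(), f)
    return u, f @ u

# Each feasible layout yields an upper bound on (2); track the lowest:
#   if f @ u_k < U: U, rho_star = f @ u_k, rho_k
```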

Next, we derive the subproblem with respect to \boldsymbol{\rho}, referred to as the master problem. To proceed, we first use a Lagrange multiplier \bm{\lambda} to move the equality constraint into the objective function of problem (2), such that the resulting problem is only subject to the constraints involving the binary variables, as
\begin{equation*} \begin{aligned} \min_{\mathbf{u}, \boldsymbol{\rho}} & \quad \mathbf{f}^\intercal \mathbf{u} + \bm{\lambda}^\intercal \mathbf{K}(\bm{\rho}) \mathbf{u} \\ \text{s.t.} & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \mathbf{u} \in \mathbb{R}^{n_{u}}, \ \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{7} \end{equation*}
The Lagrange multiplier \bm{\lambda} satisfies
\begin{equation*} \bm{\lambda}^\intercal \mathbf{K}(\bm{\rho}) = -\mathbf{f}^\intercal. \tag{8} \end{equation*}
Then, we introduce an auxiliary variable \eta and replace the objective function in (7) with an inequality constraint, as
\begin{equation*} \begin{aligned} \min_{\bm{\rho}, \mathbf{u}, \eta} & \quad \eta \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u} + \bm{\lambda}^\intercal \mathbf{K}(\bm{\rho}) \mathbf{u} \leq \eta \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \mathbf{u} \in \mathbb{R}^{n_{u}}, \ \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{9} \end{equation*}
Applying the first-order Taylor expansion at (\mathbf{u}^{k}, \bm{\rho}^{k}), and owing to the convexity of \mathbf{f}^\intercal \mathbf{u} + \bm{\lambda}^\intercal \mathbf{K}(\bm{\rho}) \mathbf{u}, the inequality constraint in (9) can be relaxed as
\begin{equation*} \mathbf{f}^\intercal \mathbf{u}^{k} + \sum_{i=1}^{n_{\rho}} {(\bm{\lambda}^{k})}^\intercal \mathbf{K}_{i} \mathbf{u}^{k} (\rho_{i} - \rho_{i}^{k}) \leq \mathbf{f}^\intercal \mathbf{u} \leq \eta. \tag{10} \end{equation*}
By doing so, the problem (9) can be relaxed with multiple cuts as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \eta} & \quad \eta \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} + \sum_{i=1}^{n_{\rho}} {(\bm{\lambda}^{j})}^\intercal \mathbf{K}_{i} \mathbf{u}^{j} (\rho_{i} - \rho^{j}_{i}) \leq \eta, \quad j = 1, \ldots, k \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{11} \end{equation*}
Equation (11) is the derived master problem for the original TO problem (2). The term {(\bm{\lambda}^{j})}^\intercal \mathbf{K}_{i} \mathbf{u}^{j} denotes the so-called sensitivity in TO. Note that \bm{\lambda}^{j} = -\mathbf{u}^{j}, due to the symmetry of \mathbf{K}, and hence the sensitivity is just -{(\mathbf{u}^{j})}^\intercal \mathbf{K}_{i} \mathbf{u}^{j}. The optimal solution of (11) is denoted as \bm{\rho}^{k+1} and added to the sequence of \bm{\rho}^{j}. The optimum of (11), denoted as \eta^{k}, serves as a lower bound for the problem (2).
The iterations continue until the upper and lower bounds meet such that (U - \eta ^{k}) / U < \xi, where \xi is a predefined tolerance for convergence.

Since the continuum TO is essentially a PDE-constrained optimization problem [as in (1)], the solution quality also depends on the accuracy of the numerical approximation of the differential operator (e.g., gradient) in the PDE, which becomes especially challenging at the interface between solid (\rho _{i}=1) and void (\rho _{i}=0) elements. The FEA with uniform meshing can lead to the so-called “checkerboard” artifact, as illustrated in Fig. 1.

FIGURE 1. Effect of filtering on eliminating the checkerboard artifact. The two subfigures report the resulting optimal material layouts with the same problem setup, using the same optimization procedure, and corresponding to the same region of the solution domain. (a) Material layout with the "checkerboard" artifact, obtained without filtering. (b) Material layout after filtering, with the artifact effectively eliminated.

To remedy this, the sensitivity is filtered, following the literature [27], [45], as
\begin{equation*} \widetilde{w}_{i}^{j} = \left\lbrace \begin{array}{ll} \dfrac{\sum_{l \in \mathcal{N}_{i}^{r}} h_{i,l}^{r} (\mathbf{u}^{j})^\intercal \mathbf{K}_{l} \mathbf{u}^{j}}{\sum_{l \in \mathcal{N}_{i}^{r}} h_{i,l}^{r}}, & \rho_{i}^{j} = 1 \\ \varepsilon \dfrac{\sum_{l \in \mathcal{N}_{i}^{r}} h_{i,l}^{r} (\mathbf{u}^{j})^\intercal \mathbf{K}_{l} \mathbf{u}^{j}}{\sum_{l \in \mathcal{N}_{i}^{r}} h_{i,l}^{r}}, & \rho_{i}^{j} = 0 \end{array} \right. \tag{12} \end{equation*}
where h_{i,l}^{r} = \max(0, r - \Vert \mathbf{x}_{i} - \mathbf{x}_{l} \Vert_{2}). Therefore, the master problem with filtering is rewritten as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \eta} & \quad \eta \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) \leq \eta, \quad j = 1, \ldots, k \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{13} \end{equation*}
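A direct transcription of the filter (12) is sketched below, assuming the element centroids are stored in an (n_\rho, 2) array coords and the raw sensitivities (\mathbf{u}^{j})^\intercal \mathbf{K}_{l} \mathbf{u}^{j} in an array w; both names are illustrative, not from the paper.

```python
import numpy as np

def filtered_sensitivity(w, rho, coords, r, eps=1e-9):
    """Cone filter of (12): weighted average of the raw sensitivities w
    over the neighborhood N_i^r, with weights
    h_{i,l}^r = max(0, r - ||x_i - x_l||_2); void elements are scaled by eps."""
    w_tilde = np.empty(len(w))
    for i in range(len(w)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        h = np.maximum(0.0, r - d)      # compact-support weights; h[i] = r > 0
        w_tilde[i] = (h @ w) / h.sum()
        if rho[i] == 0:
            w_tilde[i] *= eps           # damped sensitivity on void elements
    return w_tilde
```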

As the iterations proceed, the number of inequality constraints (j = 1, \ldots, k) included in the master problem (13) increases. However, those inequality constraints are not independent, and we do not need to include them all when solving the master problem. To accelerate the solution process, we further introduce the Pareto optimal cuts [28], defined by
\begin{equation*} \mathcal{P}(k) = \lbrace j \; | \; \mathbf{f}^\intercal \mathbf{u}^{j} \leq \mathbf{f}^\intercal \mathbf{u}^{k} \quad \forall j = 1, \ldots, k \rbrace \tag{14} \end{equation*}
which is based on the Pareto dominance relationship and selects only the previous cuts whose objective function values are no greater than that of the new cut generated in the current iteration. Such a selection of optimal cuts has been numerically shown to improve the rate of convergence to the optimal solution [28]. Thus, the master problem with the Pareto optimal cuts can be written as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \eta} & \quad \eta \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) \leq \eta \quad \forall j \in \mathcal{P}(k) \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{15} \end{equation*}
If |\mathcal{P}(k)| = 1, the master problem in (15) can be simplified as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}), \quad j \in \mathcal{P}(k) \\ \text{s.t.} & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}} \end{aligned} \tag{16} \end{equation*}
with \eta^{k} = \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i}^{k+1} - \rho^{j}_{i}).
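The cut selection (14) itself is a one-liner; the sketch below assumes a hypothetical bookkeeping array compliance, with compliance[j] storing \mathbf{f}^\intercal \mathbf{u}^{j} for j = 1, \ldots, k.

```python
def pareto_cuts(compliance, k):
    """Pareto optimal cuts P(k) of (14): keep every previous iteration j
    whose compliance does not exceed that of the current iteration k."""
    return [j for j in range(1, k + 1) if compliance[j] <= compliance[k]]
```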

In summary, for any given volume constraint V and the initial feasible integer solution \bm {\rho }^{1}, the iterative procedure based on the GBD method to solve the TO problem (2) is outlined in Algorithm 1.

Algorithm 1: ToGbdSub(V, \bm{\rho}^{1}).

Input: Volume fraction V, initial material layout \bm{\rho}^{1}
Output: Optimal material layout \boldsymbol{\rho}^*
Utility: Find an optimal material layout with a given volume fraction V and initial material layout \bm{\rho}^{1}

1: Initialize the upper bound U as U = +\infty
2: for k = 1, \dots do
3:   Employ the linear system solver to obtain \mathbf{u}^{k} = \mathbf{K}^{-1}(\boldsymbol{\rho}^{k}) \mathbf{f}
4:   if \mathbf{f}^\intercal \mathbf{u}^{k} < U then
5:     U \gets \mathbf{f}^\intercal \mathbf{u}^{k}
6:     \boldsymbol{\rho}^* \gets \boldsymbol{\rho}^{k}
7:   end if
8:   Generate the Pareto optimal cuts \mathcal{P}(k) according to (14)
9:   if |\mathcal{P}(k)| = 1 then
10:    Solve the master problem (16) for the minimizer \eta^{k} and \boldsymbol{\rho}^{k+1}
11:  else
12:    Solve the master problem (15) for the minimizer \eta^{k} and \boldsymbol{\rho}^{k+1}
13:  end if
14:  if (U - \eta^{k}) / U < \xi then
15:    break
16:  end if
17: end for

Return: \boldsymbol{\rho}^*

In the following, we explain how to obtain \bm{\rho}^{1} to initiate the above iterative procedure; it must be a feasible binary solution that satisfies the volume constraint \sum_{i=1}^{n_{\rho}} \rho_{i} / n_{\rho} = V and has a nonsingular stiffness matrix \mathbf{K}(\bm{\rho}^{1}). First, we consider solving the problem
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}} & \quad \mathbf{f}^\intercal \mathbf{u}^{0} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{0} (\rho_{i} - \rho^{0}_{i}) \\ \text{s.t.} & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}} \end{aligned} \tag{17} \end{equation*}
where \mathbf{u}^{0} = \mathbf{K}^{-1}(\bm{\rho}^{0}) \mathbf{f}, and \bm{\rho}^{0} can be any material layout that gives a nonsingular stiffness matrix \mathbf{K}(\bm{\rho}^{0}), e.g., with all \rho_{i}^{0} = 1, i = 1, \ldots, n_{\rho}. The problem (17) is formed by applying the first-order Taylor expansion to the problem (2) at (\mathbf{u}^{0}, \bm{\rho}^{0}), with filtering. Hence, it can be regarded as a linear approximation of the original problem. To ensure sufficient accuracy for this approximation, the imposed volume constraint cannot be too far from \sum_{i=1}^{n_{\rho}} \rho_{i}^{0} / n_{\rho}. However, our ultimate goal is to satisfy the target volume V_{T}, whose value can generally be far from 1, e.g., V_{T} = 0.5. Thus, we adopt an asymptotic process, namely, letting the volume constraint gradually approach the target value V_{T}. In particular, we define a sequence of volume values V_{m} = 1 - m \Delta V with m = 1, 2, \ldots, M and \Delta V = (1 - V_{T})/M. Keeping \Delta V sufficiently small, we follow an iterative procedure: in iteration step m, solve the problem (17) with \bm{\rho}^{0} and V = V_{m}, and denote its solution as \bm{\rho}^{1}; invoke Algorithm 1 with the input \bm{\rho}^{1} to find the optimal solution with respect to the volume constraint V = V_{m}, and let this optimal solution be the new \bm{\rho}^{0} for the next iteration step m+1; for m = 1, \bm{\rho}^{0} = \bm{1}. The proposed procedure is summarized in Algorithm 2.

Algorithm 2: ToGbd(V_{T}, \Delta V).

Input: Target volume fraction V_{T}, incremental change of volume \Delta V, number of iterations M = (1 - V_{T})/\Delta V
Output: Optimal material layout \boldsymbol{\rho}^*

1: Initialize the design variables as \boldsymbol{\rho}^{0} = \mathbf{1}
2: for m = 1, \dots, M do
3:   Employ the linear system solver to obtain \mathbf{u}^{0} = \mathbf{K}^{-1}(\boldsymbol{\rho}^{0}) \mathbf{f}
4:   Solve the problem (17) to obtain \boldsymbol{\rho}^{1}
5:   \bm{\rho}^* \gets ToGbdSub(V_{m}, \bm{\rho}^{1})
6:   Prepare for the next iteration with \boldsymbol{\rho}^{0} \gets \boldsymbol{\rho}^*
7: end for

Return: \boldsymbol{\rho}^*

SECTION III.

Quantum Algorithm

A. From Topology Optimization to QUBO

As discussed above, the original MINLP problem, as in (2), is relaxed into a series of MILP problems, as in (15). In this section, we establish the conversion of MILP into QUBO, such that the master problem (15) can be solved on quantum annealers.

First, a slack variable \alpha^{j} is introduced into each cut in the problem (15) to transform the inequality constraints into equality constraints, as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \eta, \bm{\alpha}} & \quad \eta \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) + \alpha^{j} = \eta \quad \forall j \in \mathcal{P}(k) \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}} \\ & \quad \alpha^{j} \geq 0 \quad \forall j \in \mathcal{P}(k). \end{aligned} \tag{18} \end{equation*}

Next, the continuous variables \eta and \alpha^{j} are replaced with a series of binary variables. To do so, the upper and lower bounds for \eta and \alpha^{j} need to be specified first. Note that \eta is the lower bound of the problem (2), and the GBD has generated an upper bound U for (2). Thus, U can be regarded as the upper bound for \eta. Further, due to the positive definiteness of the differential operator \mathcal{L}, the compliance \mathbf{f}^\intercal \mathbf{u} must be nonnegative. The term \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) is the linear approximation of the original problem (2) at (\mathbf{u}^{j}, \bm{\rho}^{j}), which is close to \mathbf{f}^\intercal \mathbf{u}^{j} when \bm{\rho} satisfies the volume constraint. Hence, \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) \geq 0 holds, from which we obtain \eta = \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) + \alpha^{j} \geq \alpha^{j} \geq 0. Thus, the lower bounds for both \eta and \alpha^{j} can be set to zero. From \alpha^{j} \leq \eta \leq U, we can set the upper bound for \alpha^{j} to U as well. Given their upper and lower bounds, the continuous variables \eta and \alpha^{j} can be approximated by a series of binary variables, as
\begin{equation*} \begin{aligned} \eta \approx \widetilde{\eta}(\mathbf{e}) & = \frac{U}{2^{n_{\eta}}} e_{0} + \sum_{i=1}^{n_{\eta}} \frac{U}{2^{i}} e_{i} \\ \alpha^{j} \approx \widetilde{\alpha}^{j}(\mathbf{a}^{j}) & = \frac{U}{2^{n_{\alpha^{j}}}} a_{0}^{j} + \sum_{i=1}^{n_{\alpha^{j}}} \frac{U}{2^{i}} a^{j}_{i} \end{aligned} \end{equation*}
where \mathbf{e} \in \lbrace 0, 1 \rbrace^{n_{\eta}+1} and \mathbf{a}^{j} \in \lbrace 0, 1 \rbrace^{n_{\alpha^{j}}+1}. The accuracy of this approximation is controlled by n_{\eta} and n_{\alpha^{j}}. Their values can be chosen according to the desired accuracy and/or the number of accessible qubits in quantum annealers.
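This is a standard power-of-two expansion; a small sketch (with illustrative function names) shows the coefficients and the decoding step used to recover \eta and \alpha^{j} after annealing. Note that the extra leading bit with coefficient U/2^{n} makes the maximum representable value exactly U, with resolution U/2^{n}.

```python
def expansion_coeffs(U, n):
    """Coefficients of the power-of-two expansion:
    value(b) = (U / 2**n) * b[0] + sum_{i=1..n} (U / 2**i) * b[i],
    so that 0 <= value(b) <= U for binary b."""
    return [U / 2**n] + [U / 2**i for i in range(1, n + 1)]

def decode(bits, coeffs):
    """Recover the continuous value from its binary representation."""
    return sum(c * b for c, b in zip(coeffs, bits))
```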
Thus, we obtain a binary programming problem as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \mathbf{e}, \mathbf{a}^{j}} & \quad \widetilde{\eta}(\mathbf{e}) \\ \text{s.t.} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) + \widetilde{\alpha}^{j}(\mathbf{a}^{j}) = \widetilde{\eta}(\mathbf{e}) \quad \forall j \in \mathcal{P}(k) \\ & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{19} \end{equation*}
The constraints can then be readily moved into the objective function through the penalty method to obtain the following QUBO formulation:
\begin{equation*} \widetilde{\eta}(\mathbf{e}) + \sum_{j \in \mathcal{P}(k)} A^{j} \left[ W^{j} - \left(\sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} \rho_{i} \right) + \widetilde{\alpha}^{j}(\mathbf{a}^{j}) - \widetilde{\eta}(\mathbf{e}) \right]^{2} + B \left(\sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} - V \right)^{2} \tag{20} \end{equation*}
where
\begin{equation*} W^{j} = \mathbf{f}^\intercal \mathbf{u}^{j} + \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} \rho_{i}^{j}. \end{equation*}
As such, we have cast the MILP problem (15) into QUBO.
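Each squared penalty in (20) expands into linear (diagonal) and pairwise QUBO terms via the identity x_i^2 = x_i for binary x_i. A generic helper in this spirit is sketched below; the variable keys and the small usage example are illustrative, not the authors' code. Building (20) amounts to one call per cut, with constant W^{j} and coefficients -\widetilde{w}_{i}^{j} for the \rho_{i}, the expansion weights for the \alpha^{j} bits, and the negated expansion weights for the \eta bits, plus one call for the volume term.

```python
from collections import defaultdict

def add_squared_penalty(Q, const, a, A):
    """Accumulate A * (const + sum_i a[i] * x_i)**2 into the QUBO dict Q.
    Uses x_i**2 = x_i for binary x_i; the constant A * const**2 is dropped,
    since it shifts the energy without changing the minimizer."""
    idx = list(a)
    for i in idx:
        Q[(i, i)] += A * a[i] * (a[i] + 2.0 * const)      # linear terms
    for p in range(len(idx)):
        for q in range(p + 1, len(idx)):
            i, j = idx[p], idx[q]
            Q[(i, j)] += 2.0 * A * a[i] * a[j]            # pairwise couplings

# e.g., the volume term B * (sum_i rho_i / n_rho - V)**2 of (20):
n_rho, V, B = 8, 0.5, 10.0
Q = defaultdict(float)
add_squared_penalty(Q, -V, {("rho", i): 1.0 / n_rho for i in range(n_rho)}, B)
```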

Finally, we determine the penalty factors. Theoretically, the penalty factors in (20) should be gradually increased until the optimal solution converges. However, due to the limited machine precision of currently accessible quantum computers (e.g., up to 10^{-6} [46]), the penalty factors cannot be arbitrarily large in practice. On the other hand, penalty factors scaled with U, the upper bound of \eta, can effectively direct the quantum annealer to find a solution near the optimum. From our numerical experiments, setting the penalty factors equal to the upper bound U, i.e., A^{j} = U and B = U, leads to satisfactory performance, as a trade-off between the machine precision attainable on the hardware used and the magnitude of the penalty factors required for accuracy. It is worth mentioning that any inexact solution of the problem (15) that is smaller than the exact optimum due to constraint violation will not lead to an abnormal termination of the GBD iterations in Algorithm 1, but merely results in more iteration steps before reaching convergence such that (U - \eta^{k}) / U < \xi.

B. Splitting Approach for Problem Reduction

Solving the QUBO problem in (20) requires n_{\text{qubit}}^{l} = n_{\rho} + n_{\eta} + \sum_{j \in \mathcal{P}(k)} n_{\alpha^{j}} + |\mathcal{P}(k)| + 1 logical qubits to embed the problem, which can be a large number. Furthermore, the quadratic terms [W^{j} - (\sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} \rho_{i}) + \widetilde{\alpha}^{j} - \widetilde{\eta}]^{2} and (\sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} - V)^{2} require all-to-all connections, since each involves all binary variables simultaneously. However, due to the limited connectivity of qubits on currently available quantum computers [47], all-to-all connections would require many more physical qubits than logical qubits to embed the QUBO problem, making hardware accessibility even more challenging. Consequently, a practical TO problem formulated as (20) cannot be handled by current or near-term quantum computers.
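To get a feel for the scale, the count n_{\text{qubit}}^{l} can be evaluated for the 60 \times 20 mesh used later in Section IV-B; the numbers below (e.g., three active cuts) are assumptions for illustration only.

```python
def logical_qubits(n_rho, n_eta, n_alpha, n_cuts):
    """n_qubit^l = n_rho + n_eta + sum_j n_alpha^j + |P(k)| + 1,
    assuming the same n_alpha for every cut."""
    return n_rho + n_eta + n_cuts * n_alpha + n_cuts + 1

# A 60 x 20 mesh (n_rho = 1200) with n_eta = n_alpha = 10 and, say,
# 3 active cuts already needs 1244 logical qubits -- before the extra
# physical qubits required to embed the all-to-all couplings.
print(logical_qubits(1200, 10, 10, 3))  # -> 1244
```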

To resolve this issue, we further develop a splitting approach in this section. We note that a binary programming problem like (16) or (17) can be efficiently solved by a classical optimizer, as discussed below. In fact, by taking the continuous relaxation of the design variable as \bar{\bm{\rho}} (0 \leq \bar{\rho}_{i} \leq 1, \, i = 1, \ldots, n_{\rho}), a binary programming problem like (16) or (17) can be relaxed into a (continuous) linear program. If we consider the volume constraint \sum_{i=1}^{n_{\rho}} \bar{\rho}_{i} = n_{\rho} V, where n_{\rho} V is an integer, as a cut through the n_{\rho}-dimensional cube of the relaxed design variable \bar{\bm{\rho}}, the feasible set of the linear program is the polyhedron
\begin{equation*} P = \left\lbrace (\bar{\rho}_{1}, \ldots, \bar{\rho}_{n_{\rho}}) \; \left| \; \sum_{i=1}^{n_{\rho}} \bar{\rho}_{i} = n_{\rho} V, \quad 0 \leq \bar{\rho}_{i} \leq 1 \quad \forall i \right. \right\rbrace \end{equation*}
at each vertex of which every element of \bar{\bm{\rho}} is binary, i.e., \bar{\rho}_{i} = 0 or 1 \ \forall i. Thus, the linear program's optimal solution, i.e., a vertex of the polyhedron P [48], must also be the optimal solution of the original binary programming problem. Therefore, the problem (16) or (17) can be solved exactly with complexity \mathcal{O}(n_{\rho}) and hence be handled efficiently on classical computers. In contrast, although the problem (19) is also a binary programming problem, the additional |\mathcal{P}(k)| constraints or cuts, beyond the volume constraint, make it generally impossible to find the feasible set of the relaxed linear program as a polyhedron whose vertices take binary values, and hence it still suffers from NP-hardness.
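One way to realize this \mathcal{O}(n_{\rho}) solution in practice is by selection: since the objective of (16) or (17) is linear in \bm{\rho}, minimizing -\sum_{i} \widetilde{w}_{i}^{j} \rho_{i} under \sum_{i} \rho_{i} = n_{\rho} V simply keeps the n_{\rho} V elements with the largest filtered sensitivities. A sketch (illustrative, using numpy's linear-time argpartition) follows.

```python
import numpy as np

def solve_single_cut(w_tilde, n_solid):
    """Exactly solve a single-cut master problem like (16) or (17):
    keep the n_solid elements with the largest filtered sensitivities."""
    rho = np.zeros(len(w_tilde), dtype=int)
    rho[np.argpartition(w_tilde, -n_solid)[-n_solid:]] = 1
    return rho
```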

By leveraging this observation, we propose the following procedure to split the problem (15) into two parts. The first part consists of several subproblems like (16), which will be solved on classical computers. The second part is a problem similar to (15) but with greatly reduced variables, which will be solved on a quantum computer. By taking only one of the |\mathcal{P}(k)| inequality constraints together with the volume constraint in (15), we can form |\mathcal{P}(k)| subproblems like (16) as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}} & \quad \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i=1}^{n_{\rho}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) \quad \forall j \in \mathcal{P}(k) \\ \text{s.t.} & \quad \sum_{i=1}^{n_{\rho}} \frac{\rho_{i}}{n_{\rho}} = V \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{n_{\rho}}. \end{aligned} \tag{21} \end{equation*}
The solution of (21) is denoted as \widetilde{\bm{\rho}}^{j}. Note that when |\mathcal{P}(k)| > 1, the material layouts obtained in different iterations (e.g., j_{1}, j_{2} \in \mathcal{P}(k)) differ only at a small number of elements, i.e., most entries of the design variable are the same. We denote the index set of those entries as \mathcal{I} such that
\begin{equation*} \widetilde{\rho}_{i}^{j_{1}} = \widetilde{\rho}_{i}^{j_{2}} \quad \forall i \in \mathcal{I} \subseteq \lbrace 1, \ldots, n_{\rho} \rbrace \quad \forall j_{1}, j_{2} \in \mathcal{P}(k). \tag{22} \end{equation*}
The complement set of \mathcal{I} is denoted as \mathcal{I^{C}} = \lbrace 1, \ldots, n_{\rho} \rbrace \setminus \mathcal{I}, which contains the indices of the entries that take different values in different iterations, i.e., for each i \in \mathcal{I^{C}} there exist j_{1}, j_{2} \in \mathcal{P}(k) such that \widetilde{\rho}_{i}^{j_{1}} \ne \widetilde{\rho}_{i}^{j_{2}}.
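Computing \mathcal{I} and \mathcal{I^{C}} from the solutions of (21) is a simple elementwise comparison; a sketch (assuming the layouts \widetilde{\bm{\rho}}^{j} are stacked into a (|\mathcal{P}(k)|, n_{\rho}) 0/1 array) follows.

```python
import numpy as np

def split_indices(layouts):
    """Index sets of (22): I collects the elements on which all layouts
    agree; its complement I_C holds the variables kept for the QUBO."""
    agree = np.all(layouts == layouts[0], axis=0)
    return np.flatnonzero(agree), np.flatnonzero(~agree)
```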

Thus, a reduced problem of (15) can be formed as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \eta} & \quad \eta \\ \text{s.t.} & \quad R^{j} - \sum_{i \in \mathcal{I^{C}}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) \leq \eta \quad \forall j \in \mathcal{P}(k) \\ & \quad \sum_{i \in \mathcal{I^{C}}} \frac{\rho_{i}}{n_{\rho}} = V - \sum_{i \in \mathcal{I}} \frac{\widetilde{\rho}_{i}}{n_{\rho}} \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{|\mathcal{I^{C}}|} \end{aligned} \tag{23} \end{equation*}
where
\begin{equation*} R^{j} = \mathbf{f}^\intercal \mathbf{u}^{j} - \sum_{i \in \mathcal{I}} \widetilde{w}_{i}^{j} (\widetilde{\rho}_{i}^{j} - \rho_{i}^{j}). \tag{24} \end{equation*}
Compared with the original problem (15), the number of constraints is identical, but the number of design variables has been reduced from n_{\rho} to |\mathcal{I^{C}}|. Although it is difficult to estimate the exact value of |\mathcal{I^{C}}| in advance, we anticipate |\mathcal{I^{C}}| \ll n_{\rho}, as discussed above and also demonstrated by the numerical experiments in Section IV-B.

By introducing slack variables \alpha^{j} and approximating the continuous variables \eta and \alpha^{j} as series of binary variables, as described in Section III-A, the reduced master problem (23) can be transformed into a binary programming problem as
\begin{equation*} \begin{aligned} \min_{\boldsymbol{\rho}, \mathbf{e}, \mathbf{a}^{j}} & \quad \widetilde{\eta}(\mathbf{e}) \\ \text{s.t.} & \quad R^{j} - \sum_{i \in \mathcal{I^{C}}} \widetilde{w}_{i}^{j} (\rho_{i} - \rho^{j}_{i}) + \widetilde{\alpha}^{j}(\mathbf{a}^{j}) = \widetilde{\eta}(\mathbf{e}) \quad \forall j \in \mathcal{P}(k) \\ & \quad \sum_{i \in \mathcal{I^{C}}} \frac{\rho_{i}}{n_{\rho}} = \widetilde{V} \\ & \quad \boldsymbol{\rho} \in \lbrace 0, 1 \rbrace^{|\mathcal{I^{C}}|}, \ \mathbf{e} \in \lbrace 0, 1 \rbrace^{n_{\eta}+1}, \ \mathbf{a}^{j} \in \lbrace 0, 1 \rbrace^{n_{\alpha^{j}}+1} \end{aligned} \tag{25} \end{equation*}
where
\begin{equation*} \begin{aligned} \widetilde{\eta}(\mathbf{e}) & = \frac{U}{2^{n_{\eta}}} e_{0} + \sum_{i=1}^{n_{\eta}} \frac{U}{2^{i}} e_{i} \\ \widetilde{\alpha}^{j}(\mathbf{a}^{j}) & = \frac{U}{2^{n_{\alpha^{j}}}} a_{0}^{j} + \sum_{i=1}^{n_{\alpha^{j}}} \frac{U}{2^{i}} a^{j}_{i} \\ \widetilde{V} & = V - \sum_{i \in \mathcal{I}} \frac{\widetilde{\rho}_{i}}{n_{\rho}}. \end{aligned} \end{equation*}
Further, through the penalty method, it can be written in the QUBO formulation as
\begin{equation*} \widetilde{\eta}(\mathbf{e}) + \sum_{j \in \mathcal{P}(k)} A^{j} \left[ R^{j} - \left(\sum_{i \in \mathcal{I^{C}}} \widetilde{w}_{i}^{j} \rho_{i} \right) + \widetilde{\alpha}^{j}(\mathbf{a}^{j}) - \widetilde{\eta}(\mathbf{e}) \right]^{2} + B \left[ \sum_{i \in \mathcal{I^{C}}} \frac{\rho_{i}}{n_{\rho}} - \widetilde{V} \right]^{2}. \tag{26} \end{equation*}

As |\mathcal {I^{C}}|\ll n_{\rho }, the QUBO problem in (26) can now be handled by the current or near-term quantum computers with a small number of qubits and limited connectivity.

The proposed splitting approach is summarized in Algorithm 3.

Algorithm 3: ToGbdSubSplitting(V, \bm{\rho}^{1}).

Input: Volume fraction V, initial material layout \bm{\rho}^{1}
Output: Optimal material layout \boldsymbol{\rho}^*
Utility: Find an optimal material layout with a given volume fraction V and initial material layout \bm{\rho}^{1}

1: Initialize the upper bound U as U = +\infty
2: for k = 1, \dots do
3:   Employ the linear system solver to obtain \mathbf{u}^{k} = \mathbf{K}^{-1}(\boldsymbol{\rho}^{k}) \mathbf{f}
4:   if \mathbf{f}^\intercal \mathbf{u}^{k} < U then
5:     U \gets \mathbf{f}^\intercal \mathbf{u}^{k}
6:     \boldsymbol{\rho}^* \gets \boldsymbol{\rho}^{k}
7:   end if
8:   Generate the Pareto optimal cuts \mathcal{P}(k) according to (14)
9:   if |\mathcal{P}(k)| = 1 then
10:    Solve the master problem (16) for the minimizer \eta^{k} and \boldsymbol{\rho}^{k+1} on a classical computer
11:  else
12:    Solve the subproblem (21) for the minimizer \widetilde{\boldsymbol{\rho}}^{j} for each element in \mathcal{P}(k) on a classical computer
13:    Generate the index sets \mathcal{I} and \mathcal{I^{C}} according to (22)
14:    Solve the reduced QUBO problem (26) for the minimizer \eta^{k} and \bm{\rho}^{k+1} on a quantum annealer
15:  end if
16:  if (U - \eta^{k}) / U < \xi then
17:    break
18:  end if
19: end for

Return: \boldsymbol{\rho}^*

SECTION IV.

Numerical Results

A. Solving a Toy Problem for Validation

To validate that the method discussed in Section III-A permits a quantum computer to effectively solve an MILP problem like (15), we consider the following mixed-integer programming problem with linear inequality and equality constraints:
\begin{equation*} \begin{aligned} \min_{u, v, w, t} & \quad v + w + t + (u - 2)^{2} \\ \text{s.t.} & \quad v + 2w + t + u \leq 3 \\ & \quad v + w + t \geq 1 \\ & \quad v + w = 1 \\ & \quad v, w, t \in \lbrace 0, 1 \rbrace, \ u \in \mathbb{R}. \end{aligned} \tag{27} \end{equation*}
This problem contains one continuous and three binary variables, as well as one equality and two inequality constraints. Its unique optimal solution is known to be u = 2, v = 1, and w = t = 0.

According to the proposed methodology, we first transform this problem into a binary programming problem and then rewrite it in the QUBO formulation. By noting that 0 \leq u \leq 3, we can introduce a binary approximation \widetilde{u} for the continuous variable u as
\begin{equation*} \widetilde{u}(\mathbf{e}) = \frac{3}{2^{n_{u}}} e_{0} + \sum_{i=1}^{n_{u}} \frac{3}{2^{i}} e_{i}. \end{equation*}
Next, we introduce two slack variables \alpha^{1} and \alpha^{2} to convert the two inequality constraints into equality constraints. By noting that 0 \leq \alpha^{1} \leq 3, 0 \leq \alpha^{2} \leq 2, and \alpha^{2} \in \mathbb{Z} since the second inequality constraint only contains integers, we can obtain their binary representations \widetilde{\alpha}^{1} and \widetilde{\alpha}^{2}, respectively, and thereby rewrite the MILP as a binary programming problem
\begin{equation*} \begin{aligned} \min_{\mathbf{e}, \mathbf{a}^{1}, \mathbf{a}^{2}, v, w, t} & \quad v + w + t + (\widetilde{u} - 2)^{2} \\ \text{s.t.} & \quad v + 2w + t + \widetilde{u} + \widetilde{\alpha}^{1} = 3 \\ & \quad -v - w - t + \widetilde{\alpha}^{2} = -1 \\ & \quad v + w = 1 \\ & \quad v, w, t \in \lbrace 0, 1 \rbrace \\ & \quad \mathbf{e} \in \lbrace 0, 1 \rbrace^{n_{u}+1} \\ & \quad \mathbf{a}^{1} \in \lbrace 0, 1 \rbrace^{n_{\alpha^{1}}+1}, \ \mathbf{a}^{2} \in \lbrace 0, 1 \rbrace^{2} \end{aligned} \tag{28} \end{equation*}
where
\begin{equation*} \begin{aligned} \widetilde{\alpha}^{1}(\mathbf{a}^{1}) & = \frac{3}{2^{n_{\alpha^{1}}}} a^{1}_{0} + \sum_{i=1}^{n_{\alpha^{1}}} \frac{3}{2^{i}} a^{1}_{i} \\ \widetilde{\alpha}^{2}(\mathbf{a}^{2}) & = a_{0}^{2} + a_{1}^{2}. \end{aligned} \end{equation*}
Finally, through the penalty method, it can be written as the QUBO
\begin{equation*} v + w + t + \left[ \widetilde{u} - 2 \right]^{2} + A \left[ v + 2w + t + \widetilde{u} - 3 + \widetilde{\alpha}^{1} \right]^{2} + A \left[ -v - w - t + 1 + \widetilde{\alpha}^{2} \right]^{2} + A \left[ v + w - 1 \right]^{2} \tag{29} \end{equation*}

where the penalty factor is set as A = 4 in this problem.
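Before turning to hardware, (29) can be cross-checked classically, since with n_{u} = n_{\alpha^{1}} = 3 it involves only 13 binary variables. The sketch below enumerates all 2^{13} states by brute force; it is an illustration under the stated assumptions, not the authors' D-Wave run (the same energy function could equally be cast into a Q matrix and passed to a sampler such as dimod.ExactSolver or a QPU).

```python
import itertools

# Variable layout: v, w, t, then 4 bits e for u~, 4 bits a1 for alpha~1,
# and 2 bits a2 for alpha~2, assuming n_u = n_alpha1 = 3 as in Table 1.
cu  = [3 / 2**3] + [3 / 2**i for i in range(1, 4)]   # coefficients of u~(e)
ca1 = [3 / 2**3] + [3 / 2**i for i in range(1, 4)]   # coefficients of alpha~1
ca2 = [1.0, 1.0]                                     # alpha~2 = a2_0 + a2_1
A = 4.0                                              # penalty factor of (29)

def energy(x):
    v, w, t = x[0], x[1], x[2]
    u  = sum(c * b for c, b in zip(cu,  x[3:7]))
    a1 = sum(c * b for c, b in zip(ca1, x[7:11]))
    a2 = sum(c * b for c, b in zip(ca2, x[11:13]))
    # objective plus the three penalty terms, exactly as in (29)
    return (v + w + t + (u - 2) ** 2
            + A * (v + 2 * w + t + u - 3 + a1) ** 2
            + A * (-v - w - t + 1 + a2) ** 2
            + A * (v + w - 1) ** 2)

best = min(itertools.product((0, 1), repeat=13), key=energy)
u_rec = sum(c * b for c, b in zip(cu, best[3:7]))
print(best[:3], u_rec)  # expect (1, 0, 0) and u close to 2 (binary resolution)
```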

The problem (29) was directly submitted to the D-Wave Advantage QPU. The annealing time was set to 20 \mu s, and the sampling was repeated 1000 times. The final state reached at the end of each annealing run was recorded, and the state with the lowest energy among all 1000 samples was regarded as the ground state of the Hamiltonian defined in (29) and hence the solution to the binary programming problem (28). The continuous variables u and \alpha^{1} as well as the integer variable \alpha^{2} of the original problem (27) were then recovered from their binary approximations. The final results for the values of the different variables are summarized in Table 1. To confirm that the results have converged with respect to the parameter settings in the binary approximations, we compare two different groups of values for n_{u} and n_{\alpha^{1}}. The results agree with the theoretical solution of the original problem (27), and hence the accuracy of the obtained QC-based solution is validated. An additional note about the continuous variables: if we treat the original problem (27) as a continuous optimization problem and solve it for the continuous variables, with all the binary variables fixed to the solution obtained from solving (29), the accuracy can be even better than recovering them from their binary approximations.

TABLE 1 Results for Solving the Toy Problem (27)

B. Solving the Minimal Compliance Problem

In this section, we consider a canonical TO problem, the minimum compliance, as formulated in (2). The minimum compliance problem seeks, under a given volume constraint, an optimal material layout for a structure subjected to static external loads. The optimal layout minimizes the displacement under the given unit external load, i.e., the compliance, and in turn maximizes the load the structure can sustain. Therefore, the optimal solution yields the most efficient use of the given material to withstand the external loads. The problem setup follows that in [25]. More specifically, we consider a rectangular material domain with the external loading of a unit point force \mathbf {F} exerted at the top left corner, as illustrated in Fig. 2. The solution domain, or the design space, is discretized by a uniform quadrilateral mesh with different resolutions. In (3), the Young's modulus is E = 1.0, and the Poisson ratio is \nu = 0.3. The target volume fraction is V_{T} = 0.5, and the incremental change of volume is \Delta V = \frac{1}{24}. The convergence criterion tolerance required in Algorithm 3 is \xi = 5 \times 10^{-4}. All values are in reduced units.
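For quick reference, these setup constants can be grouped as follows; the dictionary and its key names are our own illustrative grouping, with all values taken from the text.

```python
# Problem-setup constants for the minimum-compliance experiments.
setup = dict(
    E=1.0,          # Young's modulus in (3)
    nu=0.3,         # Poisson ratio in (3)
    V_T=0.5,        # target volume fraction
    dV=1.0 / 24.0,  # incremental change of volume per step
    xi=5e-4,        # convergence tolerance in Algorithm 3
    F=(0.0, -1.0),  # unit point force at the top leftmost node
)
```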

FIGURE 2. Schematic of the problem setup for the minimum compliance. The yellow region denotes the design domain discretized by a uniform quadrilateral mesh with the resolution of 60 \times 20, where L = 60, H = 20, and the external load is a unit point force, \mathbf {F} = (0, -1), exerted at the top leftmost node.

To solve this TO problem, we followed the proposed methodology, consisting of the GBD, the conversion of the MILP into a QUBO, and the splitting approach. More specifically, Algorithm 2 was executed, but with ToGbdSub in Line 5 replaced by ToGbdSubSplitting defined in Algorithm 3, so as to integrate the proposed splitting approach. While the binary programming problems (16), (17), and (21) were solved by invoking the classical MILP solver Gurobi [49], the binary programming problem (25) was solved on the quantum annealer provided by D-Wave. Based on the validation test in Section IV-A and considering the accuracy of the current quantum devices and the required number of qubits, the parameters for the binary approximations in (25) were set as n_{\eta } = n_{\alpha ^{j}} = 10. Once the solution of the design variable \bm {\rho }^{k+1} is determined, the problem (15) can be treated as a continuous optimization problem, from which we can solve for the continuous variable \eta ^{k} required by Algorithm 3. Alternatively, \eta ^{k} can be recovered from its binary approximation. We chose the former approach herein for its slightly better accuracy. A schematic of this division of labor is sketched below.
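The following schematic is our own illustrative paraphrase of that division of labor, not the authors' Algorithms 2 and 3; every function name is a placeholder.

```python
# Illustrative schematic (placeholder functions throughout) of the solver
# dispatch in one GBD iteration under the splitting approach.
def gbd_iteration(rho, cuts):
    u = fem_solve(rho)                                # FEM analysis, cf. (6)
    cuts.append(benders_cut(rho, u))                  # accumulate GBD cuts
    rho_fixed = solve_classical_part(cuts)            # problems (16)/(17)/(21) via Gurobi
    rho_next = solve_reduced_master(cuts, rho_fixed)  # reduced problem (25) via D-Wave
    eta = solve_continuous_subproblem(rho_next, u)    # (15) with binaries fixed -> eta^k
    return rho_next, eta
```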

To implement the problem (25) on the quantum annealer provided by D-Wave, we examined two different approaches. In the first, we directly embedded the QUBO problem (26), formulated from (25), on the QPU; this implementation is referred to as "GBD-splitting-direct." In the second, we solved the problem (25) by taking advantage of the hybrid solver provided by D-Wave, specifically the constrained quadratic model (CQM); this implementation is hence referred to as "GBD-splitting-CQM." We compare the results in terms of solution quality and wall time, as summarized in Table 2, where the discretization resolution is chosen as 60 \times 20; the filtering parameter, as described in (12), is set as r = 2; T denotes the total wall time spent executing Algorithm 2 to find the optimal material layout; \mathbf {f}^\intercal \mathbf {u} denotes the value of the objective function corresponding to the optimal material layout; and N denotes the total number of binary programming problems invoked by Algorithms 2 and 3. Details about each implementation and our main findings are provided below.

In GBD-splitting-direct, each sampling used an annealing time of 20\, \mu s, and the sampling was repeated 1000 times. The final states reached at the end of annealing were recorded, and the state with the lowest energy among the 1000 samplings was regarded as the ground state of the Hamiltonian defined in (26) and hence the solution to the binary programming problem (25). In total, 80 binary programming problems were invoked, 11 of which were instances of the problem (26) and were solved by the quantum annealer. For each, we examined the number of resultant reduced variables (|\mathcal {I}^{C}|), the number of physical qubits required for embedding the problem (26), and the solution time spent on the QPU, as summarized in Table 3, where the 11 problems are listed in the order in which they were solved. In Ocean-SDK terms, these sampling settings correspond to the sketch below.
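A minimal sketch, assuming the Ocean SDK's standard QPU interface and that `bqm` holds the QUBO (26) built beforehand; a D-Wave Leap account is required.

```python
# Sketch of the GBD-splitting-direct sampling settings on the Advantage QPU.
from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())     # minor-embeds the QUBO onto the QPU
sampleset = sampler.sample(bqm,
                           num_reads=1000,       # 1000 repeated anneals
                           annealing_time=20)    # 20 microseconds per anneal
ground = sampleset.first                         # lowest-energy sample taken as solution
```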

TABLE 2 Comparison Between Different Implementations for the Reduced Binary Master Problem (25) on the Quantum Annealer Provided by D-Wave, Where T Denotes the Total Wall Time Spent for Executing Algorithm 2
TABLE 3 Statistics About the 11 QUBO Problems (26) Solved in the Implementation of GBD-Splitting-Direct, Where T_\text{QUBO} Denotes the Time Spent for Embedding and Annealing on the QPU

With the discretization resolution of 60 \times 20, the number of design variables is n_{\rho } = 1200. From the values of |\mathcal {I}^{C}| reported in Table 3, we can see that the splitting approach proposed in Section III-B greatly reduces the size of the problem to be solved on a quantum computer. Note that the total number of logical qubits needed to represent the problem (26) should also include those required to represent the binary approximations of \eta and \alpha ^{j}. Thus, the number of logical qubits required in total is n_{q}^{r} = |\mathcal {I}^{C}| + n_{\eta } + 1 + |\mathcal {P}(k)| +\sum _{j \in \mathcal {P}(k)} n_{\alpha ^{j}}. By reducing |\mathcal {I}^{C}|, n_{q}^{r} is reduced accordingly. If the physical qubits had all-to-all connections, the number of physical qubits required would coincide with n_{q}^{r}. However, due to the sparse connectivity of current quantum annealing machines, the number of physical qubits required for embedding the problem, denoted as n_{q}^{e}, is in fact much larger than n_{q}^{r} [50], as reported in Table 3. Also, as n_{q}^{r} increases, n_{q}^{e} can grow very quickly [50], albeit with fluctuations due to inhomogeneous connections between qubits. As a result, most of the time spent on the QPU is dominated by the embedding overhead, as indicated by the values of T_\text{QUBO} in Table 3, noting that the total annealing time for 1000 repetitions of sampling is only 20 ms. The current D-Wave Advantage system permits access to no more than 5000 qubits. To keep n_{q}^{e} below 5000, |\mathcal {I}^{C}| must be limited to no more than a few hundred. Thus, even with the splitting approach, the number of design variables in the original problem must remain moderate, which makes solving the minimum compliance problem with finer discretization resolutions challenging.
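To make the count concrete, a small helper can evaluate this formula; the example numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Logical-qubit count n_q^r = |I^C| + (n_eta + 1) + sum_{j in P(k)} (n_alpha_j + 1),
# per the formula above; argument names mirror the symbols in the text.
def logical_qubits(n_IC, n_eta, n_alpha_per_cut):
    return n_IC + (n_eta + 1) + sum(n_aj + 1 for n_aj in n_alpha_per_cut)

# Hypothetical example: |I^C| = 200 reduced variables, n_eta = 10, and two
# active cuts with n_alpha_j = 10 each gives 200 + 11 + 11 + 11 = 233.
print(logical_qubits(200, 10, [10, 10]))  # -> 233
```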

To tackle this challenge, we pursued the second implementation, GBD-splitting-CQM, in which the problem (25) was submitted to the CQM hybrid solver provided by D-Wave for embedding and solving. All default settings were used when employing the CQM hybrid solver in all numerical tests. As indicated by the wall time T in Table 2, the CQM implementation greatly reduced the overall solution time. The reason is that the CQM can further reduce the size of the problem embedded on the QPU and in turn save embedding time, given that embedding dominates the time consumed on the quantum devices. In addition, the CQM implementation invokes fewer binary programming problems and attains a lower value of the objective function. Recalling the discussion of inexact solutions in Section III-B, namely that any inexact solution to the problem (15) can potentially increase the number of GBD iterations, the smaller number of binary programming problems invoked suggests that the solution found by the CQM is closer to the exact solution. Possible reasons include the following: the CQM further reduces the problem embedded on the QPU and makes the resultant optimization problem easier to solve accurately; in GBD-splitting-direct, the values chosen for the penalty factors may not be optimal, and the probability of recovering the global optimal solution depends strongly on the embedding and the annealing schedule.
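For reference, submitting a constrained binary problem to the hybrid CQM solver takes the following form in the Ocean SDK; the objective and constraint below are placeholders standing in for problem (25), and a Leap API token is required.

```python
# Sketch of the GBD-splitting-CQM pathway: constraints are passed to the
# hybrid solver natively, so no penalty factors need tuning.
import dimod
from dwave.system import LeapHybridCQMSampler

n = 300  # stand-in for the reduced problem size |I^C|
x = [dimod.Binary(f'x{i}') for i in range(n)]

cqm = dimod.ConstrainedQuadraticModel()
cqm.set_objective(sum((i % 7 - 3) * x[i] for i in range(n)))  # placeholder objective
cqm.add_constraint(sum(x) == n // 2, label='volume')          # placeholder constraint

sampleset = LeapHybridCQMSampler().sample_cqm(cqm)            # default settings, as in the text
feasible = sampleset.filter(lambda d: d.is_feasible)          # keep feasible samples only
print(feasible.first.sample, feasible.first.energy)
```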

Based on the above comparison and findings, when we scaled up the minimum compliance problem to larger numbers of design variables (i.e., finer discretization resolutions), we employed the GBD-splitting-CQM implementation, owing to its better performance in terms of solution quality, wall time, and capacity to embed larger problems. In particular, we considered two shapes of material domains, L:H = 3:1 and L:H = 2:1, with the discretization resolution varying from 120\times 40 to 480\times 240, leading to numbers of design variables (n_\rho) ranging from 4800 to 115 200. The obtained minimum values of the objective function, the associated computing times, and the corresponding optimal material layouts are presented in Table 4 and Figs. 3 and 4. Taking the discretization resolution of 480 \times 160 as an example, we examined the run time of D-Wave's hybrid CQM solver for solving problem (25). Out of the 89 iterations required in total for solving the TO problem (2), 6 involve solving (25); their run-time statistics are summarized in Table 5. Here, the size of problem (25) after reduction by the proposed splitting approach is reported as |\mathcal {I}^{C}|, which is significantly smaller than the original number of variables, n_{\rho } = 76 800. The total time spent by the CQM is denoted as T_\text{CQM}, and T_\text{QPU} reports the time spent on annealing in the QPUs, which is expected to be short.

TABLE 4 Comparison Between Different Methods for Solving the Minimum Compliance Problem
TABLE 5 Run Time of the D-Wave's Hybrid CQM Solver for Solving Problem (25), Where the Discretization Resolution is 480 \times 160, T_\text{QPU} Denotes the Annealing Time Spent on QPUs, and T_\text{CQM} is the Total Wall Time Cost by the CQM
FIGURE 3. Resultant optimal material layouts obtained by different methods, with the design space of (L = 60, H = 30) and the discretization resolution of 480 \times 240. (a) GBD-splitting-CQM. (b) SIMP-Heaviside. (c) TOBS. (d) DVTOPCRA.

FIGURE 4. Resultant optimal material layouts obtained by different methods, with the design space of (L = 60, H = 20) and the discretization resolution of 480 \times 160. (a) GBD-splitting-CQM. (b) SIMP-Heaviside. (c) TOBS. (d) DVTOPCRA.

To verify the QC-based solutions and to assess the benefits of QC for TO, we further compare with the state-of-the-art classical solvers, as discussed in the next section.

C. Comparison With Classical Solvers

The classical solvers considered include the SIMP method [25], the TOBS method [27], and the DVTOPCRA method [26]. For the SIMP method, we followed the implementation in [25] with the penalty factor p = 3 and the maximum number of iterations set to 1000. We considered the sensitivity filter, controlled by a radius as in (12), along with Heaviside projection [51]. The convergence criterion is controlled by the change of design variables as \max _{i} | \Delta \rho _{i} | < 0.01. For the TOBS method, the setup was the same as provided in [27], except that the MILP solver was changed to Gurobi [49] with default settings to permit a fair comparison with our proposed solution strategy, in which Gurobi is used for solving the binary programming problems (16), (17), and (21). For the DVTOPCRA method, the implementation exactly followed [52] for the minimum compliance problem.

The results obtained by GBD-splitting-CQM and by the three classical methods are compared in Table 4 and Figs. 3 and 4 for different shapes of material domains with different discretization resolutions. In Table 4, T denotes the total wall time spent by each method to find the optimal material layout, including solving all optimization subproblems and the FEM analyses in (6); \mathbf {f}^\intercal \mathbf {u} denotes the value of the objective function corresponding to the optimal material layout. For the three classical methods, N denotes the number of iterations; in each iteration, one FEM analysis (6) is performed, and one optimization subproblem is solved. In GBD-splitting-CQM, N represents the total number of binary programming problems invoked by Algorithms 2 and 3; one FEM analysis (6) is performed prior to invoking each binary programming problem, as stipulated by Algorithm 2. Therefore, comparing the values of N essentially compares, across all methods, how many times the FEM analysis is conducted, which dominates the computational cost in large-scale TO problems.

By comparison, the proposed solution strategy exhibits the following advantages. First, it requires the fewest iterations to reach convergence in almost all the test cases. This can be attributed to the multicuts generated by the GBD iterations and the guarantee of convergence in finitely many iterations afforded by the convexity of the problem. In contrast, the other classical methods solve nonconvex subproblems and hence have no guarantee of convergence within finitely many iterations. As the discretization resolution increases, the SIMP method with Heaviside projection cannot meet the convergence criterion within the set maximum number of iterations, and the other two methods, TOBS and DVTOPCRA, tend to require more iterations to reach convergence, as shown in Table 4. Second, the minimum values of the objective function found by GBD-splitting-CQM are generally lower than those found by the other methods, especially SIMP and DVTOPCRA. Lastly, in terms of wall time, GBD-splitting-CQM exhibits only slow growth in computing time and hence good scaling performance, in contrast to the other three methods. Thus, for solving larger-scale problems, e.g., with the discretization resolutions of 480 \times 160 and 480 \times 240, GBD-splitting-CQM is more efficient than the SIMP-Heaviside and DVTOPCRA methods and comparable with the TOBS method. Several factors contribute to this. First, the splitting approach proposed in Section III-B restricts the growth of the size of the problem to be embedded and solved on a quantum computer and in turn inhibits the growth of the time required for embedding. Next, as the discretization resolution increases, the FEM analysis becomes more expensive and in turn dominates the computational cost; hence, the fewer FEM analyses conducted, the less computing time required. Thus, for higher discretization resolutions, e.g., 480 \times 160 and 480 \times 240, GBD-splitting-CQM is more efficient, as it requires the fewest iterations and hence the fewest FEM analyses. Finally, compared with the TOBS method, the problem (16) contains fewer constraints than the MILP problem invoked in the TOBS method and in turn costs less solution time per iteration.

The resultant optimal material layouts obtained by the different methods are shown and compared in Figs. 3 and 4 for two shapes of the design space at the finest discretization resolutions. Unlike the nonconvergent SIMP designs with gray regions shown in Figs. 3(b) and 4(b), the GBD-splitting-CQM designs exhibit sharp contrast with clear 0/1 density. This demonstrates the advantage of GBD-splitting-CQM in generating optimal material layouts with clear boundaries, compared with methods like SIMP that relax the binary design variable into a continuous one. In addition, compared with the designs yielded by the TOBS method, the designs optimized by GBD-splitting-CQM exhibit better minimal length control. Furthermore, the layouts generated by the DVTOPCRA method display crooked internal structures, which can be unfavorable with respect to yield strength.

SECTION V.

Conclusion

We have established a quantum annealing-based solution method for solving continuum, nonlinear TO problems. Its key ingredients include the GBD, the conversion of MILP into QUBO, a splitting approach, as well as the implementation on the quantum computer through the hybrid solver, CQM, provided by D-Wave. Through solving a canonical TO problem, the minimum compliance, we have assessed its accuracy and efficiency. By comparing with the state-of-the-art classical methods commonly used in the field of TO, we have demonstrated the advantages of our proposed QC-based methodology with respect to both solution quality and computational efficiency. Considering the run-time penalty for embedding imposed by the hardware limitations of current quantum annealing machines [50], and anticipating future improvements in both the number of qubits and the long-range couplers between them, the advantage of the proposed QC-based solution method in terms of computing time can become remarkable. We have also shown its computational efficiency for TO in the sense that the number of FEA runs has been reduced significantly when compared with the classical SIMP method. Future work would exploit this computational efficiency advantage for larger-scale TO problems where the FEA cost dominates the optimization run.

This work also presents a new application paradigm for QC and expands the application horizon of QC to include TO, enabling more efficient designs of topology for broad applications.

With increasing complexity, continuum TO problems can become more challenging. For example, involving constraints like G_{j} and H_{k} in (1) introduces nonvolumetric constraints, and nonuniform meshes necessitated by discretizing complicated design spaces can lead to nonidentical coefficients in the volume constraint associated with the problem (15). The GBD provides a general enough framework to handle such complexities. Moreover, with nonidentical coefficients in the volume constraint, the resultant binary programming problem (16) can be NP-hard and cannot be efficiently solved on classical computers. In contrast, the proposed quantum annealing-based solution method, as applied to the problem (23), provides a promising approach for solving NP-hard binary programming problems. In addition, owing to the similarities between the problem (16) and the subproblems generated by the TOBS method [27], the proposed solution strategy can also be extended to the TOBS method for dealing with TO problems subject to complex constraints [53]. In light of these perspectives, introducing QC, particularly quantum annealing, to TO may lead to broad and significant impacts.

The splitting approach developed in the present work has been shown to be effective in greatly reducing the size of the problem to be embedded on quantum computers, while preserving the solution quality of the optimal material layout. Other approaches, e.g., the multilevel hybrid framework introduced for generic combinatorial optimization [54] and the hierarchical mesh refinement approach [55], can be explored in the future and integrated into the GBD framework to further reduce the cost of solving problems like (15) on quantum computers.
