Introduction
Singularly perturbed differential equations can be used to model systems with elements that evolve at different rates. For example, singularly perturbed models have been published for aircraft [1], spacecraft [1], electric motors [2], nuclear reactors [2], factory logistics [3], and pandemics [4]. Although adaptive controllers have sometimes been developed for the systems listed above, these adaptive methods have largely either ignored the timescale separation (termed Full-Order Adaptive Control (FOAC)) or used sequential loop closure (e.g. [5], [6]). Singular perturbation theory treats timescale behavior more precisely, but the adaptive control literature to date lacks a rigorous analytical method for verifying stability in the presence of singularly perturbed plants.
Singular perturbation theory is a broad mathematical field that has been used in adaptive control design [7], [8], [9], but relatively little research addresses plants that are modeled with singularly perturbed differential equations. Researchers who have used adaptive control on singularly perturbed plants have primarily applied their methods to only a subset of the states and ignored the other dynamics [4], [10], [11]. This method is called Reduced-Order Adaptive Control (ROAC) and it fails when the ignored dynamics are unstable [12].
Multiple-timescale control is a branch of control theory that specifically addresses singularly perturbed plants. However, adaptive control has received little attention from multiple-timescale control researchers. Saha and Valasek designed controllers for uncertain singularly perturbed plants [13], [14], [15], [16], but their method derives the adaptation laws from a full-order Lyapunov analysis, which makes it difficult to generalize.
This article extends the [K]control of Adaptive Multiple-timescale Systems (KAMS) methodology, which was first introduced and developed in [12]. KAMS provides a flexible framework that enables a wide class of modern adaptive methods to be applied to singularly perturbed systems. Compared to FOAC, ROAC, and sequential loop closure, KAMS is more robust and rigorous. Compared to Saha and Valasek's method, KAMS is more general.
The singularly perturbed nature of the plant causes a subset of the states to evolve significantly faster than the other states. The general premise of KAMS is to use geometric singular perturbation theory to fully decouple the fast and slow states [17]. Two different adaptive controllers can then be designed in isolation for these two independent subsystems. The independent control signals are fused using a wide class of methods from the field of multiple-timescale control. These multiple-timescale control fusion techniques have not been studied in the presence of adaptive control. KAMS addresses this gap in the literature.
Allowing adaptive control in both the fast and slow states is a challenging problem because of complex interactions between the slow-timescale trajectory of the fast states and the fast state reference model. The present work builds upon the authors' prior work [12], which addresses the much simpler case of adaptive control for only the slow states. Unlike [12], the present work makes no prior assumptions about the stability of the plant subsystems. The novel contribution of the present work is a formal proof that, under certain conditions, the coupling present in the more accurate full-order model is insufficient to destabilize these adaptive controllers even though they are designed in isolation.
Section II details the KAMS control framework and the associated singular perturbation analysis used to decouple the subsystems. In Section III, a set of conditions is derived that is sufficient to show that the states converge to their reference models. Finally, Section IV gives an example of KAMS on a nonlinear nonstandard system. This example demonstrates how methods common in the literature, Sequential Control and Adaptive Nonlinear Dynamic Inversion (ANDI), can be used on singularly perturbed systems within the framework of KAMS.
Control Synthesis
This section introduces KAMS, explains the assumptions, and describes the notation. For more details, the reader is referred to [18], [19], [20] for adaptive control, [17], [21] for multiple-timescale control, [22] for singular perturbation theory, and [23] for differential geometry in the context of control theory.
A. System Description
This work addresses singularly perturbed systems that model multiple-timescale plants. A singularly perturbed system is one that depends on a small positive scalar $\epsilon$ multiplying the time derivative of a subset of the states, as in (1b). The development is generalized to the class of uncertain, nonlinear, multiple-input multiple-output (MIMO) plants of the form
\begin{align*}
\acute{\bm {x}} &= f_{x}(\bm {x},\bm {z},\bm {u}) \tag{1a}\\
\epsilon \acute{\bm {z}} &= f_{z}(\bm {x},\bm {z},\bm {u},\epsilon) \tag{1b}
\end{align*}
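To make the two-rate behavior concrete, the following minimal Python sketch integrates a toy linear instance of (1), $\acute{x} = -x + z$, $\epsilon\acute{z} = -2z + u$, with a constant input; the plant, input, and solver settings are illustrative assumptions and are not taken from the paper.
\begin{verbatim}
# Minimal sketch (illustrative, not from the paper): integrate a toy
# instance of (1) to show the fast transient of z and the slow drift of x.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.01   # timescale separation parameter (illustrative value)
u = 1.0      # constant input (illustrative value)

def plant(t, s):
    x, z = s
    dx = -x + z                # slow state, in the form of (1a)
    dz = (-2.0 * z + u) / eps  # fast state, in the form of (1b)
    return [dx, dz]

sol = solve_ivp(plant, (0.0, 5.0), [1.0, 0.0], method="LSODA", max_step=1e-3)
# z collapses onto its quasi-steady-state value u/2 within O(eps),
# after which x evolves slowly toward u/2 on the t_s timescale.
print(sol.y[:, -1])
\end{verbatim}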
Remark 1:
Single-timescale systems can be written in the format described by (1). However, applying a multiple-timescale control technique to a single-timescale system comes with a performance penalty: the resulting closed-loop responses will be slower. This effect was identified and explored in [24] and [25]. Oliveira et al. demonstrated this effect during a neuromuscular electrical stimulation experiment [26].
B. Singular Perturbation Analysis
Geometric singular perturbation theory shows that the system can be approximated by two different asymptotic solutions. The first, the reduced slow subsystem (2), is found by taking the limit as $\epsilon \to 0$ in (1), where $\bm {z}_{s}$ denotes the resulting quasi-steady-state (manifold) solution of (2b). The second, the reduced fast subsystem (3), is found by rewriting (1) in the stretched fast timescale $t_{f} = t_{s}/\epsilon$ and again letting $\epsilon \to 0$, which freezes the slow states.
\begin{align*}
\acute{\bm {x}} &= f_{x}(\bm {x},\bm {z}_{s},\bm {u}) \tag{2a}\\
0 &= f_{z}(\bm {x},\bm {z}_{s},\bm {u},0) \tag{2b}
\end{align*}
\begin{align*}
\grave{\bm {x}} &= 0 \tag{3a}\\
\grave{\bm {z}} &= f_{z}(\bm {x},\bm {z},\bm {u},0) \tag{3b}
\end{align*}
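As a concrete (purely illustrative) instance of this reduction, consider the toy linear plant from the sketch above, $\acute{x} = -x + z$, $\epsilon\acute{z} = -2z + u$; the plant is an assumption for exposition and is not part of the paper's development. Setting $\epsilon = 0$ and solving the algebraic constraint (2b) gives the manifold and the reduced slow subsystem, while rewriting the fast dynamics in the stretched timescale $t_{f} = t_{s}/\epsilon$ and letting $\epsilon \to 0$ freezes the slow state and gives the reduced fast subsystem:
\begin{align*}
0 = -2 z_{s} + u \;&\Rightarrow\; z_{s} = \tfrac{1}{2}u, \qquad \acute{x} = -x + \tfrac{1}{2}u,\\
t_{f} = t_{s}/\epsilon \;&\Rightarrow\; \grave{x} = 0, \qquad \grave{z} = -2z + u .
\end{align*}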
C. Adaptive Control
The control objective of this work is to determine the input as a function of the states that permits the full-order system to track a reference model asymptotically. The first step in this process is to design two different adaptive control algorithms that stabilize the reduced subsystems individually. Many adaptive control algorithms are available in the literature for this purpose. This article addresses a wide class of algorithms that fit the format described in this section. The input to the slow subsystem is
\begin{align*}
\bm {\theta }_{x} &= g_{\theta _{x}}(t_{s}) \tag{4a}\\
\bm {\theta }_{z} &= g_{\theta _{z}}(t_{f}) \tag{4b}\\
\acute{\bm {\theta }}_{x} &= f_{\theta _{x}}(t_{s}) \tag{4c}\\
\grave{\bm {\theta }}_{z} &= f_{\theta _{z}}(t_{f}) \tag{4d}
\end{align*}
\begin{align*}
\bm {r}_{x} &= g_{r_{x}}(t_{s}) \tag{5a}\\
\acute{\bm {r}}_{x} &= f_{r_{x}}(t_{s}) \tag{5b}
\end{align*}
\begin{align*}
\acute{\bm {x}}_{m} &= f_{x_{m}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},t_{s}) \tag{6a}\\
\acute{\hat{\bm {\theta }}}_{x} &= f_{\hat{\theta }_{x}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},t_{s}) \tag{6b}\\
\grave{\bm {z}}_{m} &= f_{z_{m}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},\bm {z},\bm {z}_{m},\hat{\bm {\theta }}_{z},t_{f}) \tag{6c}\\
\grave{\hat{\bm {\theta }}}_{z} &= f_{\hat{\theta }_{z}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},\bm {z},\bm {z}_{m},\hat{\bm {\theta }}_{z},t_{f}) \tag{6d}
\end{align*}
Remark 2:
Note that
The role of the timescale separation parameter is important in these equations. If the control input is incorrectly designed then the timescale analysis in the previous section could be invalidated. The following two assumptions are made to prevent that.
Assumption 1:
The manifold is an asymptotically stable equilibrium of the fast reference model in the reduced fast subsystem.
Assumption 2:
The timescale of the reference models, the slow state reference model input, and the adaptation laws all match the timescale of the subsystem to which they are applied. Mathematically this means that
These assumptions are intuitive. For example, if the reference model for the slow states evolved on the fast timescale then the slow states would not be able to keep up - or, more precisely, their evolution could not be decoupled from the fast states.
D. Multiple-Timescale Fusion
The inputs to the reduced-order subsystems have been defined and will form the building blocks of the full-order system input. Let the full-order input take the form
\begin{equation*}
\bm {u} = g_{u}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},\bm {z},\bm {z}_{m},\hat{\bm {\theta }}_{z},t_{s}) \tag{7}
\end{equation*}
1) Composite Control
Composite Control [21, p. 94-102] selects the control input to be
2) Sequential Control
In Sequential Control [17] the fast states are used as the input to the slow subsystem. The manifold is selected such that the slow states converge to their reference model by setting
3) Simultaneous Slow and Fast Tracking
Simultaneous Slow and Fast Tracking [2] uses the input
A block diagram for the KAMS control framework described in the previous section is given in Fig. 1. As defined in the previous section the fast adaptive control is allowed to be a function of the slow states. This uncommon case is excluded from the block diagram for readability.
Stability Analysis
This section develops tools for the stability analysis of the full-order system. Whereas the adaptive controllers have been designed so that the reduced-order systems are well behaved, these properties might not extend to the coupled full-order system. For notational simplicity, the system of equations is rewritten as a single augmented system in terms of the error coordinates. Examining the differential geometric nature of the augmented system then yields the desired insights into the behavior of the full-order system.
A. Augmented Error Dynamics
Adaptive control adds additional states (i.e. the reference model and adapting parameters) to the closed-loop system. These states evolve (see (6)) and effectively create a coupled augmented closed-loop system with control states and system states. The augmented closed-loop system is defined in this section.
The variables which describe the state of the system are
\begin{align*}
\bm {\xi } &\triangleq \begin{bmatrix}\bm {x}^{T} & \bm {x}_{m}^{T} & \hat{\bm {\theta }}_{x}^{T} \end{bmatrix}^{T} &\in \mathbb {D}^{n_\xi }_\xi \tag{8a}\\
\bm {\eta }&\triangleq \begin{bmatrix}\bm {z}^{T} & \bm {z}_{m}^{T} & \hat{\bm {\theta }}_{z}^{T} \end{bmatrix}^{T} &\in \mathbb {D}^{n_\eta }_\eta \tag{8b}\\
\bm {\phi }&\triangleq \begin{bmatrix}\bm {\xi }^{T} & \bm {\eta }^{T} \end{bmatrix}^{T} &\in \mathbb {D}^{n_\phi }_\phi \tag{8c}
\end{align*}
\begin{align*}
\bm {z}_{s} &= g_{z_{s}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},\hat{\bm {\theta }}_{z},t_{s}) \tag{9a}\\
\acute{\bm {z}}_{s} &= f_{z_{s}}(\bm {x},\bm {x}_{m},\hat{\bm {\theta }}_{x},\bm {z},\bm {z}_{m},\hat{\bm {\theta }}_{z},t_{s}) \tag{9b}
\end{align*}
Assumption 3:
Section III-C discusses the manifold in more detail.
If the control objective for the full-order system is successfully achieved, then as $t_{s}\to\infty$ the states converge to their reference models and the fast states converge to the manifold. The relevant error variables are
\begin{align*}
\bm {e}_{x} &\triangleq \bm {x}-\bm {x}_{m} & \in \mathbb {B}^{n_{x}}(r_{e_\phi }) \tag{10a}\\
\tilde{\bm {x}}_{m} & \triangleq \bm {x}_{m} - \bm {r}_{x} &\in \mathbb {B}^{n_{x}}(r_{e_\phi }) \tag{10b}\\
\tilde{\bm {\theta }}_{x} &\triangleq \hat{\bm {\theta }}_{x}-\bm {\theta }_{x} & \in \mathbb {B}^{n_{\theta _{x}}}(r_{e_\phi }) \tag{10c}\\
\tilde{\bm {z}} &\triangleq \bm {z}-\bm {z}_{s} & \in \mathbb {B}^{n_{z}}(2 r_{e_\phi }) \tag{10d}\\
\tilde{\bm {z}}_{m} &\triangleq \bm {z}_{m}-\bm {z}_{s} & \in \mathbb {B}^{n_{z}}(r_{e_\phi }) \tag{10e}\\
\bm {e}_{z} &\triangleq \bm {z}-\bm {z}_{m} & \in \mathbb {B}^{n_{z}}(r_{e_\phi }) \tag{10f}\\
\tilde{\bm {\theta }}_{z} &\triangleq \hat{\bm {\theta }}_{z}-\bm {\theta }_{z} & \in \mathbb {B}^{n_{\theta _{z}}}(r_{e_\phi }) \tag{10g}
\end{align*}
\begin{equation*}
\bm {e}_{z}= \tilde{\bm {z}}-\tilde{\bm {z}}_{m} \tag{11}
\end{equation*}
A change of variables is now performed to describe the system in terms of the error variables. The new system state variables are
\begin{align*}
\bm {e}_\xi &\triangleq \begin{bmatrix}\bm {e}_{x}^{T} & \tilde{\bm {x}}_{m}^{T} & \tilde{\bm {\theta }}_{x}^{T} \end{bmatrix}^{T} &\in \mathbb {B}^{n_\xi }(r_{e_\phi })\tag{12a}\\
\bm {e}_\eta &\triangleq \begin{bmatrix}\bm {e}_{z}^{T} & \tilde{\bm {z}}_{m}^{T} & \tilde{\bm {\theta }}_{z}^{T} \end{bmatrix}^{T} &\in \mathbb {B}^{n_\eta }(r_{e_\phi })\tag{12b}\\
\bm {e}_\phi &\triangleq \begin{bmatrix}\bm {e}_\xi ^{T} & \bm {e}_\eta ^{T} \end{bmatrix}^{T} &\in \mathbb {B}^{n_\phi }(r_{e_\phi }) \tag{12c}
\end{align*}
\begin{equation*}
(\bm {\phi },t_{s}) = h(\bm {e}_\phi,t_{s}) \tag{13}
\end{equation*}
\begin{equation*}
\bm {\phi } = \begin{bmatrix}\bm {e}_{x} + \tilde{\bm {x}}_{m} + g_{r_{x}}(t_{s}) \\
\tilde{\bm {x}}_{m} + g_{r_{x}}(t_{s}) \\
\tilde{\bm {\theta }}_{x} + g_{\theta _{x}}(t_{s}) \\
\bm {e}_{z} + \tilde{\bm {z}}_{m} + g_{z_{s}}(\cdot) \\
\tilde{\bm {z}}_{m} + g_{z_{s}}(\cdot) \\
\tilde{\bm {\theta }}_{z} + g_{\theta _{z}}(t_{s}/\epsilon) \end{bmatrix} \tag{14}
\end{equation*}
\begin{align*}
g_{z_{s}}(\cdot) = g_{z_{s}}(\bm {e}_{x} + \tilde{\bm {x}}_{m} + g_{r_{x}}(t_{s}),\quad \tilde{\bm {x}}_{m} + g_{r_{x}}(t_{s}), \\
\tilde{\bm {\theta }}_{x} + g_{\theta _{x}}(t_{s}),\quad \tilde{\bm {\theta }}_{z} + g_{\theta _{z}}(t_{s}/\epsilon),\quad t_{s}) \tag{15}
\end{align*}
\begin{align*}
\acute{\bm {e}}_{x} = & f_{x}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) - f_{x_{m}}\circ h(\bm {e}_\xi,t_{s}) \tag{16a}\\
\acute{\tilde{\bm {x}}}_{m} = & f_{x_{m}}\circ h(\bm {e}_\xi,t_{s}) - f_{r_{x}}(t_{s}) \tag{16b}\\
\acute{\tilde{\bm {\theta }}}_{x} = & f_{\hat{\theta }_{x}}\circ h(\bm {e}_\xi,t_{s}) - f_{\theta _{x}}(t_{s}) \tag{16c}\\
\epsilon \acute{\bm {e}}_{z} = & f_{z}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s},\epsilon) - f_{z_{m}}\circ h(\bm {e}_\xi,\bm {e}_\eta, t_{s}) \tag{16d}\\
\epsilon \acute{\tilde{\bm {z}}}_{m} = & f_{z_{m}}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) -\epsilon f_{z_{s}}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) \tag{16e}\\
\epsilon \acute{\tilde{\bm {\theta }}}_{z} = & f_{\hat{\theta }_{z}} \circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) - f_{\theta _{z}}(t_{f}) \tag{16f}
\end{align*}
\begin{align*}
\acute{\bm {e}}_\xi &= f_{e_\xi }(\bm {e}_\xi,\bm {e}_\eta,t_{s}) \tag{17a}\\
\epsilon \acute{\bm {e}}_\eta &= f_{e_\eta }(\bm {e}_\xi,\bm {e}_\eta,t_{s},\epsilon) \tag{17b}
\end{align*}
\begin{equation*}
\acute{\bm {e}}_\phi = f_{e_\phi }(\bm {e}_\phi,t_{s},\epsilon) \tag{18}
\end{equation*}
Let the subscript $s$ denote evaluation on the reduced slow subsystem and the subscript $f$ denote evaluation on the reduced fast subsystem. In the slow timescale, the reduced slow error dynamics are
\begin{align*}
\acute{\bm {e}}_{x} = & f_{x}\circ h(\bm {e}_\xi,\bm {e}_{\eta,s},t_{s}) - f_{x_{m}}\circ h(\bm {e}_\xi,t_{s}) \tag{19a}\\
\acute{\tilde{\bm {x}}}_{m} = & f_{x_{m}}\circ h(\bm {e}_\xi,t_{s}) - f_{r_{x}}(t_{s}) \tag{19b}\\
\acute{\tilde{\bm {\theta }}}_{x} = & f_{\hat{\theta }_{x}}\circ h(\bm {e}_\xi,t_{s}) - f_{\theta _{x}}(t_{s}) \tag{19c}
\end{align*}
\begin{equation*}
\acute{\bm {e}}_\xi = f_{e_\xi,s}(\bm {e}_\xi,\bm {e}_{\eta,s},t_{s}) \tag{20}
\end{equation*}
\begin{align*}
\grave{\bm {e}}_{x} = & 0 \tag{21a}\\
\grave{\tilde{\bm {x}}}_{m} = & 0 \tag{21b}\\
\grave{\tilde{\bm {\theta }}}_{x} = & 0 \tag{21c} \\
\grave{\bm {e}}_{z} = & f_{z}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s},0) - f_{z_{m}}\circ h(\bm {e}_\xi,\bm {e}_\eta, t_{s}) \tag{21d}\\
\grave{\tilde{\bm {z}}}_{m} = & f_{z_{m}}\circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) -0 \tag{21e}\\
\grave{\tilde{\bm {\theta }}}_{z} = & f_{\hat{\theta }_{z}} \circ h(\bm {e}_\xi,\bm {e}_\eta,t_{s}) - f_{\theta _{z}}(t_{f}) \tag{21f}
\end{align*}
\begin{align*}
\grave{\bm {e}}_\xi &= f_{e_\xi,f}(\bm {e}_\xi,\bm {e}_\eta,t_{f}) \tag{22a}\\
\grave{\bm {e}}_\eta &= f_{e_\eta,f}(\bm {e}_\xi,\bm {e}_\eta,t_{f}) \tag{22b}
\end{align*}
\begin{equation*}
\grave{\bm {e}}_\phi = f_{e_\phi,f}(\bm {e}_\phi,t_{f}) \tag{23}
\end{equation*}
\begin{align*}
\grave{\bm {r}}_{x} & = 0 \tag{24a}\\
\grave{\bm {\theta }}_{x} & = 0 \tag{24b}\\
\grave{\bm {z}}_{s} & = 0 \tag{24c}
\end{align*}
B. Differential Geometry
Differential geometry is a natural fit for the analysis of singularly perturbed systems because the differential equations which describe these systems form nonautonomous vector fields on a topological manifold. The term manifold has been used somewhat informally and will continue to be used to refer to
C. Manifold and the Reference Model
The stability proofs in the next section are significantly complicated by the relationship between the manifold and the fast reference model. Traditional multiple-timescale control and adaptive control both use a feedback loop to ensure closed-loop stability. These feedback loops still exist in the KAMS control architecture. Fig. 2 is the block diagram of KAMS from Fig. 1 except that the traditional feedback loop has been highlighted. All paths which contribute to this loop are bolded, but the primary loop is blue. However, KAMS has another, unconventional feedback loop: the fast reference model uses the manifold as an input (Fig. 1), the manifold is a function of the slow states (see (9)), the slow states are coupled with the fast states, and the control objective is for the fast states to track the fast reference model, which is itself a function of the manifold. This creates a feedback loop that is typically not seen in adaptive control. Fig. 3 highlights this feedback loop. Again, all paths which contribute to this loop are bolded, but the primary unconventional loop is red.
The reference model adds a complication that is not encountered in traditional multiple-timescale control. If the fast reference model is not asymptotically stable, then the steady-state trajectory for the slow states may not be the manifold. This calls into question the validity of the slow subsystem and means that the multiple-timescale fusion stability proofs in prior work are not applicable. These effects are unavoidable because the full-order stability analysis works by extending the stability of the reduced subsystems to the full-order system. The slow subsystem assumes that the fast states have reached their manifold, so if the stability of the reduced slow subsystem is to have any bearing on the full-order system then the fast reference model must converge to that manifold. This is the purpose of Assumption 1. Reference models are not usually asymptotically stable when their input is time-varying since they are typically Type 1 linear systems, and are thus only capable of tracking a step input with zero steady-state error. However, closer examination reveals that Assumption 1 is not as restrictive as it appears. Recall that the manifold is assumed to evolve on the slow timescale. Equation (24c) shows that in the fast timescale the manifold is effectively constant, so the fast reference model only needs to track what amounts to a step input. Three cases are distinguished below.
Case 1:
There exist no prior assumptions about the stability of the fast reference model in relation to the full-order manifold. This is the most general case considered, but also has the most restrictive conditions. This case often requires the control objective to be downgraded to a regulation problem.
Case 2:
The fast reference model is always on the manifold. This case most commonly occurs when adaptive control is not necessary for the fast subsystem. The fast control drives the fast states directly to the manifold. This type of control can be modeled by setting $\tilde{\bm {z}}_{m}=0$ and $\acute{\tilde{\bm {\theta }}}_{z}=0$. Note that a parallel simplification exists where the slow control is non-adaptive, $\tilde{\bm {x}}_{m}=0$, and $\acute{\tilde{\bm {\theta }}}_{x}=0$, but this still falls within Case 1 above.
Case 3:
The manifold is an asymptotically stable equilibrium of the fast state reference model in the context of the full-order system. This is possible but requires an unusual reference model. This case is a slightly stricter version of Assumption 1 which only requires asymptotic stability in a subset of the domain.
Remark 3:
In the present work, stating that the slow subsystem does not require adaptive control will be equivalent to saying that $\tilde{\bm {x}}_{m}=0$ and $\acute{\tilde{\bm {\theta }}}_{x}=0$.
D. Full-Order System Stability
In this section, the stability of KAMS is analyzed in the context of the full-order system. The goal is to develop conditions that, if met, extend the stability of the reduced subsystems to the full-order system. To that end, four related theorems are proved. Each theorem belongs to one of the three cases described in the previous section. All of the theorems in this work will make use of the vector
\begin{equation*}
\bm {v}\triangleq \begin{bmatrix}|\bm {e}_{x}|_{2} & |\tilde{\bm {x}}_{m}|_{2} & |\bm {e}_{z}|_{2} & |\tilde{\bm {z}}_{m}|_{2}\end{bmatrix}^{T} \tag{25}
\end{equation*}
1) Foundation of Reduced-Order Stability
The proofs in this section are similar to those proposed in [29], [35]. However, they have been significantly altered to account for adaptive control. The general process begins by forming a composite Lyapunov function from Lyapunov functions for the reduced-order subsystems. This composite Lyapunov function is then differentiated along the vector field describing the evolution of the full-order system. Using the stability of the reduced subsystems, it is shown that the differences between the reduced subsystems and the full-order system are insufficient to violate negative definiteness. This implies that
\begin{align*}
V_{e_{x}}(\bm {e}_{x},\tilde{\bm {\theta }}_{x},t_{s})&:\mathbb {B}^{n_{x}}(r_{e_\phi })\times \mathbb {B}^{n_{\theta _{x}}}(r_{e_\phi })\times \mathbb {R}_+{\to }\mathbb {R}_{\geq 0} \tag{26a}\\
V_{\tilde{x}_{m}}(\tilde{\bm {x}}_{m},t_{s})&:\mathbb {B}^{n_{x}}(r_{e_\phi })\times \mathbb {R}_+\to \mathbb {R}_{\geq 0} \tag{26b}\\
V_{e_{z}}(\bm {e}_{z},\tilde{\bm {\theta }}_{z},t_{f})&:\mathbb {B}^{n_{z}}(r_{e_\phi })\times \mathbb {B}^{n_{\theta _{z}}}(r_{e_\phi })\times \mathbb {R}_+{\to }\mathbb {R}_{\geq 0} \tag{26c}\\
V_{\tilde{z}_{m}}(\tilde{\bm {z}}_{m},t_{f})&:\mathbb {B}^{n_{z}}(r_{e_\phi })\times \mathbb {R}_+\to \mathbb {R}_{\geq 0} \tag{26d}
\end{align*}
\begin{align*}
\frac{\partial V_{e_{x}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\xi,s}})V_{e_{x}} &\leq -\alpha _{1} |\bm {e}_{x}|^{2}_{2} \tag{27a}\\
\frac{\partial V_{e_{z}}}{\partial t_{f}} + \mathcal {L}^{}({f_{e_\eta,f}})V_{e_{z}} &\leq -\alpha _{3} |\bm {e}_{z}|^{2}_{2} \tag{27b}\\
\frac{\partial V_{\tilde{z}_{m}}}{\partial t_{f}} + \mathcal {L}^{}({f_{z_{m},f}})V_{\tilde{z}_{m}} &\leq -\alpha _{4} |\tilde{\bm {z}}_{m}|^{2}_{2} \tag{27c}
\end{align*}
Assumption 4:
The Lyapunov functions
Note that the existence of
Assumption 5:
The functions defined in the present work are sufficiently smooth and bounded so that the function is continuously differentiable as many times as necessary. Sufficiently bounded means that, as necessary, the domain of a function being in
The definitions above formalize, and indeed imply, that the adaptive control for the reduced subsystems is well designed. This conclusion only applies to the reduced subsystems.
2) CASE 1
There exist no prior assumptions about the stability of the fast reference model in relation to the full-order manifold.
Theorem 1:
Assume
\begin{align*}
\frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}} + \mathcal {L}^{}({f_{\tilde{x}_{m}}}) V_{\tilde{x}_{m}} &\leq -\alpha _{2} |\tilde{\bm {x}}_{m}|^{2}_{2} \tag{28a}\\
\mathcal {L}^{}({f_{x}-f_{x,s}}) V_{e_{x}} &\leq \beta |\bm {e}_{x}|_{2} |\tilde{\bm {z}}|_{2} \tag{28b}\\
\mathcal {L}^{}({f_{z}-f_{z,f}}) V_{e_{z}} &\leq \epsilon \bm {\gamma }^{T}\bm {v} |\bm {e}_{z}|_{2} \tag{28c}\\
-\mathcal {L}^{}({f_{z_{s}}}) V_{\tilde{z}_{m}} & \leq \bm {\delta }^{T}\bm {v} |\tilde{\bm {z}}_{m}|_{2} \tag{28d}
\end{align*}
\begin{align*}
&K \triangleq \\
&\left[\begin{array}{cccc}d^*\alpha _{1} & 0 & -\frac{1}{2}(d^*\beta + d\gamma _{1}) & -\frac{1}{2}(d^*\beta +d\delta _{1}) \\
&{} d^*\alpha _{2} & -\frac{1}{2}d\gamma _{2} & -\frac{1}{2}d\delta _{2} \\
&{} & \frac{d}{\epsilon }\alpha _{3} - d\gamma _{3} & -\frac{1}{2}(d\delta _{3} + d\gamma _{4})\\
{\text{Symmetric}} {}&{}&{}& \frac{d}{\epsilon }\alpha _{4} - d\delta _{4} \end{array}\right] \tag{29}
\end{align*}
Proof:
Define a composite Lyapunov function
\begin{equation*}
V\triangleq d^*(V_{e_{x}} + V_{\tilde{x}_{m}}) + d(V_{e_{z}} + V_{\tilde{z}_{m}}) \tag{30}
\end{equation*}
\begin{align*}
\acute{V} & = d^*\left(\frac{\partial V_{e_{x}}}{\partial t_{s}} + \frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}}\right) + d\left(\frac{\partial V_{e_{z}}}{\partial t_{s}} + \frac{\partial V_{\tilde{z}_{m}}}{\partial t_{s}}\right) \tag{31a}\\
&\quad + d^*\mathcal {L}^{}({f_{e_\phi }})(V_{e_{x}} + V_{\tilde{x}_{m}}) + d\mathcal {L}^{}({f_{e_\phi }})(V_{e_{z}} + V_{\tilde{z}_{m}}) \tag{31b}
\end{align*}
\begin{align*}
\acute{V} =& d^*\left(\frac{\partial V_{e_{x}}}{\partial t_{s}} + \frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}}\right) + d\left(\frac{\partial V_{e_{z}}}{\partial t_{s}} + \frac{\partial V_{\tilde{z}_{m}}}{\partial t_{s}}\right) \\
& \!+ d^*\mathcal {L}^{}({f_{e_\phi,s}})(V_{e_{x}} \!+\! V_{\tilde{x}_{m}}) \!+\! d^*\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,s}})(V_{e_{x}} \!+\! V_{\tilde{x}_{m}}) \\
& \!+ d\mathcal {L}^{}({f_{e_\phi,f}})(V_{e_{z}} \!+\! V_{\tilde{z}_{m}}) \!+\! d\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,f}})(V_{e_{z}} \!+\! V_{\tilde{z}_{m}}) \tag{32}
\end{align*}
\begin{align*}
\acute{V} =& d^*\left(\frac{\partial V_{e_{x}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\phi,s}}) V_{e_{x}} + \frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\phi,s}}) V_{\tilde{x}_{m}}\right) \\
&+ d^*\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,s}}) V_{e_{x}} + d^*\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,s}}) V_{\tilde{x}_{m}} \\
&+ d\left(\frac{\partial V_{e_{z}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\phi,f}})V_{e_{z}} + \frac{\partial V_{\tilde{z}_{m}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\phi,f}})V_{\tilde{z}_{m}}\right) \\
&+ d\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,f}}) V_{e_{z}} + d\mathcal {L}^{}({f_{e_\phi }-f_{e_\phi,f}}) V_{\tilde{z}_{m}} \tag{33}
\end{align*}
\begin{align*}
\acute{V} =& d^*\left(\frac{\partial V_{e_{x}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\xi,s}}) V_{e_{x}} + \frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}} + \mathcal {L}^{}({f_{\tilde{x}_{m},s}}) V_{\tilde{x}_{m}}\right) \\
&+ d^*\mathcal {L}^{}({f_{e_\xi }-f_{e_\xi,s}}) V_{e_{x}} + d^*\mathcal {L}^{}({f_{\tilde{x}_{m}}-f_{\tilde{x}_{m},s}}) V_{\tilde{x}_{m}} \\
&+ \frac{d}{\epsilon }\left(\frac{\partial V_{e_{z}}}{\partial t_{f}} + \mathcal {L}^{}({f_{e_\eta,f}})V_{e_{z}} + \frac{\partial V_{\tilde{z}_{m}}}{\partial t_{f}} + \mathcal {L}^{}({f_{\tilde{z}_{m},f}})V_{\tilde{z}_{m}}\right) \\
&+ \frac{d}{\epsilon }\mathcal {L}^{}({f_{e_\eta }-f_{e_\eta,f}}) V_{e_{z}} + \frac{d}{\epsilon }\mathcal {L}^{}({f_{\tilde{z}_{m}}-f_{\tilde{z}_{m},f}}) V_{\tilde{z}_{m}} \tag{34}
\end{align*}
\begin{align*}
\acute{V} =& d^{*}\left(\frac{\partial V_{e_{x}}}{\partial t_{s}} + \mathcal {L}^{}({f_{e_\xi,s}}) V_{e_{x}} + \frac{\partial V_{\tilde{x}_{m}}}{\partial t_{s}} + \mathcal {L}^{}({f_{\tilde{x}_{m}}}) V_{\tilde{x}_{m}}\right) \\
&+ d^{*}\mathcal {L}^{}({f_{x}-f_{x,s}}) V_{e_{x}} \\
&+\frac{d}{\epsilon }\left(\frac{\partial V_{e_{z}}}{\partial t_{f}} + \mathcal {L}^{}({f_{e_\eta,f}})V_{e_{z}} + \frac{\partial V_{\tilde{z}_{m}}}{\partial t_{f}} + \mathcal {L}^{}({f_{\tilde{z}_{m},f}})V_{\tilde{z}_{m}}\right) \\
&+ \frac{d}{\epsilon }\mathcal {L}^{}({f_{z}-f_{z,f}}) V_{e_{z}} - d \mathcal {L}^{}({f_{z_{s}}}) V_{\tilde{z}_{m}} \tag{35}
\end{align*}
\begin{align*}
\acute{V} \leq& - d^* \alpha _{1} |\bm {e}_{x}|^{2}_{2} - d^* \alpha _{2} |\tilde{\bm {x}}_{m}|^{2}_{2} \\
&+ d^*\beta |\bm {e}_{x}|_{2} |\tilde{\bm {z}}|_{2} \\
&- \frac{d}{\epsilon }\alpha _{3} |\bm {e}_{z}|^{2}_{2} - \frac{d}{\epsilon }\alpha _{4} |\tilde{\bm {z}}_{m}|^{2}_{2} \\
&+ \frac{d}{\epsilon }\epsilon \bm {\gamma }^{T}\bm {v} |\bm {e}_{z}|_{2} + d \bm {\delta }^{T}\bm {v} |\tilde{\bm {z}}_{m}|_{2} \tag{36}
\end{align*}
\begin{equation*}
\acute{V} \leq - \bm {v}^{T} K \bm {v} \tag{37}
\end{equation*}
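In practice, the condition of Theorem 1 can be checked numerically: assemble $K$ from (29) and test positive definiteness. The sketch below does this in Python; the constants $\alpha_{i}$, $\beta$, $\gamma_{i}$, $\delta_{i}$ are hypothetical placeholders for the bounds in (27)-(28), and $d$, $d^{*}$ are the composite Lyapunov weights in (30).
\begin{verbatim}
# Assemble the K matrix of (29) and test positive definiteness.
# The rows/columns follow the ordering of v in (25).
# All numerical values below are hypothetical placeholders.
import numpy as np

def K_matrix(alpha, beta, gamma, delta, d, d_star, eps):
    a1, a2, a3, a4 = alpha
    g1, g2, g3, g4 = gamma
    de1, de2, de3, de4 = delta
    K = np.array([
        [d_star*a1, 0.0, -0.5*(d_star*beta + d*g1), -0.5*(d_star*beta + d*de1)],
        [0.0, d_star*a2, -0.5*d*g2,                 -0.5*d*de2],
        [0.0, 0.0,        d/eps*a3 - d*g3,          -0.5*(d*de3 + d*g4)],
        [0.0, 0.0,        0.0,                       d/eps*a4 - d*de4]])
    return np.triu(K) + np.triu(K, 1).T   # fill in the symmetric lower part

K = K_matrix(alpha=(1, 1, 1, 1), beta=2.0, gamma=(0.1,)*4, delta=(0.1,)*4,
             d=0.5, d_star=0.5, eps=0.1)
print(np.all(np.linalg.eigvalsh(K) > 0))  # True: (37) is negative definite in v
\end{verbatim}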
Corollary 1:
Let the plant exist such that the reduced slow subsystem does not require adaptive control (i.e.
\begin{align*}
K \triangleq \begin{bmatrix}d^*\alpha _{1} & -\frac{1}{2}(d^*\beta + d\gamma _{1}) & -\frac{1}{2}(d^*\beta +d\delta _{1}) \\
& \frac{d}{\epsilon }\alpha _{3} - d\gamma _{3} & -\frac{1}{2}(d\delta _{3} + d\gamma _{4})\\
{\text{Symmetric}} && \frac{d}{\epsilon }\alpha _{4} - d\delta _{4} \end{bmatrix} \tag{38}
\end{align*}
Proof:
The proof proceeds exactly as Theorem 1 except that
Each of the following proofs assumes that
3) CASE 2
The fast reference model is always on the manifold.
Corollary 2:
Let the plant exist such that the reduced fast subsystem does not require adaptive control (
Proof:
The proof proceeds exactly as Theorem 1 except for
\begin{equation*}
K \triangleq \begin{bmatrix}d^*\alpha _{1} & -\frac{1}{2}d^*\beta \\
-\frac{1}{2}d^*\beta & \frac{d}{\epsilon }\alpha _{3} \end{bmatrix} \tag{39}
\end{equation*}
\begin{align*}
0 &< d^*\alpha _{1} \tag{40a}\\
0 &< \frac{d(1-d)\alpha _{1}\alpha _{3}}{\epsilon } - \frac{1}{4}(1-d)^{2}\beta ^{2} \tag{40b}
\end{align*}
\begin{equation*}
\epsilon < \frac{4d\alpha _{1}\alpha _{3}}{(1-d)\beta ^{2}} \tag{41}
\end{equation*}
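To illustrate how (41) is used (with hypothetical constants that are not taken from the paper), suppose the reduced-subsystem Lyapunov bounds give $\alpha_{1} = 2$, $\alpha_{3} = 1$, and $\beta = 4$, and take $d = 1/2$. Then
\begin{equation*}
\epsilon < \frac{4(1/2)(2)(1)}{(1-1/2)(4)^{2}} = \frac{1}{2},
\end{equation*}
so for this hypothetical set of constants any timescale separation parameter smaller than $1/2$ preserves the guarantee.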
4) CASE 3
The manifold is an asymptotically stable equilibrium of the fast state reference model in the context of the full-order system.
Corollary 3:
Assume that
\begin{equation*}
\mathcal {L}^{}({f_{\tilde{z}_{m}}}) V_{\tilde{z}_{m}} \leq -\alpha _{4} |\tilde{\bm {z}}_{m}|_{2}^{2} \tag{42}
\end{equation*}
Proof:
The proof largely follows Theorem 1 except for
\begin{equation*}
K \triangleq \begin{bmatrix}d^* \alpha _{1} & -\frac{1}{2}d^*\beta & -\frac{1}{2}d^*\beta \\
-\frac{1}{2}d^*\beta & \frac{d}{\epsilon }\alpha _{3} & 0 \\
-\frac{1}{2}d^*\beta & 0 & \frac{d}{\epsilon }\alpha _{4} \end{bmatrix} \tag{43}
\end{equation*}
\begin{align*}
0 &< d^*\alpha _{1} \tag{44a}\\
0 &< \frac{d(1-d)\alpha _{1}\alpha _{3}}{\epsilon } - \frac{1}{4}(1-d)^{2}\beta ^{2} \tag{44b}\\
0 &< \frac{d^{2}(1-d)}{\epsilon ^{2}}\alpha _{1}\alpha _{3}\alpha _{4} - \frac{d(1-d)^{2}}{4\epsilon }(\alpha _{3}+\alpha _{4})\beta ^{2} \tag{44c}
\end{align*}
\begin{align*}
\epsilon &< \frac{4d\alpha _{1}\alpha _{3}}{(1-d)\beta ^{2}} \tag{45a}\\
\epsilon &< \frac{4d\alpha _{1}\alpha _{3}\alpha _{4}}{(1-d)(\alpha _{3}+\alpha _{4})\beta ^{2}} \tag{45b}
\end{align*}
Remark 4:
Assumption 2 places bounds on the acceptable range of the adaptation gains. Note that
Remark 5:
The condition that
Remark 6:
Corollaries 1 and 2 study the case where only one subsystem requires adaptive control. If neither subsystem requires adaptive control then Theorem 1 reduces to [35, Theorem 1].
Remark 7:
Systems which use adaptive control are likely to be nonstandard because adaptive control is specifically designed for systems with model uncertainties. Thus it is common for the open-loop manifold to be uncertain even if the system is standard in the traditional sense. Let the term uncertain nonstandard refer to this condition. Recent multiple-timescale control research has addressed nonstandard systems [17]. Both Sequential Control and Simultaneous Slow and Fast Tracking are nonstandard methods because the manifold is specified. By comparison, Composite Control requires the open-loop manifold to be known a priori, so the manifold must be measured or analytically available. Thus Composite Control is well suited for systems that do not require adaptive control in the fast subsystem.
5) Summary of Theorems
This section describes each of the theorems that are proven in this article and provides criteria that can be used to determine which of the theorems apply to a given system. Theorem 1 is the most general but also has the most restrictive conditions on stability. It requires that the slow reference model be asymptotically stable to the reference model input. In practice, this can often limit the theorem to regulation. Three special cases of Theorem 1 were studied that are less restrictive. Corollary 1 is applicable when adaptive control is only used for the fast subsystem. Corollary 2 is applicable when adaptive control is only used for the slow subsystem. Corollary 3 allows adaptive control in both subsystems, but the manifold must be an asymptotically stable equilibrium of the fast reference model in the context of the full-order system.
Theorem 1 and Corollary 1 both allow the timescale separation parameter to appear on the right side of the fast states' differential equations and require checking the positive definiteness of a matrix; Corollaries 2 and 3 do neither, instead providing the explicit bounds on $\epsilon$ given in (41) and (45). KAMS typically requires differentiation of the manifold. In Theorem 1 and Corollary 1 the derivative of the manifold is used to ensure that condition (28d) is satisfied. The derivative of the manifold is not explicitly required for Corollary 3, but it is often needed to ensure the manifold is an asymptotically stable equilibrium of the fast reference model. It is therefore significant that Corollary 2 does not require differentiating the manifold.
Validation
An example demonstrates and validates this method. Consider the following nonlinear, nonstandard, uncertain dynamical system
\begin{align*}
\acute{x} &= -(x^{2}+1)z \tag{46a}\\
\epsilon \acute{z} &= \theta xz + u \tag{46b}
\end{align*}
\begin{equation*}
\acute{x}_{m} = -a_{x} (x_{m} - r_{x}) \tag{47}
\end{equation*}
A. Control Synthesis
The reduced slow subsystem is
\begin{align*}
\acute{x} &= -(x^{2}+1)z_{s} \tag{48a}
\end{align*}
\begin{align*}
\grave{x} &= 0 \tag{49a}\\
\grave{z} &= \theta xz + u \tag{49b}
\end{align*}
\begin{equation*}
z_{s} = -(x^{2}+1)^{-1}(\acute{x}_{m} - k_{x} e_{x}) \tag{50}
\end{equation*}
\begin{equation*}
\acute{x} = \acute{x}_{m} - k_{x} e_{x} \tag{51}
\end{equation*}
\begin{equation*}
\acute{e}_{x} = - k_{x} e_{x} \tag{52}
\end{equation*}
\begin{equation*}
u = \grave{z}_{m} - \hat{\theta } xz - k_{z} e_{z} \tag{53}
\end{equation*}
\begin{equation*}
\grave{\hat{\theta }} = \gamma \; \text{Proj}(\hat{\theta },xze_{z}) \tag{54}
\end{equation*}
\begin{equation*}
\grave{\tilde{z}}_{m} = -a_{z}\tilde{z}_{m} \tag{55}
\end{equation*}
\begin{equation*}
\grave{z}_{m} = -a_{z}\tilde{z}_{m} + \grave{z}_{s} \tag{56}
\end{equation*}
\begin{align*}
\grave{z}_{s} =& \frac{2xz\epsilon }{x^{2}+1}(a_{x} \tilde{x}_{m} + k_{x} e_{x}) \\
& + \frac{\epsilon }{x^{2}+1}(-a_{x} (a_{x} \tilde{x}_{m} + \acute{r}_{x}) \\
& + k_{x}(-(x^{2}+1)z + a_{x}\tilde{x}_{m})) \tag{57}
\end{align*}
B. Confirmation of Full-Order Stability
Consider the candidate Lyapunov functions
\begin{align*}
V_{e_{x}} &= \frac{1}{2} e_{x}^{2} \tag{58a}\\
V_{e_{z}} &= \frac{1}{2} e_{z}^{2} + \frac{1}{2\gamma }\tilde{\theta }^{2} \tag{58b}\\
V_{\tilde{z}_{m}} &= \frac{1}{2} \tilde{z}_{m}^{2} \tag{58c}
\end{align*}
\begin{align*}
\mathcal {L}^{}({f_{e_\xi, s}})V_{e_{x}} &= -k_{x} e_{x}^{2} & \leq -\alpha _{1} |e_{x}|^{2}_{2} \tag{59a}\\
\mathcal {L}^{}({f_{e_\eta,f}})V_{e_{z}} &\leq -k_{z} e_{z}^{2} & \leq -\alpha _{3} |e_{z}|^{2}_{2} \tag{59b}\\
\mathcal {L}^{}({f_{\tilde{z}_{m}}}) V_{\tilde{z}_{m}} &= -a_{z} \tilde{z}_{m}^{2} & \leq -\alpha _{4} |\tilde{z}_{m}|_{2}^{2} \tag{59c}\\
\mathcal {L}^{}({f_{x}-f_{x,s}}) V_{e_{x}} &= -(x^{2}+1) e_{x} \tilde{z} & \leq \beta |e_{x}|_{2} |\tilde{z}|_{2} \tag{59d}
\end{align*}
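As a rough numerical check of the resulting conditions (the domain bound used here is an assumption made for illustration, not a value stated in the text): with the gains in (61b), (59) gives $\alpha_{1} = k_{x} = 1$, $\alpha_{3} = k_{z} = 1$, and $\alpha_{4} = a_{z} = 1$, and (59d) holds with $\beta = 2$ if the trajectories remain in $|x| \leq 1$. Taking $d = 1/2$, the bounds of Corollary 3, (45), become
\begin{align*}
\epsilon &< \frac{4(1/2)(1)(1)}{(1-1/2)(2)^{2}} = 1, & \epsilon &< \frac{4(1/2)(1)(1)(1)}{(1-1/2)(1+1)(2)^{2}} = \frac{1}{2},
\end{align*}
both of which comfortably admit the value $\epsilon = 0.1$ used in the simulation below.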
C. Numerical Results
A numerical simulation validates the control. The system parameters are
\begin{align*}
\theta &= 0.5 \tag{60a}\\
\epsilon &= 0.1 \tag{60b}
\end{align*}
\begin{align*}
r_{x} = \sin (t_{s}) \tag{61a}
\\
a_{x} = k_{x} = a_{z} = k_{z} = \gamma = 1 \tag{61b}
\end{align*}
\begin{align*}
x &= z = 0.5 \tag{62a}\\
x_{m} &= z_{m} = 0 \tag{62b}\\
\hat{\theta } &= 0.44 \tag{62c}
\end{align*}
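The closed-loop response can be reproduced from (46), (47), and (50)-(57) with the values in (60)-(62). The following minimal Python sketch does so; the solver settings are arbitrary, and the projection operator in (54) is replaced by the identity for brevity, so this is an approximation of, not a substitute for, the published MATLAB implementation [38].
\begin{verbatim}
# Sketch of the closed-loop example (46)-(57) with the parameters (60)-(62).
# Integrated in slow time t_s; fast-time (grave) rates are divided by eps.
# The Proj operator in (54) is omitted (identity) for brevity.
import numpy as np
from scipy.integrate import solve_ivp

theta, eps = 0.5, 0.1                      # (60)
a_x = k_x = a_z = k_z = gam = 1.0          # (61b)

def closed_loop(t, s):
    x, x_m, z, z_m, th_hat = s
    r_x, dr_x = np.sin(t), np.cos(t)       # (61a) and its derivative
    e_x, xt_m = x - x_m, x_m - r_x
    dx_m = -a_x * (x_m - r_x)                        # (47)
    z_s = (a_x*xt_m + k_x*e_x) / (x**2 + 1)          # (50)
    gz_s = (2*x*z*eps/(x**2 + 1))*(a_x*xt_m + k_x*e_x) \
         + (eps/(x**2 + 1))*(-a_x*(a_x*xt_m + dr_x)
                             + k_x*(-(x**2 + 1)*z + a_x*xt_m))   # (57)
    gz_m = -a_z*(z_m - z_s) + gz_s                   # (56)
    e_z = z - z_m
    u = gz_m - th_hat*x*z - k_z*e_z                  # (53)
    dx = -(x**2 + 1)*z                               # (46a)
    dz = (theta*x*z + u) / eps                       # (46b)
    dz_m = gz_m / eps                                # fast-time rate -> t_s
    dth_hat = gam*x*z*e_z / eps                      # (54) without Proj
    return [dx, dx_m, dz, dz_m, dth_hat]

s0 = [0.5, 0.0, 0.5, 0.0, 0.44]                      # (62)
sol = solve_ivp(closed_loop, (0.0, 20.0), s0, method="LSODA", max_step=1e-3)
print("final slow tracking error:", abs(sol.y[0, -1] - sol.y[1, -1]))
\end{verbatim}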
D. Alternative Approach
Corollary 1 is also applicable because adaptive control is only required for the fast subsystem. To demonstrate this, the problem is revised to a regulation problem and the fast reference model is redefined so that it is no longer asymptotically stable about the manifold
\begin{equation*}
\grave{z}_{m} = -a_{z} \tilde{z}_{m} \tag{63}
\end{equation*}
\begin{align*}
-\mathcal {L}^{}({f_{z_{s}}}) V_{\tilde{z}_{m}} &= \left(-\frac{2xz}{x^{2}+1} k_{x} e_{x} + k_{x} z\right)\tilde{z}_{m} \tag{64a}\\
& \leq \left(2 k_{x} |e_{x}|_{2} + k_{x} |z|_{2}\right) |\tilde{\bm {z}}_{m}|_{2} \tag{64b}\\
& \leq \begin{bmatrix}2 k_{x}+ k_{x}^{2} & 0 & k_{x} & k_{x}\end{bmatrix} \bm {v} |\tilde{z}_{m}|_{2} \tag{64c}\\
& \leq \bm {\delta }^{T}\bm {v} |\tilde{z}_{m}|_{2} \tag{64d}
\end{align*}
\begin{equation*}
K \triangleq \begin{bmatrix}d^* & -\frac{1}{2} d^* & -\frac{1}{2}(d^*+3\,d) \\
-\frac{1}{2} d^* & \frac{d}{\epsilon } & -\frac{1}{2}d\\
-\frac{1}{2}(d^*+3\,d) & -\frac{1}{2}d & \frac{d}{\epsilon } - d \end{bmatrix} \tag{65}
\end{equation*}
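As a quick numerical sanity check (the weights $d = d^{*} = 1/2$ are a choice made here for illustration and are not stated in the text), the $K$ in (65) can be verified to be positive definite for the simulated $\epsilon = 0.1$:
\begin{verbatim}
# Check positive definiteness of the K matrix in (65) for sample weights.
# d = d* = 0.5 is an illustrative choice, not a value from the text.
import numpy as np

d = d_star = 0.5
eps = 0.1
K = np.array([[d_star,              -0.5*d_star, -0.5*(d_star + 3*d)],
              [-0.5*d_star,          d/eps,      -0.5*d],
              [-0.5*(d_star + 3*d), -0.5*d,       d/eps - d]])
print(np.linalg.eigvalsh(K))   # all eigenvalues are positive for these values
\end{verbatim}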
From Fig. 7 it can also be seen that when
Remark 8:
As mentioned previously, a common approach to these problems is to apply sequential loop closure to the subsystems. This example uses Sequential Control; to clarify, Sequential Control is an extension of sequential loop closure. Applying sequential loop closure to this example would yield numerically indistinguishable results. Sequential Control was first published by Narang-Siddarth and Valasek in [17], where they showed that singular perturbation techniques could be used to rigorously establish stability and obtain specific bounds on the timescale separation parameter. By extension, this same advantage is available to KAMS. The insights from Fig. 7 are a unique contribution of KAMS that is not available to traditional adaptive sequential loop closure implementations. Furthermore, unlike sequential loop closure, KAMS allows the use of Composite Control and Simultaneous Slow and Fast Tracking.
Remark 9:
The MATLAB code used for both of the examples has been made open source and is available on Code Ocean [38].
Conclusion
This article extended the [K]control of Adaptive Multiple-timescale Systems (KAMS) methodology to singularly perturbed systems with adaptive control in both the fast and slow subsystems; a wide class of adaptive control and multiple-timescale control methods fit within this framework. Sufficient conditions for asymptotic stability were proven and coupling effects between the manifold and the fast reference model were identified. The stability of the full-order system was connected to the stability of the reduced-order systems through Theorem 1 and its corollaries. A nonlinear nonstandard system was used to demonstrate KAMS.
This article identified complex interactions between the fast reference model and the manifold which occur when adaptive control is used to stabilize the fast subsystem. These interactions make traditional multiple-timescale control proofs insufficient when adaptive control is used in the fast subsystem. The theorems proved in this article account for these complex interactions by carefully formatting the augmented error dynamics and by judiciously selecting sufficient conditions. The primary limitation of KAMS is the requirement to verify the conditions given in Theorem 1 and its corollaries. These conditions restrict the set of systems to which the theorems in this article can be applied. Lyapunov functions may not be known for some systems and adaptive control methods. Suitable Lyapunov functions are known for several popular adaptive control methods (e.g. Model Reference Adaptive Control and Adaptive Nonlinear Dynamic Inversion). Another limitation is that many applications will require differentiation of the manifold. This can be a complicated calculation, but it can sometimes be avoided by judicious selection of control objectives, careful system modeling, and the use of the correct corollary. See the alternative approach example above for a demonstration of this. Based upon the results presented in this article, KAMS is judged to be a feasible control approach for uncertain nonstandard singularly perturbed systems, regardless of which subsystem (fast, slow, or both) contains the uncertainty. Further, KAMS is more capable than traditional sequential loop closure because it can be used to determine the minimum allowable timescale separation and it allows for the use of Composite Control and Simultaneous Slow and Fast Tracking.
There are several potential avenues for future research. First, future research could consider adaptation laws that do not adapt on the same timescale as the subsystem to which they are applied. Second, future research could determine alternate formats for the upper bounds in Theorem 1. Finally, experimentally validating the performance of KAMS on physical systems would be insightful.
ACKNOWLEDGMENT
The technical monitor is Brian Holm-Hansen. This support is gratefully acknowledged by the authors. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Navy.