
Towards Optimal Control Problems of Hybrid Impulsive and Switching Systems with Free Terminal States

Open Access


Abstract:

The global optimal control problem with free system terminal states is proposed for a special class of hybrid dynamical systems, hybrid impulsive and switching systems (HISS). The necessary condition for this problem, the HISS minimum principle, is given. In the proof of this theorem, the general variational method and the matrix cost functional structure are employed. Based on this result, a special linear HISS is illustrated and the optimal control algorithm is derived. Moreover, the cases of pure impulsive systems and pure switched systems are considered in this paper.
Published in: Chinese Journal of Electronics ( Volume: 19, Issue: 3, July 2010)
Page(s): 557 - 562
Date of Publication: July 2010



SECTION I.

Introduction

A hybrid dynamical system (HDS) is capable of exhibiting several kinds of dynamic behaviors in different parts of the system simultaneously. In practical engineering there are many systems that have switches and abrupt state changes at the switching instants, and HDS have proved to be a powerful tool for modeling such engineering systems. Based on the theory of impulsive and switching systems, hybrid impulsive and switching control strategies have been developed. The advantage of these control strategies is that the stabilization of some complex systems can be easily realized by introducing small control impulses in different modes of the systems. Interest in studying impulsive and switching systems has therefore grown recently because of their theoretical and practical significance.

Optimal control problems of HDS have been studied broadly in recent years[1]–​[11]. Branicky et al.[1] proposed a unified framework of hybrid optimal control and synthesized a hybrid controller for hybrid devices. Giua et al.[2] studied the optimal control problem of minimizing a quadratic performance index over an infinite time horizon for a class of switched piecewise linear autonomous systems. Based on linear quadratic adaptive control laws of continuous dynamical systems, Tan et al.[3] concentrated on a sampled data system with unknown Markov jump parameters, and then gave a parameter estimator and a control design method. Bengea and DeCarlo[4] considered an optimal control problem for a class of switching systems under the assumption that the number of switches and model sequences are both indeterminate. Baotic et al.[5] studied the constrained finite and infinite time optimal control problems towards a class of discrete-time linear hybrid systems, and proposed algorithms that compute the optimal solutions. Borrelli et al.[6] worked on the solution to the optimal control problems for constrained discrete-time linear HDS based on the linear quadratic performance criteria, and constructed the state-feedback optimal control law by combining multi-parametric programming and dynamic programming. Gokbayrak and Selvi[7] derived some sample path characteristics for a two-stage serial HDS, and transformed an original non-smooth optimal control problem into a convex optimization problem. Spinelli et al.[8] dealt with the optimal control problem for the continuous-time autonomous linear switched system on a finite control horizon, and developed the sufficient conditions of their optimality by Hamilton-Jacobi-Bellman theory. Shaikh and Caines[9] studied a class of hybrid optimal control problems for the systems with controlled and autonomous location transitions, and extended the maximum principle from pure continuous systems to HDS. 
In the authors' previous papers[10], [11], for a class of HDS with a pre-specified switching sequence, the local optimal control problems with both free and restricted terminal states were discussed. In this paper we extend these works to a special class of HDS, the HISS, and study the global optimal control problem.

This paper is organized as follows. In Section II, we first give a special controlled nonlinear system, denoted the HISS, and then state its optimal control problem. In Section III, we give the theoretical method, the minimum principle of the HISS, to solve this optimal control problem. In the proof, the general variational method and the matrix cost functional are utilized. Moreover, we illustrate a special example of the HISS and present its minimum principle. In Section IV, we provide the optimal control algorithm for the HISS mentioned in Section III, and give the corresponding results for a pure impulsive system and a pure switched system. Section V concludes this paper.

SECTION II.

The Optimal Control Problem of HISS with Free Terminal States

The controlled nonlinear system model is given by \begin{equation*} \dot{x}(t)=Ax(t)+f(t, x)+v(t, x, u) \tag{1} \end{equation*} where x(t)\in \mathbb{R}^{n} denotes the state of the system at t\in \mathbb{R}^{+}. f(t, x): \mathbb{R}^{+}\times \mathbb{R}^{n}\rightarrow \mathbb{R}^{n} is a bounded vector field with bounded partial derivatives w.r.t.\ x, and v(t, x, u): \mathbb{R}^{+}\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow\mathbb{R}^{n} is the control input. We suppose that f(t, x) and v(t, x, u) guarantee the existence and uniqueness of the solutions of Eq.(1) for the given initial values. We construct the control input as follows: \begin{equation*} v(t, x, u)=u_{1}(t, x)+u_{2}(t, x)+u(t) \tag{2} \end{equation*}

In the above equation, u_{1}(t, x) and u_{2}(t, x) denote the switching input function and the impulsive input function, respectively. We let \begin{equation*} u_{1}(t, x)=\sum\limits_{k=1}^{\infty}B_{1k}x(t)l_{k}(t),\quad u_{2}(t, x)=\sum\limits_{k=1}^{\infty}B_{2k}x(t)\delta(t-\tau_{k}) \tag{3} \end{equation*} where B_{1k} and B_{2k} are n\times n constant matrices, l_{k}(\cdot) is defined by \begin{equation*} l_{k}(t)=\begin{cases} 1, &\tau_{k-1}\leq t < \tau_{k},\qquad k=1,2, \cdots\\ 0, &\text{otherwise}\end{cases} \tag{4} \end{equation*} and \delta(\cdot) is the Dirac delta function. Here we suppose that \{\tau_{k}\vert k\in Z^{+},\tau_{0} < \tau_{1} < \cdots\} is a fixed, unbounded, closed ordered set of discrete points, which satisfies \lim\nolimits_{k\rightarrow\infty}\tau_{k}=\infty, and \tau_{0} is the initial time. Without loss of generality, we suppose that each interval [\tau_{k-1},\tau_{k}) has non-empty interior. u(t)\in U\subset \mathbb{R}^{m} is the external control input of the system, where \begin{equation*} U{\buildrel \triangle\over=}\{(u_{1}, u_{2}, \cdots, u_{m})^{T}\in \mathbb{R}^{m}\mid \vert u_{i}\vert \leq 1, i=1,2, \cdots, m\} \tag{5} \end{equation*} is called the control range. We give the definition of an admissible control as follows.

Definition 1

Let Eq.(1) evolve over [\tau_{0},t^{f}]. u(\cdot) is called an admissible control if u(\cdot):[\tau_{0},t^{f}]\rightarrow \mathbb{R}^{m} is bounded and quadratically integrable, and u(t)\in U holds almost everywhere on [\tau_{0},t^{f}]. All such admissible controls make up a set, denoted by U_{ad}, i.e., \begin{equation*} U_{ad}{\buildrel \triangle\over=}\{u(\cdot)\vert u(\cdot)\in L^{2}([\tau_{0},t^{f}];U)\} \tag{6} \end{equation*}

For convenience, we define \tau_{l+1}=t^{f}. Then the evolving interval can be described as \begin{equation*} [\tau_{0},t^{f}]=\cup_{k= 1}^{l}[\tau_{k-1},\tau_{k})\cup[\tau_{l},t^{f}] \end{equation*}
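As a brief computational aside (not from the paper), locating which interval of this decomposition contains a given t, i.e., the index k with l_{k}(t)=1 in Eq.(4), amounts to a sorted search over the switching instants. A minimal Python sketch, where the function name and the sample instants are illustrative:

```python
import bisect

def active_mode(t, taus):
    """Return k with taus[k-1] <= t < taus[k], i.e. the k where l_k(t) = 1.

    taus = [tau_0, tau_1, ..., tau_l] is the ordered list of switching
    instants; modes are numbered k = 1, 2, ... as in Eq.(4).
    """
    if t < taus[0]:
        raise ValueError("t precedes the initial time tau_0")
    # bisect_right counts the switching instants <= t
    return bisect.bisect_right(taus, t)

taus = [0.0, 1.0, 2.5]
print(active_mode(0.3, taus))  # t in [tau_0, tau_1) -> mode 1
print(active_mode(1.7, taus))  # t in [tau_1, tau_2) -> mode 2
```

For t beyond \tau_{l} the same call returns l+1, matching the final interval [\tau_{l}, t^{f}].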

By Eqs.(3) and (4), the following holds \begin{equation*} u_{1}(t, x)=B_{1k}x(t),\quad t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{7} \end{equation*} which implies that u_{1}(t, x) changes at \tau_{k}; here x(\tau_{k})=x(\tau_{k}^{+})= \lim\nolimits_{h\rightarrow 0^{+}}x(\tau_{k}+h). Moreover, by the property of the Dirac delta function \delta(\cdot), it holds that u_{2}(t, x)=0 whenever t\neq\tau_{k}. Then by Eqs.(1)–(3), we obtain \begin{align*} x(\tau_{k})=&x(\tau_{k}-h)+\int\nolimits_{\tau_{k}-h}^{\tau_{k}}[Ax(s)+f(s, x(s))\\ &+B_{1k}x(s)+B_{2k}x(s)\delta(s-\tau_{k})+u(s)]ds \tag{8} \end{align*} where h > 0 is small enough. Letting h\rightarrow 0^{+}, we get \begin{equation*} \Delta x(\tau_{k})=x(\tau_{k})-x(\tau_{k}^{-})=B_{2k}x(\tau_{k}) \tag{9} \end{equation*}

Then the system (1) can be rewritten as \begin{equation*} \begin{cases} \dot{x}(t)=Ax(t)+f(t, x)+B_{1k}x(t)+u(t),\\ \qquad \qquad \qquad t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\ \Delta x(t)=B_{2k}x(t),\qquad t=\tau_{k}\\ x(\tau_{0})=x_{0},\qquad \qquad\ k=1,2, \cdots, l \end{cases} \tag{10} \end{equation*}

The system (10) is called the HISS[12]. Furthermore, we define \begin{equation*} q(t)=\Delta x(t),\quad t=\tau_{k},\quad k=1,2, \cdots, l \tag{11} \end{equation*}

Then Eq.(10) is further expressed as \begin{equation*} \begin{cases} \dot{x}(t)=A_{k}x(t)+f(t, x)+u(t), &t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\ q(\tau_{k})=B_{k}x(\tau_{k}), &k= 1, 2, \cdots, l \end{cases}\tag{12} \end{equation*} where A_{k}=A+B_{1k}, B_{k}=B_{2k}, and Eq.(12) evolves from the initial states x(\tau_{0})=x_{0} and q(\tau_{0})=0.
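To make the flow-and-jump structure of Eqs.(9)–(12) concrete, the system can be simulated by integrating the continuous dynamics with forward Euler on each interval and applying a state reset at each switching instant. This is an illustrative sketch, not the paper's algorithm; all matrices, the step size, and the function names are assumptions, and for simplicity the jump \Delta x=B_{2k}x is applied to the pre-jump state:

```python
import numpy as np

def simulate_hiss(x0, taus, tf, A_list, B2_list, f, u, dt=1e-3):
    """Forward-Euler simulation of the HISS Eq.(12).

    On [tau_{k-1}, tau_k):  x' = A_k x + f(t, x) + u(t)
    At t = tau_k:           x -> x + B_{2k} x   (jump as in Eq.(9),
                            applied here to the pre-jump state)
    """
    x = np.asarray(x0, dtype=float)
    bounds = list(taus) + [tf]            # tau_0, ..., tau_l, tf
    for k in range(1, len(bounds)):       # intervals [tau_{k-1}, tau_k)
        t0, t1 = bounds[k - 1], bounds[k]
        n = max(1, int(round((t1 - t0) / dt)))
        h = (t1 - t0) / n
        for i in range(n):                # continuous evolution, mode k
            t = t0 + i * h
            x = x + h * (A_list[k - 1] @ x + f(t, x) + u(t))
        if k < len(bounds) - 1:           # impulsive jump at tau_k
            x = x + B2_list[k - 1] @ x
    return x

# Illustrative 2-state example: rotation, then a halving impulse, then decay
A_list = [np.array([[0., 1.], [-1., 0.]]), np.array([[-1., 0.], [0., -1.]])]
B2_list = [np.array([[-0.5, 0.], [0., -0.5]])]
xf = simulate_hiss([1., 0.], [0., 1.], 2., A_list, B2_list,
                   f=lambda t, x: np.zeros(2), u=lambda t: np.zeros(2))
```

The impulse here shrinks the state by half at \tau_{1}, so the terminal norm is well below the initial one.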

We consider the HISS Eq.(12) evolving under an admissible control u_{k}(\cdot)\in U_{ad}, (k=1,2, \cdots, l+1). Corresponding to u_{k}(\cdot), the trajectory of the continuous subsystems and the discrete event states are denoted by x_{k}(\cdot)=x(\cdot, u_{k}(\cdot)) and q_{k}(\cdot)= q(\cdot, u_{k}(\cdot)), respectively. We define the control input by \boldsymbol{u}(\cdot)= (u_{1}(\cdot), u_{2}(\cdot), \cdots, u_{l+1}(\cdot))^{T}\in U_{ad}^{l+1}, and define the cost functional as \begin{equation*} \boldsymbol{J}(\boldsymbol{u}(\cdot))= \text{diag}\{J_{1}(u_{1} (\cdot)), J_{2}(u_{2} (\cdot)), \cdots, J_{l+1}(u_{l+1}(\cdot))\} \tag{13} \end{equation*} where \begin{align*} J_{k}(u_{k}(\cdot))=&\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L(s, x(s, u_{k}(s)), u_{k}(s))ds\\ &+\sum\limits_{i=k}^{l} L_{a}(q(\tau_{i}, u_{k}(\cdot))),\qquad k=1,2, \cdots, l+1 \tag{14} \end{align*} and \text{diag}\{\cdot\} denotes a diagonal matrix, L:\mathbb{R}^{+}\times \mathbb{R}^{n}\times \mathbb{R}^{m}\rightarrow \mathbb{R}^{+} describes the operating cost of the HISS' continuous subsystems, and L_{a}:\mathbb{R}^{n}\rightarrow \mathbb{R}^{+} denotes the switching cost of the HISS. Recall that \tau_{l+1}=t^{f}. On [\tau_{k-1},\tau_{k})\ (k=1,2, \cdots, l), or on [\tau_{l},t^{f}], L is integrable w.r.t. t, continuous w.r.t. u, and continuously differentiable w.r.t. x. L_{a} is continuously differentiable in its variable.
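Numerically, a diagonal entry J_{k} of Eqs.(13)–(14) is just the tail running costs plus the tail switching costs. A trapezoidal-quadrature sketch, assuming the trajectory x(\cdot), control u(\cdot) and discrete event state q(\cdot) are available as callables (all names illustrative):

```python
import numpy as np

def cost_entry(k, taus, tf, L, La, x, u, q, n_quad=201):
    """J_k of Eq.(14): running cost over the intervals i = k, ..., l+1
    plus the switching costs L_a(q(tau_i)) for i = k, ..., l."""
    bounds = list(taus) + [tf]        # tau_0, ..., tau_l, tau_{l+1} = tf
    l = len(taus) - 1                 # number of switching instants
    J = 0.0
    for i in range(k, l + 2):         # tail intervals [tau_{i-1}, tau_i]
        s = np.linspace(bounds[i - 1], bounds[i], n_quad)
        vals = np.array([L(si, x(si), u(si)) for si in s])
        # trapezoidal rule on the sample grid
        J += float(np.sum((vals[:-1] + vals[1:]) * np.diff(s)) / 2.0)
    for i in range(k, l + 1):         # tail switching costs at tau_i
        J += La(q(taus[i]))
    return J
```

With a constant running cost of 1 on [0, 2] and a single switching cost of 3, for instance, J_{1} evaluates to 5.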

Then we describe the global optimal control problem of Eq.(12) with free terminal states as follows. Suppose that \bar{u}(\cdot)\in U_{ad} is the optimal control, and define \bar{\boldsymbol{u}}(\cdot)=(\bar{u}(\cdot),\bar{u}(\cdot), \cdots,\bar{u}(\cdot))^{T}, which has l+1 components. The cost functional Eq.(13) reaches its minimum under \bar{\boldsymbol{u}}(\cdot); that is, for any \boldsymbol{u}(\cdot)\in U_{ad}^{l+1}, \boldsymbol{J}(\boldsymbol{u}(\cdot))-\boldsymbol{J}(\bar{\boldsymbol{u}}(\cdot)) is positive definite. Moreover, we suppose that the terminal state (x(t^{f}), q(\tau_{l})) of Eq.(12) is free, namely, \begin{equation*} (x(t^{f}), q(\tau_{l}))\in \mathbb{R}^{2n} \tag{15} \end{equation*}

In the following section, we will derive the necessary condition for the above global optimal control problem, the minimum principle of the HISS over [\tau_{0},t^{f}].

SECTION III.

Main Results

Lemma 1

Let f:[a, b]\rightarrow \mathbb{R}^{n} be Lebesgue integrable and \lambda\in(0,1). Then for any \varepsilon > 0, there exists a measurable set E_{\lambda}(\varepsilon)\subset[a, b] such that \begin{align*} &\text{meas}(E_{\lambda}(\varepsilon))=\lambda(b-a)\tag{16}\\ &\lambda \int\nolimits_{a}^{b}f(t)dt=\int\nolimits_{E_{\lambda}(\varepsilon)}f(t)dt+\eta\tag{17} \\ &\Vert\eta\Vert < \varepsilon \tag{18} \end{align*}

Lemma 1 can be found in Ref.[13], so we omit its proof here.

Definition 2

Given a Lebesgue integrable function f, a point t in the domain of f is a Lebesgue point[13] if \begin{equation*} \lim\limits_{r\rightarrow 0^{+}}\frac{1}{\vert B(t, r)\vert}\int\nolimits_{B(t, r)}\vert f(s)-f(t)\vert ds=0 \end{equation*} where B(t, r) is the ball centered at t with radius r, and \vert B(t, r)\vert is its Lebesgue measure.

Theorem 1

Consider the HISS Eq.(12) and let \bar{u}(\cdot) be a solution of the optimal control problem mentioned above. We suppose that x(\cdot)=x(\cdot,\bar{u}(\cdot)) is the optimal trajectory of the HISS corresponding to \bar{u}(\cdot). Then over [\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or over [\tau_{l},t^{f}] for k=l+1, there exists y_{k}(\cdot), which is piecewise continuous, such that the adjoint equation \begin{equation*} - \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right] y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{19} \end{equation*} holds with y_{k}(t^{f})=0, k=1,2, \cdots, l+1. Moreover, the minimum condition is satisfied, namely, \begin{gather*} H_{k}(t, x(t), y_{k}(t),\bar{u}(t))=\min\limits_{u\in U}H_{k}(t, x(t), y_{k}(t), u),\\ a.e.\quad t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{20} \end{gather*} where \begin{align*} &H_{k}(t, x(t), y_{k}(t), u(t))\\ &\qquad =L(t, x(t), u(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u(t)\rangle \tag{21} \end{align*}

Proof

First, we construct a new set of controls \boldsymbol{u}^{\varepsilon}(\cdot)= (u_{1}^{\varepsilon}(\cdot), u_{2}^{\varepsilon}(\cdot), \cdots, u_{l+1}^{\varepsilon}(\cdot))^{T}, where for each k\in\{1,2, \cdots, l+1\} we let \begin{equation*} u_{k}^{\varepsilon}(t)=\begin{cases} u_{k}(t), & t\in E_{k\varepsilon},\\ \bar{u}(t), & t\in[\tau_{k-1},\tau_{k})\backslash E_{k\varepsilon} \end{cases} \tag{22} \end{equation*}

In the above definition, E_{k\varepsilon}\subset[\tau_{k-1},\tau_{k}) is a measurable set depending on \varepsilon, which satisfies \begin{equation*} \text{meas}(E_{k\varepsilon})=\varepsilon(\tau_{k}-\tau_{k-1}) \tag{23} \end{equation*}

Since u_{k}(\cdot) takes values in U, we know that u_{k}^{\varepsilon}(\cdot)\in U_{ad} whenever \varepsilon is small enough.

Let x_{k}^{\varepsilon}(\cdot) be the continuous subsystem trajectory corresponding to u_{k}^{\varepsilon}(\cdot). For t\in [\tau_{k-1},\tau_{k}), k=1,2, \cdots, l+1, we know that \begin{align*} &x_{k}^{\varepsilon}(t)=x(\tau_{k-1})+\int\nolimits_{\tau_{k-1}}^{t}[A_{k}x_{k}^{\varepsilon}(s)+f(s, x_{k}^{\varepsilon}(s))+u_{k}^{\varepsilon}(s)]ds \tag{24}\\ &x(t)=x(\tau_{k-1})+\int\nolimits_{\tau_{k-1}}^{t}[A_{k}x(s)+f(s, x(s))+\overline{u}(s)]ds \tag{25} \end{align*}

Define \begin{equation*} z_{k}^{\varepsilon}(t)= \frac{1}{\varepsilon}[x_{k}^{\varepsilon}(t)-x(t)] \tag{26} \end{equation*} and by Lemma 1 and the property of E_{k\varepsilon}, we get \begin{align*} z_{k}^{\varepsilon}(t)=&\int\nolimits_{\tau_{k-1}}^{t}A_{k}z_{k}^{\varepsilon}(s)ds+ \int\nolimits_{\tau_{k-1}}^{t}\frac{\partial^{T}}{\partial x}f(s, x(s))z_{k}^{\varepsilon}(s)ds\\ &+\int\nolimits_{\tau_{k-1}}^{t}[u_{k}(s)-\bar{u}(s)]ds+o(\varepsilon) \tag{27} \end{align*} where o(\varepsilon) denotes a higher-order infinitesimal of \varepsilon. Furthermore, we define \begin{equation*} \delta x_{k}(\cdot)=\lim\limits_{\varepsilon\rightarrow 0}z_{k}^{\varepsilon}(\cdot) \tag{28} \end{equation*}

If \varepsilon\rightarrow 0, the following holds \begin{align*} \delta x_{k}(t)=&\int\nolimits_{\tau_{k-1}}^{t}\left[A_{k}+\frac{\partial^{T}}{\partial x}f(s, x(s))\right]\delta x_{k}(s)ds\\ &+ \int\nolimits_{\tau_{k-1}}^{t}[u_{k}(s)-\bar{u}(s)]ds \tag{29} \end{align*}

Then over [\tau_{k-1},\tau_{k}), we get that \begin{equation*} \frac{d}{dt}\delta x_{k}(t)=\left[A_{k}+\frac{\partial^{T}}{\partial x}f(t, x(t))\right]\delta x_{k}(t)+[u_{k}(t)-\bar{u}(t)] \tag{30} \end{equation*}

We consider the discrete event states at \tau_{k} corresponding to u_{k}^{\varepsilon}(\cdot) and \bar{u}(\cdot), respectively, \begin{align*} &q_{k}^{\varepsilon}(\tau_{k})=B_{k}x_{k}^{\varepsilon}(\tau_{k}) \tag{31}\\ &q(\tau_{k})=B_{k}x(\tau_{k}) \tag{32} \end{align*}

We let \begin{equation*} \delta q_{k}(\tau_{k})=\lim\limits_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}[q_{k}^{\varepsilon}(\tau_{k})-q(\tau_{k})]=B_{k}\delta x_{k}(\tau_{k}) \tag{33} \end{equation*}

Since the perturbation of \bar{u}(t) takes place on [\tau_{k-1},\tau_{k}), we have \delta q_{k}(\tau_{k-1})=0. Therefore, on the following [\tau_{h-1},\tau_{h}) for h=k+1, k+2, \cdots, l, or [\tau_{l},t^{f}] for h=l+1, the variational equation of the continuous subsystem is \begin{equation*} \frac{d}{dt}\delta x_{k}(t)=\left[A_{h}+\frac{\partial^{T}}{\partial x}f(t, x(t))\right]\delta x_{k}(t) \tag{34} \end{equation*}

By Eq.(33), we get that \begin{equation*} \delta q_{k}(\tau_{h})=B_{h}\delta x_{k}(\tau_{h}) \tag{35} \end{equation*} where h=k+1, k+2, \cdots, l.

Then we obtain that \begin{align*} \boldsymbol{J}(\boldsymbol{u}^{\varepsilon}(\cdot))-\boldsymbol{J}(\bar{\boldsymbol{u}}(\cdot))= &\text{diag} \{J(u_{1}^{\varepsilon}(\cdot))-J(\bar{u}(\cdot)),\ J(u_{2}^{\varepsilon}(\cdot))-J(\bar{u}(\cdot)),\\ &\cdots, J(u_{l+1}^{\varepsilon}(\cdot))-J(\bar{u}(\cdot))\} \tag{36} \end{align*} where \begin{align*} J(u_{k}^{\varepsilon}(\cdot))&-J(\bar{u}(\cdot))\\ =&\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{L_{x}^{T}(s, x(s), u_{k}^{\varepsilon}(s))(x_{k}^{\varepsilon}(s)\\ &-x(s))+\varepsilon[L(s, x(s), u_{k}(s))\\ &-L(s, x(s),\bar{u}(s))]+o(\varepsilon)\}ds\\ &+\sum\limits_{i=k+1}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L_{x}^{T}(s, x(s),\bar{u}(s))(x_{k}^{\varepsilon}(s)-x(s))ds\\ &+\sum\limits_{i=k}^{l}\left\{\frac{d^{T}}{dq}L_{a}(q(\tau_{i}, u_{k}(\cdot)))(q_{k}^{\varepsilon}(\tau_{i})-q(\tau_{i}))+o(\varepsilon)\right\}\tag{37} \end{align*} where k=1,2, \cdots, l+1. Since \bar{u}(\cdot)\in U_{ad} is the optimal control, Eq.(36) is positive definite, so each diagonal entry Eq.(37) is nonnegative.

Then for t\in[\tau_{k-1},\tau_{k}), we define the adjoint equation of Eq.(30) as \begin{equation*} - \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\overline{u}(t)) \tag{38} \end{equation*}

By Eqs.(30) and (38), we get that \begin{align*} &\langle y_{k}(\tau_{k}), \delta x_{k}(\tau_{k})\rangle-\langle y_{k}(\tau_{k-1}), \delta x_{k}(\tau_{k-1})\rangle\\ &\qquad =\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{-L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)\\ &\qquad+\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle\}ds \tag{39} \end{align*}

If t\in[\tau_{h-1},\tau_{h})\ (h=k+1, k+2, \cdots, l) or t\in[\tau_{l},t^{f}], the adjoint equation of Eq.(34) is \begin{equation*} - \frac{d}{dt}y_{k}(t)=\left[A_{h}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{40} \end{equation*}

By Eqs.(34) and (40), \begin{align*} &\langle y_{k}(\tau_{h}),\delta x_{k}(\tau_{h})\rangle-\langle y_{k}(\tau_{h-1}), \delta x_{k}(\tau_{h-1})\rangle\\ &\qquad\qquad =- \int\nolimits_{\tau_{h-1}}^{\tau_{h}}L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)ds \tag{41} \end{align*}

Moreover, by the statement of the theorem, we know that \begin{equation*} y_{k}(t^{f})=0 \tag{42} \end{equation*}

Then by Eqs.(39), (41) and (42), \begin{align*} 0&=\langle y_{k}(t^{f}), \delta x_{k}(t^{f})\rangle\\ &= \sum\limits_{i=k}^{l+1}\{\langle y_{k}(\tau_{i}), \delta x_{k}(\tau_{i})\rangle-\langle y_{k}(\tau_{i-1}), \delta x_{k}(\tau_{i-1})\rangle\}\\ &=- \sum\limits_{i=k+1}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)ds\\ &\quad +\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{-L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)\\ &\quad+\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle\} ds \tag{43} \end{align*}

We let u\in U be given and let t be a Lebesgue point of the functions L(t, x(t),\bar{u}(t))+\langle y_{k}(t),\bar{u}(t)\rangle and L(t, x(t), u)+ \langle y_{k}(t), u\rangle. For \varepsilon > 0, we set \begin{equation*} u_{k}(s)=\begin{cases} u, & \vert s-t\vert \leq\varepsilon\\ \bar{u}(s), & \text{otherwise} \end{cases} \tag{44} \end{equation*}

Moreover, because f has bounded partial derivatives w.r.t. x, we let C be an upper bound of \Vert f_{x}\Vert over all [\tau_{i-1},\tau_{i})\ (i=k, k+1, \cdots, l) and [\tau_{l},t^{f}]. Then if t\in[\tau_{k-1},\tau_{k}), we get that \begin{align*} \Vert\delta x_{k}(t)\Vert&\leq \int\nolimits_{\tau_{k-1}}^{t}\{\Vert f_{x}(s, x(s))\Vert\Vert\delta x_{k}(s)\Vert+\Vert u_{k}(s)-\bar{u}(s)\Vert\}ds\\ &\leq \int\nolimits_{\tau_{k-1}}^{t}C\Vert\delta x_{k}(s)\Vert ds+\int\nolimits_{t-\varepsilon}^{t+\varepsilon}\Vert u-\bar{u}(s)\Vert ds \tag{45} \end{align*}

We know that the optimal control \bar{u}(\cdot)\in U_{ad} is bounded, and by Grönwall's inequality[13], we obtain \begin{equation*} \sup\limits_{t\in[\tau_{k-1},\tau_{k})} \Vert\delta x_{k}(t)\Vert\leq C^{\prime}\varepsilon \tag{46} \end{equation*} where \begin{equation*} C^{\prime}=2e^{C\max\{\tau_{k}-\tau_{k-1}\vert k=1,2,\cdots, l+1\}}\sup\limits_{s\in[\tau_{0},t^{f}]}\max\limits_{u\in U}\Vert u-\bar{u}(s)\Vert \tag{47} \end{equation*}

It holds that \lim\nolimits_{\varepsilon\rightarrow 0}\Vert\delta x_{k}(t)\Vert=0 for t\in[\tau_{k-1},\tau_{k}), and then we get that \lim\nolimits_{\varepsilon\rightarrow 0}\Vert\delta q_{k}(\tau_{k})\Vert=0. Moreover, by Eqs.(34) and (35), we get that \begin{equation*} \lim\limits_{\varepsilon\rightarrow 0}\Vert\delta x_{k}(t)\Vert=0 \tag{48} \end{equation*} for t\in[\tau_{h-1},\tau_{h})\ (h=k+1, k+2, \cdots, l) or for t\in[\tau_{l},t^{f}], and \begin{equation*} \lim\limits_{\varepsilon\rightarrow 0}\Vert\delta q_{k}(\tau_{h})\Vert=0 \tag{49} \end{equation*} for h=k+1, k+2, \cdots, l.

We substitute Eq.(43) into Eq.(37), divide Eq.(37) by \varepsilon, and let \varepsilon\rightarrow 0, then we get that \begin{align*} &\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle+[L(s, x(s), u_{k}(s))\\ &\qquad \quad-L(s, x(s),\bar{u}(s))]\}ds\\ &\quad \geq 0,\qquad \qquad \qquad k=1,2, \cdots, l+1 \tag{50} \end{align*}

Based on the definition Eq.(44) of u_{k}(\cdot), we know \begin{align*} &\int\nolimits_{t-\varepsilon}^{t+\varepsilon}\{\langle y_{k}(s), u\rangle+L(s, x(s), u)\}ds\\ &\qquad \geq \int\nolimits_{t-\varepsilon}^{t+\varepsilon}\{\langle y_{k}(s),\bar{u}(s)\rangle+L(s, x(s),\bar{u}(s))\}ds,\\ &\qquad \qquad \qquad\qquad\qquad k=1,2, \cdots, l+1 \tag{51} \end{align*}

We divide both sides of the above matrix inequality by \varepsilon and let \varepsilon\rightarrow 0; then for any entry of Eq.(51), we have \begin{equation*} L(t, x(t), u)+\langle y_{k}(t), u\rangle \geq L(t, x(t),\bar{u}(t))+\langle y_{k}(t),\bar{u}(t)\rangle \tag{52} \end{equation*} where k=1,2, \cdots, l+1. Since the Lebesgue points have full measure over [\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or over [\tau_{l},t^{f}], Eq.(52) holds almost everywhere over this interval.

Then, based on the notation of x(\cdot) in the theorem, we add \langle y_{k}(t), A_{k}x(t)+f(t, x(t))\rangle to both sides of Eq.(52) and get the minimum condition as follows. For all k=1,2, \cdots, l+1, we obtain \begin{align*} &L(t, x(t),\bar{u}(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+\bar{u}(t)\rangle\\ &\qquad \quad =\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u\rangle\},\\ &\qquad \qquad\qquad\quad a.e.\ t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{53} \end{align*}

This ends the proof.
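Numerically, the adjoint Eq.(19) carries only the terminal condition y_{k}(t^{f})=0, so it is integrated backward in time along the known optimal trajectory. A minimal explicit-Euler sketch; the function names and test data are illustrative, and the Jacobian of f is transposed as is standard for adjoint systems:

```python
import numpy as np

def adjoint_backward(y_tf, t_grid, Ak, fx, Lx, x):
    """Integrate Eq.(19),  -y' = [A_k^T + (df/dx)^T] y + dL/dx,
    backward from y(t_grid[-1]) = y_tf (the terminal condition Eq.(42)).

    fx(t, x) is the Jacobian of f, Lx(t, x) the gradient of L w.r.t. x,
    and x(t) the fixed (optimal) state trajectory.
    """
    y = np.asarray(y_tf, dtype=float)
    ys = [y]
    for i in range(len(t_grid) - 1, 0, -1):
        h = t_grid[i] - t_grid[i - 1]
        t = t_grid[i]
        # one explicit Euler step backward in time:
        # y(t - h) = y(t) + h * ([A_k^T + fx^T] y(t) + Lx)
        y = y + h * ((Ak.T + fx(t, x(t)).T) @ y + Lx(t, x(t)))
        ys.append(y)
    return np.array(ys[::-1])  # adjoint on t_grid, in forward time order
```

As a sanity check, with A_{k}=0, f=0 and a constant gradient dL/dx=1 in one dimension, the exact solution is y(t)=t^{f}-t, which the scheme reproduces.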

If we suppose f(t, x) in Eq.(1) is a time-varying linear function of x(t), i.e., \begin{equation*} f(t, x)=C(t)x(t) \tag{54} \end{equation*} and the control input imposed on the system is the same as Eq.(2), then based on the HDS modeling method mentioned in Section II, we get the following HDS model: \begin{equation*} \begin{cases} \dot{x}(t)=A_{k}(t)x(t)+u(t), & t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\ q(\tau_{k})=B_{k}x(\tau_{k}), & k=1,2, \cdots, l \end{cases} \tag{55} \end{equation*} where A_{k}(t)=A+B_{1k}+C(t), and the system evolves from the initial states x(\tau_{0})=x_{0} and q(\tau_{0})=0. We let the terminal state of Eq.(55) be free, namely, \begin{equation*} (x(t^{f}), q(\tau_{l}))^{T}\in \mathbb{R}^{2n} \end{equation*}

We define the evolving cost functional matrix as follows. For convenience, we only define the kth diagonal entry of the matrix \begin{align*} J_{k}(u_{k}(\cdot))=&\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}\frac{1}{2}[x_{k}^{T}(s)Q_{k}x_{k}(s)+u_{k}^{T}(s)V_{k}u_{k}(s)]ds\\ &+ \sum\limits_{i=k}^{l}L_{a}(q(\tau_{i}, u_{k}(\cdot)))\tag{56} \end{align*} where Q_{k} and V_{k} are symmetric positive definite matrices with proper dimensions. Then by Theorem 1, we get the minimum principle of this specific linear HISS with free terminal states.

Theorem 2

Let \bar{u}(\cdot) be the solution of the optimal control problem for Eq.(55). We suppose that x(\cdot)=x(\cdot,\bar{u}(\cdot)) is the optimal continuous trajectory and q(\cdot)=q(\cdot,\bar{u}(\cdot)) is the optimal discrete event state corresponding to \bar{u}(\cdot), respectively. Then over [\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or over [\tau_{l},t^{f}] for k=l+1, there exists y_{k}(\cdot), which is piecewise continuous, such that the adjoint equation holds: \begin{equation*} - \frac{d}{dt}y_{k}(t)=A_{k}^{T}(t)y_{k}(t)+Q_{k}x(t) \tag{57} \end{equation*}

At t^{f}, y_{k}(\cdot) satisfies y_{k}(t^{f})=0. Moreover, the minimum condition holds, namely, \begin{align*} &H_{k}(t, x(t), y_{k}(t),\bar{u}(t))=\min\limits_{u\in U}H_{k}(t, x(t), y_{k}(t), u),\\ &\qquad \qquad \qquad\quad a.e.\ t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{58} \end{align*} where \begin{align*} H_{k}(t, x(t), y_{k}(t), u(t))=&\frac{1}{2}[x^{T}(t)Q_{k}x(t)+u^{T}(t)V_{k}u(t)]\\ &+y_{k}^{T}(t)[A_{k}(t)x(t)+u(t)] \tag{59} \end{align*}

Furthermore, we get the formulation of the optimal control for Eq.(55) in the next section.

SECTION IV.

The Formulation of the Optimal Control and Further Remarks

In the optimal control problem of Eq.(55), for u(\cdot)\in U_{ad}, the minimum condition stated in Theorem 2 can degenerate into \begin{equation*} \min\limits_{u(\cdot)\in U_{ad}}\left\{\frac{u(t)^{T}V_{k}u(t)}{2}+y_{k}^{T}(t)u(t)\right\}=\frac{\bar{u}^{T}(t)V_{k}\bar{u}(t)}{2}+y_{k}^{T}(t)\bar{u}(t) \tag{60} \end{equation*}

Based on the minimum condition of the above equation, we get the optimal control input over [\tau_{k-1},\tau_{k}). We let \begin{equation*} \tilde{H}(y_{k}(t), u(t))=\frac{1}{2}u^{T}(t)V_{k}u(t)+y_{k}^{T}(t)u(t) \end{equation*} and differentiate \tilde{H} w.r.t. u at u(t)=\bar{u}(t), i.e., \begin{equation*} 0= \frac{\partial}{\partial u}\tilde{H}(y_{k}(t), u(t))\vert_{u=\bar{u}(t)}=V_{k}\bar{u}(t)+y_{k}(t) \tag{61} \end{equation*}

Then we obtain the optimal control input as \begin{equation*} \bar{u}(t)=-V_{k}^{-1}y_{k}(t) \tag{62} \end{equation*} where t\in[\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or t\in[\tau_{l},t^{f}] for k=l+1.
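The resulting computation for Eq.(55) is naturally organized as a forward–backward sweep: simulate the state forward under the current control, integrate the adjoint Eq.(57) backward from y_{k}(t^{f})=0, and update the control via Eq.(62). The following single-mode sketch uses illustrative matrices and a short horizon (no impulses, C(t)=0); the plain fixed-point iteration shown here is only guaranteed to converge for short enough horizons:

```python
import numpy as np

def lq_sweep(x0, A, Q, V, t_grid, n_iter=50):
    """Forward-backward sweep on one interval (single mode, no impulses).

    Forward:  x' = A x + u            (Eq.(55) with C(t) = 0)
    Backward: -y' = A^T y + Q x       (Eq.(57))
    Update:   u = -V^{-1} y           (Eq.(62))
    """
    n, N = len(x0), len(t_grid)
    u = np.zeros((N, n))
    Vinv = np.linalg.inv(V)
    for _ in range(n_iter):
        # forward pass under the current control (explicit Euler)
        x = np.zeros((N, n))
        x[0] = x0
        for i in range(N - 1):
            h = t_grid[i + 1] - t_grid[i]
            x[i + 1] = x[i] + h * (A @ x[i] + u[i])
        # backward pass from the terminal condition y(t^f) = 0
        y = np.zeros((N, n))
        for i in range(N - 1, 0, -1):
            h = t_grid[i] - t_grid[i - 1]
            y[i - 1] = y[i] + h * (A.T @ y[i] + Q @ x[i])
        # control update from the minimum condition
        u = -(Vinv @ y.T).T
    return x, u, y

# Illustrative double-integrator data on a short horizon
A = np.array([[0., 1.], [0., 0.]])
Q = np.eye(2)
V = np.eye(2)
x, u, y = lq_sweep(np.array([1., 0.]), A, Q, V, np.linspace(0.0, 0.5, 51))
```

For the full HISS, the same sweep is carried out per mode, with the adjoint and control indexed by k over the intervals of Section II.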

The main results mentioned here also cover both special cases of pure impulsive systems and pure switched systems.

We consider an impulsive system, and denote the control input by \begin{equation*} v_{1}(t, x, u)=u_{2}(t, x)+u(t) \tag{63} \end{equation*} where u_{2}(t, x) is defined by Eq.(3), and u(t) is the external control input. Similar to the HISS Eq.(12), we give the Hybrid impulsive system (HIS) as follows: \begin{equation*} \begin{cases} \dot{x}(t)=Ax(t)+f(t, x)+u(t), & t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\ q(\tau_{k})=B_{k}x(\tau_{k}), & k=1,2, \cdots, l\\ x(\tau_{0})=x_{0}, q(\tau_{0})=0 & \end{cases}\tag{64} \end{equation*}

We suppose the optimal control problem of the HIS is formulated as in Section II. The global minimum principle of the HIS Eq.(64) is then stated as follows.

Corollary 1

Consider the HIS Eq.(64) and let \bar{u}(\cdot) be the solution of the optimal control problem mentioned above. We assume that x(\cdot)=x(\cdot,\bar{u}(\cdot)) and q(\cdot)=q(\cdot,\bar{u}(\cdot)) are the optimal states of the HIS corresponding to \bar{u}(\cdot). Then over [\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or over [\tau_{l},t^{f}] for k=l+1, we have the adjoint equation \begin{equation*} - \frac{d}{dt}y_{k}(t)=\left[A^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{65} \end{equation*} where y_{k}(\cdot) is piecewise continuous, and at t^{f}, y_{k}(\cdot) satisfies y_{k}(t^{f})=0. Moreover, the minimum condition is satisfied, namely, \begin{align*} &L(t, x(t), u(t))+\langle y_{k}(t), Ax(t)+f(t, x(t))+u(t)\rangle\\ &\qquad =\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), Ax(t)+f(t, x(t))+u\rangle\},\\ &\qquad \qquad\qquad a.e.\ t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{66} \end{align*}
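Since the adjoint equation Eq.(65) carries the terminal condition y_{k}(t^{f})=0, it is integrated backward in time. The following sketch is purely illustrative and not the paper's algorithm: it treats a scalar case with A=a, f\equiv 0, and \partial L/\partial x frozen at a constant c along the trajectory, so that the closed form y(t)=(c/a)\,(e^{a(t^{f}-t)}-1) is available for comparison; all numerical values are hypothetical.

```python
import math

a, c = 0.8, 1.0              # scalar A and frozen dL/dx value (illustrative)
t0, tf, n = 0.0, 1.0, 100_000
dt = (tf - t0) / n

# Integrate the adjoint equation -dy/dt = a*y + c backward from y(tf) = 0.
y = 0.0
for _ in range(n):
    y += dt * (a * y + c)    # stepping backward in t flips the sign of dy/dt

# Closed-form value at t0 for this frozen scalar case.
y_exact = (c / a) * (math.exp(a * (tf - t0)) - 1.0)
print(y, y_exact)
```

Once y_{k}(\cdot) has been swept backward on each interval, it feeds the minimum condition Eq.(66) (or Eq.(62) in the quadratic case) to recover the optimal input on that interval.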

If the HISS Eq.(12) has no impulsive control input, i.e., \begin{equation*} v_{2}(t, x, u)=u_{1}(t, x)+u(t) \tag{67} \end{equation*} where u_{1}(t, x) is defined by Eq.(3), and u(t) is the external control input, then we get the switched system \begin{equation*} \dot{x}(t)=A_{k}x(t)+f(t, x)+u(t),\quad t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{68} \end{equation*} where A_{k}=A+B_{k}. It is known that x(\tau_{k})=x(\tau_{k}^{-}). We can see that there is no controlled impulse[1] in Eq.(68).
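The switched system Eq.(68) can be simulated mode by mode; since there is no impulse, the state is continuous across each switching instant \tau_{k}. A minimal scalar sketch, with f\equiv 0, u\equiv 0, two modes, and hypothetical values of A, B_{k}, and the switching times (so each mode has the exact solution x(\tau_{k})=x(\tau_{k-1})e^{A_{k}(\tau_{k}-\tau_{k-1})}):

```python
import math

A = -1.0
B = [0.5, -0.5]            # B_k for the two modes; A_k = A + B_k
taus = [0.0, 1.0, 2.0]     # tau_0, tau_1, t^f (illustrative switching times)
x0 = 1.0

# With f = 0 and u = 0, each mode is x' = A_k x; propagate exactly per interval.
x = x0
trajectory = [x]
for k, (t_prev, t_next) in enumerate(zip(taus[:-1], taus[1:])):
    A_k = A + B[k]
    x = x * math.exp(A_k * (t_next - t_prev))  # continuous at the switch: x(tau_k) = x(tau_k^-)
    trajectory.append(x)

print(trajectory)
```

Note that the trajectory is continuous at \tau_{1}: only the mode matrix A_{k} changes, never the state itself, which is exactly the distinction between Eq.(68) and the impulsive system Eq.(64).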

We define the cost functional matrix as follows. For convenience, we just give the kth entry of the cost functional matrix: \begin{equation*} J_{k}(u_{k}(\cdot))=\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L(s, x(s, u_{k}(s)), u_{k}(s))ds \tag{69} \end{equation*}
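The kth entry Eq.(69) is simply the running cost accumulated from \tau_{k-1} to t^{f}, split at the switching instants. A small numerical sketch, not from the paper: the switching times are hypothetical, the state and input are frozen at zero, and L\equiv 1 is used so that J_{k}=t^{f}-\tau_{k-1} gives an easy consistency check.

```python
taus = [0.0, 0.5, 1.2, 2.0]    # tau_0, tau_1, tau_2 = tau_l, t^f  (l = 2, illustrative)

def L(s, x, u):
    return 1.0                 # trivial running cost, so J_k = t^f - tau_{k-1}

def J(k, n=1000):
    """Eq.(69): sum over i = k..l+1 of the integral of L on [tau_{i-1}, tau_i]."""
    total = 0.0
    for a, b in zip(taus[k - 1:-1], taus[k:]):
        h = (b - a) / n
        vals = [L(a + i * h, 0.0, 0.0) for i in range(n + 1)]  # x, u frozen at 0 here
        total += h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
    return total

print(J(1), J(2), J(3))
```

Each entry J_{k} is the cost-to-go from the kth interval onward, which is what makes the matrix structure convenient in the interval-by-interval variational argument.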

The proof is similar to that of Theorem 1; one only needs to omit the discussion of the discrete event state q(\tau_{k}).

Corollary 2

Let \bar{u}(\cdot) be the solution of the optimal control problem of the system Eq.(68). We suppose that x(\cdot)=x(\cdot,\bar{u}(\cdot)) is the optimal state corresponding to \bar{u}(\cdot). Then over [\tau_{k-1},\tau_{k}) for k=1,2, \cdots, l, or over [\tau_{l},t^{f}] for k=l+1, we get the adjoint equation \begin{equation*} - \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{70} \end{equation*} where y_{k}(\cdot) is piecewise continuous, and at t^{f}, y_{k}(\cdot) satisfies y_{k}(t^{f})=0. Moreover, the minimum condition is satisfied, namely, \begin{align*} &L(t, x(t), u(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u(t)\rangle\\ = &\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u\rangle\},\\ &\qquad \qquad a.e.\ t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{71} \end{align*}

SECTION V.

Conclusions

In this paper, we have introduced an optimal control problem for a general class of HISS with free terminal states. We have established the necessary condition of the aforementioned optimal control problem, the minimum principle of HISS over the global running time interval, and proved it. In the proof, the general variational method and the matrix cost functional have been utilized. Based on the main result, we gave the minimum principle and the optimal control algorithm for a special example of HISS, which has linear time-variant continuous subsystems. Furthermore, the minimum principles of pure impulsive systems and pure switched systems have also been studied in this paper. According to this study, a research method to deal with global optimal control problems of HISS has been proposed.

Acknowledgement

Valuable discussions with Professor Liu Yungang are very much appreciated. The authors would like to thank the anonymous reviewers for their constructive and insightful comments for improving the quality of this work.
