Introduction
A hybrid dynamical system (HDS) is capable of exhibiting several kinds of dynamic behavior in different parts of the system simultaneously. In practical engineering, many systems involve switches and abrupt changes of state at the switching instants, and HDS have proved to be a powerful tool for modeling such systems. Based on the theory of impulsive and switching systems, hybrid impulsive and switching control strategies have been developed. The advantage of these strategies is that some complex systems can be stabilized easily by introducing small control impulses in the different modes of the systems. Consequently, interest in impulsive and switching systems has grown recently because of their theoretical and practical significance.
Optimal control problems of HDS have been studied extensively in recent years [1]–[11]. Branicky et al. [1] proposed a unified framework of hybrid optimal control and synthesized a hybrid controller for hybrid devices. Giua et al. [2] studied the optimal control problem of minimizing a quadratic performance index over an infinite time horizon for a class of switched piecewise linear autonomous systems. Based on linear quadratic adaptive control laws for continuous dynamical systems, Tan et al. [3] concentrated on a sampled-data system with unknown Markov jump parameters, and gave a parameter estimator and a control design method. Bengea and DeCarlo [4] considered an optimal control problem for a class of switching systems under the assumption that the number of switches and the mode sequence are both indeterminate. Baotic et al. [5] studied the constrained finite- and infinite-time optimal control problems for a class of discrete-time linear hybrid systems, and proposed algorithms that compute the optimal solutions. Borrelli et al. [6] worked on the solution of optimal control problems for constrained discrete-time linear HDS with linear quadratic performance criteria, and constructed the state-feedback optimal control law by combining multi-parametric programming and dynamic programming. Gokbayrak and Selvi [7] derived sample path characteristics for a two-stage serial HDS, and transformed the original non-smooth optimal control problem into a convex optimization problem. Spinelli et al. [8] dealt with the optimal control problem for continuous-time autonomous linear switched systems on a finite control horizon, and derived sufficient optimality conditions via Hamilton-Jacobi-Bellman theory. Shaikh and Caines [9] studied a class of hybrid optimal control problems for systems with controlled and autonomous location transitions, and extended the maximum principle from purely continuous systems to HDS.
In the authors' previous papers [10], [11], for a class of HDS with a pre-specified switching sequence, the local optimal control problems for both free and restricted terminal states were discussed. In this paper, we extend these works to a special class of HDS, the hybrid impulsive and switching system (HISS), and study the global optimal control problem.
This paper is organized as follows. In Section II, we first give a special controlled nonlinear system, denote it the HISS, and then state the optimal control problem for it. In Section III, we give the theoretical method, the minimum principle of the HISS, to solve this optimal control problem. In the proof, the general variational method and the matrix cost functional are utilized. Moreover, we illustrate a special example of the HISS and present its minimum principle. In Section IV, we provide the optimal control algorithm for the HISS of Section III, and give the corresponding results for a pure impulsive system and a pure switched system. Section V concludes the paper.
The Optimal Control Problem of HISS with Free Terminal States
The controlled nonlinear system model is given by
\begin{equation*}
\dot{x}(t)=Ax(t)+f(t, x)+v(t, x, u) \tag{1}
\end{equation*}
\begin{equation*}
v(t, x, u)=u_{1}(t, x)+u_{2}(t, x)+u(t) \tag{2}
\end{equation*}
In the above equation,
\begin{equation*}
u_{1}(t, x)=\sum\limits_{k=1}^{\infty}B_{1k}x(t)l_{k}(t),\quad u_{2}(t, x)=\sum\limits_{k=1}^{\infty}B_{2k}x(t)\delta(t-\tau_{k}) \tag{3}
\end{equation*}
\begin{equation*}
l_{k}(t)=\begin{cases}
1, &\tau_{k-1}\leq t < \tau_{k},\qquad k=1,2, \cdots\\
0, &\text{otherwise}\end{cases} \tag{4}
\end{equation*}
\begin{equation*}
U{\buildrel \triangle\over=}\{(u_{1}, u_{2}, \cdots, u_{m})^{T}\in \mathbb{R}^{m}\mid \vert u_{i}\vert \leq 1,\ i=1,2, \cdots, m\} \tag{5}
\end{equation*}
Definition 1
Let Eq.(1) evolve over $[\tau_{0}, t^{f}]$ under the admissible control set
\begin{equation*}
U_{ad}{\buildrel \triangle\over=}\{u(\cdot)\vert u(\cdot)\in L^{2}[\tau_{0},t^{f};U]\} \tag{6}
\end{equation*}
For convenience, we define \begin{equation*}
[\tau_{0},t^{f}]=\cup_{k= 1}^{l}[\tau_{k-1},\tau_{k})\cup[\tau_{l},t^{f}]
\end{equation*}
By Eqs.(3) and (4), the following holds
\begin{equation*}
u_{1}(t, x)=B_{1k}x(t),\quad t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{7}
\end{equation*}
Integrating Eq.(1) over a small interval $[\tau_{k}-h, \tau_{k}]$ gives
\begin{align*}
x(\tau_{k})=&x(\tau_{k}-h)+\int\nolimits_{\tau_{k}-h}^{\tau_{k}}[Ax(s)+f(s, x(s))\\
&+B_{1k}x(s)+B_{2k}x(s)\delta(s-\tau_{k})+u(s)]ds \tag{8}
\end{align*}
Letting $h\rightarrow 0^{+}$, we obtain the state jump
\begin{equation*}
\Delta x(\tau_{k})=x(\tau_{k})-x(\tau_{k}^{-})=B_{2k}x(\tau_{k}) \tag{9}
\end{equation*}
Then the system (1) can be rewritten as
\begin{equation*}
\begin{cases}
\dot{x}(t)=Ax(t)+f(t, x)+B_{1k}x(t)+u(t),\\
\qquad \qquad \qquad t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\
\Delta x(t)=B_{2k}x(t),\qquad t=\tau_{k}\\
x(\tau_{0})=x_{0},\qquad \qquad\ k=1,2, \cdots, l
\end{cases} \tag{10}
\end{equation*}
The system (10) is called the HISS[12]. Furthermore, we define
\begin{equation*}
q(t)=\Delta x(t),\quad t=\tau_{k},\quad k=1,2, \cdots, l \tag{11}
\end{equation*}
Then Eq.(10) is further expressed as
\begin{equation*}
\begin{cases}
\dot{x}(t)=A_{k}x(t)+f(t, x)+u(t), &t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\
q(\tau_{k})=B_{k}x(\tau_{k}), &k= 1, 2, \cdots, l
\end{cases}\tag{12}
\end{equation*}
where $A_{k}=A+B_{1k}$ and $B_{k}=B_{2k}$.
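As a numerical illustration (our sketch, not part of the original formulation; all matrices below are hypothetical), the evolution of Eq.(12) can be simulated in Python. Note that the jump $\Delta x(\tau_{k})=B_{2k}x(\tau_{k})$ is stated in terms of the post-impulse state, so it is applied implicitly as $(I-B_{2k})x(\tau_{k})=x(\tau_{k}^{-})$.

```python
import numpy as np

def simulate_hiss(A_list, B2_list, f, u, taus, tf, x0, dt=1e-3):
    """Forward-Euler simulation of Eq.(12) over [taus[0], tf].

    A_list: the l+1 mode matrices A_k; B2_list: the l jump matrices B_{2k};
    f(t, x) and u(t): the nonlinearity and the external control input.
    """
    x = np.asarray(x0, dtype=float)
    t = taus[0]
    traj_t, traj_x = [t], [x.copy()]
    bounds = list(taus[1:]) + [tf]          # right endpoints of the l+1 intervals
    for k, t_end in enumerate(bounds):
        A = A_list[k]                        # active mode: dx/dt = A_k x + f + u
        while t < t_end - 1e-12:
            h = min(dt, t_end - t)
            x = x + h * (A @ x + f(t, x) + u(t))
            t += h
            traj_t.append(t)
            traj_x.append(x.copy())
        if k < len(B2_list):                 # impulsive jump at t = tau_{k+1}:
            n = len(x)                       # (I - B_{2k}) x(tau_k) = x(tau_k^-)
            x = np.linalg.solve(np.eye(n) - B2_list[k], x)
            traj_x[-1] = x.copy()
    return np.array(traj_t), np.array(traj_x)
```

For example, a scalar system with zero flow and a single jump matrix $B_{21}=0.5$ doubles its state at the switching instant, since $(1-0.5)x(\tau_{1})=x(\tau_{1}^{-})$.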
We consider the HISS Eq.(12) evolving under an admissible control. The evolving cost functional matrix is defined as
\begin{equation*}
\boldsymbol{J}(\boldsymbol{u}(\cdot))= \text{diag}\{J_{1}(u_{1} (\cdot)), J_{2}(u_{2} (\cdot)), \cdots, J_{l+1}(u_{l+1}(\cdot))\} \tag{13}
\end{equation*}
\begin{align*}
J_{k}(u_{k}(\cdot))=&\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L(s, x(s, u_{k}(s)), u_{k}(s))ds\\
&+\sum\limits_{i=k}^{l} L_{a}(q(\tau_{i}, u_{k}(\cdot))),\quad k=1,2, \cdots, l+1 \tag{14}
\end{align*}
where $\tau_{l+1}=t^{f}$.
Then we describe the global optimal control problem of Eq.(12) with free terminal states as follows: find $\bar{\boldsymbol{u}}(\cdot)\in U_{ad}$ that minimizes $\boldsymbol{J}(\boldsymbol{u}(\cdot))$. It is supposed that the terminal states are free, i.e.,
\begin{equation*}
(x(t^{f}), q(\tau_{l}))\in \mathbb{R}^{2n} \tag{15}
\end{equation*}
In the following section, we will derive the necessary condition for the above global optimal control problem, namely, the minimum principle of the HISS over $[\tau_{0}, t^{f}]$.
Main Results
Lemma 1
Let $f(\cdot)\in L^{1}[a, b;\mathbb{R}^{n}]$. Then for any $\varepsilon > 0$ and $\lambda\in(0, 1)$, there exists a measurable set $E_{\lambda}(\varepsilon)\subset[a, b]$ such that
\begin{align*}
&\text{meas}(E_{\lambda}(\varepsilon))=\lambda(b-a)\tag{16}\\
&\lambda \int\nolimits_{a}^{b}f(t)dt=\int\nolimits_{E_{\lambda}(\varepsilon)}f(t)dt+\eta\tag{17} \\
&\Vert\eta\Vert < \varepsilon \tag{18}
\end{align*}
We can find Lemma 1 in Ref.[13], and we omit its proof here.
Definition 2
Given a Lebesgue integrable function $f(\cdot)$, a point $t$ is called a Lebesgue point of $f(\cdot)$ if
\begin{equation*}
\lim\limits_{r\rightarrow 0^{+}}\frac{1}{\text{meas}(B(t, r))}\int\nolimits_{B(t, r)}\vert f(s)-f(t)\vert ds=0
\end{equation*}
where $B(t, r)$ denotes the ball of radius $r$ centered at $t$.
Theorem 1
Consider the HISS Eq.(12) and let $\bar{\boldsymbol{u}}(\cdot)\in U_{ad}$ be the optimal control. Then there exist costate functions $y_{k}(\cdot)$, $k=1, 2, \cdots, l+1$, with $y_{k}(t^{f})=0$, satisfying
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right] y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{19}
\end{equation*}
Moreover, the minimum condition holds:
\begin{gather*}
H_{k}(t, x(t), y_{k}(t),\bar{u}(t))=\min\limits_{u\in U}H_{k}(t, x(t), y_{k}(t), u),\\
a.e.\quad t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{20}
\end{gather*}
where the Hamiltonian is defined as
\begin{align*}
&H_{k}(t, x(t), y_{k}(t), u(t))\\
&\qquad =L(t, x(t), u(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u(t)\rangle \tag{21}
\end{align*}
Proof
First, for any $u_{k}(\cdot)\in U_{ad}$, we construct a new set of controls
\begin{equation*}
u_{k}^{\varepsilon}(t)=\begin{cases}
u_{k}(t), & t\in E_{k\varepsilon},\\
\bar{u}(t), & t\in[\tau_{k-1},\tau_{k})\backslash E_{k\varepsilon}
\end{cases} \tag{22}
\end{equation*}
In the above definition, the set $E_{k\varepsilon}\subset[\tau_{k-1}, \tau_{k})$ is chosen by Lemma 1 such that
\begin{equation*}
\text{meas}(E_{k\varepsilon})=\varepsilon(\tau_{k}-\tau_{k-1}) \tag{23}
\end{equation*}
Let $x_{k}^{\varepsilon}(\cdot)$ and $x(\cdot)$ be the state trajectories corresponding to $u_{k}^{\varepsilon}(\cdot)$ and $\bar{u}(\cdot)$, respectively, i.e.,
\begin{align*}
&x_{k}^{\varepsilon}(t)=x(\tau_{k-1})+\int\nolimits_{\tau_{k-1}}^{t}[A_{k}x_{k}^{\varepsilon}(s)+f(s, x_{k}^{\varepsilon}(s))+u_{k}^{\varepsilon}(s)]ds \tag{24}\\
&x(t)=x(\tau_{k-1})+\int\nolimits_{\tau_{k-1}}^{t}[A_{k}x(s)+f(s, x(s))+\overline{u}(s)]ds \tag{25}
\end{align*}
Define
\begin{equation*}
z_{k}^{\varepsilon}(t)= \frac{1}{\varepsilon}[x_{k}^{\varepsilon}(t)-x(t)] \tag{26}
\end{equation*}
Substituting Eqs.(24) and (25) into Eq.(26) and expanding $f$ to first order, we get
\begin{align*}
z_{k}^{\varepsilon}(t)=&\int\nolimits_{\tau_{k-1}}^{t}A_{k}z_{k}^{\varepsilon}(s)ds+ \int\nolimits_{\tau_{k-1}}^{t}\frac{\partial^{T}}{\partial x}f(s, x(s))z_{k}^{\varepsilon}(s)ds\\
&+\int\nolimits_{\tau_{k-1}}^{t}[u_{k}(s)-\bar{u}(s)]ds+o(\varepsilon) \tag{27}
\end{align*}
We further define
\begin{equation*}
\delta x_{k}(\cdot)=\lim\limits_{\varepsilon\rightarrow 0}z_{k}^{\varepsilon}(\cdot) \tag{28}
\end{equation*}
Letting $\varepsilon\rightarrow 0$ in Eq.(27), we have
\begin{align*}
\delta x_{k}(t)=&\int\nolimits_{\tau_{k-1}}^{t}\left[A_{k}+\frac{\partial^{T}}{\partial x}f(s, x(s))\right]\delta x_{k}(s)ds\\
&+ \int\nolimits_{\tau_{k-1}}^{t}[u_{k}(s)-\bar{u}(s)]ds \tag{29}
\end{align*}
Then over $[\tau_{k-1}, \tau_{k})$, differentiating Eq.(29) yields
\begin{equation*}
\frac{d}{dt}\delta x_{k}(t)=\left[A_{k}+\frac{\partial^{T}}{\partial x}f(t, x(t))\right]\delta x_{k}(t)+[u_{k}(t)-\bar{u}(t)] \tag{30}
\end{equation*}
We consider the discrete event states at $t=\tau_{k}$:
\begin{align*}
&q_{k}^{\varepsilon}(\tau_{k})=B_{k}x_{k}^{\varepsilon}(\tau_{k}) \tag{31}\\
&q(\tau_{k})=B_{k}x(\tau_{k}) \tag{32}
\end{align*}
We let
\begin{equation*}
\delta q_{k}(\tau_{k})=\lim\limits_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}[q_{k}^{\varepsilon}(\tau_{k})-q(\tau_{k})]=B_{k}\delta x_{k}(\tau_{k}) \tag{33}
\end{equation*}
It is known that the perturbation of the control occurs only over $[\tau_{k-1}, \tau_{k})$. Hence, over the subsequent intervals $[\tau_{h-1}, \tau_{h})$, $h > k$, $\delta x_{k}(t)$ satisfies
\begin{equation*}
\frac{d}{dt}\delta x_{k}(t)=\left[A_{h}+\frac{\partial^{T}}{\partial x}f(t, x(t))\right]\delta x_{k}(t) \tag{34}
\end{equation*}
Analogously to Eq.(33), we get that
\begin{equation*}
\delta q_{k}(\tau_{h})=B_{h}\delta x_{k}(\tau_{h}) \tag{35}
\end{equation*}
Then we obtain that
\begin{align*}
J(\boldsymbol{u}^{\varepsilon}(\cdot))-J(\bar{\boldsymbol{u}}(\cdot))= &\text{diag} \{J_{1}(u_{1}^{\varepsilon}(\cdot))-J_{1}(\bar{u}(\cdot)), J_{2}(u_{2}^{\varepsilon}(\cdot))\\
&-J_{2}(\bar{u}(\cdot)), \cdots, J_{l+1}(u_{l+1}^{\varepsilon}(\cdot))-J_{l+1}(\bar{u}(\cdot))\} \tag{36}
\end{align*}
\begin{align*}
J_{k}(u_{k}^{\varepsilon}(\cdot))&-J_{k}(\bar{u}(\cdot))\\
=&\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{L_{x}^{T}(s, x(s), u_{k}^{\varepsilon}(s))(x_{k}^{\varepsilon}(s)\\
&-x(s))+\varepsilon[L(s, x(s), u_{k}(s))\\
&-L(s, x(s),\bar{u}(s))]+o(\varepsilon)\}ds\\
&+\sum\limits_{i=k+1}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L_{x}^{T}(s, x(s),\bar{u}(s))(x_{k}^{\varepsilon}(s)-x(s))ds\\
&+\sum\limits_{i=k}^{l}\left\{\frac{d^{T}}{dq}L_{a}(q(\tau_{i}, u_{k}(\cdot)))(q_{k}^{\varepsilon}(\tau_{i})-q(\tau_{i}))+o(\varepsilon)\right\}\tag{37}
\end{align*}
Then for $t\in[\tau_{k-1}, \tau_{k})$, we let the costate $y_{k}(t)$ satisfy
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\overline{u}(t)) \tag{38}
\end{equation*}
By Eqs. (30) and (38), we get that
\begin{align*}
&\langle y_{k}(\tau_{k}), \delta x_{k}(\tau_{k})\rangle-\langle y_{k}(\tau_{k-1}), \delta x_{k}(\tau_{k-1})\rangle\\
&\qquad =\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{-L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)\\
&\qquad+\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle\}ds \tag{39}
\end{align*}
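For completeness, Eq.(39) can be verified by differentiating the inner product (an added intermediate step, consistent with Eqs.(30) and (38)):
\begin{align*}
\frac{d}{dt}\langle y_{k}(t), \delta x_{k}(t)\rangle &=\langle \dot{y}_{k}(t), \delta x_{k}(t)\rangle+\langle y_{k}(t), \delta\dot{x}_{k}(t)\rangle\\
&=-\left\langle \left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)),\ \delta x_{k}(t)\right\rangle\\
&\quad +\left\langle y_{k}(t),\left[A_{k}+\frac{\partial^{T}}{\partial x}f(t, x(t))\right]\delta x_{k}(t)+u_{k}(t)-\bar{u}(t)\right\rangle\\
&=-L_{x}^{T}(t, x(t),\bar{u}(t))\delta x_{k}(t)+\langle y_{k}(t), u_{k}(t)-\bar{u}(t)\rangle
\end{align*}
where the terms involving $A_{k}$ and $\partial f/\partial x$ cancel; integrating over $[\tau_{k-1}, \tau_{k})$ then gives Eq.(39).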
If $t\in[\tau_{h-1}, \tau_{h})$, $h > k$, we let $y_{k}(t)$ satisfy
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=\left[A_{h}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{40}
\end{equation*}
By Eqs. (34) and (40), we similarly obtain
\begin{align*}
&\langle y_{k}(\tau_{h}),\delta x_{k}(\tau_{h})\rangle-\langle y_{k}(\tau_{h-1}), \delta x_{k}(\tau_{h-1})\rangle\\
&\qquad\qquad =- \int\nolimits_{\tau_{h-1}}^{\tau_{h}}L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)ds \tag{41}
\end{align*}
Moreover, by the statement of the theorem, we know that
\begin{equation*}
y_{k}(t^{f})=0 \tag{42}
\end{equation*}
Then by Eqs. (39), (41) and (42),
\begin{align*}
0&=\langle y_{k}(t^{f}), \delta x_{k}(t^{f})\rangle\\
&= \sum\limits_{i=k}^{l+1}\{\langle y_{k}(\tau_{i}), \delta x_{k}(\tau_{i})\rangle-\langle y_{k}(\tau_{i-1}), \delta x_{k}(\tau_{i-1})\rangle\}\\
&=- \sum\limits_{i=k+1}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)ds\\
&\quad +\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{-L_{x}^{T}(s, x(s),\bar{u}(s))\delta x_{k}(s)\\
&\quad+\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle\}ds \tag{43}
\end{align*}
We let $t\in(\tau_{k-1}, \tau_{k})$ be a Lebesgue point of the integrand and, for any $u\in U$, define
\begin{equation*}
u_{k}(s)=\begin{cases}
u, & \vert s-t\vert \leq\varepsilon\\
\bar{u}(s), & \text{otherwise}
\end{cases} \tag{44}
\end{equation*}
Moreover, because $u_{k}(s)$ differs from $\bar{u}(s)$ only for $\vert s-t\vert\leq\varepsilon$, Eq.(29) gives
\begin{align*}
\Vert\delta x_{k}(t)\Vert&\leq \int\nolimits_{\tau_{k-1}}^{t}\left\{\left\Vert A_{k}+\frac{\partial^{T}}{\partial x}f(s, x(s))\right\Vert\Vert\delta x_{k}(s)\Vert+\Vert u_{k}(s)-\bar{u}(s)\Vert\right\}ds\\
&\leq \int\nolimits_{\tau_{k-1}}^{t}C\Vert\delta x_{k}(s)\Vert ds+\int\nolimits_{t-\varepsilon}^{t+\varepsilon}\Vert u-\bar{u}(s)\Vert ds \tag{45}
\end{align*}
By the Gronwall inequality, we obtain
\begin{equation*}
\sup\limits_{t\in[\tau_{k-1},\tau_{k})} \Vert\delta x_{k}(t)\Vert\leq C^{\prime}\varepsilon \tag{46}
\end{equation*}
where
\begin{equation*}
C^{\prime}=2e^{C\max\{\tau_{k}-\tau_{k-1}\vert k=1,2,\ldots, l+1\}}\max\limits_{u\in U}\sup\limits_{s}\Vert u-\bar{u}(s)\Vert \tag{47}
\end{equation*}
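Indeed, applying the Gronwall inequality to Eq.(45) gives (an added intermediate step)
\begin{equation*}
\Vert\delta x_{k}(t)\Vert\leq e^{C(t-\tau_{k-1})}\int\nolimits_{t-\varepsilon}^{t+\varepsilon}\Vert u-\bar{u}(s)\Vert ds\leq 2\varepsilon\, e^{C(\tau_{k}-\tau_{k-1})}\max\limits_{u\in U}\sup\limits_{s}\Vert u-\bar{u}(s)\Vert
\end{equation*}
which yields Eq.(46).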
It holds that \begin{equation*}
\lim\limits_{\varepsilon\rightarrow 0}\Vert\delta x_{k}(t)\Vert=0 \tag{48}
\end{equation*}
and
\begin{equation*}
\lim\limits_{\varepsilon\rightarrow 0}\Vert\delta q_{k}(\tau_{h})\Vert=0 \tag{49}
\end{equation*}
We substitute Eq.(43) into Eq.(37), divide Eq.(37) by $\varepsilon$, and let $\varepsilon\rightarrow 0$. Then, by the optimality of $\bar{\boldsymbol{u}}(\cdot)$, we obtain
\begin{align*}
&\int\nolimits_{\tau_{k-1}}^{\tau_{k}}\{\langle y_{k}(s), u_{k}(s)-\bar{u}(s)\rangle+[L(s, x(s), u_{k}(s))\\
&\qquad \quad-L(s, x(s),\bar{u}(s))]\}ds\\
&\quad \geq 0,\qquad \qquad \qquad k=1,2, \cdots, l+1 \tag{50}
\end{align*}
Based on the definition Eq.(44) of $u_{k}(\cdot)$, Eq.(50) implies
\begin{align*}
&\int\nolimits_{t-\varepsilon}^{t+\varepsilon}\{\langle y_{k}(s), u\rangle+L(s, x(s), u)\}ds\\
&\qquad \geq \int\nolimits_{t-\varepsilon}^{t+\varepsilon}\{\langle y_{k}(s),\bar{u}(s)\rangle+L(s, x(s),\bar{u}(s))\}ds,\\
&\qquad \qquad \qquad\qquad\qquad k=1,2, \cdots, l+1 \tag{51}
\end{align*}
We divide the two sides of the above matrix inequality by $2\varepsilon$, let $\varepsilon\rightarrow 0$, and use the fact that $t$ is a Lebesgue point of the integrands; then we get
\begin{equation*}
L(t, x(t), u)+\langle y_{k}(t), u\rangle \geq L(t, x(t),\bar{u}(t))+\langle y_{k}(t),\bar{u}(t)\rangle \tag{52}
\end{equation*}
Then, based on the notation of the Hamiltonian Eq.(21), adding $\langle y_{k}(t), A_{k}x(t)+f(t, x(t))\rangle$ to both sides of Eq.(52) yields
\begin{align*}
&L(t, x(t),\bar{u}(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+\bar{u}(t)\rangle\\
&\qquad \quad =\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u\rangle\},\\
&\qquad \qquad\qquad\quad a.e.\ t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{53}
\end{align*}
This ends the proof.
If we suppose that the nonlinearity in Eq.(12) is linear time-varying, i.e.,
\begin{equation*}
f(t, x)=C(t)x(t) \tag{54}
\end{equation*}
then the HISS Eq.(12) reduces to
\begin{equation*}
\begin{cases}
\dot{x}(t)=A_{k}(t)x(t)+u(t), & t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\
q(\tau_{k})=B_{k}x(\tau_{k}), & k=1,2, \cdots, l
\end{cases} \tag{55}
\end{equation*}
where $A_{k}(t)=A+B_{1k}+C(t)$, and the terminal states are free:
\begin{equation*}
(x(t^{f}), q(\tau_{l}))^{T}\in \mathbb{R}^{2n}
\end{equation*}
We define the evolving cost functional matrix as follows. For convenience, we only define its $k$-th diagonal entry
\begin{align*}
J_{k}(u_{k}(\cdot))=&\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}\frac{1}{2}[x_{k}^{T}(s)Q_{k}x_{k}(s)+u_{k}^{T}(s)V_{k}u_{k}(s)]ds\\
&+ \sum\limits_{i=k}^{l}L_{a}(q(\tau_{i}, u_{k}(\cdot)))\tag{56}
\end{align*}
Theorem 2
Let $\bar{u}(\cdot)$ be the optimal control of the system Eq.(55) with the cost functional Eq.(56). Then the costate $y_{k}(t)$, with $y_{k}(t^{f})=0$, satisfies
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=A_{k}^{T}(t)y_{k}(t)+Q_{k}x(t) \tag{57}
\end{equation*}
At the optimal control, the minimum condition holds:
\begin{align*}
&H_{k}(t, x(t), y_{k}(t),\bar{u}(t))=\min\limits_{u\in U}H_{k}(t, x(t), y_{k}(t), u),\\
&\qquad \qquad \qquad\quad a.e.\ t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{58}
\end{align*}
where the Hamiltonian is
\begin{align*}
H_{k}(t, x(t), y_{k}(t), u(t))=&\frac{1}{2}[x^{T}(t)Q_{k}x(t)+u^{T}(t)V_{k}u(t)]\\
&+y_{k}^{T}(t)[A_{k}(t)x(t)+u(t)] \tag{59}
\end{align*}
Furthermore, we get the formulation of the optimal control for Eq.(55) in the next section.
The Formulation of the Optimal Control and Further Remarks
In the optimal control problem of Eq.(55), for $t\in[\tau_{k-1}, \tau_{k})$ the minimum condition Eq.(58) reduces to
\begin{equation*}
\min\limits_{u(\cdot)\in U_{ad}}\left\{\frac{u(t)^{T}V_{k}u(t)}{2}+y_{k}^{T}(t)u(t)\right\}=\frac{\bar{u}^{T}(t)V_{k}\bar{u}(t)}{2}+y_{k}^{T}(t)\bar{u}(t) \tag{60}
\end{equation*}
Based on the minimum condition of the above equation, we get the optimal control input over $[\tau_{k-1}, \tau_{k})$. Define
\begin{equation*}
\tilde{H}(y_{k}(t), u(t))=\frac{1}{2}u^{T}(t)V_{k}u(t)+y_{k}^{T}(t)u(t)
\end{equation*}
\begin{equation*}
0= \frac{\partial}{\partial u}\tilde{H}(y_{k}(t), u(t))\vert_{u=\bar{u}(t)}=V_{k}\bar{u}(t)+y_{k}(t) \tag{61}
\end{equation*}
Then we obtain the optimal control input as
\begin{equation*}
\bar{u}(t)=-V_{k}^{-1}y_{k}(t) \tag{62}
\end{equation*}
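As a numerical sketch (our illustration, not the paper's algorithm; all matrices below are hypothetical), Eqs.(57) and (62) suggest a forward-backward sweep on a single interval: integrate the state forward under the current control, integrate the costate backward from $y_{k}(t^{f})=0$, update $u=-V_{k}^{-1}y_{k}$, and iterate until convergence.

```python
import numpy as np

def fb_sweep(A, Q, V, x0, tf, N=2000, iters=50):
    """Iteratively solve Eqs.(57) and (62) on one interval [0, tf] by
    forward-Euler / backward-Euler sweeps."""
    dt = tf / N
    n = len(x0)
    u = np.zeros((N + 1, n))
    Vinv = np.linalg.inv(V)
    for _ in range(iters):
        # forward pass: dx/dt = A x + u, x(0) = x0
        x = np.zeros((N + 1, n))
        x[0] = x0
        for i in range(N):
            x[i + 1] = x[i] + dt * (A @ x[i] + u[i])
        # backward pass: -dy/dt = A^T y + Q x, y(tf) = 0   (Eq.(57))
        y = np.zeros((N + 1, n))
        for i in range(N, 0, -1):
            y[i - 1] = y[i] + dt * (A.T @ y[i] + Q @ x[i])
        u_new = -(Vinv @ y.T).T              # control update, Eq.(62)
        if np.max(np.abs(u_new - u)) < 1e-10:
            u = u_new
            break
        u = u_new
    return x, y, u
```

The iteration is a simple fixed-point scheme; it converges when the horizon $t^{f}$ is short enough that the map $u\mapsto -V_{k}^{-1}y_{k}[u]$ is a contraction, and more robust schemes would be needed otherwise.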
The main results mentioned here also cover the two special cases of pure impulsive systems and pure switched systems.
We consider an impulsive system, and denote the control input by
\begin{equation*}
v_{1}(t, x, u)=u_{2}(t, x)+u(t) \tag{63}
\end{equation*}
Then the hybrid impulsive system (HIS) is described by
\begin{equation*}
\begin{cases}
\dot{x}(t)=Ax(t)+f(t, x)+u(t), & t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}]\\
q(\tau_{k})=B_{k}x(\tau_{k}), & k=1,2, \cdots, l\\
x(\tau_{0})=x_{0}, q(\tau_{0})=0 &
\end{cases}\tag{64}
\end{equation*}
We suppose that the optimal control problem of the HIS Eq.(64) is formulated as in Section II. Then the global minimum principle of the HIS Eq.(64) is stated as follows.
Corollary 1
Consider the HIS Eq.(64) and let $\bar{u}(\cdot)$ be the optimal control. Then the costate $y_{k}(t)$, with $y_{k}(t^{f})=0$, satisfies
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=\left[A^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{65}
\end{equation*}
and the minimum condition holds:
\begin{align*}
&L(t, x(t), u(t))+\langle y_{k}(t), Ax(t)+f(t, x(t))+u(t)\rangle\\
&\qquad =\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), Ax(t)+f(t, x(t))+u\rangle\},\\
&\qquad \qquad\qquad a.e.\ t\in[\tau_{k-1}, \tau_{k})\ \text{or}\ t\in[\tau_{l}, t^{f}] \tag{66}
\end{align*}
If the HISS Eq.(12) has no impulsive control input, i.e.,
\begin{equation*}
v_{2}(t, x, u)=u_{1}(t, x)+u(t) \tag{67}
\end{equation*}
then the system reduces to the pure switched system
\begin{equation*}
\dot{x}(t)=A_{k}x(t)+f(t, x)+u(t),\quad t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{68}
\end{equation*}
We define the cost functional matrix as follows. For convenience, we just give its $k$-th diagonal entry
\begin{equation*}
J_{k}(u_{k}(\cdot))=\sum\limits_{i=k}^{l+1}\int\nolimits_{\tau_{i-1}}^{\tau_{i}}L(s, x(s, u_{k}(s)), u_{k}(s))ds \tag{69}
\end{equation*}
The proof is similar to that of Theorem 1; one just needs to omit the discussion of the discrete event states $q(\tau_{k})$.
Corollary 2
Let $\bar{u}(\cdot)$ be the optimal control of the switched system Eq.(68). Then the costate $y_{k}(t)$, with $y_{k}(t^{f})=0$, satisfies
\begin{equation*}
- \frac{d}{dt}y_{k}(t)=\left[A_{k}^{T}+\frac{\partial}{\partial x}f(t, x(t))\right]y_{k}(t)+\frac{\partial}{\partial x}L(t, x(t),\bar{u}(t)) \tag{70}
\end{equation*}
and the minimum condition holds:
\begin{align*}
&L(t, x(t), u(t))+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u(t)\rangle\\
= &\min\limits_{u\in U}\{L(t, x(t), u)+\langle y_{k}(t), A_{k}x(t)+f(t, x(t))+u\rangle\},\\
&\qquad \qquad a.e.\ t\in[\tau_{k-1}, \tau_{k})\quad \text{or}\quad t\in[\tau_{l}, t^{f}] \tag{71}
\end{align*}
Conclusions
In this paper, we have introduced an optimal control problem for a general class of HISS with free terminal states. We have established and proved the necessary condition for this optimal control problem, namely, the minimum principle of the HISS over the global running time interval. In the proof, the general variational method and the matrix cost functional have been utilized. Based on the main result, we gave the minimum principle and the optimal control formulation for a special example of the HISS with linear time-varying continuous subsystems. Furthermore, the minimum principles of pure impulsive systems and pure switched systems have also been obtained. In summary, a method for dealing with global optimal control problems of the HISS has been proposed.
Acknowledgement
Valuable discussions with Professor Liu Yungang are very much appreciated. The authors would like to thank the anonymous reviewers for their constructive and insightful comments for improving the quality of this work.