
Adaptive dynamic programming


Abstract:

Unlike the many soft computing applications where it suffices to achieve a "good approximation most of the time," a control system must be stable all of the time. As such, if one desires to learn a control law in real-time, a fusion of soft computing techniques to learn the appropriate control law with hard computing techniques to maintain the stability constraint and guarantee convergence is required. The objective of the paper is to describe an adaptive dynamic programming algorithm (ADPA) which fuses soft computing techniques to learn the optimal cost (or return) functional for a stabilizable nonlinear system with unknown dynamics and hard computing techniques to verify the stability and convergence of the algorithm. Specifically, the algorithm is initialized with a (stabilizing) cost functional and the system is run with the corresponding control law (defined by the Hamilton-Jacobi-Bellman equation), with the resultant state trajectories used to update the cost functional in a soft computing mode. Hard computing techniques are then used to show that this process is globally convergent with stepwise stability to the optimal cost functional/control law pair for an (unknown) input affine system with an input quadratic performance measure (modulo the appropriate technical conditions). Three specific implementations of the ADPA are developed: 1) for the linear case, 2) for the nonlinear case using a locally quadratic approximation to the cost functional, and 3) for the nonlinear case using a radial basis function approximation of the cost functional; all three are illustrated by applications to flight control.
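For context, the following is a minimal sketch of the problem class named above; the symbols f, g, l, R, and V are assumed here for illustration and are not taken from the paper. An input affine system with an input quadratic performance measure can be written

\dot{x} = f(x) + g(x)\,u, \qquad J = \int_0^\infty \bigl( l(x(t)) + u(t)^\top R\, u(t) \bigr)\, dt, \quad R \succ 0,

and, for this class, the control law defined by the Hamilton-Jacobi-Bellman equation in terms of a cost functional V takes the standard form

u(x) = -\tfrac{1}{2}\, R^{-1} g(x)^\top \nabla V(x).

The ADPA alternates between running the plant under the control law defined by the current cost functional and updating that cost functional from the observed state trajectories.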
Page(s): 140 - 153
Date of Publication: 10 December 2002


I. Introduction

The PRESENT work has its roots in the approximate dynamic programming/adaptive critic concept [2], [30], [20], [32], [16], in which soft computing techniques are used to approximate the solution of a dynamic programming algorithm without the explicit imposition of a stability or convergence constraint, and the authors' stability criteria for these algorithms [6], [24]. Alternatively, a number of authors have combined hard and soft computing techniques to develop tracking controllers. These include Lyapunov synthesis techniques using both neural [25], [28], [18], [5], [21] and fuzzy learning laws [28], [29], [17], sliding mode techniques [31], and input–output techniques [9]. The objective of the present paper is to describe an adaptive dynamic programming algorithm (ADPA) which uses soft computing techniques to learn the optimal cost (or return) functional for a stabilizable nonlinear system with unknown dynamics and hard computing techniquesto verify the stability and convergence of the algorithm.
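To make the cost-functional/control-law alternation concrete, the sketch below shows classic model-based policy iteration for the linear quadratic case; the linear implementation of the ADPA is its trajectory-based counterpart. This is illustration only: the plant matrices A, B, the weights Q, R, and the initial gain K0 are hypothetical, and unlike the ADPA, which learns from measured state trajectories of an unknown plant, this sketch assumes a known model.

# Minimal sketch: alternate between evaluating the quadratic cost functional
# V(x) = x' P x induced by the current gain K, and updating K from that cost
# functional (Kleinman-style policy iteration).  All numerical values are
# hypothetical; the paper's ADPA performs the evaluation step from measured
# trajectories without knowledge of (A, B).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                               # closed-loop dynamics under current law
        # Policy evaluation: solve Acl' P + P Acl + Q + K' R K = 0 for P
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: control law defined by the current cost functional
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Hypothetical second-order plant with a stabilizing initial gain.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[1.0, 1.0]])                           # must stabilize A - B @ K0
P, K = policy_iteration_lqr(A, B, Q, R, K0)
print("P =\n", P, "\nK =", K)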

