I. Introduction
Noncooperative game theory [1]–[3] provides a framework for solving a range of control engineering problems. In a differential game formulation, the controlled system is driven by multiple inputs, each computed by a different player individually seeking to optimize its own performance function. The control objective is to determine a set of policies that are admissible [4], i.e., control policies that guarantee stability of the dynamic system and minimize the individual performance functions to yield an equilibrium. A Nash differential game consists of multiple players making simultaneous decisions, where each player attains an outcome that cannot be improved by a unilateral change in strategy. Players are committed to following a predetermined strategy based on knowledge of the initial state, the system model, and the cost functional to be minimized. Solution techniques for the Nash equilibrium are classified by the amount of information available to the players (e.g., open-loop, feedback), the objectives of the players (zero-sum or nonzero-sum), the planning horizon (finite or infinite), and the nature of the dynamic constraints (e.g., continuous, discrete, linear, nonlinear).
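The defining property of a Nash equilibrium — that no player can improve its outcome by a unilateral change in strategy — can be illustrated with a minimal sketch for a static two-player matrix game (the payoff matrices below are hypothetical and chosen only for illustration, not drawn from the differential game setting of this paper):

```python
def is_nash(payoff_a, payoff_b, i, j):
    """Check whether the pure-strategy pair (i, j) is a Nash equilibrium
    of a two-player matrix game with payoffs (higher is better)."""
    # Player A must not gain by switching rows while B stays at column j.
    if any(payoff_a[k][j] > payoff_a[i][j] for k in range(len(payoff_a))):
        return False
    # Player B must not gain by switching columns while A stays at row i.
    if any(payoff_b[i][k] > payoff_b[i][j] for k in range(len(payoff_b[0]))):
        return False
    return True

# Hypothetical prisoner's-dilemma payoffs: row/column 0 = cooperate,
# row/column 1 = defect. (Defect, Defect) is the unique pure-strategy
# Nash equilibrium, even though mutual cooperation pays both players more.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(A, B, i, j)]
print(equilibria)  # -> [(1, 1)]
```

The same unilateral-deviation test underlies the differential game case, where the pure strategies are replaced by admissible feedback policies and the payoff matrices by the players' cost functionals.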