Document Type

Article

Abstract

This paper proposes an event-triggered optimal adaptive output feedback control design approach that utilizes integral reinforcement learning (IRL) for linear time-invariant systems with state delay and uncertain internal dynamics. In the proposed approach, the general optimal control problem is formulated in a game-theoretic framework by treating the event-triggering threshold and the optimal control policy as players. A cost function is defined, and a value functional that includes the delayed system output is considered. First, by using the value functional and applying stationarity conditions to the Hamiltonian function, the output game delay algebraic Riccati equation (OGDARE) and the optimal control policy are derived for the case when the internal system dynamics are known. Then, to relax the requirement of known internal dynamics, a hybrid learning scheme using measured output is proposed for tuning the value function parameters, which in turn are employed to compute the estimated optimal control policy. The overall closed-loop system is shown to be asymptotically stable by selecting an appropriate event-triggering condition, for both known and partially uncertain system dynamics. A simulation example is given to substantiate the efficacy of the theoretical claims.
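For orientation, in zero-sum game formulations of this kind the event-triggering error is typically cast as the adversarial player opposing the control policy. A minimal sketch of such a cost functional, assuming an output \(y\), control input \(u\), event-triggering error \(e\), weighting matrices \(Q\) and \(R\), and attenuation level \(\gamma\) (these symbols are illustrative assumptions, not taken from the paper), might read

\[
J = \int_{0}^{\infty} \left( y^{\top}(t)\, Q\, y(t) + u^{\top}(t)\, R\, u(t) - \gamma^{2}\, e^{\top}(t)\, e(t) \right) dt,
\]

with the associated value functional additionally depending on the delayed output \(y(t-\tau)\), so that applying stationarity conditions to the Hamiltonian yields a delay-dependent Riccati equation of the OGDARE type described above.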
