Abstract
This work proposes an online policy iteration procedure for the synthesis of sub-optimal control laws for uncertain Linear Time Invariant (LTI) Asymptotically Null-Controllable with Bounded Inputs (ANCBI) systems. The proposed policy iteration method relies on: a policy evaluation step with a piecewise quadratic Lyapunov function in both the state and the deadzone functions of the input signals; a policy improvement step which simultaneously guarantees closeness to optimality (exploitation) and persistence of excitation (exploration). The proposed approach guarantees convergence of the trajectory to a neighborhood of the origin. Moreover, the trajectories can be made arbitrarily close to the optimal one provided that the rate at which the value function and the control policy are updated is fast enough. The inequalities required to hold at each policy evaluation step can be efficiently solved with semidefinite programming (SDP) solvers. A numerical example illustrates the results.
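To illustrate the policy iteration structure described above (alternating policy evaluation and policy improvement), the following is a minimal, hypothetical sketch for a scalar discrete-time LQR problem. It omits the paper's input saturation, deadzone terms, and piecewise quadratic Lyapunov function, and uses a plain quadratic value function V(x) = p·x²; all function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical, simplified policy iteration for a scalar system
# x+ = a*x + b*u with stage cost q*x^2 + r*u^2 and linear policy u = -k*x.
# This is NOT the paper's method (no bounded inputs, no deadzone terms);
# it only sketches the evaluate/improve loop the abstract refers to.

def policy_evaluation(a, b, q, r, k):
    """Solve the scalar Lyapunov equation p = q + r*k**2 + p*(a - b*k)**2."""
    acl = a - b * k  # closed-loop dynamics: x+ = acl * x
    assert abs(acl) < 1, "policy must be stabilizing"
    return (q + r * k ** 2) / (1 - acl ** 2)

def policy_improvement(a, b, r, p):
    """Greedy gain minimizing the one-step cost-to-go: k = a*b*p/(r + b^2*p)."""
    return a * b * p / (r + b ** 2 * p)

def policy_iteration(a, b, q, r, k0, iters=30):
    """Alternate evaluation and improvement from a stabilizing initial gain."""
    k = k0
    for _ in range(iters):
        p = policy_evaluation(a, b, q, r, k)
        k = policy_improvement(a, b, r, p)
    return p, k

# Example: a = b = q = r = 1; the Riccati fixed point is the golden ratio.
p, k = policy_iteration(1.0, 1.0, 1.0, 1.0, k0=0.5)
```

In this toy setting the iterates converge to the solution of the discrete algebraic Riccati equation; the paper replaces the closed-form evaluation step with SDP-checkable inequalities and handles the saturated-input case.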
Original language | English |
---|---|
Title of host publication | Proceedings of the 2016 American Control Conference (ACC 2016) |
Editors | George Chiu, Katie Johnson, Danny Abramovitch |
Place of Publication | Piscataway, NJ, USA |
Publisher | IEEE |
Pages | 5734-5739 |
ISBN (Electronic) | 978-1-4673-8682-1 |
DOIs | |
Publication status | Published - 2016 |
Event | American Control Conference (ACC), 2016 - Boston, MA, United States. Duration: 6 Jul 2016 → 8 Jul 2016 |
Conference
Conference | American Control Conference (ACC), 2016 |
---|---|
Abbreviated title | ACC 2016 |
Country/Territory | United States |
City | Boston, MA |
Period | 6/07/16 → 8/07/16 |
Bibliographical note
Accepted Author Manuscript
Keywords
- Optimal control
- Linear systems
- Convergence
- Asymptotic stability
- Lyapunov methods
- Estimation
- Trajectory