TY - JOUR
T1 - Approximate Dynamic Programming for Constrained Piecewise Affine Systems With Stability and Safety Guarantees
AU - He, Kanghui
AU - Shi, Shengling
AU - van den Boom, Ton
AU - De Schutter, Bart
PY - 2025
N2 - Infinite-horizon optimal control of constrained piecewise affine (PWA) systems has been addressed approximately by hybrid model predictive control (MPC), which, however, suffers from computational limitations in both offline design and online implementation. In this article, we consider an alternative approach based on approximate dynamic programming (ADP), an important class of methods in reinforcement learning. We incorporate nonconvex union-of-polyhedra state constraints and linear input constraints into ADP by designing PWA penalty functions, and we adopt PWA function approximation, which admits a mixed-integer encoding for implementing ADP. The main advantage of the proposed ADP method is its online computational efficiency. In particular, we propose two control policies that require solving either a mixed-integer linear program of smaller scale than that of conventional hybrid MPC or a single convex quadratic program, depending on whether the policy is determined implicitly online or computed explicitly offline. We characterize the stability and safety properties of the closed-loop system, as well as the suboptimality of the proposed policies, by quantifying the approximation errors of the value functions and policies. We also develop an offline mixed-integer-linear-programming-based method to certify the reliability of the proposed approach. Simulation results on an inverted pendulum with elastic walls and on an adaptive cruise control problem validate the control performance in terms of constraint satisfaction and CPU time.
KW - Approximate dynamic programming (ADP)
KW - constrained control
KW - piecewise affine (PWA) systems
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85213234384&partnerID=8YFLogxK
DO - 10.1109/TSMC.2024.3515645
M3 - Article
AN - SCOPUS:85213234384
SN - 2168-2216
VL - 55
SP - 1722
EP - 1734
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
IS - 3
ER -