TY - JOUR
T1 - Adaptive parameterized model predictive control based on reinforcement learning: A synthesis framework
AU - Sun, Dingshan
AU - Jamshidnejad, Anahita
AU - De Schutter, Bart
PY - 2024
AB - Parameterized model predictive control (PMPC) is one of many approaches developed to alleviate the high computational requirements of model predictive control (MPC), and it has been shown to significantly reduce computational complexity while providing control performance comparable to that of conventional MPC. However, PMPC methods still require a sufficiently accurate model to guarantee control performance. To deal with model mismatches caused by a changing environment and by disturbances, this paper proposes a novel framework that uses reinforcement learning (RL) to adapt all components of the PMPC scheme online. More specifically, the framework integrates various strategies for adjusting the different components of PMPC (e.g., the objective function, the state-feedback control function, the optimization settings, and the system model), resulting in a synthesis framework for RL-based adaptive PMPC. We show that existing adaptive (P)MPC approaches can also be embedded in this synthesis framework. The resulting combined RL-PMPC framework provides an efficient MPC approach that can deal with model mismatches. A case study is performed in which the framework is applied to freeway traffic control. Simulation results show that, for the given case study, the RL-based adaptive PMPC approach reduces computational complexity by 98% on average compared to conventional MPC, while achieving better control performance than the other controllers in the presence of model mismatches and disturbances.
KW - Deep reinforcement learning
KW - Freeway traffic management
KW - Learning-based control
KW - Model predictive control
KW - Parameterized control
KW - Synthesis control framework
UR - http://www.scopus.com/inward/record.url?scp=85199293485&partnerID=8YFLogxK
DO - 10.1016/j.engappai.2024.109009
M3 - Article
AN - SCOPUS:85199293485
SN - 0952-1976
VL - 136
JF - Engineering Applications of Artificial Intelligence
M1 - 109009
ER -