TY - JOUR
T1 - A projected gradient and constraint linearization method for nonlinear model predictive control
AU - Torrisi, Giampaolo
AU - Grammatico, Sergio
AU - Smith, Roy S.
AU - Morari, Manfred
PY - 2018
Y1 - 2018
N2 - Projected gradient descent denotes a class of iterative methods for solving optimization programs. In convex optimization, its computational complexity is relatively low whenever the projection onto the feasible set is relatively easy to compute. On the other hand, when the problem is nonconvex, e.g., because of nonlinear equality constraints, the projection becomes hard and thus impractical. In this paper, we propose a projected gradient method for nonlinear programs that only requires projections onto the linearization of the nonlinear constraints around the current iterate, similar to sequential quadratic programming (SQP). The proposed method falls neither into the class of projected gradient descent approaches, because the projection is not performed onto the original nonlinear manifold, nor into that of SQP, since second-order information is not used. For nonlinear smooth optimization problems, we assess local and global convergence to a Karush–Kuhn–Tucker point of the original problem. Further, we show that nonlinear model predictive control is a promising application of the proposed method, due to the sparsity of the resulting optimization problem.
KW - First-order methods
KW - Nonlinear model predictive control
KW - Nonlinear programming
KW - Sequential quadratic programming
UR - http://resolver.tudelft.nl/uuid:6544e41b-80fb-4023-8ff6-0d95e2755c34
UR - http://www.scopus.com/inward/record.url?scp=85049465353&partnerID=8YFLogxK
U2 - 10.1137/16M1098103
DO - 10.1137/16M1098103
M3 - Article
AN - SCOPUS:85049465353
SN - 0363-0129
VL - 56
SP - 1968
EP - 1999
JO - SIAM Journal on Control and Optimization
JF - SIAM Journal on Control and Optimization
IS - 3
ER -