Model-Reference Reinforcement Learning for Collision-Free Tracking Control of Autonomous Surface Vehicles

Qingrui Zhang, Wei Pan, Vasso Reppa

Research output: Contribution to journal › Article › Scientific › peer-review


This paper presents a novel model-reference reinforcement learning algorithm for the intelligent tracking control of uncertain autonomous surface vehicles with collision avoidance. The proposed control algorithm combines a conventional control method with reinforcement learning to enhance control accuracy and intelligence. In the proposed control design, a nominal system is considered for the design of a baseline tracking controller using a conventional control approach. The nominal system also defines the desired behaviour of uncertain autonomous surface vehicles in an obstacle-free environment. Thanks to reinforcement learning, the overall tracking controller is capable of compensating for model uncertainties while simultaneously achieving collision avoidance in environments with obstacles. In comparison to traditional deep reinforcement learning methods, our proposed learning-based control provides stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm on an autonomous surface vehicle example.
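The architecture described in the abstract, a baseline controller designed for a nominal model plus a learned term that compensates for uncertainty, can be illustrated with a minimal sketch. The plant model, controller gains, and the linear "policy" below are hypothetical placeholders, not the paper's actual vehicle dynamics or reinforcement learning algorithm; the sketch only shows the composite control structure u = u_baseline + u_learned on a one-dimensional surrogate system.

```python
import numpy as np

def baseline_pd(x, v, x_ref, kp=4.0, kd=3.0):
    # Baseline tracking controller designed for the nominal
    # (uncertainty-free) double-integrator model.
    return kp * (x_ref - x) - kd * v

def learned_compensation(x, v, w):
    # Stand-in for the RL policy output: a linear feature model whose
    # weights would, in the paper's setting, be trained to cancel model
    # uncertainty (and to steer around obstacles).
    return w @ np.array([1.0, x, v])

def plant_step(x, v, u, dt=0.01, drag=0.8):
    # True plant = nominal double integrator + an unknown drag term
    # playing the role of model uncertainty.
    a = u - drag * v
    return x + dt * v, v + dt * a

# Closed loop with the composite control u = u_baseline + u_learned.
# Weights chosen here to cancel the drag exactly, mimicking a
# well-trained compensator; in practice RL would find them.
w = np.array([0.0, 0.0, 0.8])
x, v = 0.0, 0.0
for _ in range(2000):
    u = baseline_pd(x, v, x_ref=1.0) + learned_compensation(x, v, w)
    x, v = plant_step(x, v, u)
# With the uncertainty cancelled, the loop behaves like the nominal
# PD-controlled system and x converges to the reference 1.0.
```

The design point this illustrates is that the learned term only needs to close the gap between the true plant and the nominal model the baseline controller was designed for, which is what makes the approach more sample-efficient than learning the whole controller from scratch.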

Original language: English
Journal: IEEE Transactions on Intelligent Transportation Systems
Publication status: Accepted/In press - 15 Jun 2021


  • Analytical models
  • Autonomous surface vehicles
  • Collision avoidance
  • Control architecture
  • Reinforcement learning
  • Stability analysis
  • Tracking
  • Trajectory
  • Uncertainty

