In this article, we propose a trajectory planning algorithm that enables autonomous surface vessels to navigate urban canals in a socially compliant manner. The key idea behind the proposed algorithm is to adopt an optimal control formulation that penalizes deviations of the autonomous vessel's movements from the nominal movements of human-operated vessels. Consequently, given a pair of origin and destination points, the algorithm finds vessel trajectories that resemble those of human-operated vessels. To formulate this, we use kernel density estimation (KDE) to build a nominal movement model of human-operated vessels from a prerecorded trajectory dataset, and a Kullback-Leibler control cost to measure the deviation of the autonomous vessel's movements from that model. We establish an analogy between our trajectory planning approach and maximum entropy inverse reinforcement learning (MaxEntIRL) to explain how our approach can learn the navigation behavior of human-operated vessels. Unlike MaxEntIRL, however, our approach does not require well-defined bases, often referred to as features, to construct its cost function, as many inverse reinforcement learning approaches do in the trajectory planning context. Through experiments on a dataset of vessel trajectories collected from the automatic identification system (AIS), we demonstrate that the trajectories generated by our approach resemble those of human-operated vessels, and that using them for canal navigation reduces head-on encounters between vessels and improves navigation safety.
- Autonomous vehicle navigation
- Learning from demonstration
- Marine robotics
- Motion and path planning
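The core idea of the abstract, penalizing movements by how far they fall from a KDE model fit to recorded human-operated trajectories, can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's method: it uses synthetic (heading, speed) samples in place of real AIS data, and a negative log-likelihood penalty as a stand-in for the full Kullback-Leibler control cost inside an optimal control problem. All names and numbers here are hypothetical.

```python
# Sketch (assumed setup): fit a KDE to recorded human-like movements and
# score candidate movements by negative log-likelihood, a simplified
# stand-in for the Kullback-Leibler control cost described in the abstract.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical recorded movements of human-operated vessels:
# columns are (heading in radians, speed in m/s).
recorded = np.column_stack([
    rng.normal(0.3, 0.05, 500),  # headings cluster along the canal axis
    rng.normal(2.0, 0.3, 500),   # typical transit speeds
])

# Nominal movement model: KDE over the recorded samples
# (gaussian_kde expects data with shape (dimensions, samples)).
nominal = gaussian_kde(recorded.T)

def deviation_cost(movement):
    """Penalty for a candidate movement: low where human-like, high elsewhere."""
    density = nominal(np.asarray(movement).reshape(2, 1))[0]
    return -np.log(density + 1e-12)

typical = deviation_cost([0.3, 2.0])    # near the recorded data
atypical = deviation_cost([-1.5, 6.0])  # far from the recorded data
print(typical < atypical)  # prints True: human-like movement is cheaper
```

In the article's formulation this per-movement penalty would enter an optimal control problem over whole trajectories; the sketch only shows how a KDE turns recorded behavior into a deviation cost.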