Visually-guided motion planning for autonomous driving from interactive demonstrations

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

The successful integration of autonomous robots in real-world environments strongly depends on their ability to reason from context and take socially acceptable actions. Current autonomous navigation systems mainly rely on geometric information and hard-coded rules to induce safe and socially compliant behaviors, yet in unstructured urban scenarios these approaches can become costly and suboptimal. In this paper, we introduce a motion planning framework consisting of two components: a data-driven policy that uses visual inputs and human feedback to generate socially compliant driving behaviors (encoded by high-level decision variables), and a local trajectory optimization method that executes these behaviors while ensuring safety. In particular, we employ Interactive Imitation Learning to jointly train the policy with the local planner, a Model Predictive Controller (MPC), resulting in safe and human-like driving behaviors. Our approach is validated in realistic simulated urban scenarios. Qualitative results show that the learned behaviors resemble human driving. Furthermore, navigation performance improves substantially over prior trajectory optimization frameworks in terms of safety (number of collisions) and over prior learning-based frameworks in terms of data efficiency, broadening the operational domain of MPC to more realistic autonomous driving scenarios.
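For illustration, the sketch below shows how such a two-component architecture could be wired together: a vision-based policy predicts high-level decision variables, a receding-horizon optimizer turns them into controls, and human corrections are aggregated to retrain the policy. This is a minimal sketch under stated assumptions, not the paper's implementation: the network architecture, the two-dimensional decision vector (target speed, lateral offset), the integrator dynamics inside simple_mpc, and a DAgger-style correction loop standing in for the paper's interactive imitation learning procedure are all illustrative placeholders.

    # Illustrative sketch only -- names, shapes, and dynamics are assumptions,
    # not the authors' actual network, decision variables, or MPC formulation.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.optimize import minimize

    class DecisionPolicy(nn.Module):
        """Maps a camera image to high-level decision variables for the MPC."""
        def __init__(self, n_decisions: int = 2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Hypothetical decision vector: (target speed, lateral offset).
            self.head = nn.Linear(32, n_decisions)

        def forward(self, image: torch.Tensor) -> torch.Tensor:
            return self.head(self.encoder(image))

    def simple_mpc(state: np.ndarray, decision: np.ndarray, horizon: int = 10):
        """Stand-in local planner: tracks the policy's decision variables with a
        short-horizon quadratic cost under toy single-integrator dynamics."""
        def cost(u_flat):
            u = u_flat.reshape(horizon, 2)
            x, c = state.copy(), 0.0
            for ut in u:
                x = x + 0.1 * ut                       # x_{t+1} = x_t + dt * u_t
                c += np.sum((x - decision) ** 2) + 1e-2 * np.sum(ut ** 2)
            return c
        res = minimize(cost, np.zeros(horizon * 2))
        return res.x.reshape(horizon, 2)[0]            # apply only the first control

    def interactive_training(policy, images, human_labels, epochs: int = 5):
        """DAgger-style loop: human-corrected decision variables are aggregated
        into the dataset and the policy is retrained by supervised regression."""
        opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
        dataset = list(zip(images, human_labels))
        for _ in range(epochs):
            for image, label in dataset:
                pred = policy(image.unsqueeze(0))
                loss = nn.functional.mse_loss(pred, label.unsqueeze(0))
                opt.zero_grad()
                loss.backward()
                opt.step()
        return policy

    # Hypothetical usage:
    #   policy = DecisionPolicy()
    #   decision = policy(torch.rand(3, 96, 96).unsqueeze(0)).detach().numpy()[0]
    #   control = simple_mpc(np.zeros(2), decision)

The receding-horizon structure (only the first optimized control is applied before replanning) mirrors how an MPC-based local planner would consume the policy's decision variables at each control step.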

Original language: English
Article number: 105277
Number of pages: 10
Journal: Engineering Applications of Artificial Intelligence
Volume: 116
DOIs
Publication status: Published - 2022

Keywords

  • Autonomous driving
  • Deep learning
  • Human in the loop
  • Interactive Imitation Learning
  • Model Predictive Control
  • Motion planning
