TY - JOUR
T1 - Learning Interaction-Aware Guidance for Trajectory Optimization in Dense Traffic Scenarios
AU - Brito, Bruno
AU - Agarwal, Achin
AU - Alonso-Mora, Javier
PY - 2022
Y1 - 2022
N2 - Autonomous navigation in dense traffic scenarios remains challenging for autonomous vehicles (AVs) because the intentions of other drivers are not directly observable and AVs have to deal with a wide range of driving behaviors. To maneuver through dense traffic, AVs must be able to reason about how their actions affect others (interaction model) and exploit this reasoning to navigate safely. This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios. We explore the connection between human driving behavior and velocity changes during interaction. Hence, we propose to learn, via deep reinforcement learning (RL), an interaction-aware policy that provides global guidance about the cooperativeness of other vehicles to an optimization-based planner, which ensures safety and kinematic feasibility through constraint satisfaction. The learned policy can reason about interactions and guide the local optimization-based planner to proactively merge in dense traffic while remaining safe in case other vehicles do not yield. We present qualitative and quantitative results in highly interactive simulation environments (highway merging and unprotected left turns) against two baseline approaches, a learning-based and an optimization-based method. The results show that our method significantly reduces the number of collisions and increases the success rate with respect to both the learning-based and optimization-based baselines.
KW - Deep reinforcement learning
KW - dense traffic
KW - motion planning
KW - safe learning
KW - trajectory optimization
UR - http://www.scopus.com/inward/record.url?scp=85127832049&partnerID=8YFLogxK
DO - 10.1109/TITS.2022.3160936
M3 - Article
AN - SCOPUS:85127832049
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
SN - 1524-9050
ER -