TY - GEN
T1 - Towards Efficient Personalized Driver Behavior Modeling with Machine Unlearning
AU - Song, Qun
AU - Tan, Rui
AU - Wang, Jianping
PY - 2023
Y1 - 2023
AB - Driver Behavior Modeling (DBM) aims to predict and model human driving behaviors and is typically incorporated into Advanced Driver Assistance Systems (ADAS) to enhance transportation safety and improve the driving experience. Inverse reinforcement learning (IRL) is a prevailing DBM technique that models the driving policy by recovering an unknown internal reward function from human driver demonstrations. However, the latest IRL-based design is inefficient due to its laborious manual feature engineering process. Moreover, the reward function usually suffers increased prediction errors when deployed on unseen vehicles. In this paper, we propose a novel deep learning-based reward function for IRL-based DBM with efficient model personalization via machine unlearning. We evaluate our approach on a highway simulation constructed from the real-world human driving dataset NGSIM and deploy it on both a server GPU and an embedded GPU. The evaluation results show that our approach achieves higher prediction accuracy than the latest IRL-based DBM approach, which uses a weighted sum of trajectory features as the reward function, and that our model personalization method obtains the highest accuracy and lowest latency among the baselines.
KW - Driver behavior modeling
KW - inverse reinforcement learning
KW - machine unlearning
KW - model personalization
KW - neural network
UR - http://www.scopus.com/inward/record.url?scp=85159780717&partnerID=8YFLogxK
DO - 10.1145/3576914.3587489
M3 - Conference contribution
AN - SCOPUS:85159780717
T3 - ACM International Conference Proceeding Series
SP - 31
EP - 36
BT - Proceedings of 2023 Cyber-Physical Systems and Internet-of-Things Week, CPS-IoT Week 2023 - Workshops
PB - ACM
T2 - 2023 Cyber-Physical Systems and Internet-of-Things Week, CPS-IoT Week 2023
Y2 - 9 May 2023 through 12 May 2023
ER -