Counterfactual explanations for deep learning-based traffic forecasting

Rushan Wang*, Yanan Xin, Yatao Zhang, Fernando Perez-Cruz, Martin Raubal

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Deep learning models are widely used in traffic forecasting and have achieved state-of-the-art prediction accuracy. However, their black-box nature presents challenges for interpretability and usability, particularly when predictions are significantly influenced by complex urban contextual features. This study leverages an explainable artificial intelligence (AI) approach, counterfactual explanations, to enhance the explainability of deep learning-based traffic forecasting models and elucidate their relationships with various contextual features. We present a comprehensive framework that generates counterfactual explanations for traffic forecasting. The study first implements a graph convolutional network (GCN) to predict traffic speed based on historical traffic data and contextual variables. Counterfactual explanations are generated through a multi-objective optimization process with four objectives: validity, proximity, sparsity, and plausibility, each emphasizing a different aspect of the optimization. We investigated the impact of contextual features on traffic speed prediction under varying spatial and temporal conditions. The scenario-driven counterfactual explanations integrate two types of user-defined constraints, directional and weighting constraints, to tailor the search for counterfactual explanations to specific use cases. These tailored explanations benefit machine learning practitioners who aim to understand the model's learning mechanisms and traffic domain experts who seek insights into the factors necessary to alter traffic conditions. The results showcase the effectiveness of counterfactual explanations in revealing traffic patterns learned by deep learning models and in explaining the relationship between traffic predictions and contextual features, demonstrating their potential for interpreting black-box deep learning models.
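To illustrate the multi-objective formulation described above, the sketch below scores a candidate counterfactual against the original input on the four objectives named in the abstract. This is a minimal, hypothetical sketch, not the paper's implementation: the function names, the choice of L2 distance for proximity, the L0 count for sparsity, and the range-based plausibility penalty are all illustrative assumptions, and `predict` stands in for any trained black-box model such as the GCN.

```python
import numpy as np

def counterfactual_objectives(x, x_cf, predict, target_speed):
    """Score a candidate counterfactual x_cf against the original input x.

    predict: black-box traffic-speed model (e.g. a wrapper around a trained GCN).
    All names and distance choices here are illustrative assumptions,
    not the paper's actual implementation.
    """
    y_cf = predict(x_cf)
    # Validity: the counterfactual prediction should reach the target speed.
    validity = abs(y_cf - target_speed)
    # Proximity: stay close to the original contextual features (L2 distance).
    proximity = np.linalg.norm(x_cf - x)
    # Sparsity: change as few features as possible (L0 count of changed entries).
    sparsity = int(np.count_nonzero(np.abs(x_cf - x) > 1e-6))
    # Plausibility: penalize values outside an observed feature range
    # (placeholder bounds; in practice derived from the training data).
    lo, hi = x.min(), x.max()
    plausibility = float(np.sum(np.maximum(0.0, lo - x_cf)
                                + np.maximum(0.0, x_cf - hi)))
    return validity, proximity, sparsity, plausibility
```

A multi-objective optimizer (e.g. an evolutionary search) would then trade these four scores off against each other, with the user-defined directional and weighting constraints restricting which features may change and by how much.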

Original language: English
Article number: 100176
Number of pages: 18
Journal: Communications in Transportation Research
Volume: 5
Publication status: Published - 2025

Keywords

  • Counterfactual explanations
  • Deep learning
  • Explainable artificial intelligence
  • Traffic forecast

