Why did you predict that? Towards explainable artificial neural networks for travel demand analysis

Ahmad Alwosheel*, Sander van Cranenburgh, Caspar G. Chorus

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

7 Citations (Scopus)
58 Downloads (Pure)

Abstract

Artificial Neural Networks (ANNs) are rapidly gaining popularity in transportation research in general and travel demand analysis in particular. While ANNs typically outperform conventional methods in terms of predictive performance, they suffer from limited explainability. That is, it is very difficult to assess whether or not particular predictions made by an ANN are based on intuitively reasonable relationships embedded in the model. As a result, it is difficult for analysts to gain trust in ANNs. In this paper, we show that often-used approaches using perturbation (sensitivity analysis) are ill-suited for gaining an understanding of the inner workings of ANNs. Subsequently, and this is the main contribution of this paper, we introduce to the domain of transportation an alternative method, inspired by recent progress in the field of computer vision. This method is based on a re-conceptualisation of the idea of ‘heat maps’ to explain the predictions of a trained ANN. To create a heat map, a prediction of an ANN is propagated backward in the ANN towards the input variables, using a technique called Layer-wise Relevance Propagation (LRP). The resulting heat map shows the contribution of each input value (for example, the travel time of a certain mode) to a given travel mode choice prediction. By doing this, the LRP-based heat map reveals the rationale behind the prediction in a way that is understandable to human analysts. If the rationale makes sense to the analyst, the trust in the prediction, and, by extension, in the trained ANN as a whole, will increase. If the rationale does not make sense, the analyst may choose to adapt or re-train the ANN or decide not to use it at all. We show that by reconceptualising the LRP methodology towards the choice modelling and travel demand analysis contexts, it can be put to effective use in application domains well beyond the field of computer vision, for which it was originally developed.
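
To make the backward relevance pass described in the abstract concrete, the sketch below implements a generic epsilon-rule variant of Layer-wise Relevance Propagation for a small feedforward network in NumPy. This is not the authors' implementation: the network architecture, the random weights, the toy choice attributes (travel time and cost for car and train) and the function name lrp_epsilon are assumptions introduced purely for illustration, and the paper may rely on a different LRP rule.

    import numpy as np

    def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-6):
        # Propagate relevance from the output layer back to the inputs
        # using the epsilon rule of Layer-wise Relevance Propagation (LRP).
        #   weights[l]     : weight matrix mapping layer l to layer l+1
        #   biases[l]      : bias vector of layer l+1
        #   activations[l] : activations of layer l (activations[0] = inputs)
        #   relevance_out  : relevance assigned to the output neurons
        relevance = relevance_out
        for l in reversed(range(len(weights))):
            a = activations[l]                          # activations feeding layer l+1
            w = weights[l]
            z = a @ w + biases[l]                       # pre-activations of layer l+1
            z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser
            s = relevance / z                           # relevance per unit of pre-activation
            relevance = a * (s @ w.T)                   # redistribute relevance to layer l
        return relevance                                # one relevance score per input

    # Hypothetical toy setting: a small "trained" network mapping
    # [time_car, cost_car, time_train, cost_train] to scores for three modes.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

    x = np.array([30.0, 5.0, 45.0, 3.5])                # one choice situation
    h = np.maximum(0.0, x @ W1 + b1)                    # hidden layer (ReLU)
    y = h @ W2 + b2                                     # output score per mode

    relevance_out = np.zeros_like(y)                    # explain only the predicted mode
    relevance_out[y.argmax()] = y[y.argmax()]

    heat_map = lrp_epsilon([W1, W2], [b1, b2], [x, h], relevance_out)
    print(dict(zip(["time_car", "cost_car", "time_train", "cost_train"], heat_map)))

In the resulting vector, positive scores mark input values that pushed the network towards the predicted mode and negative scores mark values that pushed against it; visualised per attribute, these scores form the kind of heat map the abstract refers to.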

Original language: English
Article number: 103143
Journal: Transportation Research Part C: Emerging Technologies
Volume: 128
DOIs
Publication status: Published - 2021

Keywords

  • Artificial Neural Networks
  • Black-box issue
  • Explainability
  • Travel choice analysis
