Interpretability in Neural Information Retrieval

Research output: Thesis › Dissertation (TU Delft)


Abstract

Neural information retrieval (IR) has transitioned from using classical human-defined relevance rules to leveraging complex neural models for retrieval tasks. While benefiting from advances in machine learning (ML), neural IR also inherits several drawbacks, including the opacity of the model's decision-making process. This thesis aims to tackle this issue and enhance the transparency of neural IR models. In particular, our work focuses on understanding which input features neural ranking models rely on to generate a specific ranking list. Our work draws inspiration from interpretable ML. However, we also recognize the unique aspects of IR tasks, which guide our development of methods specifically designed to interpret IR models....
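To make the abstract's goal concrete, the sketch below illustrates one generic way to ask "which input features does a ranker rely on": occlusion-based attribution, where each document token is masked and the drop in the model's relevance score is recorded. This is a minimal illustration only, not the thesis's actual method; the names `toy_score` and `occlusion_attribution` are hypothetical, and the term-overlap scorer merely stands in for a neural ranking model.

```python
# Minimal sketch: occlusion-based feature attribution for a ranking model.
# Assumption: `toy_score` stands in for a neural ranker's query-document
# scoring function; the thesis's own interpretability methods are not
# reproduced here.

from typing import Callable, List, Tuple


def toy_score(query: str, doc_tokens: List[str]) -> float:
    """Stand-in scorer: counts query-term overlap (a neural model in practice)."""
    query_terms = set(query.lower().split())
    return sum(1.0 for t in doc_tokens if t.lower() in query_terms)


def occlusion_attribution(
    score: Callable[[str, List[str]], float],
    query: str,
    doc_tokens: List[str],
    mask: str = "[MASK]",
) -> List[Tuple[str, float]]:
    """Attribute the relevance score to each document token by masking the
    token and measuring how much the model's score drops."""
    base = score(query, doc_tokens)
    attributions = []
    for i, token in enumerate(doc_tokens):
        occluded = doc_tokens[:i] + [mask] + doc_tokens[i + 1:]
        attributions.append((token, base - score(query, occluded)))
    return attributions


if __name__ == "__main__":
    doc = "neural ranking models score documents against a query".split()
    for token, weight in occlusion_attribution(toy_score, "neural ranking", doc):
        print(f"{token:>10}: {weight:+.2f}")
```

Tokens whose removal lowers the score most are, under this perturbation view, the features the model relies on; list-wise ranking explanations, as studied in IR, must additionally account for how such scores interact across the whole ranked list.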
Original language: English
Awarding Institution
  • Delft University of Technology
Supervisors/Advisors
  • Houben, G.J.P.M., Promotor
  • Anand, A., Promotor
Award date: 24 Feb 2025
Electronic ISBNs: 978-94-6518-007-6
DOIs
Publication status: Published - 2025

Keywords

  • interpretable machine learning
  • information retrieval
