LLM-Based Evaluation Methodology of Explanation Strategies

Ege Soyarar*, Reyhan Aydogan, Berk Buzcu, Davide Calvaresi

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

As data privacy regulations such as the EU AI Act and the EU Data Act become increasingly stringent, processing real user data for AI models like movie recommendation systems has grown more challenging. Moreover, human-centric data collection and evaluation of Explainable AI (XAI) systems are often costly and time-consuming, making them hard to sustain. Hence, this study adopts the Synthetic Behavior Generation (SBG) approach, leveraging large language models (LLMs) to evaluate AI explanations while ensuring compliance with regulations and providing a cost-effective alternative to human feedback. To assess the quality of these explanations, we employ three different LLMs that are fed synthetically generated user behaviors and evaluate the explanations of an AI system as if they were real users. The evaluation focuses on key criteria such as convincingness, clarity, accuracy, and impact on decision-making, enabling a thorough assessment of explanation effectiveness. The results indicate that LLMs can deliver structured and consistent evaluations based on the provided synthetic user behavior.
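To make the described pipeline concrete, below is a minimal, illustrative sketch of one judge-LLM evaluation call. The rubric criteria (convincingness, clarity, accuracy, decision impact) come from the abstract; the persona fields, prompt wording, model name, and use of the OpenAI client are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of the LLM-as-synthetic-user evaluation loop described in
# the abstract. Rubric criteria are from the paper; the synthetic persona,
# prompt, and model name are hypothetical.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = ["convincingness", "clarity", "accuracy", "decision_impact"]

# Hypothetical synthetic user behavior (SBG output) for a movie recommender.
synthetic_user = {
    "liked": ["Inception", "Interstellar"],
    "disliked": ["Grown Ups"],
    "viewing_habits": "watches sci-fi on weekends, skips comedies",
}

explanation = (
    "We recommend 'Arrival' because you rated 'Interstellar' highly "
    "and you frequently watch science-fiction films."
)

def evaluate(explanation: str, user: dict, model: str = "gpt-4o") -> dict:
    """Ask one judge LLM to rate an explanation as if it were this user."""
    prompt = (
        "Act as the user described by this behavior profile:\n"
        f"{json.dumps(user, indent=2)}\n\n"
        f"Rate the recommendation explanation below on {', '.join(CRITERIA)} "
        "from 1 (poor) to 5 (excellent). Reply with a JSON object whose keys "
        f"are exactly {CRITERIA}.\n\nExplanation: {explanation}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(resp.choices[0].message.content)

scores = evaluate(explanation, synthetic_user)
print(scores)  # e.g. {"convincingness": 4, "clarity": 5, ...}
```

In the paper's setup, such a call would be repeated across three different judge LLMs and the resulting scores compared for structure and consistency.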

Original language: English
Title of host publication: Explainable, Trustworthy, and Responsible AI and Multi-Agent Systems - 7th International Workshop, EXTRAAMAS 2025, Revised Selected Papers
Editors: Davide Calvaresi, Amro Najjar, Andrea Omicini, Giovanni Ciatto, Reyhan Aydogan, Rachele Carli, Kary Främling, Simona Tiribelli
Publisher: Springer
Pages: 85-103
Number of pages: 19
ISBN (Print): 9783032013989
Publication status: Published - 2026
Event: 7th International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2025 - Detroit, United States
Duration: 19 May 2025 - 20 May 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15936 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 7th International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, EXTRAAMAS 2025
Country/Territory: United States
City: Detroit
Period: 19/05/25 - 20/05/25

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.

Keywords

  • Explainable AI (XAI)
  • Explanation Evaluation
  • Large Language Models (LLMs)
  • Recommender Systems
  • Synthetic Data Generation
