Leveraging Large Language Models to Identify the Values Behind Arguments

Rithik Appachi Senthilkumar, Amir Homayounirad*, Luciano Cavalcante Siebert

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

Human values capture what people and societies perceive as desirable; they transcend specific situations and serve as guiding principles for action. People’s value systems motivate their positions on issues concerning the economy, society, and politics, among others, influencing the arguments they make. Identifying the values behind arguments can therefore help us find common ground in discourse and uncover the core reasons behind disagreements. Transformer-based large language models (LLMs) have exhibited remarkable performance in language generation and analysis. However, leveraging LLMs in sociotechnical systems that assist with discourse and argumentation requires systematically evaluating their ability to analyse and identify the values behind arguments, an under-explored research direction. Using a multi-level human value taxonomy inspired by the Schwartz Theory of Basic Human Values, we present a systematic and critical evaluation of GPT-3.5-turbo on human value identification from a dataset of multi-cultural arguments, across zero-shot, few-shot, and chain-of-thought prompting strategies, carrying forward from prior research on this task that leveraged a fine-tuned BERT model. We observe that prompting strategies achieve performance close to, but still behind, fine-tuning for value classification. We also detail some challenges associated with LLM-based value classification, offering potential directions for future research.
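As an illustration of the kind of prompting pipeline evaluated in the paper, the sketch below shows a zero-shot value-identification call to gpt-3.5-turbo via the OpenAI Python client. The prompt wording, the value labels, and the output format are illustrative assumptions for this sketch only; they are not the prompts or the taxonomy used by the authors.

```python
# Illustrative sketch only: the prompt template, value labels, and output
# parsing are assumptions for demonstration, not the paper's actual setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A few example value categories inspired by the Schwartz theory; the paper
# uses a multi-level taxonomy whose exact labels are not reproduced here.
VALUE_LABELS = [
    "Self-direction", "Stimulation", "Hedonism", "Achievement", "Power",
    "Security", "Conformity", "Tradition", "Benevolence", "Universalism",
]

def identify_values_zero_shot(argument: str) -> str:
    """Ask the model which value categories underlie an argument (zero-shot)."""
    prompt = (
        "You are given an argument. Identify which of the following human "
        f"value categories it appeals to: {', '.join(VALUE_LABELS)}.\n\n"
        f"Argument: {argument}\n\n"
        "Answer with a comma-separated list of value categories."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output to make evaluation repeatable
    )
    return response.choices[0].message.content

print(identify_values_zero_shot(
    "We should invest in public transport because it reduces emissions "
    "and makes cities more livable for everyone."
))
```

Few-shot and chain-of-thought variants would extend the same call by adding labelled example arguments or a request for step-by-step reasoning before the final label list.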

Original language: English
Title of host publication: Value Engineering in Artificial Intelligence - 2nd International Workshop, VALE 2024, Revised Selected Papers
Editors: Nardine Osman, Luc Steels
Publisher: Springer
Pages: 87-103
Number of pages: 17
ISBN (Print): 9783031854620
DOIs
Publication status: Published - 2025
Event: 2nd International Workshop on Value Engineering in Artificial Intelligence, VALE 2024 - Santiago de Compostela, Spain
Duration: 19 Oct 2024 - 24 Oct 2024

Publication series

Name: Lecture Notes in Computer Science
Volume: 15356 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 2nd International Workshop on Value Engineering in Artificial Intelligence, VALE 2024
Country/Territory: Spain
City: Santiago de Compostela
Period: 19/10/24 - 24/10/24

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.

Keywords

  • Human Values
  • Large Language Models
  • Prompting
