Evaluating the alignment of AI with human emotions

J. Derek Lomas*, Willem van der Maden, Sohhom Bandyopadhyay, Giovanni Lion, Nirmal Patel, Gyanesh Jain, Yanna Litowsky, Haian Xue, Pieter Desmet

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › Peer-reviewed


Abstract

Generative AI systems are increasingly capable of expressing emotions through text, imagery, voice, and video. Effective emotional expression is particularly relevant for AI systems designed to provide care, support mental health, or promote wellbeing through emotional interactions. This research aims to enhance understanding of the alignment between AI-expressed emotions and human perception. How can we assess whether an AI system successfully conveys a specific emotion? To address this question, we designed a method to measure the alignment between emotions expressed by generative AI and human perceptions. Three generative image models—DALL-E 2, DALL-E 3, and Stable Diffusion v1—were used to generate 240 images expressing five positive and five negative emotions in both humans and robots. Twenty-four participants recruited via Prolific rated the alignment of AI-generated emotional expressions with a string of text (e.g., “A robot expressing the emotion of amusement”). Our results suggest that generative AI models can produce emotional expressions that align well with human emotions; however, the degree of alignment varies significantly depending on the AI model and the specific emotion expressed. We analyze these variations to identify areas for future improvement. The paper concludes with a discussion of the implications of our findings for the design of emotionally expressive AI systems.
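
As a rough illustration of the generation step described in the abstract (this is a minimal sketch, not the authors' actual pipeline; the model checkpoint, emotion list, and generation settings below are assumptions), prompts of the form “A {subject} expressing the emotion of {emotion}” can be sent to an off-the-shelf Stable Diffusion v1 model:

    # Hypothetical sketch of the image-generation step. The checkpoint ID,
    # emotion subset, and output naming are illustrative assumptions, not
    # the paper's exact configuration.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed Stable Diffusion v1 checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    subjects = ["human", "robot"]
    emotions = ["amusement", "joy", "anger", "sadness", "fear"]  # example subset

    for subject in subjects:
        for emotion in emotions:
            prompt = f"A {subject} expressing the emotion of {emotion}"
            image = pipe(prompt).images[0]
            image.save(f"{subject}_{emotion}.png")

The resulting images could then be shown to raters alongside the prompt text to collect the kind of alignment judgments the study describes.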
Original language: English
Pages (from-to): 88-97
Number of pages: 10
Journal: Advanced Design Research
Volume: 2
Issue number: 2
DOIs
Publication status: Published - 2025

Keywords

  • Emotional design
  • Generative AI
  • Machine psychology
  • AI alignment
  • Affective computing

