Towards creating a conversational memory for long-term meeting support: predicting memorable moments in multi-party conversations through eye-gaze

Maria Tsfasman, Kristian Fenech, Morita Tarvirdians, Andras Lorincz, Catholijn Jonker, Catharine Oertel

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

1 Citation (Scopus)
155 Downloads (Pure)

Abstract

When working in a group, it is essential to understand each other's viewpoints to increase group cohesion and meeting productivity. This can be challenging in teams: participants might be left misunderstood, and the discussion could go around in circles. To tackle this problem, previous research on group interactions has addressed topics such as dominance detection, group engagement, and group creativity. Conversational memory, however, remains a widely unexplored area in the field of multimodal analysis of group interaction. The ability to track what each participant, or the group as a whole, finds memorable from each meeting would allow a system or agent to continuously optimise its strategy to help a team meet its goals. In the present paper, we therefore investigate what participants take away from each meeting and how this is reflected in group dynamics.

As a first step toward such a system, we recorded a multimodal longitudinal meeting corpus (MEMO), which comprises first-party annotations of what participants remember from a discussion and why they remember it. We investigated whether participants in group interactions encode what they remember non-verbally, and whether such non-verbal multimodal features can be used to automatically predict what groups are likely to remember. We devise a coding scheme to cluster participants' memorisation reasons into higher-level constructs. We find that low-level multimodal cues, such as gaze and speaker activity, can predict conversational memorability, and that non-verbal signals can indicate when a memorable moment starts and ends. We predicted four levels of conversational memorability with an average accuracy of 44%. We also showed that reasons related to participants' personal feelings and experiences are the most frequently mentioned grounds for remembering meeting segments.

Original language: English
Title of host publication: ICMI 2022 - Proceedings of the 2022 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery (ACM)
Pages: 94-104
Number of pages: 11
ISBN (Electronic): 9781450393904
DOIs
Publication status: Published - 2022
Event: 24th ACM International Conference on Multimodal Interaction, ICMI 2022 - Bangalore, India
Duration: 7 Nov 2022 – 11 Nov 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 24th ACM International Conference on Multimodal Interaction, ICMI 2022
Country/Territory: India
City: Bangalore
Period: 7/11/22 – 11/11/22

Keywords

  • conversational memory
  • multi-modal corpora
  • multi-party interaction
  • social signals

