No-Audio Multimodal Speech Detection in Crowded Social Settings task at MediaEval 2018

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review



This overview paper describes the automatic Human Behaviour Analysis (HBA) task at MediaEval 2018. In its first edition, the HBA task focuses on one of the most basic elements of social behavior: the estimation of speaking status. Task participants are provided with cropped videos of individuals interacting freely during a crowded mingle event, captured by an overhead camera. Each individual also wears a badge-like device, hung around the neck, that records tri-axial acceleration. The goal of the task is to automatically estimate whether a person is speaking using these two alternative modalities. In contrast to conventional speech detection approaches, no audio is used for this task. Instead, the automatic estimation system must exploit the natural human movements that accompany speech. The task seeks to achieve estimation performance competitive with audio-based systems by exploiting the multimodal aspects of the problem.
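The no-audio setup described above can be illustrated with a minimal sketch: classifying fixed-length windows of tri-axial acceleration as speaking or not speaking, on the assumption that speech is accompanied by more body movement. All data, sampling rates, and the threshold rule below are synthetic illustrations, not the task's actual data or any participant's method.

```python
# Hypothetical sketch: speaking-status estimation from tri-axial acceleration.
# The data here is synthetic; the real task uses recordings from badge-like
# wearable devices worn by participants at a mingle event.
import numpy as np

def extract_features(window):
    """Per-axis mean and standard deviation for one (n_samples, 3) window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(0)
fs = 20        # assumed sampling rate in Hz (illustrative)
win = fs       # 1-second windows

# Synthetic labels: "speaking" windows have higher movement variance.
speaking = rng.normal(0.0, 1.0, size=(50, win, 3))
silent = rng.normal(0.0, 0.2, size=(50, win, 3))

X = np.array([extract_features(w) for w in np.concatenate([speaking, silent])])
y = np.array([1] * 50 + [0] * 50)

# A threshold on mean per-axis std stands in for a learned classifier.
threshold = X[:, 3:].mean()
pred = (X[:, 3:].mean(axis=1) > threshold).astype(int)
accuracy = (pred == y).mean()
```

In practice, task participants would replace the threshold with a trained model and fuse these acceleration features with features from the overhead video.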
Original language: English
Title of host publication: Working Notes Proceedings of the MediaEval 2018 Workshop
Editors: Martha Larson, Piyush Arora, Claire-Hélène Demarty, Michael Riegler, Benjamin Bischke, Emmanuel Dellandrea, Mathias Lux, Alastair Porter, Gareth J.F. Jones
Number of pages: 3
Publication status: Published - 2018
Event: MediaEval 2018: Multimedia Benchmark Workshop - EURECOM, Sophia-Antipolis, France
Duration: 29 Oct 2018 - 31 Oct 2018

Publication series

Name: CEUR Workshop Proceedings
ISSN (Print): 1613-0073


Workshop: MediaEval 2018
