Automatic Smile and Frown Recognition with Kinetic Earables

Seungchul Lee, Chulhong Min, Alessandro Montanari, Akhil Mathur, Youngjae Chang, Junehwa Song, Fahim Kawsar

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

10 Citations (Scopus)


In this paper, we introduce inertial signals obtained from an earable placed in the ear canal as a new compelling sensing modality for recognising two key facial expressions: smile and frown. Borrowing principles from the Facial Action Coding System, we first demonstrate that the inertial measurement unit of an earable can capture facial muscle deformation activated by a set of temporal micro-expressions. Building on these observations, we then present three different learning schemes: shallow models with statistical features, a hidden Markov model, and deep neural networks, to automatically recognise smile and frown expressions from inertial signals. The experimental results show that in controlled non-conversational settings, we can identify smile and frown with high accuracy (F1 score: 0.85).
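The first of the three learning schemes pairs a shallow classifier with statistical features computed over windows of the inertial stream. A minimal sketch of such a feature front end is shown below; the window length, hop size, and feature set (per-axis mean, standard deviation, min, and max) are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical feature front end for a shallow model operating on a
# 3-axis in-ear IMU stream. Window/hop sizes and the statistics used
# are assumptions for illustration, not the paper's reported pipeline.
import numpy as np

def extract_features(imu, win=50, hop=25):
    """imu: (n_samples, 3) inertial array -> (n_windows, 12) features.

    Per window, per axis: mean, std, min, max (4 stats x 3 axes = 12).
    """
    feats = []
    for start in range(0, len(imu) - win + 1, hop):
        w = imu[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.asarray(feats)

# Example on synthetic data standing in for earable IMU samples.
rng = np.random.default_rng(0)
X = extract_features(rng.normal(size=(200, 3)))
print(X.shape)  # (7, 12)
```

Feature matrices of this shape would then feed a standard shallow classifier (e.g. SVM or random forest) trained on labelled smile/frown segments.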

Original language: English
Title of host publication: AH2019
Subtitle of host publication: Proceedings of the 10th Augmented Human International Conference 2019
Place of publication: New York, NY
Publisher: Association for Computing Machinery (ACM)
Number of pages: 4
ISBN (Print): 978-1-4503-6547-5
Publication status: Published - 2019
Event: 10th Augmented Human International Conference, AH 2019 - Reims, France
Duration: 11 Mar 2019 - 12 Mar 2019

Publication series

Name: ACM International Conference Proceeding Series


Conference: 10th Augmented Human International Conference, AH 2019


  • Earable
  • FACS
  • Kinetic modeling
  • Smile and frown recognition


