Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences

M Pantic, I Patras

Research output: Contribution to journal › Article › Scientific › peer-review

437 Citations (Scopus)

Abstract

Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into the facial expressions pictured and recognition of the temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in combination in the input face-profile video. A recognition rate of 87% is achieved.
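For illustration, the temporal-segment recognition described above can be sketched as simple per-frame rules applied to an AU activation signal. The sketch below is a hypothetical Python reimplementation: the per-frame intensity signal, the thresholds, and the rule set are assumptions chosen for clarity and do not reproduce the paper's actual temporal rules or its particle-filtering point tracker.

```python
import numpy as np

def label_temporal_segments(intensity, on_thresh=0.1, slope_thresh=0.02):
    """Label each frame as 'neutral', 'onset', 'apex', or 'offset' using
    simple temporal rules on a per-frame AU intensity signal.
    Thresholds are illustrative, not taken from the paper."""
    labels = []
    for t in range(len(intensity)):
        prev = intensity[t - 1] if t > 0 else intensity[t]
        slope = intensity[t] - prev
        if intensity[t] < on_thresh:
            labels.append("neutral")   # AU not active
        elif slope > slope_thresh:
            labels.append("onset")     # intensity increasing
        elif slope < -slope_thresh:
            labels.append("offset")    # intensity decreasing
        else:
            labels.append("apex")      # active and roughly stable
    return labels

# Synthetic activation that rises, plateaus, then falls,
# standing in for a signal derived from tracked facial points.
signal = np.concatenate([np.linspace(0.0, 1.0, 10),
                         np.full(5, 1.0),
                         np.linspace(1.0, 0.0, 10)])
print(label_temporal_segments(signal))
```

Run on the synthetic signal, the rules produce a neutral-onset-apex-offset-neutral labeling; in the actual system such per-frame decisions would additionally be constrained per AU and combined across the 15 tracked profile points.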
Original language: Undefined/Unknown
Pages (from-to): 433-449
Number of pages: 17
Journal: IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics
Volume: 36
Issue number: 2
Publication status: Published - 2006

Bibliographical note

ISSN 1083-4419

Keywords

  • academic journal papers
  • ZX CWTS 1.00 <= JFIS < 3.00
