A Deep Learning Framework for Recognizing Both Static and Dynamic Gestures

O. Mazhar, Sofiane Ramdani, Andrea Cherubini

Research output: Contribution to journal › Article › Scientific › peer-review



Intuitive user interfaces are indispensable for interacting with human-centric smart environments. In this paper, we propose a unified framework that recognizes both static and dynamic gestures using simple RGB vision (without depth sensing). This makes it suitable for inexpensive human-robot interaction in social or industrial settings. We employ a pose-driven spatial attention strategy, which guides our proposed Static and Dynamic gestures Network—StaDNet. From an image of the human upper body, we estimate the person's depth, along with the regions-of-interest around the hands. The Convolutional Neural Network (CNN) in StaDNet is fine-tuned on a background-substituted hand gestures dataset. It is used to detect 10 static gestures for each hand and to obtain hand image-embeddings. These are subsequently fused with the augmented pose vector and passed to stacked Long Short-Term Memory (LSTM) blocks. Human-centred frame-wise information from the augmented pose vector and from the left/right hand image-embeddings is thus aggregated over time to predict the dynamic gestures of the performing person. In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale Chalearn 2016 dataset. Moreover, we transfer the knowledge learned through the proposed methodology to the Praxis gestures dataset, and the obtained results also surpass the state-of-the-art on this dataset.
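The fusion scheme described above can be sketched as a small PyTorch model. This is a minimal, hypothetical illustration, not the authors' implementation: the layer sizes, the stand-in hand CNN, the pose-vector dimension, and the class counts are all assumptions for demonstration. Per-frame hand embeddings (one CNN shared across both hands) yield static-gesture logits and are concatenated with the pose vector; the fused sequence is aggregated by stacked LSTMs for the dynamic-gesture prediction.

```python
import torch
import torch.nn as nn

class StaDNetSketch(nn.Module):
    """Hypothetical sketch of the StaDNet-style fusion from the abstract:
    CNN hand image-embeddings + augmented pose vector -> stacked LSTMs.
    All dimensions are illustrative assumptions, not the paper's values."""

    def __init__(self, embed_dim=128, pose_dim=26, hidden=256,
                 n_static=10, n_dynamic=249):
        super().__init__()
        # Stand-in for the fine-tuned CNN: maps a hand crop to an embedding.
        self.hand_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        # Per-frame static-gesture head (10 gestures per hand, per abstract).
        self.static_head = nn.Linear(embed_dim, n_static)
        # Stacked LSTMs over the fused per-frame features.
        self.lstm = nn.LSTM(2 * embed_dim + pose_dim, hidden,
                            num_layers=2, batch_first=True)
        self.dynamic_head = nn.Linear(hidden, n_dynamic)

    def forward(self, left, right, pose):
        # left/right: (B, T, 3, H, W) hand crops; pose: (B, T, pose_dim)
        B, T = pose.shape[:2]
        el = self.hand_cnn(left.flatten(0, 1)).view(B, T, -1)
        er = self.hand_cnn(right.flatten(0, 1)).view(B, T, -1)
        static_left = self.static_head(el)          # frame-wise static logits
        fused = torch.cat([el, er, pose], dim=-1)   # fuse hands + pose
        h, _ = self.lstm(fused)
        dynamic = self.dynamic_head(h[:, -1])       # sequence-level prediction
        return static_left, dynamic

model = StaDNetSketch()
left = torch.randn(2, 8, 3, 32, 32)
right = torch.randn(2, 8, 3, 32, 32)
pose = torch.randn(2, 8, 26)
static_logits, dynamic_logits = model(left, right, pose)
```

Sharing one CNN across both hands keeps the parameter count low and lets the static-gesture head and the dynamic-gesture LSTM train from the same embeddings, matching the abstract's description of a single unified framework.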
Original language: English
Article number: 2227
Number of pages: 20
Issue number: 6
Publication status: Published - 2021


  • gesture recognition
  • operator interfaces
  • human activity recognition
  • commercial robots and applications
  • cyber-physical systems

