Capturing Head Poses Using FMCW Radar and Deep Neural Networks

Nakorn Kumchaiseemak, Francesco Fioranelli*, Theerawit Wilaiprasitporn

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

This paper presents the first subject-specific head pose estimation approach that uses only a single Frequency Modulated Continuous Wave (FMCW) radar data frame. The proposed method employs a deep learning (DL) framework that estimates head rotation and orientation frame by frame by combining a Convolutional Neural Network operating on Range-Angle radar plots with a PeakConv network. The method is validated on an in-house dataset of annotated head movements varying in roll, pitch, and yaw, recorded in two different indoor environments. The proposed model estimates head poses with a relatively small error of approximately 6.7-14.4 degrees across all rotational axes, and it generalizes to new, unseen environments when trained in one scenario (e.g., a lab) and tested in another (e.g., an office), including the cabin of a car.
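The CNN branch described above operates on Range-Angle radar plots. As background, a minimal sketch of how such a map is typically formed from one FMCW frame is shown below: a range FFT over the fast-time (ADC) samples followed by an angle FFT over the receive-antenna dimension, with non-coherent averaging over chirps. The function name, array shapes, and FFT sizes here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def range_angle_map(frame, n_range=64, n_angle=32):
    """Form a Range-Angle map from one complex FMCW radar frame.

    frame: complex array of shape (chirps, rx_antennas, adc_samples).
    Returns a real map of shape (n_range, n_angle).
    All sizes are illustrative, not taken from the paper.
    """
    # Range FFT along fast-time (ADC samples)
    r = np.fft.fft(frame, n=n_range, axis=2)
    # Angle FFT along the antenna dimension, centred with fftshift
    a = np.fft.fftshift(np.fft.fft(r, n=n_angle, axis=1), axes=1)
    # Non-coherent integration across chirps, then (range, angle) layout
    return np.abs(a).mean(axis=0).T

# Synthetic frame with one point target: 16 chirps, 8 RX antennas,
# 64 samples; range frequency 10/64 cycles/sample, spatial frequency
# 0.125 cycles/antenna (values chosen arbitrarily for illustration)
chirps, ants, samples = 16, 8, 64
n = np.arange(samples)
k = np.arange(ants)
frame = (np.exp(2j * np.pi * 10 / 64 * n)[None, None, :]
         * np.exp(2j * np.pi * 0.125 * k)[None, :, None]
         * np.ones((chirps, 1, 1)))

rm = range_angle_map(frame)
peak = np.unravel_index(np.argmax(rm), rm.shape)
print(rm.shape, peak)  # (64, 32), peak at range bin 10, angle bin 20
```

A frame-level regressor such as the one in the paper would then consume maps like `rm` (stacked per frame) and output three angles (roll, pitch, yaw); that network itself is not reproduced here.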

Original language: English
Journal: IEEE Transactions on Aerospace and Electronic Systems
Publication status: E-pub ahead of print - 2025

Keywords

  • Deep learning
  • FMCW radar
  • subject-specific head pose estimation

