Detecting Perceived Appropriateness of a Robot’s Social Positioning Behavior from Non-Verbal Cues: ‘A robot study in scarlet’

Jered Vroon, Gwenn Englebienne, Vanessa Evers

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

What if a robot could detect when you think it got too close to you during its approach? This would allow it to correct or compensate for its social ‘mistake’. It would also allow for a responsive approach, where the robot would reactively find suitable approach behavior through and during the interaction. We investigated whether it is possible to automatically detect such social feedback cues in the context of a robot approaching a person.

We collected a dataset in which our robot would repeatedly approach people (n=30) to verbally deliver a message. Approach distance and environmental noise were manipulated, and our participants were tracked (position and orientation of upper body and head). We evaluated their perception of the robot’s behavior through questionnaires and found no individual or joint effects of the manipulations. This showed that, in this case, personal differences are more important than contextual cues, thus highlighting the importance of responding to behavioral feedback. This dataset is being made publicly available as part of this publication (http://doi.org/10.4121/uuid:b76c3a6f-f7d5-418e-874a-d6140853e1fa).
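As an illustration only (a hedged sketch, not the authors' analysis code), a check for individual and joint effects of the two manipulations could resemble a two-way ANOVA on the questionnaire ratings; the column names, factor levels, and rating scale below are assumptions:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Placeholder data: one row per robot approach, with the manipulated
# conditions and the participant's perceived-appropriateness rating.
df = pd.DataFrame({
    "distance": rng.choice(["close", "far"], size=120),
    "noise": rng.choice(["quiet", "noisy"], size=120),
    "appropriateness": rng.integers(1, 8, size=120),  # e.g. a 7-point scale
})

# Individual (main) effects of each manipulation plus their joint
# (interaction) effect on the ratings.
model = smf.ols("appropriateness ~ C(distance) * C(noise)", data=df).fit()
print(anova_lm(model, typ=2))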

On this dataset, we then trained a random forest classifier to infer people’s perception of the robot’s approach behavior from features generated from their response behaviors. This resulted in a set of relevant features that perform significantly better than chance for a participant-dependent classifier, which implies that the behaviors of our participants, even with our relatively limited tracking, contain interpretable information about their perception of the robot’s behavior.
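A minimal sketch of such a participant-dependent classifier, compared against a chance-level baseline, is given below (using scikit-learn); the feature names, the binary label encoding, and the placeholder data are assumptions for illustration, not the actual features or pipeline from the paper:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: rows are approach episodes of a single participant,
# columns are hypothetical behavioral features computed from the tracked
# upper-body/head position and orientation (e.g. mean head rotation,
# torso displacement, time to re-orient towards the robot).
X = rng.normal(size=(40, 6))
# Binary label from the questionnaires, e.g. approach perceived as
# inappropriate (1) vs. acceptable (0).
y = rng.integers(0, 2, size=40)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Participant-dependent model: trained and evaluated within one participant.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest_scores = cross_val_score(forest, X, y, cv=cv)

# Chance-level baseline to compare against.
chance = DummyClassifier(strategy="most_frequent")
chance_scores = cross_val_score(chance, X, y, cv=cv)

print(f"random forest accuracy: {forest_scores.mean():.2f}")
print(f"chance-level accuracy:  {chance_scores.mean():.2f}")

# The forest's impurity-based importances offer one way to identify a set
# of relevant features.
forest.fit(X, y)
print(np.round(forest.feature_importances_, 3))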

Our findings demonstrate, for this specific context, that the observable behavior of people does indeed contain usable information about their subjective perception of a robot’s behavior. As such, these findings, together with the dataset, provide a stepping stone for future research into the automatic detection of such social feedback cues, e.g. with other or more fine-grained observations of people’s behavior (such as facial expressions), with more sophisticated machine learning techniques, and/or in different contexts.
Original language: English
Title of host publication: The First IEEE International Conference on Cognitive Machine Intelligence
Editors: Philip S. Yu, Dino Pedreschi
Place of publication: Piscataway, NJ, USA
Publisher: IEEE
Publication status: Published - 2019
Event: 1st IEEE International Conference on Cognitive Machine Intelligence - Los Angeles, United States
Duration: 12 Dec 2019 - 14 Dec 2019
Conference number: 1
http://www.sis.pitt.edu/lersais/cogmi/2019/

Conference

Conference: 1st IEEE International Conference on Cognitive Machine Intelligence
Abbreviated title: CogMI 2019
Country/Territory: United States
City: Los Angeles
Period: 12/12/19 - 14/12/19
Internet address: http://www.sis.pitt.edu/lersais/cogmi/2019/

Bibliographical note

Invited paper

Keywords

  • Social robotics
  • Social positioning
  • Responsiveness
  • Social feedback cues
  • Social interaction dynamics

