Sonifying the location of an object: A comparison of three methods

Pavlo Bazilinskyy, W. van Haarlem, H. Quraishi, C. Berssenbrugge, J. Binda, Joost de Winter

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

1 Citation (Scopus)
20 Downloads (Pure)

Abstract

Auditory displays are promising for informing operators about hazards or objects in the environment. However, it remains to be investigated how distance information should be mapped to a sound dimension. In this research, three sonification approaches were tested: Beep Repetition Rate (BRR), in which beep time and inter-beep time were a linear function of distance; Sound Intensity (SI), in which the digital sound volume was a linear function of distance; and Sound Fundamental Frequency (SFF), in which the sound frequency was a linear function of distance. Participants (N = 29) were presented with a sound through headphones and subsequently clicked on the screen to estimate the distance to the object with respect to the bottom of the screen (Experiment 1), or the distance and azimuth angle to the object (Experiment 2). The azimuth angle in Experiment 2 was sonified by the volume difference between the left and right ears. In an additional Experiment 3, reaction times to directional audio-visual feedback were compared with reaction times to directional visual feedback. Participants performed three sessions (BRR, SI, SFF) in Experiments 1 and 2 and two sessions (visual, audio-visual) in Experiment 3, with 10 trials per session. After each trial, participants received knowledge-of-results feedback. The results showed that the three proposed methods yielded a similar overall mean absolute distance error, although in Experiment 2 the error for BRR was significantly smaller than for SI. The mean absolute distance errors were significantly greater in Experiment 2 than in Experiment 1. In Experiment 3, there was no statistically significant difference in reaction time between the visual and audio-visual conditions. The results are interpreted in light of the Weber-Fechner law and suggest that humans can accurately interpret artificial sounds on an artificial distance scale.
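The abstract describes the three mappings only qualitatively. The sketch below (Python/NumPy) illustrates one way such linear distance-to-sound mappings and the left-right volume cue from Experiment 2 could be realized. All numeric ranges (beep and gap durations, volume span, the 200-1000 Hz frequency range) and the polarity of each mapping are illustrative assumptions, not values from the paper; the paper also tested each mapping in separate sessions rather than combined.

```python
import numpy as np

FS = 44100  # sample rate, Hz


def brr(d, dur=2.0, f=440.0):
    """Beep Repetition Rate: beep time and inter-beep time are a linear
    function of normalized distance d (0 = near, 1 = far). Ranges are
    assumptions for illustration."""
    t = np.arange(int(FS * dur)) / FS
    beep = 0.05 + 0.20 * d          # beep duration, s (assumed range)
    gap = 0.05 + 0.20 * d           # inter-beep time, s (assumed range)
    gate = (t % (beep + gap)) < beep  # on/off envelope of the beep train
    return np.sin(2 * np.pi * f * t) * gate


def si(d, dur=2.0, f=440.0):
    """Sound Intensity: digital volume is a linear function of distance
    (louder = nearer; the polarity is an assumption)."""
    t = np.arange(int(FS * dur)) / FS
    return (1.0 - 0.8 * d) * np.sin(2 * np.pi * f * t)


def sff(d, dur=2.0):
    """Sound Fundamental Frequency: frequency is a linear function of
    distance (assumed 200-1000 Hz range)."""
    t = np.arange(int(FS * dur)) / FS
    return np.sin(2 * np.pi * (200.0 + 800.0 * d) * t)


def pan(mono, azimuth):
    """Azimuth cue as in Experiment 2: a volume difference between the
    left and right channels; azimuth in [-1, 1], 0 = straight ahead."""
    left = mono * (1.0 - azimuth) / 2.0
    right = mono * (1.0 + azimuth) / 2.0
    return np.column_stack([left, right])  # stereo buffer, shape (N, 2)


# Example: an object at 70 % of maximum distance, panned to the right.
stereo = pan(sff(0.7), azimuth=0.5)
```

Writing each mapping as a separate function mirrors the paper's design of testing one sound dimension per session; the `pan` helper would only be combined with a distance mapping in the Experiment 2 setting.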

Original language: English
Title of host publication: IFAC-PapersOnLine
Subtitle of host publication: 13th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems HMS 2016
Editors: T. Sawaragi
Pages: 531-536
Volume: 49-19
Publication status: Published - 2016
Event: 13th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems - Kyoto, Japan
Duration: 30 Aug 2016 – 2 Sep 2016

Conference

Conference: 13th IFAC Symposium on Analysis, Design, and Evaluation of Human-Machine Systems
Abbreviated title: HMS 2016
Country: Japan
City: Kyoto
Period: 30/08/16 – 02/09/16

Keywords

  • auditory display
  • detecting elements
  • driver support
  • driving simulator
  • human-machine interface
  • road safety
