Who is where: Matching People in Video to Wearable Acceleration During Crowded Mingling Events

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-reviewed

10 Citations (Scopus)


We address the challenging problem of associating acceleration data from a wearable sensor with the corresponding spatio-temporal region of a person in video during crowded mingling scenarios. This is an important first step for multisensor behavior analysis using these two modalities. As the number of people in a scene increases, there is a growing need to robustly and automatically associate a region of the video with each person's device. We propose a hierarchical association approach which exploits the spatial context of the scene, significantly outperforming state-of-the-art approaches. Moreover, we present experiments on matching from 3 to more than 130 acceleration and video streams which, to our knowledge, involves significantly more streams than prior works, where only up to 5 device streams were associated.
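To give a flavor of what device-to-video association means in practice, the sketch below matches each wearable's acceleration-magnitude signal to the video track whose motion-magnitude signal it correlates with best, solved globally as an assignment problem. This is a minimal illustrative baseline, not the paper's hierarchical method; all names and signals here are assumptions for the sketch.

```python
# Illustrative baseline for device-to-track association (NOT the paper's
# hierarchical method): correlate each acceleration stream with each
# video motion stream, then solve the global assignment with the
# Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(accel_streams, video_motion_streams):
    """Return (device, track) index pairs maximizing total correlation.

    accel_streams: (n_devices, T) acceleration magnitudes per device
    video_motion_streams: (n_tracks, T) motion magnitudes per video track
    """
    accel = np.atleast_2d(np.asarray(accel_streams, dtype=float))
    video = np.atleast_2d(np.asarray(video_motion_streams, dtype=float))
    # Standardize each stream so the dot product is a Pearson correlation.
    a = (accel - accel.mean(1, keepdims=True)) / (accel.std(1, keepdims=True) + 1e-9)
    v = (video - video.mean(1, keepdims=True)) / (video.std(1, keepdims=True) + 1e-9)
    corr = a @ v.T / accel.shape[1]            # (n_devices, n_tracks)
    # Hungarian algorithm minimizes cost, so negate to maximize correlation.
    rows, cols = linear_sum_assignment(-corr)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy usage: 3 devices whose signals reappear as video tracks in shuffled order.
rng = np.random.default_rng(0)
signals = rng.normal(size=(3, 200))
perm = [2, 0, 1]                               # track j carries device perm[j]
pairs = associate(signals, signals[perm])
```

With independent random signals, each device correlates strongly only with its own shuffled copy, so the assignment recovers the permutation. A real pipeline would have to derive the per-track motion signal from video (e.g., from tracked bounding boxes) and cope with occlusion and noise, which is where a hierarchical, context-aware approach pays off.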
Original language: English
Title of host publication: Proceedings of the 2016 ACM Multimedia Conference, MM 2016
Place of publication: New York, NY
Publisher: Association for Computing Machinery (ACM)
Number of pages: 5
ISBN (Electronic): 978-1-4503-3603-1
Publication status: Published - 2016
Event: MM'16, the 24th ACM Multimedia Conference - Amsterdam, Netherlands
Duration: 15 Oct 2016 - 19 Oct 2016


Conference: MM'16, the ACM Multimedia Conference


Keywords:
  • mingling
  • wearable sensor
  • computer vision
  • association


