ConfLab: A Rich Multimodal Multisensor Dataset of Free-Standing Social Interactions In-the-Wild

Dataset Description

ConfLab is a multimodal, multisensor dataset of in-the-wild free-standing social conversations, recorded at a real-life professional networking event at the international conference ACM Multimedia 2019. Involving 48 conference attendees, the dataset captures a diverse mix of status, acquaintance, and networking motivations. Our capture setup improves upon the data fidelity of prior in-the-wild datasets, providing: 8 overhead-perspective videos (1920 x 1080, 60 fps), and custom personal wearable sensors with onboard recording of body motion (full 9-axis IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based proximity. Additionally, we developed custom solutions for distributed hardware synchronization at acquisition, and for time-efficient continuous annotation of body keypoints and actions at high sampling rates.
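To give a sense of scale, the capture specs above imply the following back-of-the-envelope data rates. This is an illustrative sketch only: the byte-per-pixel figure is an assumption (8-bit RGB, uncompressed), and actual recordings are compressed.

```python
# Illustrative data-rate arithmetic from the capture specs stated above.
# Assumption: 3 bytes/pixel (8-bit RGB, uncompressed) -- actual files are compressed.

VIDEO_W, VIDEO_H, FPS, CAMERAS = 1920, 1080, 60, 8
BYTES_PER_PIXEL = 3  # assumed for this sketch

# Raw (uncompressed) video data rate for a single overhead camera.
raw_video_bps = VIDEO_W * VIDEO_H * BYTES_PER_PIXEL * FPS
print(f"Raw video per camera: {raw_video_bps / 1e6:.0f} MB/s")  # ~373 MB/s

# The 1250 Hz audio is far below the several kHz of bandwidth needed for
# intelligible speech, which is what makes it privacy-preserving while
# still capturing voice activity.
AUDIO_HZ = 1250
samples_per_minute = AUDIO_HZ * 60
print(f"Audio samples per minute per sensor: {samples_per_minute}")  # 75000
```

The contrast between the two rates also motivates the onboard-recording design of the wearable sensors: the low-frequency audio and IMU streams are small enough to store locally on each device.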


---------------------

General information:

The dataset contains:

EULA (DOI: 10.4121/20016194, required for access): the End-User License Agreement must be completed to request access, because the dataset contains pseudonymized data. Once completed, please return it to SPCLabDatasets-insy@tudelft.nl. Private links to download the data will be sent to you once your credentials have been reviewed and approved. Note for reviewers: please follow the same procedure described above. The TU Delft Human Research Ethics Committee or a member of the admin staff will handle your access requests during the review period to ensure the single-blind standard.
Datasheet (DOI: 10.4121/20017559): datasheet summarizing the ConfLab dataset.
Samples (DOI: 10.4121/20017682): sample data of the ConfLab dataset
Raw-Data (DOI: 10.4121/20017748): raw video and wearable sensor data of the ConfLab dataset.
Processed-Data (DOI: 10.4121/20017805): processed video and wearable sensor data of the ConfLab dataset, used for annotation and processed for usability.
Annotations (DOI: 10.4121/20017664): annotations of pose, speaking status, and F-formations.

Please scroll down the page to see and access all the components of the ConfLab dataset. For more information, please see the respective README files.


---------------------

Baseline tasks

Baseline tasks include: keypoint pose estimation, speaking status estimation, and F-formation (conversation group) estimation.


Code related to the baseline tasks can be found here:

https://github.com/TUDelft-SPC-Lab/conflab

---------------------

Annotation tool

The annotation tool developed and used for annotating keypoints and speaking status in the ConfLab dataset is provided here: https://github.com/josedvq/covfee


More information can be found here: Quiros, Jose Vargas, et al. "Covfee: an extensible web framework for continuous-time annotation of human behavior." Understanding Social Behavior in Dyadic and Small Group Interactions. PMLR, 2022.

---------------------

Wearable sensor (Midge) hardware

The wearable sensor (Midge) was developed and used to collect data for the ConfLab dataset. More details can be found at: https://github.com/TUDelft-SPC-Lab/spcl_midge_hardware


Please contact SPCLabDatasets-insy@tudelft.nl if you have any inquiries.
Date made available: 9 Jun 2022
Publisher: TU Delft - 4TU.ResearchData
Date of data production: 2022 -
