Abstract
We propose a cross-modal approach to conversational well-being monitoring with a multi-sensory earable. The approach combines motion, audio, and BLE models running on the earable: using the IMU, the microphone, and BLE scanning, the models detect speaking activity, stress and emotion, and conversation participants, respectively. We discuss the feasibility of qualifying conversations with our purpose-built cross-modal model in an energy-efficient and privacy-preserving way. Building on this model, we develop a mobile application that qualifies ongoing conversations and provides personalised feedback on social well-being.
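The abstract describes three per-modality models whose outputs are fused to qualify a conversation. As a minimal illustration only, the sketch below combines the three outputs with a toy rule; the class, field names, and thresholds are assumptions for this example and are not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class ConversationSnapshot:
    """Hypothetical per-modality outputs for one time window."""
    speaking: bool       # from the IMU-based speaking-activity model
    stress_level: float  # from the microphone-based stress/emotion model, in [0, 1]
    participants: int    # from BLE scanning for nearby earables


def qualify_conversation(s: ConversationSnapshot) -> str:
    """Toy rule-based fusion: label a snapshot for well-being feedback."""
    # No conversation to qualify unless the wearer is speaking
    # and at least one other participant is detected.
    if not s.speaking or s.participants < 2:
        return "no-conversation"
    # Assumed threshold: flag high-stress windows for feedback.
    return "strained" if s.stress_level > 0.5 else "relaxed"
```

For example, a window with the wearer speaking, low stress, and two detected participants would be labelled "relaxed"; the real system would replace this rule with the paper's learned cross-modal model.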
Original language | English |
---|---|
Title of host publication | UbiComp/ISWC 2018 - Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2018 ACM International Symposium on Wearable Computers |
Editors | Rajesh K. Balan, Youngki Lee, Kai Kunze |
Place of Publication | New York, NY, USA |
Publisher | Association for Computing Machinery (ACM) |
Pages | 706-709 |
Number of pages | 4 |
ISBN (Electronic) | 978-1-4503-5966-5 |
DOIs | |
Publication status | Published - 2018 |
Event | 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 - Singapore, Singapore |
Duration | 8 Oct 2018 → 12 Oct 2018 |
Conference
Conference | 2018 Joint ACM International Conference on Pervasive and Ubiquitous Computing, UbiComp 2018 and 2018 ACM International Symposium on Wearable Computers, ISWC 2018 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 8/10/18 → 12/10/18 |
Keywords
- Earable
- Multi-sensory
- Well-being