TY - GEN
T1 - A closer look at quality-aware runtime assessment of sensing models in multi-device environments
AU - Min, Chulhong
AU - Montanari, Alessandro
AU - Mathur, Akhil
AU - Kawsar, Fahim
PY - 2019/11/10
Y1 - 2019/11/10
AB - The increasing availability of multiple sensory devices on or near a human body has opened brand-new opportunities to leverage redundant sensory signals for powerful sensing applications. For instance, personal-scale sensory inferences with motion and audio signals can be made individually on a smartphone, a smartwatch, and even an earbud, each offering unique sensor quality, model accuracy, and runtime behaviour. At execution time, however, it is incredibly challenging to assess these characteristics to select the best device for accurate and resource-efficient inferences. To this end, we look at a quality-aware collaborative sensing system that actively interplays across multiple devices and their respective sensing models. It dynamically selects the best device as a function of model accuracy in any given context. We propose two complementary techniques for runtime quality assessment. Borrowing principles from active learning, our first technique runs on three heuristic-based quality assessment functions that employ the confidence, margin sampling, and entropy of the models' output. Our second technique is built with a Siamese neural network and acts on the premise that runtime sensing quality can be learned from historical data. Our evaluation across multiple motion and audio datasets shows that our techniques provide a 12% increase in overall accuracy through dynamic device selection, at an average expense of 13 mW of power per device compared to traditional single-device approaches.
KW - Multi-device environments
KW - Quality assessment
KW - Sensing models
UR - http://www.scopus.com/inward/record.url?scp=85076628597&partnerID=8YFLogxK
U2 - 10.1145/3356250.3360043
DO - 10.1145/3356250.3360043
M3 - Conference contribution
T3 - SenSys 2019 - Proceedings of the 17th Conference on Embedded Networked Sensor Systems
SP - 271
EP - 284
BT - SenSys 2019 - Proceedings of the 17th Conference on Embedded Networked Sensor Systems
A2 - Zhang, Mi
PB - ACM
T2 - 17th ACM Conference on Embedded Networked Sensor Systems, SenSys 2019
Y2 - 10 November 2019 through 13 November 2019
ER -