TY - GEN
T1 - Characterising the Role of Pre-Processing Parameters in Audio-based Embedded Machine Learning
AU - Toussaint, Wiebke
AU - Mathur, Akhil
AU - Ding, Aaron Yi
AU - Kawsar, Fahim
PY - 2021
Y1 - 2021
AB - When deploying machine learning (ML) models on embedded and IoT devices, performance encompasses more than an accuracy metric: inference latency, energy consumption, and model fairness are necessary to ensure reliable performance under heterogeneous and resource-constrained operating conditions. To this end, prior research has studied model-centric approaches, such as tuning the hyperparameters of the model during training and later applying model compression techniques to tailor the model to the resource needs of an embedded device. In this paper, we take a data-centric view of embedded ML and study the role that pre-processing parameters in the data pipeline can play in balancing the various performance metrics of an embedded ML system. Through an in-depth case study with audio-based keyword spotting (KWS) models, we show that pre-processing parameter tuning is a remarkable tool that model developers can adopt to trade off between a model's accuracy, fairness, and system efficiency, as well as to make an embedded ML model resilient to unseen deployment conditions.
KW - audio keyword spotting
KW - embedded machine learning
KW - fairness
KW - pre-processing parameters
UR - http://www.scopus.com/inward/record.url?scp=85120851237&partnerID=8YFLogxK
DO - 10.1145/3485730.3493448
M3 - Conference contribution
AN - SCOPUS:85120851237
T3 - SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems
SP - 439
EP - 445
BT - SenSys 2021 - Proceedings of the 2021 19th ACM Conference on Embedded Networked Sensor Systems
PB - Association for Computing Machinery (ACM)
T2 - 19th ACM Conference on Embedded Networked Sensor Systems, SenSys 2021
Y2 - 15 November 2021 through 17 November 2021
ER -