Abstract
Releasing state samples generated by a dynamical system for data aggregation can allow an adversary to reverse engineer the system and estimate sensitive model parameters. Once the model has been identified, the adversary may even use it to predict sensitive data in the future. Keeping a dynamical process model confidential is therefore crucial for the survival of many industries. Motivated by the need to protect the system model as a trade secret, we propose a mechanism based on differential privacy that renders such model identification techniques ineffective while preserving the utility of the state samples for data aggregation. We deploy differential privacy by generating noise according to the sensitivity of the query and adding it to the state vectors at each time instant. We derive analytical expressions that bound the sensitivity function and estimate the minimum noise level required to guarantee differential privacy. Furthermore, we present a numerical analysis and characterize the privacy-utility trade-off that arises when deploying differential privacy. Simulation results demonstrate that differential privacy achieves a privacy level sufficient to mislead the adversary while retaining a high utility level of the state samples for data aggregation.
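As a rough, self-contained sketch of the mechanism the abstract describes (not the authors' implementation), the snippet below adds Laplace noise, with scale calibrated as sensitivity/ε, to each state sample of a linear system before release. The system matrix `A`, the initial state, the privacy budget `epsilon`, and the sensitivity bound are illustrative placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear dynamics x_{t+1} = A x_t (placeholder values).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
x = np.array([1.0, 0.5])

epsilon = 0.5        # privacy budget per released sample (assumed)
sensitivity = 2.0    # assumed bound on the query sensitivity
scale = sensitivity / epsilon   # Laplace scale b = Delta / epsilon

released = []
for _ in range(50):
    x = A @ x                                      # propagate the true state
    noise = rng.laplace(0.0, scale, size=x.shape)  # calibrated Laplace noise
    released.append(x + noise)                     # publish the noisy sample
```

Shrinking `epsilon` (stronger privacy) inflates the noise scale and degrades aggregate accuracy, which is the privacy-utility trade-off the paper characterizes.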
| Original language | English |
| --- | --- |
| Pages (from-to) | 309-314 |
| Journal | IFAC-PapersOnLine |
| Volume | 52 |
| Issue number | 20 |
| DOIs | |
| Publication status | Published - 2019 |
| Event | 8th IFAC Workshop on Distributed Estimation and Control in Networked Systems, Chicago, United States, 16–17 Sept 2019. https://www.sciencedirect.com/science/journal/24058963/52/20 |
Keywords
- Differential Privacy
- State Trajectories
- Model Parameters
- Data Aggregation