TY - JOUR
T1 - Scalable and Privacy-Aware Online Learning of Nonlinear Structural Equation Models
AU - Money, Rohan
AU - Krishnan, Joshin
AU - Beferull-Lozano, Baltasar
AU - Isufi, Elvin
PY - 2023
Y1 - 2023
N2 - An online topology estimation algorithm for nonlinear structural equation models (SEM) is proposed in this paper, addressing the nonlinearity and the non-stationarity of real-world systems. The nonlinearity is modeled using kernel formulations, and the curse of dimensionality associated with the kernels is mitigated using random feature approximation. The online learning strategy uses a group-lasso-based optimization framework with a prediction-correction technique that accounts for the model evolution. The proposed approach has three properties of interest. First, it enjoys node-separable learning, which allows for scalability in large networks. Second, it offers privacy in SEM learning by replacing the actual data with node-specific random features. Third, its performance can be characterized theoretically via a dynamic regret analysis, showing that it is possible to obtain a linear dynamic regret bound under mild assumptions. Numerical results with synthetic and real data corroborate our findings and show competitive performance with respect to state-of-the-art alternatives.
KW - Network topology inference
KW - time-varying graph learning
KW - structural equation models
KW - random feature approximation
UR - http://www.scopus.com/inward/record.url?scp=85148432828&partnerID=8YFLogxK
U2 - 10.1109/OJSP.2023.3241580
DO - 10.1109/OJSP.2023.3241580
M3 - Article
VL - 4
SP - 61
EP - 70
JO - IEEE Open Journal of Signal Processing
JF - IEEE Open Journal of Signal Processing
SN - 2644-1322
ER -