Abstract
Applying deep neural networks (DNNs) to system identification (SYSID) has attracted increasing attention in recent years. DNNs, which have universal approximation capability for measurable functions, have been successfully applied to SYSID tasks with typical network structures, e.g., feed-forward neural networks and recurrent neural networks (RNNs). However, DNNs also have limitations. First, DNNs can easily overfit the training data due to their model complexity. Second, DNNs are normally regarded as black-box models, which lack interpretability and cannot be used for white-box modelling. In this thesis, we develop sparse Bayesian deep learning (SBDL) algorithms that address these limitations in an effective manner.
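To illustrate the general idea behind sparsity-promoting Bayesian regularization in a SYSID setting, the sketch below fits a small feed-forward network to one-step-ahead prediction data and adds an L1 penalty, the negative log of a Laplace prior over the weights, so that many weights are driven toward zero and can be pruned. This is only a minimal, hypothetical example under assumed toy data and hyperparameters, not the SBDL algorithms developed in the thesis.

```python
# Minimal sketch (not the thesis's SBDL method): sparsity-inducing prior as an
# L1 penalty on the weights of a small SYSID network. Toy data and the value of
# `lam` are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy SYSID data: y[k] depends on y[k-1] and u[k-1] (hypothetical system).
N = 500
u = torch.randn(N)
y = torch.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * torch.randn(1).item()

X = torch.stack([y[:-1], u[:-1]], dim=1)  # regressors: [y[k-1], u[k-1]]
T = y[1:].unsqueeze(1)                    # target: y[k]

model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-3  # assumed prior strength

for epoch in range(2000):
    opt.zero_grad()
    mse = nn.functional.mse_loss(model(X), T)
    # Negative log Laplace prior over the weights -> L1 sparsity penalty.
    l1 = sum(p.abs().sum() for p in model.parameters())
    loss = mse + lam * l1
    loss.backward()
    opt.step()

# Near-zero weights can then be pruned, giving a sparser, more compact model.
with torch.no_grad():
    n_small = sum((p.abs() < 1e-3).sum().item() for p in model.parameters())
print("near-zero parameters:", n_small)
```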
| Original language | English |
| --- | --- |
| Qualification | Doctor of Philosophy |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 12 May 2022 |
| Electronic ISBNs | 978-94-6384-329-4 |
| DOIs | |
| Publication status | Published - 2022 |
Keywords
- Deep learning
- System identification
- Hessian calculation
- Sparse Bayesian learning
- Symbolic regression
- Neural architecture search
- Network compression