Interpretable neural network with limited weights for constructing simple and explainable HI using SHM data

M. Moradi*, P. Komninos, R. Benedictus, D. Zarouchas

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Recently, companies all over the world have been focusing on improving autonomous health management systems in order to enhance performance and reduce downtime costs. To achieve this, remaining useful life predictions have received remarkable attention. These predictions depend on a proper design process and on the quality of the health indicators (HIs) generated from structural health monitoring sensors according to multiple pre-established prognostic evaluation criteria. Constructing such HIs from noisy sensory data demands powerful models that enable the automatic selection and fusion of features extracted from the relevant measurements. Deep learning models are promising for autonomously extracting features in scenarios with a huge volume of data, without requiring considerable domain expertise. Nonetheless, the features established by artificial neural networks are complicated to comprehend and cannot be regarded as physical system characteristics. In this regard, the goal of this paper is to present a new model: an interpretable artificial neural network that enables the automatic selection and fusion of features to construct the most appropriate HIs with remarkably fewer parameters. This model consists of additive and multiplicative layers that provide a feature fusion that better reflects the system’s physical properties. Additionally, the weights are discretized in two ways: a) using a ternary form with values {-1, 0, 1}, and b) relaxing the aforementioned ternary form by rounding the weights to the first decimal place in the range [-1, 1]. Both discretization techniques can softly control the number of parameters that should be ignored, which guarantees the interpretability of the neural network by allowing simple yet powerful equations representing the constructed HIs to be extracted. Finally, the model’s performance is evaluated and compared with other approaches using a practical case study. The results show that the HIs designed by the proposed approach are both interpretable and of high quality according to the HI evaluation criteria.
Original language: English
Title of host publication: Annual Conference of the PHM Society
Editors: Chetan Kulkarni, Abhinav Saxena
Publisher: PHM Society
Number of pages: 11
Volume: 14
Edition: 1
ISBN (Electronic): 9781936263370
DOIs
Publication status: Published - 2022
Event: Annual Conference of the PHM Society 2022 - Nashville, United States
Duration: 1 Nov 2022 – 4 Nov 2022
Conference number: 14

Publication series

Name: Proceedings of the Annual Conference of the Prognostics and Health Management Society, PHM
Number: 1
Volume: 14
ISSN (Print): 2325-0178

Conference

Conference: Annual Conference of the PHM Society 2022
Country/Territory: United States
City: Nashville
Period: 1/11/22 – 4/11/22

Keywords

  • Prognostics and Health Management (PHM)
  • Structural Health Monitoring (SHM)
  • Intelligent health indicator
  • Interpretable neural network
  • C-MAPSS turbofan engines
  • Machine learning
  • Artificial Intelligence (AI)
