Constructing explainable health indicators for aircraft engines by developing an interpretable neural network with discretized weights

Morteza Moradi*, Panagiotis Komninos, Dimitrios Zarouchas

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

Remaining useful life predictions depend on the quality of health indicators (HIs) generated from condition monitoring sensors, evaluated against predefined prognostic metrics such as monotonicity, prognosability, and trendability. Constructing these HIs requires effective models capable of automatically selecting and fusing features from pertinent measurements, given the inherent noise in sensory data. While deep learning approaches can automatically extract features without significant specialist knowledge, these features lack a clear (physical) interpretation. Furthermore, the evaluation metrics for HIs are nondifferentiable, limiting the application of supervised networks. This research aims to develop an intrinsically interpretable artificial neural network (ANN) that produces qualified HIs with significantly lower complexity. A semi-supervised paradigm is employed, simulating labels inspired by the physics of progressive damage; this approach implicitly incorporates the nondifferentiable criteria into the learning process. The architecture comprises additive and newly modified multiplicative layers that combine features to better represent the system’s characteristics. The developed multiplicative neurons are not restricted to pairwise interactions, and they can handle both division and multiplication. To extract a compact HI equation, making the model mathematically interpretable, the number of parameters is further reduced by discretizing the weights over a ternary set. This weight discretization simplifies the extracted equation while gently controlling the number of weights that should be overlooked. The developed methodology is specifically tailored to constructing interpretable HIs for commercial turbofan engines, showing that the generated HIs are both high quality and interpretable.
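The ternary discretization described above can be illustrated with a minimal sketch. The paper's exact scheme is not given here; the `threshold` parameter and the per-layer mean-magnitude cutoff below are illustrative assumptions, showing only the general idea of mapping real-valued weights to {-1, 0, +1} so that small weights are "overlooked" and the surviving terms yield a compact HI equation.

```python
import numpy as np

def ternarize(weights, threshold=0.5):
    """Map real-valued weights to the ternary set {-1, 0, +1}.

    Weights whose magnitude falls below a cutoff (here an assumed
    fraction of the layer's mean absolute weight) are zeroed out,
    i.e. dropped from the extracted equation, while the remaining
    weights keep only their sign.
    """
    w = np.asarray(weights, dtype=float)
    delta = threshold * np.mean(np.abs(w))  # assumed per-layer cutoff
    t = np.sign(w)
    t[np.abs(w) < delta] = 0.0
    return t

w = np.array([0.9, -0.05, 0.4, -0.7, 0.02])
print(ternarize(w))  # prints [ 1.  0.  1. -1.  0.]
```

With ternary weights, each neuron's output reduces to a signed sum (or, in the multiplicative layers, a product/quotient) of a small subset of features, which is what makes the resulting HI expression readable.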

Original language: English
Article number: 143
Number of pages: 19
Journal: Applied Intelligence
Volume: 55
Issue number: 2
DOIs
Publication status: Published - 2025

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author has used Dutch legislation to make this work public.

Keywords

  • Artificial neural network
  • Feature fusion
  • Interpretable health indicator
  • Multiplicative neuron
  • Prognostics and health management
  • Ternary weights
