Abstract

To deal with class imbalance learning (CIL) problems, a novel algorithm based on the kernel extreme learning machine (KELM), named KELM-CIL, is proposed. To solve it, two algorithms are developed, one in the dual space and one in the primal space, yielding D-KELM-CIL and P-KELM-CIL. However, neither D-KELM-CIL nor P-KELM-CIL is a sparse algorithm. Hence, a sparsification strategy based on Cholesky factorization is applied to both, producing CD-KELM-CIL and CP-KELM-CIL. For large-scale problems, a probabilistic trick is further applied to accelerate them, yielding PCD-KELM-CIL and PCP-KELM-CIL. To assess the effectiveness and efficiency of the proposed algorithms, experiments on benchmark datasets are carried out. When applied to fault detection of aircraft engines, the proposed algorithms, especially CP-KELM-CIL and PCP-KELM-CIL, show good generalization and real-time performance, which indicates that they can serve as candidate techniques for aircraft engine fault detection.
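
As a rough illustration of the weighted kernel formulation that KELM-CIL builds on, the sketch below implements a generic class-imbalance-aware kernel ELM in Python: each training sample receives an inverse-class-frequency weight, and the dense dual system (K + W^{-1}/C) alpha = y is solved directly, which corresponds to the non-sparse setting before any Cholesky-based sparsification or probabilistic acceleration. The class name WeightedKELM, the RBF kernel, and the weighting scheme are illustrative assumptions, not the paper's exact D-/P-KELM-CIL formulation.

import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

class WeightedKELM:
    # Class-imbalance-aware kernel ELM (dense dual solution).
    # Minority samples receive larger per-sample weights, so the regularized
    # least-squares fit pays more attention to them.  Generic sketch only;
    # the sparse CD-/CP-/PCD-/PCP-KELM-CIL variants are not reproduced here.
    def __init__(self, C=10.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):  # y in {-1, +1}, both classes assumed present
        n = X.shape[0]
        n_pos, n_neg = np.sum(y == 1), np.sum(y == -1)
        # inverse-frequency weights: the rarer class gets the larger weight
        w = np.where(y == 1, n / (2.0 * n_pos), n / (2.0 * n_neg))
        K = rbf_kernel(X, X, self.gamma)
        # dual solution of the weighted regularized least-squares problem:
        # (K + W^{-1}/C) alpha = y
        self.alpha = np.linalg.solve(K + np.diag(1.0 / (self.C * w)), y.astype(float))
        self.X_train = X
        return self

    def decision_function(self, X):
        return rbf_kernel(X, self.X_train, self.gamma) @ self.alpha

    def predict(self, X):
        return np.where(self.decision_function(X) >= 0, 1, -1)

# Toy usage on a synthetic imbalanced two-class problem (hypothetical data):
# X = np.random.randn(200, 5)
# y = np.where(np.random.rand(200) < 0.1, 1, -1)
# model = WeightedKELM(C=10.0, gamma=0.1).fit(X, y)
# preds = model.predict(X)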
