
A Polynomial Kernel Induced Distance Metric to Improve Deep Transfer Learning for Fault Diagnosis of Machines


Abstract:

Deep transfer-learning-based diagnosis models are promising for applying diagnosis knowledge across related machines whose collected data follow different distributions. To reduce the distribution discrepancy, the Gaussian kernel induced maximum mean discrepancy (GK-MMD) is a widely used distance metric that imposes constraints on the training of diagnosis models. However, models using GK-MMD have three weaknesses: 1) GK-MMD may not accurately estimate the distribution discrepancy because it ignores the high-order moment distances of the data; 2) GK-MMD has high time complexity and thus a large computational cost; 3) the transfer performance of GK-MMD-based diagnosis models is sensitive to the selected kernel parameters. To overcome these weaknesses, a distance metric named polynomial kernel induced MMD (PK-MMD) is proposed in this article. Combined with PK-MMD, a diagnosis model is constructed to reuse diagnosis knowledge from one machine on another. The proposed methods are verified in two transfer learning cases, in which the health states of locomotive bearings are identified with the help of data from laboratory motor bearings and gearbox bearings, respectively. The results show that PK-MMD improves on the inefficient computation of GK-MMD, and that the PK-MMD-based diagnosis model achieves better transfer results than other methods.
Published in: IEEE Transactions on Industrial Electronics ( Volume: 67, Issue: 11, November 2020)
Page(s): 9747 - 9757
Date of Publication: 18 November 2019



I. Introduction

With the rapid development of deep learning, intelligent fault diagnosis (IFD) has achieved notable success in recent years [1]–[3]. These successes rest on a common assumption: sufficient labeled data are available to train reliable diagnosis models [4], [5]. In engineering scenarios, however, it is difficult to collect sufficient labeled data because labeling requires substantial human effort. Consequently, the unlabeled data collected from machines in service may not suffice to train diagnosis models that provide accurate results.
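As context for the distance metric discussed above, the empirical (biased) estimate of squared MMD under a polynomial kernel can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the kernel parameters (`degree`, `coef0`) and the synthetic data are assumptions chosen for demonstration.

```python
import numpy as np

def poly_kernel(A, B, degree=2, coef0=1.0):
    """Polynomial kernel matrix: k(a, b) = (a . b + coef0) ** degree."""
    return (A @ B.T + coef0) ** degree

def pk_mmd2(X, Y, degree=2, coef0=1.0):
    """Biased empirical estimate of squared MMD between samples X and Y
    under a polynomial kernel (hypothetical parameter defaults)."""
    k_xx = poly_kernel(X, X, degree, coef0).mean()
    k_yy = poly_kernel(Y, Y, degree, coef0).mean()
    k_xy = poly_kernel(X, Y, degree, coef0).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Illustrative data: a "source" sample, a mean-shifted "target" sample,
# and a second sample drawn from the same distribution as the source.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))
target = rng.normal(1.0, 1.0, size=(200, 8))
same_dist = rng.normal(0.0, 1.0, size=(200, 8))

print(pk_mmd2(source, target))     # large: the distributions differ
print(pk_mmd2(source, same_dist))  # small: same underlying distribution
```

In a transfer-learning diagnosis model, an estimate of this kind would be added to the training loss to penalize distribution discrepancy between source-domain and target-domain features.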

