Abstract:
Deep learning has become increasingly important in fault diagnosis, but it relies on large amounts of high-quality labeled data. Collecting data from distributed machines can expand the dataset, but it usually raises privacy concerns. Moreover, since operating conditions are complex in real-world applications, the collected training data and the test data often follow different distributions, so a model that is well trained on the training data may perform poorly on the test data due to this domain shift. To preserve privacy and mitigate the domain shift, existing federated transfer learning fault diagnosis methods have distributed machines exchange model parameters and features, rather than raw data, with a central server. However, such methods suffer from a single point of failure and a high communication burden. To address these issues, we propose a fully decentralized federated transfer learning fault diagnosis method. More specifically, the proposed method obtains a pretrained model among source nodes with labeled training data, where each source node exchanges model parameters with its neighboring source nodes. Moreover, a novel transfer learning strategy is proposed that aligns features of the test data at the target node with features of the training data at its connected source nodes, mitigating misclassifications caused by the domain shift. The effectiveness of the proposed method is verified by experiments on two public bearing datasets.
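The abstract describes two mechanisms: decentralized exchange of model parameters among neighboring source nodes (no central server) and alignment of target-node test features with source-node training features. The sketch below is a minimal, hypothetical illustration of both ideas, not the authors' implementation; the node topology, the neighbor-averaging rule, and the mean-feature (linear-MMD-style) alignment loss are assumptions introduced only for illustration.

```python
# Minimal sketch (assumed, not the paper's code) of:
#   (1) decentralized parameter exchange among neighboring source nodes
#   (2) aligning target-domain features with source-domain features
import numpy as np

rng = np.random.default_rng(0)

# --- (1) Decentralized parameter exchange (gossip-style averaging) ---
# Assumed ring-like topology: each source node averages its parameters
# with its neighbors' parameters instead of reporting to a central server.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
params = {k: rng.normal(size=8) for k in neighbors}  # per-node model parameters

def gossip_round(params, neighbors):
    new = {}
    for k, nbrs in neighbors.items():
        group = [params[k]] + [params[j] for j in nbrs]
        new[k] = np.mean(group, axis=0)  # average only over the local neighborhood
    return new

for _ in range(20):  # repeated rounds drive the nodes toward consensus
    params = gossip_round(params, neighbors)

# --- (2) Feature alignment across domains (mean-feature / linear-MMD style) ---
# Penalize the gap between mean features of source training data and
# target test data; minimizing this gap is one way to mitigate domain shift.
source_feat = rng.normal(loc=0.0, size=(100, 16))
target_feat = rng.normal(loc=0.5, size=(100, 16))  # shifted (different) domain

def mean_feature_gap(fs, ft):
    return float(np.linalg.norm(fs.mean(axis=0) - ft.mean(axis=0)) ** 2)

spread = np.std(np.stack(list(params.values())), axis=0).max()
print("parameter spread across nodes after gossip:", spread)
print("feature alignment loss (source vs. target):", mean_feature_gap(source_feat, target_feat))
```

In practice the parameters would belong to a deep diagnosis network and the alignment term would be added to the training loss at the target node; the toy vectors and the plain mean-feature distance here only stand in for those components.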
Published in: IEEE Transactions on Industrial Informatics (Volume: 21, Issue: 2, February 2025)