Abstract:
The growing trend of employing deep learning-based models in electrocardiogram (ECG) and phonocardiogram (PCG) classification is evident. However, the weakness of ECG and PCG signals and their susceptibility to noise present challenges for accurate analysis and classification with existing methods. This paper addresses this issue by introducing a multimodal network called MCT for ECG and PCG classification. MCT combines a convolutional neural network (CNN) and a Transformer, allowing the model to effectively capture both local and global information from ECG and PCG signals. Furthermore, this paper proposes a novel attention mechanism-based multimodal fusion method. This fusion method adjusts the weights on both intra-modal and inter-modal patterns, enabling effective fusion of multimodal features. By exploiting complementary information across modalities, MCT improves the accuracy of ECG and PCG classification. MCT is evaluated on both a public human motion state recognition ECG and PCG multimodal dataset and a real-world cardiovascular disease diagnosis ECG and PCG multimodal dataset. The experimental results show that MCT outperforms previous models on ECG and PCG classification.
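To make the architecture concrete, below is a minimal PyTorch sketch of the two ideas the abstract names: a per-modality CNN + Transformer encoder (local then global features) and an attention-based fusion that weights intra-modal patterns via self-attention and inter-modal patterns via cross-attention. The abstract does not specify MCT's actual layers or hyperparameters, so every module name, dimension, and design choice here (e.g., `CNNTransformerEncoder`, `AttentionFusion`, shared cross-attention weights, mean pooling) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CNNTransformerEncoder(nn.Module):
    """Per-modality encoder (hypothetical): 1-D CNN for local patterns,
    then a Transformer encoder for global context."""
    def __init__(self, in_channels=1, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)       # -> (batch, time', d_model)
        return self.transformer(h)            # token sequence per modality

class AttentionFusion(nn.Module):
    """Fuses ECG and PCG token sequences: self-attention re-weights
    intra-modal patterns, cross-attention re-weights inter-modal ones."""
    def __init__(self, d_model=64, n_heads=4, n_classes=2):
        super().__init__()
        self.self_ecg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_pcg = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, ecg, pcg):
        ecg, _ = self.self_ecg(ecg, ecg, ecg)     # intra-modal weighting
        pcg, _ = self.self_pcg(pcg, pcg, pcg)
        f_ecg, _ = self.cross(ecg, pcg, pcg)      # ECG attends to PCG
        f_pcg, _ = self.cross(pcg, ecg, ecg)      # PCG attends to ECG
        z = torch.cat([f_ecg.mean(1), f_pcg.mean(1)], dim=-1)  # pool + concat
        return self.head(z)

# Usage: two synchronized single-channel signals, 1024 samples each.
ecg, pcg = torch.randn(8, 1, 1024), torch.randn(8, 1, 1024)
enc_ecg, enc_pcg = CNNTransformerEncoder(), CNNTransformerEncoder()
logits = AttentionFusion()(enc_ecg(ecg), enc_pcg(pcg))
print(logits.shape)  # torch.Size([8, 2])
```

Sharing one cross-attention module for both directions is a simplification to keep the sketch short; a faithful reproduction would follow the fusion weighting scheme described in the paper itself.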
Date of Conference: 08-14 December 2023
Date Added to IEEE Xplore: 29 December 2023