
A study on 3D multimodal resonance brain tumor image segmentation model


Abstract:

Automatic segmentation of tumor regions from 3D multi-modal brain MRI is of great practical value for brain tumor diagnosis. A fully automated dual-path brain tumor MRI image segmentation model, MEMU-Net, is proposed to address the complex network structures and the difficulty of multiscale feature extraction in multi-modal brain tumor medical image segmentation. First, the network adopts a dual-path encoding-decoding structure modeled on U-Net, improving the accuracy of feature extraction while retaining the advantages of the original U-Net. Second, the M-RepVGG module is used for upsampling and downsampling; it extracts richer spatial features during training while reducing the number of parameters computed in the network, and its simpler architecture at inference improves the speed of the network. Finally, expectation-maximization attention is embedded before upsampling for better fusion of features. On the BraTS 2019 validation set, the model achieves Dice scores of 76.28%, 89.16%, and 80.11% for the enhancing tumor, whole tumor, and tumor core, respectively.
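The Dice scores reported above measure the voxel-wise overlap between a predicted tumor mask and the ground truth, defined as 2|A∩B| / (|A| + |B|). A minimal sketch of the metric on toy binary volumes (the array shapes and values are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient 2*|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 4x4x4 volumes standing in for predicted and ground-truth tumor masks.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
target = np.zeros((4, 4, 4), dtype=np.uint8)
pred[1:3, 1:3, 1:3] = 1      # 8 voxels predicted
target[1:3, 1:3, 2:4] = 1    # 8 voxels ground truth, 4 overlapping
print(dice_score(pred, target))  # 2*4 / (8+8) = 0.5
```

A score of 1.0 means perfect overlap and 0.0 means none; the BraTS benchmark reports this metric separately for the enhancing tumor, whole tumor, and tumor core regions.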
Date of Conference: 20-22 December 2021
Date Added to IEEE Xplore: 30 May 2022
Conference Location: Haikou, Hainan, China

I. Introduction

Brain tumors are diseases with high morbidity; among them, glioma is the most common intracranial tumor and seriously threatens human life. Preemptive and proactive treatments can be delivered to realize personalized, pervasive, and patient-centered healthcare [1]. Magnetic Resonance Imaging (MRI) provides physicians with comprehensive information about brain tumors by non-invasively capturing high-resolution images of the patient's brain with high soft-tissue contrast [2]. Since MRI generally produces 3D images with high processing cost and complexity, considerable inter-patient variability, and unclear tumor boundaries, manual segmentation is laborious and impractical. Deep convolutional neural networks (DCNNs) can learn features autonomously from large amounts of input data without prior knowledge, a quality that precisely meets the requirements for automatic segmentation of MRI images. Havaei et al. [3] demonstrated the effectiveness of deep learning models for MRI brain tumor segmentation in 2017. As medical practice advances, DCNN-based methods are expected to extract more detailed and precise information from tumor images.

The fully convolutional network (FCN) quickly became the baseline architecture for image segmentation [4]: it is trained end to end and performs semantic segmentation through pixel-level classification. U-Net [5] is a popular FCN-style network for image segmentation. However, problems remain with U-Net-based brain tumor segmentation. Convolutional neural networks cannot capture long-range semantic information in an image, and purely convolutional networks struggle to learn the subtle features of tumor images, resulting in poor segmentation accuracy. Moreover, networks with high segmentation accuracy are often accompanied by more complex architectures and slow inference.
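The accuracy/speed tradeoff noted above is what RepVGG-style structural reparameterization, on which the M-RepVGG module builds, is designed to resolve: a multi-branch block (3x3 conv + 1x1 conv + identity) is used during training, then algebraically folded into a single 3x3 convolution for fast inference. The paper's exact M-RepVGG design is not reproduced here; the following is only a sketch of the general fusion idea, using a single-channel 2D convolution for simplicity:

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded 2D cross-correlation of a single-channel map with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
k3 = rng.standard_normal((3, 3))   # 3x3 branch (training time)
k1 = rng.standard_normal((1, 1))   # 1x1 branch (training time)

# Fold the 1x1 branch and the identity branch into one 3x3 kernel:
# the 1x1 kernel becomes a 3x3 kernel with its value at the center,
# and the identity mapping becomes a 3x3 kernel with a 1 at the center.
k1_pad = np.zeros((3, 3)); k1_pad[1, 1] = k1[0, 0]
k_id = np.zeros((3, 3)); k_id[1, 1] = 1.0
k_fused = k3 + k1_pad + k_id

multi_branch = conv2d(x, k3) + conv2d(x, k1) + x   # training-time structure
single_branch = conv2d(x, k_fused)                  # inference-time structure
print(np.allclose(multi_branch, single_branch))     # True
```

Because convolution is linear, the fused single-branch network computes exactly the same function as the multi-branch one, so the richer training-time structure costs nothing at inference.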

