
Self-Supervised Interactive Embedding for One-Shot Organ Segmentation


Abstract:

One-shot organ segmentation (OS2) aims at segmenting the desired organ regions from input medical imaging data with only one pre-annotated example as the reference. By using minimal annotation data to facilitate organ segmentation, OS2 has received great attention in the medical image analysis community because it requires little human annotation. A core issue in OS2 is exploiting the mutual information between the support (reference slice) and the query (test slice). Existing methods rely heavily on inter-slice similarity and require additional slice-allocation mechanisms to reduce its impact on segmentation performance. To address this issue, we build a novel support-query interactive embedding (SQIE) module, which is equipped with channel-wise co-attention, spatial-wise co-attention, and spatial bias transformation blocks to identify “what to look”, “where to look”, and “how to look” in the input test slice. By combining the three mechanisms, we can mine interactive information from the intersection and disputed areas between slices, and establish feature connections between targets even in slices with low similarity. We also propose a self-supervised contrastive learning framework, which transfers knowledge from physical slice positions to the embedding space to facilitate the self-supervised interactive embedding of the query and support slices. Comprehensive experiments on two large benchmarks demonstrate the superior capacity of the proposed approach compared with current alternatives and baseline models.
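The co-attention idea behind the SQIE module can be caricatured in a few lines. The sketch below (not the paper's implementation; all function names and the nested-list tensor layout are assumptions for illustration) reweights a query feature map channel-wise by its agreement with a pooled support descriptor (“what to look”) and spatially by similarity to a mask-pooled support foreground prototype (“where to look”):

```python
import math

def softmax(xs):
    # numerically stable softmax over a flat list
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gap(feat):
    # global average pooling: feat is [C][H][W] -> per-channel descriptor [C]
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def channel_co_attention(query, support):
    """Reweight query channels by their agreement with the support
    descriptor ("what to look")."""
    q_desc, s_desc = gap(query), gap(support)
    # channel affinity: elementwise product of pooled descriptors
    weights = softmax([q * s for q, s in zip(q_desc, s_desc)])
    return [[[v * w for v in row] for row in ch]
            for ch, w in zip(query, weights)]

def spatial_co_attention(query, support, support_mask):
    """Highlight query positions similar to the masked support
    foreground prototype ("where to look")."""
    C, H, W = len(query), len(query[0]), len(query[0][0])
    # foreground prototype: mask-weighted average support feature
    area = sum(sum(row) for row in support_mask) or 1.0
    proto = [sum(support[c][i][j] * support_mask[i][j]
                 for i in range(H) for j in range(W)) / area
             for c in range(C)]
    # similarity map over spatial positions, normalized to an attention map
    sim = [[sum(query[c][i][j] * proto[c] for c in range(C))
            for j in range(W)] for i in range(H)]
    flat = softmax([v for row in sim for v in row])
    attn = [flat[i * W:(i + 1) * W] for i in range(H)]
    return [[[query[c][i][j] * attn[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

A full system would combine these with the spatial bias transformation (“how to look”) and learn the affinities end-to-end rather than computing them from raw features as done here.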
Published in: IEEE Transactions on Biomedical Engineering ( Volume: 70, Issue: 10, October 2023)
Page(s): 2799 - 2808
Date of Publication: 11 September 2023

PubMed ID: 37695956

I. Introduction

As a fundamental problem in medical image analysis and understanding, organ segmentation for volumetric medical images can improve the clinical practice of pathological diagnosis, providing critical information for auxiliary diagnosis and preoperative planning. Deep learning-based models have already achieved significant progress in organ segmentation, but at the cost of a large amount of annotated data [1], [2]. Nevertheless, owing to inherent ethical concerns and the high cost of annotating volumetric medical (CT, MRI) images, training an organ segmentation model on a large-scale annotated database is rarely feasible in medical imaging. Moreover, traditional deep learning methods train segmentation models with many manual annotations for each target category. However, for specific clinicopathological analyses, the number of desired segmentation regions is large [3], and it is impractical to train a segmentation model by providing manual annotations for every new area of interest. Therefore, segmenting a new desired organ region remains a challenging problem.
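One way to sidestep the annotation bottleneck described above, and the route the abstract's self-supervised framework takes, is to transfer knowledge from physical slice positions into the embedding space. The sketch below is a hypothetical InfoNCE-style loss, assuming slices within a small physical radius of the anchor are treated as positives; `infonce_position_loss` and its parameters are illustrative, not the paper's formulation:

```python
import math

def infonce_position_loss(embeddings, positions, tau=0.1, pos_radius=1):
    """Position-guided contrastive loss: slices whose physical positions
    are within `pos_radius` of the anchor count as positives, so nearby
    slices are pulled together in embedding space (illustrative only)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a)) or 1.0  # guard against zero vectors

    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        # cosine similarity between anchor i and every slice
        sims = [dot(embeddings[i], embeddings[j])
                / (norm(embeddings[i]) * norm(embeddings[j]))
                for j in range(n)]
        for j in range(n):
            if j != i and abs(positions[i] - positions[j]) <= pos_radius:
                # -log softmax of the positive over all non-anchor slices
                denom = sum(math.exp(sims[k] / tau)
                            for k in range(n) if k != i)
                total += -math.log(math.exp(sims[j] / tau) / denom)
                count += 1
    return total / max(count, 1)
```

Because the supervisory signal comes from slice geometry rather than human labels, such an objective can pre-train the embedding on unannotated volumes before the single annotated example is introduced.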

Description

The supplemental material includes additional information related to the main paper.
