
Localized Statistical Shape Models for Large-Scale Problems With Few Training Data



Abstract:

Objective: Statistical shape models have been successfully used in numerous biomedical image analysis applications where prior shape information is helpful, such as organ segmentation or data augmentation when training deep learning models. However, training such models requires large data sets, which are often not available; hence, shape models frequently fail to represent local details of unseen shapes. This work introduces a kernel-based method to alleviate this problem via so-called model localization. It is specifically designed to be used in large-scale shape modeling scenarios like deep learning data augmentation and fits seamlessly into the classical shape modeling framework. Method: Relying on recent advances in multi-level shape model localization via distance-based covariance matrix manipulations and Grassmannian-based level fusion, this work proposes a novel and computationally efficient kernel-based localization technique. Moreover, a novel way to improve the specificity of such models via normalizing flow-based density estimation is presented. Results: The method is evaluated on the publicly available JSRT/SCR chest X-ray and IXI brain data sets. The results confirm the effectiveness of the kernelized formulation and also highlight the models' improved specificity when the proposed density estimation method is used. Conclusion: This work shows that flexible and specific shape models can be generated from few training samples in a computationally efficient way by combining ideas from kernel theory and normalizing flows. Significance: The proposed method, together with its publicly available implementation, makes it possible to build shape models from few training samples that are directly usable for applications like data augmentation.
Published in: IEEE Transactions on Biomedical Engineering ( Volume: 69, Issue: 9, September 2022)
Page(s): 2947 - 2957
Date of Publication: 10 March 2022

PubMed ID: 35271438


I. Introduction

Generative statistical models of shape and appearance variations, built on the ideas presented in a series of seminal papers [1]–[4], have a long history in biomedical image analysis [5]. Over the years, they have been applied to many different kinds of shape and appearance representations, including point sets [2], dense deformation fields [6], level-sets [7], image patches [8], and full images [1], [9]. Largely independent of the actual type of data representation used, these models always rely on the same general idea: valid instances of the modeled data lie close to or in a low-dimensional manifold embedded in a high-dimensional space. More specifically, this manifold is assumed to be an affine subspace of the embedding space. The translation vector of the subspace is usually the sample mean of the training population, and its orthonormal basis encodes the main directions of variation seen in the training set. Due to their linear nature, simple and computationally efficient closed-form solutions exist for transporting data between the latent subspace and the embedding space.
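To make the affine-subspace construction described above concrete, the following is a minimal sketch of such a classical linear shape model in Python/NumPy. All function names, parameters, and the toy data are illustrative assumptions and are not taken from the paper or its released implementation, which additionally introduces the localization and density estimation steps discussed later.

    import numpy as np

    def build_shape_model(shapes, var_kept=0.98):
        # shapes: (n_samples, d) matrix; each row is a flattened point set.
        mean = shapes.mean(axis=0)                # translation vector of the affine subspace
        centered = shapes - mean
        # Principal directions of variation via SVD of the centered data matrix.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        eigvals = s ** 2 / (shapes.shape[0] - 1)  # eigenvalues of the sample covariance
        k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept)) + 1
        return mean, vt[:k].T, eigvals[:k]        # orthonormal basis of shape (d, k)

    def project(shape, mean, basis):
        # Closed-form transport: embedding space -> latent (subspace) coordinates.
        return basis.T @ (shape - mean)

    def reconstruct(coeffs, mean, basis):
        # Closed-form transport: latent coordinates -> embedding space.
        return mean + basis @ coeffs

    # Toy usage: 20 training shapes, each with 50 two-dimensional landmarks (d = 100).
    rng = np.random.default_rng(0)
    train = rng.normal(size=(20, 100))
    mean, basis, eigvals = build_shape_model(train)
    unseen = rng.normal(size=100)
    approx = reconstruct(project(unseen, mean, basis), mean, basis)

Here, project and reconstruct are the closed-form transports between the embedding space and the latent subspace mentioned above; a localized, kernel-based model in the spirit of this work would keep this overall structure but manipulate the covariance estimate before its eigendecomposition.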

