Abstract:
The rapid advancement of medical imaging has created a growing demand for high-performance lossless compression of large 3D medical image datasets. Unlike natural images, medical images typically feature three-dimensional structure and high bit depth, necessitating specialized compression techniques. We propose a learnable dual-decoder model, based on a decoder-only transformer, for lossless compression of 3D medical images. Our approach packs voxels into patches, which are processed by a patch-level decoder to extract a patch feature. The voxels, together with the patch feature, are then fed into a voxel-level decoder that models each voxel. This coarse-to-fine modeling strategy reduces the computation required per voxel and enables modeling of long-range dependencies. Experimental results demonstrate that the proposed model achieves state-of-the-art compression performance, improving on the traditional JP3D benchmark by approximately 15% across various datasets.
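The voxel-to-patch packing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, patch size, and token layout are assumptions; it simply shows how a 3D volume could be split into non-overlapping patches, each flattened into a token for a patch-level decoder.

```python
import numpy as np

def pack_voxels_into_patches(volume, patch=(4, 4, 4)):
    # Hypothetical packing step: split a 3D volume of shape (D, H, W)
    # into non-overlapping patches of shape `patch`, flattening each
    # patch into one token vector for a patch-level decoder.
    D, H, W = volume.shape
    pd, ph, pw = patch
    assert D % pd == 0 and H % ph == 0 and W % pw == 0
    tokens = (volume
              .reshape(D // pd, pd, H // ph, ph, W // pw, pw)
              .transpose(0, 2, 4, 1, 3, 5)   # group the 3 block axes, then the 3 intra-patch axes
              .reshape(-1, pd * ph * pw))
    return tokens  # shape: (num_patches, voxels_per_patch)

# Toy 8x8x8 volume with 16-bit values, mimicking high bit-depth medical data.
vol = np.arange(8 * 8 * 8, dtype=np.uint16).reshape(8, 8, 8)
tokens = pack_voxels_into_patches(vol)
print(tokens.shape)  # (8, 64): 2*2*2 patches of 4*4*4 voxels each
```

In the coarse-to-fine scheme, each such token would first be summarized by the patch-level decoder, and the resulting patch feature would then condition the voxel-level decoder that predicts individual voxels within the patch.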
Published in: 2024 IEEE International Conference on Visual Communications and Image Processing (VCIP)
Date of Conference: 08-11 December 2024
Date Added to IEEE Xplore: 27 January 2025