
Fixed-Rate Compressed Floating-Point Arrays


Abstract:

Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
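
To make the block-granularity access pattern concrete, the following is a minimal, hypothetical C++ sketch (not the authors' implementation) of a fixed-rate compressed 3D array with a one-entry software write-back cache. At a rate of r bits per value, each 4x4x4 block of 64 doubles occupies 64r bits instead of 4096, so every block has a known, fixed byte offset and can be read or written independently. The FixedRateArray3 class, its single-entry cache, and the byte-truncation placeholder codec are all illustrative assumptions; the paper's actual codec is a lifted orthogonal block transform followed by embedded coding.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

// Hypothetical fixed-rate compressed 3D array of doubles, for illustration only.
class FixedRateArray3 {
public:
  // Logical dimensions are padded to multiples of 4; 'bytes_per_value' plays
  // the role of the user-specified rate (the paper specifies bits per block).
  FixedRateArray3(size_t nx, size_t ny, size_t nz, size_t bytes_per_value)
    : nx_(nx), ny_(ny), nz_(nz),
      bx_((nx + 3) / 4), by_((ny + 3) / 4), bz_((nz + 3) / 4),
      rate_(bytes_per_value),
      store_(bx_ * by_ * bz_ * 64 * bytes_per_value, 0),
      cache_block_(SIZE_MAX), cache_(64, 0.0), dirty_(false) {}

  ~FixedRateArray3() { flush(); }

  double get(size_t x, size_t y, size_t z) {
    fetch(block_index(x, y, z));
    return cache_[offset(x, y, z)];
  }

  void set(size_t x, size_t y, size_t z, double v) {
    fetch(block_index(x, y, z));
    cache_[offset(x, y, z)] = v;
    dirty_ = true;                 // write-back: re-encode only on eviction
  }

private:
  // Linear index of the 4x4x4 block containing (x, y, z).
  size_t block_index(size_t x, size_t y, size_t z) const {
    return (x / 4) + bx_ * ((y / 4) + by_ * (z / 4));
  }
  // Position of (x, y, z) within its block.
  size_t offset(size_t x, size_t y, size_t z) const {
    return (x % 4) + 4 * ((y % 4) + 4 * (z % 4));
  }
  // Bring the requested block into the one-entry write-back cache.
  void fetch(size_t b) {
    if (b == cache_block_) return;
    flush();
    decode(b);
    cache_block_ = b;
  }
  // Re-encode the cached block into its fixed-size slot if it was modified.
  void flush() {
    if (cache_block_ != SIZE_MAX && dirty_) encode(cache_block_);
    dirty_ = false;
  }
  // Placeholder codec: keep only the 'rate_' most significant bytes of each
  // IEEE double. The paper's codec (block transform + embedded coding) is
  // deliberately replaced by this stand-in so the addressing and caching
  // logic can be shown end to end.
  void encode(size_t b) {
    uint8_t* slot = &store_[b * 64 * rate_];
    for (size_t i = 0; i < 64; i++) {
      uint64_t bits;
      std::memcpy(&bits, &cache_[i], sizeof bits);
      for (size_t k = 0; k < rate_; k++)
        slot[i * rate_ + k] = uint8_t(bits >> (56 - 8 * k));
    }
  }
  void decode(size_t b) {
    const uint8_t* slot = &store_[b * 64 * rate_];
    for (size_t i = 0; i < 64; i++) {
      uint64_t bits = 0;
      for (size_t k = 0; k < rate_; k++)
        bits |= uint64_t(slot[i * rate_ + k]) << (56 - 8 * k);
      std::memcpy(&cache_[i], &bits, sizeof bits);
    }
  }

  size_t nx_, ny_, nz_, bx_, by_, bz_, rate_;
  std::vector<uint8_t> store_;     // fixed-rate compressed storage
  size_t cache_block_;             // block currently cached (SIZE_MAX = none)
  std::vector<double> cache_;      // decompressed 4x4x4 block
  bool dirty_;                     // has the cached block been written to?
};

int main() {
  // 3 bytes/value (24 bits/value) instead of 8 bytes: each 4x4x4 block of
  // doubles shrinks from 512 to 192 bytes, roughly 2.7x compression.
  FixedRateArray3 a(64, 64, 64, 3);
  a.set(10, 20, 30, 3.14159265358979);
  std::cout << a.get(10, 20, 30) << "\n";  // ~3.14111; precision limited by the 24-bit rate
}

In this toy version the cache holds a single block, so traversals that revisit blocks pay repeated encode/decode costs; the paper's write-back cache holds many uncompressed blocks precisely to amortize that cost.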
Published in: IEEE Transactions on Visualization and Computer Graphics (Volume: 20, Issue: 12, 31 December 2014)
Page(s): 2674 - 2683
Date of Publication: 06 November 2014

PubMed ID: 26356981

1 Introduction

Current trends in high-performance computing point to an exponential increase in core count and a commensurate decrease in memory bandwidth per core. Similar bandwidth shortages are already evident in I/O, in inter-node communication, and in transfers between CPU and GPU memory. This trend suggests that the performance of future computations will be dictated in large part by the amount of data movement. Moreover, with large data sets often being generated remotely, e.g., on shared compute clusters or in the cloud, the cost of transferring the results of the computation for visual exploration, quantitative analysis, and archival storage can be substantial.

