I. Introduction
The approximation or compression of an observed signal is a central and widely studied problem in signal processing and communication. The Karhunen–Loève transform (KLT), also referred to as principal component analysis (PCA) [3]–[5], has long played a pivotal role in this context. Assume, for instance, that the observed signal is a random vector x with covariance matrix Σ_x and that the statistics of the source are known. To solve the approximation problem, one can apply the KLT to x to obtain uncorrelated components; the optimal linear least-squares k-th-order approximation of the source is then given by the k components corresponding to the largest eigenvalues of Σ_x. In the case of compression, the uncorrelated components can be compressed independently, with more rate allocated to the components corresponding to the largest eigenvalues of Σ_x, according to a principle sometimes referred to as “reverse water-filling”; see, e.g., [6, p. 349]. This compression process is widely known as transform coding and, if the input source is jointly Gaussian, it can be shown to be optimal [7]. For an excellent review of transform coding and a discussion of the optimality of the KLT in this context, we refer to the exposition in [8].
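As a concrete illustration of the approximation step described above, the following sketch (using NumPy; the variable names and the example covariance are illustrative assumptions, not part of the paper) computes the KLT basis from a known covariance matrix and forms the k-th-order approximation by keeping the components associated with the k largest eigenvalues:

```python
# Illustrative sketch of KLT/PCA k-th-order approximation (assumed setup,
# not the paper's notation): project onto the top-k eigenvectors of the
# source covariance; the expected squared error is the sum of the
# discarded eigenvalues.
import numpy as np

rng = np.random.default_rng(0)

# Assumed known source statistics: an n x n covariance matrix.
n, k = 8, 3
A = rng.standard_normal((n, n))
cov = A @ A.T  # symmetric positive semi-definite covariance

# Eigendecomposition of the covariance (eigh returns eigenvalues ascending).
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # indices of eigenvalues, descending
U_k = eigvecs[:, order[:k]]         # KLT basis vectors for the top-k components

# Approximate one realization x by projecting onto the top-k eigenvectors.
x = rng.multivariate_normal(np.zeros(n), cov)
x_hat = U_k @ (U_k.T @ x)

# Expected mean-squared error of the k-th-order approximation:
expected_mse = eigvals[order[k:]].sum()
```

Because the projection is orthogonal, the approximation never increases the norm of the signal, and taking k = n recovers x exactly; the scalar `expected_mse` is the textbook sum-of-discarded-eigenvalues error, averaged over realizations.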
The distributed KLT problem: Distributed compression of multiple correlated vector sources. Each encoder provides a description of its own observation. This paper investigates the case where the description is a low-dimensional approximation of the observation, and the case where the description is a bit stream at a fixed number of bits per observation.