I. Introduction
We consider principal component analysis (PCA) in Gaussian graphical models. PCA is a classical nonparametric dimensionality reduction method that is frequently used in statistics and machine learning to reduce dimensionality with minimal loss of information [1], [11]. The first few principal components can be interpreted as the best low-dimensional linear approximation to the sample. On the other hand, Gaussian graphical models, also known as covariance selection models, exploit the conditional independence structure of the assumed multivariate Gaussian sample distribution [7], [16]. These models represent the sample distribution on a graph and allow for efficient distributed implementation of statistical inference algorithms, e.g., the well-known belief propagation method and the junction tree algorithm [13], [20]. In particular, decomposable graphs, also known as chordal or triangulated graphs, provide computationally simple inference methods. Our main contribution is the application of decomposable graphical models to PCA, which we nickname DPCA, where "D" denotes both Decomposable and Distributed.
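As background for the discussion above, classical PCA can be computed by an eigendecomposition of the sample covariance matrix, with the leading eigenvectors giving the best low-dimensional linear approximation. The following is a minimal illustrative sketch of this standard computation (not the paper's DPCA algorithm); the synthetic data and variable names are assumptions for the example.

```python
import numpy as np

# Illustrative classical PCA via eigendecomposition of the sample covariance.
rng = np.random.default_rng(0)
n, p, k = 500, 5, 2                      # samples, dimension, components kept

# Draw correlated Gaussian data so a few components carry most of the variance.
A = rng.standard_normal((p, p))
X = rng.standard_normal((n, p)) @ A

Xc = X - X.mean(axis=0)                  # center the sample
S = Xc.T @ Xc / (n - 1)                  # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # re-sort descending
U = eigvecs[:, order[:k]]                # top-k principal directions

Z = Xc @ U                               # k-dimensional scores (projections)
X_hat = Z @ U.T                          # best rank-k linear reconstruction

# Fraction of total variance retained by the first k components.
retained = eigvals[order[:k]].sum() / eigvals.sum()
```

The decomposable-graph structure discussed in the sequel is what allows this otherwise centralized eigencomputation to be distributed across the nodes of the graph.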