1. INTRODUCTION
Statistical dependencies among wavelet coefficients are commonly represented by trees or graphical models such as hidden Markov trees (HMTs) [9]. HMTs provide superior denoising results compared to independent, coefficient-wise thresholding/shrinkage methods such as the lasso [15]. Fast exact and/or approximate inference algorithms exist in many situations, but not in all. In linear inverse problems (e.g., deconvolution, tomography, and compressed sensing), the sensing/observation matrix linearly mixes the Markovian dependency structure, so that simple and exact inference algorithms no longer exist. Past work has dealt with this issue by resorting to greedy or suboptimal iterative reconstruction methods, such as those based on belief propagation [10], iterative reweighting [7], or variants of Orthogonal Matching Pursuit [3], [2] (see also [13], [8]).

In this paper, we propose a new modeling approach based on group-sparsity penalties that leads to convex optimization problems that can be solved exactly and efficiently. Our results show that the approach performs substantially better than standard coefficient-wise methods in deblurring and compressed sensing applications, while being just as computationally efficient. Our work builds on the group lasso with overlap formulation introduced in [4], which we further modify to better represent dependencies among wavelet coefficients. Note that we use the $\ell_2$ norm formulation; similar work could be carried out using the $\ell_\infty$ norm [1].
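For concreteness, the overlap penalty of [4] can be sketched as follows (the notation here is ours for illustration: $x$ is the wavelet coefficient vector, $\mathcal{G}$ the collection of coefficient groups, and each latent vector $v_g$ is supported on group $g$):

\[
\Omega_{\mathrm{overlap}}(x) \;=\; \min_{\{v_g\}}\; \sum_{g \in \mathcal{G}} \|v_g\|_2
\quad \text{subject to} \quad \sum_{g \in \mathcal{G}} v_g = x, \;\; \operatorname{supp}(v_g) \subseteq g.
\]

A reconstruction is then obtained by solving the convex problem $\min_x \frac{1}{2}\|y - \Phi x\|_2^2 + \lambda\, \Omega_{\mathrm{overlap}}(x)$, where $\Phi$ denotes the observation matrix. The key property of this penalty is that it favors estimates whose support is a union of groups, which is what allows dependency structure among wavelet coefficients to be encoded through the choice of $\mathcal{G}$.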
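As a minimal numerical sketch (not the authors' implementation), the latent formulation above can be solved by duplicating each coefficient once per group containing it, so the groups become disjoint and a standard proximal-gradient (ISTA) iteration with block soft-thresholding applies. The function names and the toy parent-child groups below are hypothetical:

import numpy as np

def group_soft_threshold(u, groups, tau):
    # Prox of tau * sum_g ||u_g||_2 for disjoint groups: block soft-thresholding.
    out = u.copy()
    for g in groups:
        norm = np.linalg.norm(u[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * u[g]
    return out

def overlap_group_lasso(A, y, groups, lam, n_iter=500):
    # Duplication trick: copy column j of A once per group containing j,
    # so the overlapping penalty becomes a disjoint group lasso in u.
    idx = np.concatenate(groups)              # latent index -> original coefficient
    A_dup = A[:, idx]
    sizes = [len(g) for g in groups]
    starts = np.cumsum([0] + sizes[:-1])
    dup_groups = [np.arange(s, s + k) for s, k in zip(starts, sizes)]
    u = np.zeros(len(idx))
    step = 1.0 / np.linalg.norm(A_dup, 2) ** 2  # 1/L, L = squared spectral norm
    for _ in range(n_iter):
        grad = A_dup.T @ (A_dup @ u - y)        # gradient of 0.5||y - A_dup u||^2
        u = group_soft_threshold(u - step * grad, dup_groups, step * lam)
    x = np.zeros(A.shape[1])
    np.add.at(x, idx, u)                        # x = sum of latent copies v_g
    return x

# Toy usage: 7 coefficients on a binary tree, hypothetical parent-child groups.
groups = [np.array([0, 1]), np.array([0, 2]), np.array([1, 3]),
          np.array([1, 4]), np.array([2, 5]), np.array([2, 6])]
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 7))
x_true = np.array([1.5, 0.8, 0.0, 0.6, 0.0, 0.0, 0.0])
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = overlap_group_lasso(A, y, groups, lam=0.1)

The duplication makes the proximal step separable across groups, which is what keeps the iteration as cheap as standard coefficient-wise thresholding up to the size of the expanded variable.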