I. Introduction
We investigate the construction of overcomplete dictionaries from training data, also called dictionary learning, a problem of great interest in the signal processing community [1]. Given a dataset ${\mbi Y}$ and a target sparsity $s$ (the maximum number of atoms allowed in each representation), the problem is to find the dictionary ${\mbi D}$ and the sparse representations matrix ${\mbi X}$ such that ${\mbi Y}\approx{\mbi D}{\mbi X}$. The problem can be formulated as: \eqalignno{\mathop{{\rm minimize}}\limits_{{\mbi D},{\mbi X}} \ &\Vert {\mbi Y}-{\mbi D}{\mbi X}\Vert_{F}\cr {\rm subject\ to}\ &\Vert {\mbi x}_{i}\Vert_{0}\leq s,\ 1\leq i\leq N\cr &\Vert{\mbi d}_{j}\Vert_{2}=1,\ 1\leq j\leq m,&\hbox{(1)}} where $\Vert{\mbi x}_{i}\Vert_{0}$ is the $\ell_{0}$ pseudo-norm (the number of nonzero components in column $i$ of ${\mbi X}$), $\Vert\cdot\Vert_{F}$ is the Frobenius norm, and the columns ${\mbi d}_{j}$ of the dictionary ${\mbi D}$, called atoms, are normalized. The most popular solutions to (1) involve an alternating optimization process:
Keep the dictionary ${\mbi D}$ fixed and optimize the sparse representations ${\mbi X}$ by using an approximate sparse reconstruction algorithm (e.g., OMP [2], [3]).
Keep the representations matrix ${\mbi X}$ fixed and update the dictionary ${\mbi D}$. Two popular update methods are MOD [4] and K-SVD [5] (with its approximate variant AK-SVD [6]).
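The alternating process above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it pairs a basic OMP sparse coding step with the MOD dictionary update ${\mbi D}={\mbi Y}{\mbi X}^{T}({\mbi X}{\mbi X}^{T})^{-1}$ followed by atom renormalization; all function names and the random initialization are assumptions of this sketch.

```python
import numpy as np

def omp(D, y, s):
    """Orthogonal Matching Pursuit: greedily select at most s atoms of D,
    refitting least-squares coefficients on the chosen support each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(s):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def dictionary_learning(Y, m, s, iters=20, seed=0):
    """Alternate OMP sparse coding with the MOD dictionary update,
    renormalizing the atoms after every update."""
    rng = np.random.default_rng(seed)
    p, N = Y.shape
    D = rng.standard_normal((p, m))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        # sparse coding step: D fixed, solve for each column of X
        X = np.column_stack([omp(D, Y[:, i], s) for i in range(N)])
        # MOD update: least-squares fit of D to Y given X
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)
        norms = np.linalg.norm(D, axis=0)
        norms[norms == 0] = 1.0   # guard against unused atoms
        D /= norms
    return D, X
```

In practice the dictionary update step is what distinguishes the methods: MOD solves the full least-squares problem in one shot, while K-SVD updates one atom at a time together with the nonzero coefficients that use it.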