I. Introduction
Sparse factorization (also known as sparse representation or sparse coding) of signals has received tremendous attention in recent years [1]–[5], and many valuable theoretical results have been obtained [1], [2]–[5]. Donoho and Elad [5] discussed optimal sparse representation in general (nonorthogonal) dictionaries via ℓ0-norm minimization. An interesting result that they obtained is that a concentration of less than 50% implies an equivalence between the ℓ0-norm solution and the ℓ1-norm solution.

The sparse factorization approach can be used in blind source separation (BSS). In several recent studies, the mixing matrix and the sources were estimated using the maximum a posteriori approach, the maximum likelihood approach, the expectation-maximization algorithm, etc. [6]–[11]. A two-step approach is often used for BSS, in which the mixing matrix is estimated using the K-means or C-means clustering method, while the sources are estimated using a linear programming method [6]. Li and his colleagues discussed sparse representation and the application of the two-step approach to BSS [12], [13]. Using a probabilistic approach, the authors obtained equivalence results for the ℓ0-norm solution and the ℓ1-norm solution. These results showed that if the sources are sufficiently sparse in the analyzed domain, they are more likely to be equal to the ℓ1-norm solution, which can be obtained using a linear programming method.

However, precisely estimating the mixing matrix remains a fundamental problem in the two-step approach. The fuzzy C-means and K-means clustering algorithms are only locally convergent and are sensitive to a lack of source sparseness; thus, they are not very effective in estimating the mixing matrix.
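To make the two-step approach concrete, the following is a minimal toy sketch (our own construction, not the algorithm of [6] or [12], [13]): on a synthetic 2-sensor, 3-source instance with one-sparse sources, step 1 estimates the mixing matrix by clustering the directions of the observed mixture vectors, and step 2 recovers each source column by ℓ1-norm minimization posed as a linear program. All names (`A_true`, `l1_solve`, etc.), the angle-based stand-in for K-means/C-means, and the use of `scipy.optimize.linprog` as a generic LP solver are our assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# --- synthetic sparse BSS instance (illustrative only) ---
n_sources, n_samples = 3, 300
A_true = np.array([[1.0, 0.8, 0.2],
                   [0.2, 0.8, 1.0]])          # 2 sensors, 3 sources
A_true /= np.linalg.norm(A_true, axis=0)      # unit-norm columns

# very sparse sources: exactly one source active per sample
S = np.zeros((n_sources, n_samples))
active = rng.integers(0, n_sources, n_samples)
S[active, np.arange(n_samples)] = rng.laplace(size=n_samples)
X = A_true @ S                                # observed mixtures

# --- step 1: estimate the mixing matrix by clustering directions ---
# with sparse sources, mixture vectors concentrate along the columns
# of the mixing matrix; normalize them and cluster their angles
U = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
U[:, U[0] < 0] *= -1                          # fold antipodal directions
angles = np.arctan2(U[1], U[0])

centers = np.quantile(angles, [0.2, 0.5, 0.8])  # crude 1-D K-means init
for _ in range(50):
    labels = np.argmin(np.abs(angles[:, None] - centers[None, :]), axis=1)
    centers = np.array([angles[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in range(n_sources)])
A_est = np.vstack([np.cos(centers), np.sin(centers)])

# --- step 2: recover each source column by l1-norm minimization ---
# min ||s||_1  s.t.  A s = x   <=>   s = u - v,  u, v >= 0,
# minimize sum(u) + sum(v)  s.t.  A(u - v) = x   (a linear program)
def l1_solve(A, x):
    n = A.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=x,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

S_est = np.column_stack([l1_solve(A_est, X[:, t]) for t in range(n_samples)])
```

Because the toy sources are maximally sparse, the clustered directions match the true mixing columns and the ℓ1-norm solution coincides with the true one-sparse sources; with less sparse sources, the clustering step degrades first, which is exactly the weakness of the two-step approach noted above.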