
On rival penalization controlled competitive learning for clustering with automatic cluster number selection



Abstract:

The existing rival penalized competitive learning (RPCL) algorithm and its variants have provided an attractive way to perform data clustering without knowing the exact number of clusters. However, their performance is sensitive to the preselection of the rival delearning rate. In this paper, we further investigate RPCL and present a mechanism to control the strength of rival penalization dynamically. Consequently, we propose the rival penalization controlled competitive learning (RPCCL) algorithm and its stochastic version. In each of these algorithms, the selection of the delearning rate is circumvented using a novel technique. We compare the performance of RPCCL with that of RPCL on Gaussian mixture clustering and color image segmentation. The experiments have produced promising results.
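For readers unfamiliar with the baseline, the RPCL update that RPCCL builds on can be sketched as follows. This is a minimal illustration under assumed fixed rates (the names rpcl_step, alpha_c, and alpha_r are ours, not the paper's), and it omits the winning-frequency ("conscience") weighting that the full RPCL algorithm applies to the distances. RPCCL's contribution is to replace the fixed delearning rate alpha_r with a dynamically controlled penalization strength.

```python
import numpy as np

def rpcl_step(x, units, alpha_c=0.05, alpha_r=0.002):
    """One RPCL-style update for input x (a sketch, not the paper's code).

    The winner (closest unit) moves toward x, while the rival (second
    closest) is pushed away from x by the delearning rate alpha_r. RPCL's
    results are sensitive to the choice of alpha_r, which is the issue
    RPCCL addresses by controlling the penalization strength dynamically.
    """
    d = np.linalg.norm(units - x, axis=1)            # distances to all units
    winner, rival = np.argsort(d)[:2]                # closest and second closest
    units[winner] += alpha_c * (x - units[winner])   # winner learns
    units[rival]  -= alpha_r * (x - units[rival])    # rival is delearned
    return units
```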
Published in: IEEE Transactions on Knowledge and Data Engineering (Volume: 17, Issue: 11, November 2005)
Page(s): 1583 - 1588
Date of Publication: 30 November 2005


1 Introduction

Competitive learning has been widely applied to a variety of applications such as vector quantization [9], [14], data visualization [8], [13], and particularly unsupervised clustering [1], [6], [21], [24]. In the literature, k-means [15] is a popular competitive learning algorithm, which trains k seed points (also called units hereinafter), denoted as m_1, m_2, ..., m_k, so that they converge to the data cluster centers by minimizing the mean-square-error (MSE) function. In general, the k-means algorithm has at least two major drawbacks: 1) It suffers from the dead-unit problem. If the initial positions of some units are far away from the inputs (also called data points interchangeably) in Euclidean space compared to the other units, these distant units have no opportunity to be trained and therefore immediately become dead units (see the sketch following this paragraph). 2) If the number of clusters k is misspecified, i.e., k is not equal to the true cluster number k*, the performance of the k-means algorithm deteriorates rapidly. Eventually, some of the seed points are not located at the centers of the corresponding clusters. Instead, they are either at boundary points between different clusters or at points biased away from some cluster centers [24].
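To make the dead-unit problem concrete, here is a minimal sketch of online winner-take-all competitive learning; the data, unit positions, and learning rate are illustrative assumptions, not taken from the paper. A unit initialized far from all inputs never wins a competition and is therefore never updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters, plus three units, one of which
# starts far away from all of the data.
data = np.vstack([rng.normal([0.0, 0.0], 0.3, (100, 2)),
                  rng.normal([3.0, 3.0], 0.3, (100, 2))])
units = np.array([[0.5, 0.5], [2.5, 2.5], [50.0, 50.0]])

wins = np.zeros(len(units), dtype=int)
for x in rng.permutation(data):
    winner = np.argmin(np.linalg.norm(units - x, axis=1))  # winner-take-all
    units[winner] += 0.05 * (x - units[winner])            # move winner toward x
    wins[winner] += 1

print(wins)  # e.g. [100 100 0]: the distant unit never wins and stays "dead"
```

Frequency-sensitive variants and RPCL mitigate this by penalizing or handicapping frequent winners, so distant units eventually receive updates as well.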

