
Self-Weighted Supervised Discriminative Feature Selection


Abstract:

In this brief, a novel self-weighted orthogonal linear discriminant analysis (SOLDA) problem is proposed, and a self-weighted supervised discriminative feature selection (SSD-FS) method is derived by introducing sparsity-inducing regularization into the proposed SOLDA problem. Owing to its row-sparse projection, the proposed SSD-FS method avoids a shortcoming of many sparse feature selection approaches, which can suppress the nonzero rows so strongly that too few features remain available for selection. More specifically, the orthogonal constraint guarantees a minimal number of selectable features for the proposed SSD-FS method. In addition, the proposed feature selection method is able to harness discriminant power so that discriminative features are selected. The effectiveness of the proposed SSD-FS method is validated both theoretically and experimentally.
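For orientation, a common way to instantiate such a row-sparse discriminative objective is the following generic sketch (the exact SOLDA formulation, weighting scheme, and scatter matrices are those defined in the paper itself, not reproduced here):

\min_{W \in \mathbb{R}^{d \times m},\; W^{\top} W = I} \; \operatorname{tr}\!\left(W^{\top} S_w W\right) + \gamma \lVert W \rVert_{2,1}, \qquad \lVert W \rVert_{2,1} = \sum_{i=1}^{d} \sqrt{\sum_{j=1}^{m} w_{ij}^{2}},

where S_w denotes a within-class scatter matrix, \gamma > 0 is a regularization parameter, and W is the d-by-m projection. The \ell_{2,1} norm drives entire rows of W to zero, so the surviving nonzero rows index the selected features, while the orthogonal constraint W^{\top} W = I prevents the regularizer from annihilating all rows; this is the mechanism behind the guaranteed minimal number of selectable features mentioned in the abstract.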
Published in: IEEE Transactions on Neural Networks and Learning Systems ( Volume: 29, Issue: 8, August 2018)
Page(s): 3913 - 3918
Date of Publication: 07 September 2017

PubMed ID: 28910778

I. Introduction

The purpose of feature selection is to choose relevant and informative features such that the selected features strengthen the capability of generalization while avoiding overfitting. According to how the label information is utilized, feature selection algorithms can be classified as unsupervised [1]–[4], semisupervised [5], [6], or supervised [7]–[11]. On the other hand, feature selection methods can also be categorized as filter methods [12], wrapper methods [13], or embedded methods [14]; these categories differ in how the learning algorithm is incorporated to evaluate the features. The filter method computes a score for each feature, which keeps its complexity reasonable, and it is classifier-independent because features are chosen based on intrinsic properties of the input data. The wrapper method evaluates candidate feature subsets by the performance of a specific classifier; hence, it can achieve better classification performance on the selected subsets, at the price of repeated classifier training. The embedded method incorporates feature selection into an optimization problem such that strong classification performance can be achieved along with reasonable computational cost.
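As a concrete illustration of the filter strategy, the following minimal Python sketch scores each feature by a Fisher-style criterion (the ratio of between-class to within-class variance) and keeps the top-k features. This is a generic filter example for illustration only, not the SSD-FS method of this brief; the function names and the epsilon guard are choices made here for the sketch.

import numpy as np

def fisher_score(X, y):
    """Score each column of X by between-class vs. within-class variance."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]                      # samples of class c
        class_mean = Xc.mean(axis=0)
        between += Xc.shape[0] * (class_mean - overall_mean) ** 2
        within += ((Xc - class_mean) ** 2).sum(axis=0)
    return between / (within + 1e-12)       # epsilon avoids division by zero

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features.

    X: (n_samples, n_features) array; y: (n_samples,) integer class labels.
    """
    return np.argsort(fisher_score(X, y))[::-1][:k]

Because the score depends only on the data and labels, and not on any downstream classifier, this is a filter method in the taxonomy above; a wrapper method would instead retrain a classifier on candidate subsets, and an embedded method would fold selection into the training objective itself, as SSD-FS does via its row-sparse regularizer.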
