
Improved Moth-Flame Optimization Based on Opposition-Based Learning for Feature Selection



Abstract:

In this paper, an improvement for the Moth-Flame Optimization (MFO) algorithm is proposed based on Opposition-Based Learning (OBL), which enhances the exploration of the search space by computing the opposition values of the solutions generated by MFO. Moreover, such an approach increases the efficiency of MFO, as multiple regions of the search space are investigated at the same time. The proposed algorithm (referred to as OBMFO) avoids the limitation of MFO (and other swarm intelligence algorithms) that results from moving in the direction of the best solution, especially when this direction does not lead to the global optimum. Experiments are run using six classical benchmark functions to compare the performance of OBMFO against MFO. Moreover, OBMFO is used to solve the feature selection problem on eight UCI datasets, in order to improve classification performance by removing irrelevant and redundant features. The comparison results show that OBMFO is superior to MFO on the tested benchmark functions. It also outperforms three other swarm intelligence algorithms in terms of classification performance.
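The opposition step described in the abstract can be sketched as follows. This is a minimal illustration of the standard OBL formula (the opposite of x_i in [lb_i, ub_i] is lb_i + ub_i - x_i) applied to a population, not the paper's exact OBMFO implementation; the `sphere` objective and population sizes are assumptions for the demo.

```python
import random

def opposite(solution, lb, ub):
    """Opposition-based learning: for each dimension x_i in [lb_i, ub_i],
    the opposite value is lb_i + ub_i - x_i."""
    return [l + u - x for x, l, u in zip(solution, lb, ub)]

def obl_step(population, lb, ub, fitness):
    """Evaluate each solution and its opposite, and keep the better of
    the pair (fitness is minimized, as with benchmark functions)."""
    improved = []
    for sol in population:
        opp = opposite(sol, lb, ub)
        improved.append(min(sol, opp, key=fitness))
    return improved

# Toy run on the sphere function (an assumed benchmark objective).
sphere = lambda v: sum(x * x for x in v)
pop = [[random.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
better = obl_step(pop, [-5.0] * 3, [5.0] * 3, sphere)
```

Because each pair (solution, opposite) is evaluated and the better member kept, the step can only improve or preserve each individual's fitness, which is how OBL widens exploration without discarding good candidates.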
Date of Conference: 06-09 December 2019
Date Added to IEEE Xplore: 20 February 2020
Conference Location: Xiamen, China

I. INTRODUCTION

The increase in the amount of data collected from multiple sources requires different strategies for data analysis. Such strategies fall under the areas of data mining and machine learning. One of these strategies is classification, which aims to divide a dataset into different groups according to some selected features. There are several methods used to improve the accuracy of classification, and they can be grouped into two categories. The first category aims to improve classification by using meta-heuristic (MH) approaches. For example, the works in [1], [2] and [3] used artificial bee colony and particle swarm optimization to improve the performance of support vector machines. The second category, on the other hand, involves preparing the dataset before it is used by any classifier, removing irrelevant features that may degrade the classifier's performance. Therefore, selecting the relevant features is required for posterior classification processes, which improves classification accuracy and reduces classification time. Feature selection (FS) is a method used to extract the most representative features from a large set of data. FS is an important step used to reduce the dimensionality of the dataset [4], [5]. Its application is reflected in the speed of the entire processing method and in the performance of the learning model applied in other postprocessing steps [6]. FS can be used to tackle real-world applications, for example signal processing, computer graphics, data mining, and biology [7].
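The trade-off described above (fewer features, preserved accuracy) is often scored with a weighted objective in wrapper-based FS. The following is a minimal sketch of one such common formulation; the weighting scheme and the `alpha` value are assumptions for illustration, since this excerpt does not show the paper's exact objective.

```python
def fs_fitness(mask, error_rate, alpha=0.99):
    """Score a candidate feature subset (lower is better).

    mask       -- binary list; mask[i] == 1 keeps feature i
    error_rate -- classification error measured with the kept features
    alpha      -- weight favoring accuracy over subset size (assumed)
    """
    selected_ratio = sum(mask) / len(mask)
    return alpha * error_rate + (1 - alpha) * selected_ratio

# With equal error, the smaller subset scores better.
small = fs_fitness([1, 0, 0, 0], error_rate=0.10)
large = fs_fitness([1, 1, 1, 1], error_rate=0.10)
```

An optimizer such as OBMFO would minimize this score over binary masks, so that removing a redundant feature is rewarded whenever it does not raise the error rate.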

