I. Introduction
Feature reduction is an important and actively researched issue, especially in machine learning. The reason is simple: using all the features of the original data set for training and classification risks insufficient accuracy for machine-learning models [1]. This problem becomes serious when the original data set contains hundreds of features, which also leads to high computational complexity. However, too few features likewise degrade accuracy, so how to reduce the feature set to a suitable combination is worth studying. The genetic algorithm (GA) is a superior approach to this difficulty [2] compared with exhaustive search. Feature reduction with GA has already been developed in the literature [3–4]. Unfortunately, each feature weight in [3] remains fixed, lacking any adjustment ability; it only assumes the importance of features without actual validation on a machine-learning model. In [4], the search procedure evaluates candidates based only on accuracy and is verified with a single data set; it cannot decide the extent of feature reduction by observing whether accuracy decreases or increases. Instead, we develop a systematic and flexible feature-reduction method that lets users adjust parameters for different situations, providing a design trade-off between accuracy and the number of reduced features.
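To make the GA-based framing concrete, the following is a minimal sketch of GA feature selection on a toy problem; it is not the method proposed here. The chromosome is a binary mask over features, and the fitness function, informative-feature set, penalty weight, and GA parameters are all illustrative assumptions standing in for the accuracy-versus-feature-count trade-off discussed above.

```python
import random

# Hedged sketch: GA feature selection on a toy problem (not the proposed method).
# Chromosome = binary mask over features; fitness rewards selecting a known
# set of "informative" features while penalizing mask size, a stand-in for
# the accuracy-vs-reduced-feature-amount trade-off.

N_FEATURES = 20
INFORMATIVE = {1, 4, 7, 12}   # hypothetical ground truth for the toy fitness
ALPHA = 0.02                  # penalty per selected feature (trade-off knob)

def fitness(mask):
    selected = {i for i, bit in enumerate(mask) if bit}
    hits = len(selected & INFORMATIVE) / len(INFORMATIVE)  # proxy for accuracy
    return hits - ALPHA * len(selected)

def evolve(pop_size=40, generations=60, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("selected features:", sorted(i for i, bit in enumerate(best) if bit))
```

In a real setting, `fitness` would train and evaluate the target machine-learning model on the masked feature subset, and the penalty weight `ALPHA` is where a user-adjustable accuracy/feature-count trade-off could be exposed.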