I. Introduction
A traditional single layer perceptron computes the inner product between a weight vector and each input feature vector, and then assigns the input feature vector to one of two classes according to the sign of the inner product. This is the simplest neural network [1]. However, it can only perform two-class pattern recognition when the classes are linearly separable. To address this limitation, a single layer perceptron whose activation function has a domain partitioned into more than two pieces has been proposed for performing some two-class nonlinearly separable pattern recognition tasks. Here, after the inner product between the weight vector and the input feature vector is computed, a different activation function is applied to the inner product. In particular, the domain of the activation function of this new perceptron is partitioned into more than two pieces, whereas that of the conventional perceptron has only two pieces. In other words, the activation function of this new perceptron resembles a pulse train, whereas that of the conventional perceptron resembles a two-level quantization function.

As with the conventional perceptron, a training algorithm is employed to find the weight vector. However, the training algorithms for this new perceptron [2]–[5] are in general not guaranteed to converge. To address this issue, an optimization approach has been employed to find the weight vector [6]. Nevertheless, both the total number of these new perceptrons and the codewords assigned to them must be determined before the design problem can be formulated as an optimization problem. Therefore, the total number of the perceptrons and the design of the codewords play important roles in applying these new perceptrons to pattern recognition.
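To make the distinction concrete, the following Python sketch contrasts the two activation behaviors on a one-dimensional example. The pulse-train activation shown here (alternating class labels over intervals delimited by assumed thresholds) is an illustrative choice, not the specific function used in [2]–[6], and the names and threshold values are hypothetical.

```python
import numpy as np

def conventional_perceptron(w, x, b=0.0):
    # Two-level quantization: the sign of the inner product splits the
    # input space into two half-spaces, so only linearly separable
    # two-class problems can be solved.
    return 1 if np.dot(w, x) + b >= 0 else -1

def pulse_train_perceptron(w, x, thresholds, b=0.0):
    # Activation whose domain has more than two pieces: the inner
    # product is mapped through a pulse-train-like function that
    # alternates between the two class labels on successive intervals.
    # `thresholds` (sorted, illustrative values) delimit the pieces.
    k = np.searchsorted(thresholds, np.dot(w, x) + b)
    return 1 if k % 2 == 1 else -1

# A one-dimensional two-class problem that is not linearly separable
# (class +1 lies between two regions of class -1): a single
# conventional perceptron fails, while a single pulse-train perceptron
# with three domain pieces classifies it correctly.
w = np.array([1.0])
thresholds = np.array([-0.5, 0.5])  # three pieces of the domain
for x in (np.array([-1.0]), np.array([0.0]), np.array([1.0])):
    print(x[0],
          conventional_perceptron(w, x),
          pulse_train_perceptron(w, x, thresholds))
```

With such an alternating-interval activation, a single weight vector induces several decision regions along one projection direction, which is the property that allows one unit to handle some nonlinearly separable two-class problems.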