I. Introduction
The Extreme Learning Machine (ELM) is a single-hidden-layer feedforward neural network proposed by Prof. Guangbin Huang at Nanyang Technological University in 2004 [1]. Compared with traditional gradient-based learning algorithms, it offers faster learning speed, better generalization performance, and simpler implementation, and it avoids becoming trapped in local optima. This is because ELM randomly assigns the input weights and the biases of the hidden nodes, and then computes the output weights analytically using the Moore-Penrose generalized inverse of the hidden-layer output matrix. This computation is performed only once, with no iterative training. ELM is widely used in classification and regression. However, because the input weights and biases are assigned randomly, the extreme learning machine tends to require a more complex network structure (more hidden nodes) and has a slower response speed than the BP algorithm [2].
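To make the training procedure described above concrete, the following is a minimal sketch of a basic ELM for regression, assuming a sigmoid activation and illustrative function names (elm_train, elm_predict) not taken from the original references: the input weights and hidden biases are drawn at random and left untrained, and the output weights are obtained in a single step from the Moore-Penrose pseudoinverse of the hidden-layer output matrix.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Single-hidden-layer ELM: random input weights, analytic output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (fixed, never trained)
    b = rng.standard_normal(n_hidden)                # random hidden-node biases (fixed, never trained)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix (sigmoid activation)
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage sketch: fit a noisy sine curve with 50 hidden nodes.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T, n_hidden=50)
Y = elm_predict(X, W, b, beta)
print("training MSE:", np.mean((Y - T) ** 2))
```

Because only the single pseudoinverse step is needed, training involves no iterative weight updates; the trade-off, as noted above, is that more hidden nodes are typically required than in a gradient-trained (BP) network.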