I. Introduction
Feed-forward neural networks (FFNNs) are among the most popular artificial neural network (ANN) architectures for tackling complex classification and regression problems. An FFNN consists of simple processing units, called neurons, and the connections among them. In an FFNN, information moves in one direction, from the input layer through the hidden layers to the output layer. Each connection carries a weight that represents its strength. Training an FFNN amounts to finding weights that minimise the error between the actual and predicted outputs. Gradient-based approaches such as the back-propagation algorithm are popular in the literature, although they have a tendency to become trapped in local optima [1].
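As a minimal illustration of the structure described above (not the specific network studied in this paper), the following Python/NumPy sketch implements the forward pass of a single-hidden-layer FFNN and the mean-squared error between actual and predicted outputs. The layer sizes, the sigmoid activation, and the randomly initialised weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation applied element-wise
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Inputs flow in one direction: input -> hidden layer -> output layer
    h = sigmoid(x @ w_hidden + b_hidden)    # hidden-layer activations
    y_hat = sigmoid(h @ w_out + b_out)      # network outputs
    return y_hat

def mse(y_true, y_pred):
    # Error between actual and predicted outputs, minimised during training
    return np.mean((y_true - y_pred) ** 2)

# Illustrative sizes (assumed): 4 inputs, 6 hidden neurons, 1 output
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 6))
b_hidden = np.zeros(6)
w_out = rng.normal(size=(6, 1))
b_out = np.zeros(1)

x = rng.normal(size=(10, 4))                            # 10 example input patterns
y_true = rng.integers(0, 2, size=(10, 1)).astype(float) # example target outputs
y_pred = forward(x, w_hidden, b_hidden, w_out, b_out)
print("MSE:", mse(y_true, y_pred))
```

In this view, training searches the weight space (w_hidden, b_hidden, w_out, b_out) for values that minimise the error; back-propagation does so by following the gradient of that error, which is why it can settle into a local optimum.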