I. Introduction
The challenge for any software company is to assess the effort and cost of developing a software product. The lack of an adequate approach to software estimation so far is reflected in the large number of researchers and practitioners who apply individual techniques and tools in isolation, without exploring the possibility of combining them. Combining traditional mathematical models and machine learning algorithms makes it possible to improve the precision and accuracy of the estimation process [1]. By constructing artificial neural networks based on Taguchi's orthogonal arrays and applying an appropriate methodology, a model that eliminates the existing shortcomings can be created [2], [3], [4].

In constructing the ANN architecture proposed in this paper, a Taguchi orthogonal array was used to optimize the design parameters (a minimal sketch of such an array is given at the end of this section). In an orthogonal-array-based ANN methodology, the engineer must understand the application problem well. The advantage of this approach over other nonlinear models lies in its ability to approximate any function with arbitrary precision. This experiment uses a Taguchi orthogonal array to simplify the optimization of networks from the MFFN (multilayer feed-forward neural network) class, which plays a crucial role in solving many types of problems in science, engineering, medicine, pattern and speech recognition, nuclear science, and other fields [1], [2], [3]. There is no clearly defined theory for calculating the ideal parameter settings of a high-performance MFFN, and even small changes in the parameters can cause vast differences in the behavior of almost any ANN. Considering this, we adopted a trial-and-error strategy, because most existing theoretical work on generalization fails to explain the performance of neural networks in practice.

In addition to selecting suitable preprocessing methods, such as clustering to reduce the dispersion of the data and normalization to homogenize it, it is essential to choose an appropriate activation function, which must be matched to the range of the input values of the chosen datasets [5]. The choice of a particular dataset for all three phases of our proposed experiment depends on the unit of measurement of the input signals. Many publicly available datasets exist today, but not all of them can be used with every software estimation technique. In this paper, the COCOMO2000, NASA60, and Kemerer15 datasets were chosen: they are among the most commonly used, and they express project size through the actual effort in person-months (PM).

This paper is structured as follows: the second section reviews similar recent research dealing with related techniques; the third section presents the methodology applied in the conducted experiment, which consists of three phases: training, testing, and validation; the fourth section is devoted to the discussion of the obtained results, while the fifth section contains concluding remarks.
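To make the orthogonal-array idea concrete, the following is a minimal sketch of how a standard Taguchi L9(3^4) array can lay out nine ANN design trials over four factors at three levels each. The factor names and level values are illustrative assumptions, not the settings used in this study; only the L9 array itself is standard.

```python
# Minimal sketch: laying out ANN design trials with a Taguchi L9(3^4)
# orthogonal array. Factor names and level values are illustrative
# assumptions, not the settings used in this study.

# Standard L9 orthogonal array: 9 trials, 4 factors, 3 levels each
# (entries are 1-based level indices).
L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]

# Hypothetical MFFN design factors, three candidate levels each.
factors = {
    "hidden_units":  [4, 8, 16],
    "learning_rate": [0.001, 0.01, 0.1],
    "activation":    ["sigmoid", "tanh", "relu"],
    "epochs":        [100, 500, 1000],
}

names = list(factors)
for trial, row in enumerate(L9, start=1):
    # Map each 1-based level index in the array row to a concrete value.
    config = {names[i]: factors[names[i]][lvl - 1] for i, lvl in enumerate(row)}
    print(f"Trial {trial}: {config}")
    # In the full experiment, each configuration would be used to train an
    # MFFN and its estimation error recorded; Taguchi analysis of the nine
    # results then indicates the most influential factor levels.
```

The efficiency gain of the orthogonal design is that only nine of the 3^4 = 81 possible factor-level combinations need to be trained, while each pair of levels for any two factors still appears exactly once.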