Introduction
Individuals and medical practitioners (such as physicians and surgeons) dealing with heart illness must take various aspects into account when using the phrase “heart disease” [1]. Potential treatments are also discussed. The major objective of this study is to construct an expert system specifically targeted at producing a better model for detecting heart disease effectively [2]; these aspects are the features primarily considered throughout the research. Medical care providers and patients face obstacles in diagnosing heart illness that a system based on artificial neural networks can help overcome [3]. Such a system would be specially designed to reduce the mortality rate associated with heart disease while simultaneously increasing the rate of effective diagnosis [4].
By using neural networks to detect and eliminate errors in medical expertise, such as misdiagnosis, a prompt preventative approach might be implemented to address the illness at its source [5]. Developing such a model, however, requires a specialized data set containing records of numerous individuals previously diagnosed with a suspected cardiac condition [6]. This study is expected to help determine the pattern of the neural network that connects diagnosed patients with other patients. We propose that a new neural network expert system be developed for the detection of heart disease, with parameters derived from patients’ symptoms serving as inputs to the proposed system [7].
Thallium, a radioactive tracer injected during a stress test, is also used as an input to the knowledge base of the expert neural network system, which comprises a database and a rule base and is critical to the system’s performance [8]. Chronic diseases are long-term illnesses recorded by the World Health Organization (WHO); they include a variety of conditions that constitute fundamental and major health problems throughout the world, and are defined as illnesses that last for an extended period [9]. Many of these conditions share the same risk factors, including other coexisting diseases. Dietary characteristics associated with chronic illnesses include a high intake of meat and oils/fats and a low intake of grains and vegetables, among other factors [10]. This sort of diet promotes an inactive lifestyle, which in turn is associated with greater exposure to cigarettes and alcohol [11].
Over the past 10 years, the number of older persons has been increasing rapidly, with many of them suffering from chronic conditions that make healthcare services necessary. There is a pressing need to transform healthcare and give patients an efficient, creative, and economical solution that can be delivered anywhere, at any time, in a friendly and cost-effective way [12]. As a society, we now require a paradigm of illness prevention that may postpone the need for treatment while enabling effective intervention [13]. Healthcare and medicine should make use of self-tracking technologies, which have proven effective in motivating and encouraging good healthcare habits [14]. Over the last year, there has been increased interest in applying self-tracking to the idea of self-identification. Self-tracking devices are primarily wireless biosensors that may be utilized in the home, automobile, office, and other environments, or implanted in the body. Some healthcare applications and services are digitally accessible via a mobile device, and self-monitoring has the potential to improve health daily [15]. The purpose of this evaluation is to establish present demand and market supply, and to identify opportunities for good health. It offers young, middle-aged, and older adults the possibility of living independently with a high standard of living, with large-scale social and economic advantages [16].
Heart disease is the main cause of death in the world, and these disorders may be classified in a variety of ways; cardiovascular disease is the leading cause of mortality in every country around the globe [17]. In 2019, heart-related disorders claimed the lives of 616,000 individuals. Magnetic resonance imaging, which is based on radio waves and magnetic fields, is also employed for diagnosis. The detailed architecture of the heart can additionally be assessed using echocardiography and other technologies, such as cardiac computed tomography (CT) scans based on x-ray images [18].
The present advancements raise the following problematic concerns:
Smooth integration of heterogeneous devices
Estimation of parameters relevant to the patient’s activities
Communication of an incident to the doctor, who is in charge of recognizing disease and impairment. Heart auscultation is the term used to describe listening to the characteristics of heart sounds.
There are certain classic methods of diagnosis, such as a physical examination and a review of the patient’s medical history. The tests listed below may provide findings on risk factors for heart disease and can be used to determine whether a person is at risk. Cardiovascular disease is a term used to describe disorders of the heart and blood vessels. The correct functioning of the heart is essential for the existence of all living things; if a person has heart problems and is unaware of the ailment, it might result in death. Diagnosing coronary artery disease in its early stages is therefore critical for the preservation of human life. The number of people dying from heart disease is increasing dramatically, and cardiovascular disease is, according to some estimates, the leading cause of mortality around the world. The challenge is to determine the most accurate method of diagnosing cardiac disease. To increase detection accuracy, several machine-learning algorithms have been applied to a set of chosen characteristics.
In this paper, we investigate the influence of the practical use of a neural network system on the accuracy of heart disease detection. By exploring various aspects of the system’s implementation and performance, we aim to understand its practical value in reducing misdiagnosis caused by human error, and to examine whether a well-designed neural network system can lower the likelihood of undetected heart disease in humans. By analyzing relevant studies and data, we seek to determine the effectiveness of such systems in detecting early signs of heart disease and facilitating timely interventions. We also aim to provide valuable advice to medical professionals: by identifying key areas for improvement and highlighting best practices, we intend to support healthcare providers in diagnosing heart disease accurately and efficiently. Addressing these research questions contributes to ongoing advancements in medical diagnostics, particularly heart disease detection, ultimately improving patient outcomes and reducing the burden of cardiovascular disease in the population.
The novelty of this study is its broad applicability in various settings: homes, offices, markets, and public places, in addition to trauma centers, medical units, primary health care units, THQ hospitals, DHQ hospitals, national-level hospitals, and other locations. In the disciplines of medicine and pattern recognition, the development of smart medical devices for monitoring cardiac disease is an exciting research problem. The ultimate goal of this research is to develop technologies that can automatically detect disease-linked conditions before catastrophic events occur; its primary aim is to show that biomarkers derived from perspiration, skin emissions, and emitted radiation may one day be utilized to diagnose diseases of the heart. The research contributions are as follows:
The goal of this paper is to develop a neural network system that is successful in diagnosing cardiac problems.
To determine if a neural network system can be used to remove the possibility of misdiagnosis in the case of cardiac illness.
To compare the efficacy of a neural network system for heart disease diagnosis to that of previous frameworks for heart disease diagnosis.
To determine the likelihood of misdiagnosis of cardiac illness, at a fine-grained level, when using current frameworks versus the proposed neural network system.
This manuscript is structured as follows. The state of the art is discussed in Section II. Section III presents the materials and methods, covering the research methodology phases and the biometric model; we then describe the identification procedure, including the sensing system, training and testing, preprocessing, and feature extraction, followed by the architecture of the sensing system. Section IV presents extensive experimental results and their discussion. The conclusion and future work are described in Section V.
Literature Review
When it comes to the early identification of cardiac illness, prior work has emphasized the relevance of classifying the ECG using methodologies combined with artificial neural networks.
The researchers experimented with many components of the ECG, such as various databases, extraction methodologies, and different classifiers, concluding that artificial neural networks (ANNs) have a positive influence on the early diagnosis of cardiac illness in both humans and animals. The application of machine learning methods is becoming more popular in medical decision support systems. Medical diagnosis aids in the identification of many characteristics that indicate the various forms of an illness. A disease may present with signs that are relevant, irrelevant, or redundant, depending on the diagnostic processes used to identify it [19]. The presence of redundant characteristics contributes to the incorrect categorization of the condition; as a result, deleting unnecessary characteristics decreases both the quantity of the data and the difficulty of the computation. Identifying an appropriate feature subset for efficient classification is a difficult problem that requires much effort, and a thorough search of the data set’s sample space is required to achieve it [20]. Later studies increasingly contribute to developing results for a potentially successful framework for the detection of heart disease; their findings may have been based on an ensemble strategy combining previous data models with current cardiac problems. Wissler and colleagues have argued that a data-driven framework will continue to have a significant impact on how medical practitioners detect chronic cardiac conditions in the future [21]. Because medical practitioners have previously been able to detect heart illness only with a limited number of signs and instruments, the authors were very concerned about the possibility of a mistake occurring. It has been argued repeatedly that such frameworks reduce the possibility of misdiagnosis to a great extent.
However, no such model has been developed yet. The authors applied data mining classification approaches to the cardiovascular disease dataset and evaluated the results. As part of their investigation and construction of a multistage, multivariate quality management system leveraging artificial neural networks, they concluded that the network created in this manner was capable of properly detecting the signals [22].
Another major use of electrocardiogram (ECG) approaches was investigated using dynamic recognition patterns of the ECG, in which the ECG was utilized to classify cardiovascular illness. That study demonstrated technological tools for the classification and identification of CVD via analysis of the FuWai and PTB databases, respectively. It was noted that the use of ANNs for the categorization of cardiovascular and diabetic diseases should be considered; results support the hypothesis that an ANN approach yields greater accuracy in data gathering when identifying diabetes and heart-related disorders, and this is also the type of ANN employed most often in the articles reviewed from 2008 to 2017 [23]. A published report was the starting point for an investigation of a suggested method for general practitioners to consult on the treatment provided to patients suffering from chest discomfort. The researchers concluded that 8.4 percent of patients who experienced chest discomfort were at risk of developing a serious type of heart disease by the completion of their trial. The data set for the research included 22,294 patients, with 28 individuals having chest pain. As discussed above, redundant characteristics contribute to the incorrect categorization of a condition, and removing them decreases both the data quantity and the computational difficulty [24].
Identifying an appropriate feature subset for efficient classification remains a difficult problem that requires a thorough search of the data set’s sample space [25]. One of the leading causes of death across the world is heart failure, particularly in developing countries. The diagnosis of heart failure is difficult, particularly in undeveloped and underdeveloped nations where there is a scarcity of human specialists and specialized technology, and models created on training data often underperform on the data used for testing. We provide a unique diagnostic system and a new pruning strategy, combining pre-pruning and post-pruning, to construct an intelligent system that performs well on both training and testing data while boosting classification accuracy and reducing tree size. A decision tree can be generated using this strategy [26]. The experimental findings are derived from 18 benchmark data sets obtained from the University of California, Irvine (UCI) machine learning repository. Based on the data, our tree pruning technique reduces tree size significantly while simultaneously improving algorithm accuracy [27].
A novel approach based on machine learning techniques is proposed in this study to predict the likelihood of coronary artery atherosclerosis developing. A ridge expectation maximization imputation (REMI) approach has been developed to estimate missing values in the atherosclerosis data sets. The conditional likelihood maximization approach is used to eliminate unimportant properties from the feature space and reduce its size, allowing for faster learning. The performance of two classification algorithms for the prediction of heart disease is examined and compared to earlier research. The impact of missing value imputation on prediction performance is also investigated, and the suggested REMI method is found to outperform standard approaches by a large margin [28]. Modern machine learning and data mining methods are essential in healthcare systems, since they efficiently translate all available data into useful information and improve patient outcomes. According to the literature, there is a potential 12 percent inaccuracy in the diagnosis of illnesses by medical practitioners. Furthermore, for effective disease risk prediction in medical analysis, the area under the curve (AUC) is given more weight as an assessment measure than the accuracy of the forecast [29]. The AUC’s function, however, has not been well defined in the prior literature. For robust and effective disease risk prediction, the suggested NFR model incorporates two methodologies that make use of both the AUC and accuracy to attain a high degree of precision [30]. As noted earlier, identifying an appropriate feature subset, and removing irrelevant and redundant characteristics, remains essential for efficient classification [31], [32].
Material and Method
This section contains the methodology of the paper.
A. Research Methodology
Figure 1 shows the process of the research methodology.
1) Sensing
Sensors are first used to detect human body odor. There are three distinct kinds of glands in human skin, and these glands are responsible for excreting volatile organic chemicals, including volatile compounds that do not vary over time. Bacterial activity on the human skin acts on gland secretions, causing an odor. Volatile organic compounds (VOCs) are excreted by people in large amounts, but only a small number of the main volatile chemicals is required for this study, and the amount excreted has no relation to age or the surroundings. There are many glands in the human body that produce volatile organic molecules, making a comprehensive study of human odor difficult. According to research, the body excretes various VOCs in distinct locations. Some of the VOCs, such as aldehydes, ketones, alcohols, carboxylic acids, esters, and hydrocarbons, are targeted in this procedure. Several scent sensors, including a series of MQ sensors, are employed. A total of nine MQ-series sensors are used: MQ-2 through MQ-9 and MQ-135. Each sensor has a unique set of abilities, such as being able to handle a particular range of volatile substances. Other scent sensors are used to assess the quality of the air: the TGS-2602 smell sensor measures air quality and detects pollutants such as ammonia, VOCs, and hydrochloric acid, whereas the QS-01 sensor is primarily used to monitor water quality. In this procedure, these 11 sensors detect the targeted compounds excreted from the human body or skin.
2) Transmission
An Arduino microcontroller with 14 digital I/O pins is used to transmit the sensed features.
3) Receiving
The Arduino converts the analog signals gathered by the sensors into digital form. The information obtained is digitally stored raw data derived from the smell of human skin. The Arduino captures this raw data and transmits it to the computer for further processing and authentication.
4) Processing
Data binning is required at this stage since the length of the digital data stream is not fixed. Binning, also known as bucketing, is a pre-processing method used to group collected data. Because data is continuously collected and there is no limit on the length of data that may be obtained, binning is necessary before storage.
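As an illustration, the binning step might be sketched as follows (a Python sketch; the paper's own pipeline uses MATLAB, and the bin width here is an assumed parameter, not one the paper specifies):

```python
def bin_readings(readings, bin_width):
    """Group a continuous stream of sensor readings into fixed-size bins
    and summarize each bin by its mean, bounding the stored data length."""
    bins = []
    for start in range(0, len(readings) - bin_width + 1, bin_width):
        window = readings[start:start + bin_width]
        bins.append(sum(window) / bin_width)
    return bins

# Example: 10 raw ADC samples reduced to 5 binned values
raw = [512, 518, 530, 525, 540, 538, 560, 555, 570, 565]
print(bin_readings(raw, 2))
```

Averaging within each bin both bounds the stored length and smooths high-frequency sensor noise before the data reaches the database.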
B. Acquisition Analysis
Chemicals excreted from different areas of the body have varying levels of similarity. A variety of sensors are employed for the detection of common compounds, depending on their capacity to detect them. The biometric system uses a unique set of data strings for the chemicals acquired from the body parts of people.
1) Quality Check Module
This module verifies both the quality of the sensed data and whether it needs to be re-sensed. The same module is also responsible for determining whether the sample was captured at sufficiently high resolution; once a sample is confirmed to be high resolution, it is passed on for template construction.
2) Feature Generator
This module extracts the biometric characteristics from the collected biometric samples. The feature generator produces biometric traits in the form of digital data; combining all of these traits creates a biometric template.
3) Bio Metric Template
A biometric template is a collection of biometric characteristics generated from a set of samples using a feature generator.
4) Matcher Module
The matcher module compares newly acquired templates against those that have previously been saved; this comparison may use one or more previously recorded templates.
5) Decision Module
Finally, a decision module determines whether a match is acceptable. This module’s primary goal is to establish a personal threshold value for each individual and to preserve privacy.
C. Biometrical Model
Figure 2 shows the design of the biometric model. This section discusses the research questions and methodology used to address the hypothesis stated in the introduction. The ANN model is briefly explained, and real-time or self-generated data sets are described, with the goal of achieving the highest accuracy. A variety of research methodologies are proposed based on relevant factors and the study’s objectives; secondary research is emphasized to identify previous theoretical approaches and factors relevant to the study’s goals. The study uses the medical records of 300 patients from various hospitals, from which the heart disease diagnosis system’s input variables are derived. The output is divided into two categories: “absence of illness” and “presence of illness.” Heart disease is predicted using the back-propagation algorithm, a learning method commonly employed in artificial neural networks (ANNs). The data set includes patients aged 29 to 79, with a sex attribute indicating whether the patient is male or female. Chest pain is recorded as four types: typical angina, caused by constricted coronary vessels; atypical angina; non-anginal pain, caused by various other factors; and a fourth, asymptomatic type that is unlikely to be a sign of cardiovascular disease. Further attributes include trestbps, the resting blood pressure; chol, the serum cholesterol level; fbs, the fasting blood sugar level; restecg, the resting electrocardiographic result; thalach, the maximum heart rate achieved; and exang, an indicator of angina triggered by exercise. The remaining attributes are oldpeak, the ST depression induced by exercise relative to rest; slope, the slope of the peak exercise ST segment; ca, the number of major vessels; and thal, the thallium stress test result. Classification is used to predict future outcomes based on existing data, and a group of classifiers is used to improve classification accuracy and identify the presence of a heart condition.
The preparation data set is divided into training and validation sets, and the test data set is used to assess the classifiers’ strength.
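A minimal sketch of the back-propagation learning step on a two-class output (pure Python for illustration; the network size, learning rate, and toy data below are assumptions for demonstration, not the paper's actual 300-patient configuration):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny feedforward network: 2 inputs -> 2 hidden units -> 1 output
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(w_o[j] * h[j] for j in range(2)) + b_o)
    return h, o

def train_step(x, target, lr=0.5):
    global b_o
    h, o = forward(x)
    # Output delta: error times sigmoid derivative o * (1 - o)
    delta_o = (target - o) * o * (1 - o)
    # Hidden deltas: error back-propagated through the output weights
    delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
    for j in range(2):
        w_o[j] += lr * delta_o * h[j]
        b_h[j] += lr * delta_h[j]
        for i in range(2):
            w_h[j][i] += lr * delta_h[j] * x[i]
    b_o += lr * delta_o

# Toy two-class task standing in for absence/presence of illness
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def mse():
    return sum((t - forward(x)[1]) ** 2 for x, t in data) / len(data)

err_before = mse()
for _ in range(3000):
    for x, t in data:
        train_step(x, t)
err_after = mse()
print(f"MSE before training: {err_before:.3f}, after: {err_after:.3f}")
```

The same weight-update rule scales directly to the 13 clinical attributes described above; in practice a library implementation (such as MATLAB's neural network tools, which the paper uses) replaces this hand-rolled loop.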
Figure 3 is a block diagram illustrating the training and testing methodology used to identify an individual.
D. Identification Process
The identification process is divided into two parts: a sensing system and a pattern recognition system.
1) Sensing System
The sensing system is capable of distinguishing between several scents in its immediate surroundings and is used to collect samples. An array of MQ sensors and other scent sensors forms the basis of the whole concept; this array is capable of detecting various VOCs. A metal oxide array is employed, since each sensor is responsible for a particular duty and detects a distinct gas. These sensors contain a tiny heater; the heaters produce an electrochemical response when the VOCs evaporating off human skin come into contact with them. The data on human odor is gathered once an electric current has passed through the circuit. Because of the technology employed, this model can be utilized at ambient temperature. For odor identification, a database is developed to store and organize information about the many types of smells.
2) Pattern Recognition System
For this method of pattern recognition, it’s important to know that a pattern in the digital world may be mathematically expressed as well as seen in real life. Pattern recognition uses a variety of machine learning methods. Analysis of data stored in databases, as well as pattern recognition of data stored in databases, is used to classify the data.
3) Pattern Recognition Learning and Testing
Giving the system some unique inputs allows it to learn, and the trained system gradually becomes more refined. The system’s performance determines the algorithm used to analyze the data. Afterward, the data set is split into two categories: training and testing.
4) Training
This method is used to create a working model of the system. At this stage, a model is trained by exposing it to a collection of data in the form of human odor. The algorithms are run on this data set, establishing connections between the input and output data to create a model from the acquired data.
5) Testing
The testing phase uses the testing data set. At this point, the system’s accuracy is measured to determine whether it is working correctly and generating the expected results.
E. Pre-Processing
During the pre-processing step, the sensor array response is compressed and sample-to-sample discrepancies are reduced. Compression and normalization are two common techniques for this.
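A common way to realize these two steps is fractional baseline correction followed by unit-length normalization; the sketch below is illustrative (the baseline values are assumed clean-air readings, not numbers taken from the paper):

```python
import math

def baseline_correct(response, baseline):
    """Fractional baseline correction: express each sensor's reading
    relative to its clean-air baseline, compressing the response range."""
    return [(r - b) / b for r, b in zip(response, baseline)]

def unit_normalize(vector):
    """Scale the response vector to unit length, reducing
    sample-to-sample magnitude discrepancies."""
    norm = math.sqrt(sum(v * v for v in vector)) or 1.0
    return [v / norm for v in vector]

# Hypothetical readings from three sensors and their clean-air baselines
corrected = baseline_correct([600, 450, 520], [500, 400, 500])
print(unit_normalize(corrected))
```

Baseline correction compresses drift in the individual sensors, while the final normalization removes overall magnitude differences between samples taken at different times.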
1) Feature Extraction
Feature extraction gleans useful and important information from raw data; the retrieved data is then organized into discrete clusters of significant values. Since a raw data set contains many variables, this is done to save computational resources: raw data would otherwise require additional computational power. Feature extraction techniques combine relevant variables into features, so that fewer computing resources are needed to analyze the data while still accurately representing the original wide data set.
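For instance, each sensor's five-second window could be reduced to a few summary features; the particular features below (mean, peak, slope) are illustrative assumptions rather than the paper's stated choice:

```python
def extract_features(window):
    """Reduce one sensor's raw time series to a compact feature vector:
    mean level, peak response, and overall slope across the window."""
    n = len(window)
    mean = sum(window) / n
    peak = max(window)
    slope = (window[-1] - window[0]) / (n - 1)
    return [mean, peak, slope]

# A raw 5-sample window collapses to 3 features
print(extract_features([510, 530, 560, 555, 570]))  # [545.0, 570, 15.0]
```

Concatenating such per-sensor vectors across all 10 sensors gives a fixed-length input suitable for the neural network classifier, regardless of how long the raw stream was.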
F. General Model of Sensing System
The sensor array detects odors, and an artificial neural network algorithm determines their class, as shown in Figure 4. The following figure illustrates how the sensors gather data in the recommended approach.
The figure depicts the VOCs exhaled by a person’s hands. Volatile organic compounds (VOCs) are detected when the hands come into contact with the sensor array; the heaters within the sensors then interact with the body’s VOCs. Analog signals received from the sensors are converted into digital signals by a microcontroller. Only the last five seconds’ worth of data are collected, as shown in Table 1 below.
G. Analyzer of ECG
Figure 5 explains the ECG analysis process.
2) Atrial Fibrillation
Figure 7 illustrates atrial fibrillation, showing the ECG graph used to detect a first-degree heart attack.
H. Signals Generating
2) Step II
MATLAB is used to create a new Simulink model shown in Figure 10, which is then used to build the signals.
Results and Experiments
An electronic nose has been built for the tests. This e-nose uses gas sensors from the MQ-2 through MQ-9 series, as well as the TGS2602, MQ-135, and QS-01. These gas sensors assess the volatile organic chemicals in human body odor. Unlike the MQ-series sensors, which incorporate heaters, the TGS2602 sensor can detect isovaleric acid in human body odor. The term “electrochemical sensor” refers to a device whose heaters interact with the human body’s emissions to produce an electrochemical response.
At room temperature, the sensors can detect the odor of the human body, allowing them to collect information about a person’s health; the smell may be used to identify a person, and the sensors act accordingly. The sensors for detecting various gases were addressed earlier. The odor data is collected on an Arduino UNO microcontroller and sent onward using an Arduino module. The Arduino UNO is basic and straightforward to use, and the hardware may be programmed with ease; sensors attached to this development board allow it to read inputs and convert them to outputs. The Arduino UNO is based on the ATmega328p microcontroller. A typical board has 14 digital input/output pins and six analog input pins, a clock speed of 16 MHz, a reset button, a USB port, and a power jack, so that the Arduino can communicate the sensor data to the PC. Using the Arduino’s six analog input pins, the data is converted to a digital format. After the raw data in the form of human odor is obtained, the binning procedure is carried out, and then a neural network routine in MATLAB is used for authentication. Records of twenty people, with five odor samples each, are maintained in the database.
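The analog-to-digital step can be illustrated as follows (a Python sketch of the conversion the UNO's 10-bit ADC performs, assuming the standard 5 V reference):

```python
def adc_to_voltage(raw, vref=5.0, max_count=1023):
    """Map a 10-bit ADC reading (0-1023) from an analog pin back to
    the sensor voltage it represents."""
    return raw * vref / max_count

print(adc_to_voltage(0))     # 0.0 V
print(adc_to_voltage(1023))  # 5.0 V (full scale)
```

It is these quantized values, not raw voltages, that are binned and stored in the database for the MATLAB authentication stage.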
A. First Experiment
In the first experiment, sensors collect odor samples from the hands of five people; the analog signals from the hands are picked up by the 10 sensors. The sensors are utilized one at a time, and each one takes around five seconds to process. The sensor data is analog, and the Arduino transforms it into digital data; for each sensor, the first five seconds of its continuous output are collected. In total, 25 odor samples were collected from the five people, each over five seconds, and entered into a database. A feedforward neural network approach is used to classify patterns in the sensory data: MATLAB’s neural network recognition software applies the NN (neural network) method to the human body odor samples it receives.
Figure 17 shows the outcomes. The NN algorithm identifies the scent patterns generated by a person’s hand. The accuracy under the standard measurement assessment is 73±2 percent.
B. Second Experiment
In the second experiment, the scent of each person’s hands is gathered for testing from 10 persons. Data is gathered from the sensors, with the 10 sensors picking up the analog signals transmitted by human hands. To apply the array approach, two arrays are built and data is gathered by arranging the sensors sequentially: data from the first six sensors is collected first, followed by the remaining four sensors. Because the data is in analog form, it is converted to digital with the aid of the Arduino. The sensor data is continuous, thus only the first five seconds of data are acquired. From the ten people, 50 odor samples are collected, each over five seconds, and saved in a database. A feedforward neural network approach is utilized for pattern recognition from the sensory data: MATLAB’s neural network recognition program applies the NN method to the odor samples received from the human body. Figure 18 shows the obtained findings. The NN algorithm recognizes the scent patterns obtained from human hands; the accuracy under the standard measure assessment is 80±2 percent.
C. Third Experiment
In the third experiment, data is collected for testing by taking the odor from the hands of each of ten people. Data is again gathered through the ten sensors, which detect the analog signals emitted by human hands. To apply the array approach, two arrays are built and data is gathered by arranging the sensors sequentially: data from the first six sensors is collected first, then data from the last four. Because the data is in analog form, it is converted to digital with the aid of Arduino. The sensor output is continuous, so only the first five seconds of data are acquired. Fifty odor samples are collected from the ten people in five-second windows, and this data is saved in a database. A feedforward neural network is used for pattern recognition on the sensory data: MATLAB's neural network recognition program applies the NN algorithm to the body-odor samples. The obtained findings are shown in Figure 19. The NN algorithm recognizes the scent patterns collected from human hands. The accuracy of the standard measurement assessment is 86±2%.
D. Fourth Experiment
To gather data for the fourth experiment, odors from the hands of 20 participants were collected and analyzed. Data is again gathered through the 10 sensors that detect the analog signals emitted by human hands. To apply the array approach, two arrays are built and data is gathered by arranging the sensors sequentially: data from the first six sensors is collected first, then data from the last four. Because the data is in analog form, it is converted to digital with the aid of Arduino. The sensor output is continuous, so only the first five seconds of data are acquired. One hundred odor samples are collected from the 20 participants in five-second windows and stored in a database. A feedforward neural network is used for pattern recognition on the sensory data: MATLAB's neural network recognition program applies the NN algorithm to the body-odor samples. The obtained findings are shown in Figure 20. The NN algorithm recognizes the scent patterns collected from human hands. The accuracy of the standard measurement assessment is 90±4%.
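The feedforward classification step applied in each experiment can be illustrated with a minimal forward pass. This is a sketch, not MATLAB's pattern-recognition network: it assumes one tanh hidden layer and a softmax output with one class per person, and the toy weights below are invented purely to show the mechanics.

```python
import math

def feedforward(x, w_hidden, w_out):
    """One hidden layer (tanh) followed by a softmax over person classes."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    exps = [math.exp(s - max(scores)) for s in scores]  # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

def classify(x, w_hidden, w_out):
    """Return the index of the most probable person."""
    probs = feedforward(x, w_hidden, w_out)
    return max(range(len(probs)), key=probs.__getitem__)

# Toy weights: 10 sensor inputs -> 2 hidden units -> 2 person classes.
w_hidden = [[1] + [0] * 9, [0, 1] + [0] * 8]
w_out = [[1, 0], [0, 1]]
```

In practice the weights would come from training on the stored odor database, e.g. with backpropagation, rather than being hand-set as here.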
E. Fifth Experiment
To collect data for the fifth experiment, odors from the hands of each of the 20 participants were taken. The 10 sensors pick up the analog signals emitted by the hands. The sensors are placed in a series using the array approach, which requires building two arrays for data acquisition: the first six sensors are read first, then the remaining four. Because the information is analog, Arduino is used to convert it to digital. The sensor output is a continuous stream, so only the first five seconds of data are collected. One hundred odor samples are acquired from the 20 participants in five-second windows and stored in a database. A feedforward neural network is used to classify patterns from the sensory input: MATLAB's neural network recognition program applies the NN algorithm to the body-odor samples. Fig 21 depicts the findings obtained. The ANN algorithm identifies the scent patterns produced by human hands. The accuracy of the standard measurement assessment is 95%.
Five different trials are carried out to train the system. In the first experiment, odor samples were gathered from the hands of five people, and data was acquired through the 10 sensors that sensed the analog signals from the hands. Table 2 shows that the results are correct 71% of the time. The average accuracy is obtained by repeating the tests several times on the same data set. The second experiment is carried out to improve accuracy by increasing the size of the data set; for this reason, the number of people investigated is raised. In the second experiment, data is acquired by collecting the odor of the hands of each of 10 people, with the ten sensors detecting the analog signals given off by the hands. To apply the array approach, two arrays are built and data is gathered by arranging the sensors sequentially. Ten samples are collected from each individual for the experiment.
The acquired findings are displayed in Fig 21, which demonstrates that increasing the number of subjects raises the accuracy to 85%. Using the same data set, the number of trials is increased to obtain an average accuracy of 80.2%. In the third experiment, a six-sensor array is employed to gather data. The odor of ten participants is collected at various intervals of time, and the findings reveal that the accuracy has grown to 84.40%. Experiments are repeated several times with the same data sets, giving an average accuracy of 86.2%. A further experiment is conducted to boost the accuracy level; for this purpose, the number of individuals providing odor samples is raised to twenty. Using scent samples from the 20 people, the accuracy rate is 86.30%. When the same experiment is repeated, a 95% accuracy rate is reached only once, so the overall accuracy is 86%. Table 3 shows a sample data set.
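The "average accuracy" figures quoted above come from repeating each experiment on the same data set and averaging the per-run accuracies; the ±x values reported earlier reflect the spread across runs. A minimal sketch (the per-run values below are illustrative, chosen only to reproduce the first experiment's reported 73.2% average, and are not from the paper):

```python
def average_accuracy(run_accuracies):
    """Mean accuracy over repeated runs, plus the max-min spread across runs."""
    mean = sum(run_accuracies) / len(run_accuracies)
    spread = max(run_accuracies) - min(run_accuracies)
    return mean, spread

# Hypothetical repeated-run accuracies for the first experiment (percent).
mean, spread = average_accuracy([71.0, 73.0, 75.0, 73.8])
```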
Furthermore, to verify the robustness of the proposed model, Table 4 reports results obtained with several machine learning techniques (classifiers) on the locally acquired data set.
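A robustness check of this kind amounts to running several classifiers over the same stored feature vectors and comparing their accuracies. The sketch below uses a from-scratch 1-nearest-neighbour classifier as one such baseline; the paper's Table 4 presumably uses library implementations, and the toy vectors here are invented for illustration.

```python
def dist(a, b):
    """Squared Euclidean distance between two sensor feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def knn_predict(train_x, train_y, x):
    """1-nearest-neighbour: label of the closest stored odor sample."""
    return min(zip(train_x, train_y), key=lambda p: dist(p[0], x))[1]

def accuracy(predict, test_x, test_y):
    """Fraction of test samples whose predicted person label is correct."""
    hits = sum(predict(x) == y for x, y in zip(test_x, test_y))
    return hits / len(test_y)

# Toy two-person data set standing in for the locally acquired samples.
train_x, train_y = [[0, 0], [10, 10]], [0, 1]
acc = accuracy(lambda v: knn_predict(train_x, train_y, v),
               [[1, 1], [9, 9]], [0, 1])
```

Each classifier under comparison would be scored with the same `accuracy` helper on a held-out split of the odor database.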
Conclusion
Human body odor is thought to be a great identifier and is employed in wearable devices for biometric identification.
A neural network algorithm is applied to the odor samples to identify olfactory patterns. Many researchers have employed various approaches, but one study showed that an array of diverse sensors used as an electronic nose is the cheapest system capable of detecting human odor. The analog signals are transformed into digital signals by an Arduino microcontroller, and a neural network performs authentication and pattern classification. The sample information acquired from the various sensors is summarized below. In the first experiment, 10 sensors are used to collect data from five people, with each sensor used independently for data collection. According to Table 4, the accuracy is 71%; repeated tests on the same data set yield an average accuracy of 73.2%, and the accuracy increases as the number of data sets grows. In the second experiment there are 10 data sets, and two sensor arrays are developed to capture the data, with samples from ten people serving as trials for each array. Table 4 shows that the accuracy of the results is 80.5%, and repeating the tests on the same data sets gives an average of 80.2%. The third experiment employs an array of six sensors; with odor samples gathered from 10 people at various times, the accuracy improves over the previously achieved level to 84.40%, and repeated tests on the same data sets yield an average of 86.2%. The accuracy is then raised further by increasing the number of subjects to twenty, and additional studies are carried out on the odors of these twenty people. The acquired accuracy is 86% most of the time and reaches 95% only once, so the final accuracy is 86%.
It is concluded that certain scents are not correctly detected while the other samples are correctly identified, resulting in an average accuracy of 86%. In the future, work must be done to improve accuracy by enhancing sensor performance and sensitivity. Accuracy also suffers when a person uses deodorant, an issue that must be addressed: improved technology should be able to distinguish human scent from other odors regardless of whether deodorant has been applied. Detecting human scent has become a pressing need in the marketplace.