A Method for Predicting Production Costs Based on Data Fusion from Multiple Sources for Industry 4.0: Trends and Applications of Machine Learning Methods

There is a growing need for manufacturing processes that improve product quality and production rates while reducing costs. With the advent of multisensor information fusion technology, practitioners can acquire a broader range of information. This article discusses several data fusion and machine learning methods within the context of the Industry 4.0 paradigm. Depending on its purpose, a prognostic method can be categorized as descriptive, predictive, or prescriptive. Multisource information fusion theory is examined and applied to artificial neural networks (ANNs) and convolutional neural networks (CNNs), which are then used to predict production costs. In this study, ANN and CNN predictions are compared. The CNN predicts the six cost categories more accurately than the ANN and comes closer to the true value of each category; its forecast error for the current month's total income is 0.0234. Because of its higher prediction accuracy and more straightforward training procedure, the CNN is better suited to incorporating information from several sources. Both neural networks, however, misestimate certain costs, such as direct material costs and item consumption prices.


Introduction
In manufacturing, the fourth industrial revolution refers to a broad movement to adopt new communication systems and protocols, cybersecurity norms, display devices that can present multiple streams simultaneously, mobile and compact communication devices with ever-increasing computational capabilities, and artificial intelligence methods. As this international trend has grown, the Internet has expanded to permeate every facet of human life, including economic and social life [1-3]. Digital technologies have also been widely adopted within industrial manufacturing procedures and investments as part of this paradigm shift. Essentially, the smart factories of tomorrow will be built on the convergence of the physical and digital worlds. Despite the growing popularity of deep learning and neural networks, combining multiple sources of data and information remains challenging.

In decision-making, Bayesian reasoning provides a rigorous method for quantifying uncertainty [4]. Bayesian inference quantifies uncertainty by combining multiple data sources and accounting for uncertainty in model parameters. Using multiple data sources within a Bayesian framework, Chandra and Kapoor proposed a neural-network-based transfer learning method. They used Markov chain Monte Carlo to draw samples from the posterior distribution in a multisource Bayesian transfer learning framework. Despite ambiguous experimental results, the framework offers a robust probability-based foundation for decision-making. The pattern recognition and artificial intelligence communities have focused on egocentric activity recognition because of its wide application to human-centered systems, such as dietary and physical activity assessment and patient monitoring [5, 6]. The authors created a simple probability table based on a knowledge-driven multisource fusion architecture to provide frequent information regarding activities of daily living (ADLs). Using statistics and support vector machines based on information theory, a well-trained convolutional neural network creates a set of text labels from routine information and other sensor data. The proposed method can accurately recognize several previously challenging sedentary activities, covering 15 predefined ADL categories. Compared to previous methods, it provides better results when applied to data collected with wearable devices. This research has not yet been widely adopted, despite an average accuracy of 85.4% for the 15 ADLs.
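The Bayesian view of fusion mentioned above can be made concrete with a small sketch. The snippet below is an illustrative example, not the cited framework: it fuses two independent Gaussian estimates of the same quantity by precision weighting, which is the standard conjugate update for Gaussian sources; the sensor values are invented.

```python
# Illustrative sketch: Bayesian fusion of two independent Gaussian
# estimates of the same quantity. Each source i reports a mean m_i and
# a variance v_i; the fused posterior is precision-weighted.

def fuse_gaussian(m1, v1, m2, v2):
    """Combine two independent Gaussian estimates (mean, variance)."""
    w1, w2 = 1.0 / v1, 1.0 / v2          # precisions
    v = 1.0 / (w1 + w2)                  # fused variance (always smaller)
    m = v * (w1 * m1 + w2 * m2)          # fused mean, pulled toward the precise source
    return m, v

# A precise sensor (variance 1) dominates an imprecise one (variance 4).
mean, var = fuse_gaussian(10.0, 1.0, 14.0, 4.0)
print(round(mean, 6), round(var, 6))
```

Note that the fused variance is smaller than either input variance, which is exactly the uncertainty-reduction benefit multisource fusion is claimed to provide.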
Several robotics-related research domains have recently benefited from artificial neural networks (ANNs) because of their superior spatial feature abstraction and keyframe prediction capabilities. An ANN is a connectionist model, which makes it inherently weak at long-term planning, logical reasoning, and multistep decision-making. Zuo et al. developed an enhanced ANN (SANN) model that combines the feature detection abilities of an ANN with the long-term cognitive planning capabilities of the state, operator, and result (SOAR) architecture [7]. A logical planning module is added to the classic ANN to improve its performance by imitating the cognitive operation of the human brain. The SOAR planning probability vector was merged with the original feature array via a data fusion module [7-12]. Experiments have shown that the suggested architecture is efficient and accurate and has excellent potential for more challenging tasks requiring rapid categorization, planning, and learning. It can recognize grasping sequences when multiple objects are involved and perform multiobject cooperative grasping. However, the benefits of these applications are limited [3]. Diagnosis based on data fusion is a promising application of the Industrial Internet of Things for the efficient use of motor monitoring data. A multimodal neural network based on dynamic routing (DRMNN) was introduced by Wang et al., following the concept of deep multimodal learning [8, 9]. They proposed a strategy for dimensionality reduction and invariant feature capture that uses vibration and stator current signals to extract multimodal features from multisource data. The decision-making layer applies a dynamic routing method to assign weights to the various modes according to their relative relevance. The DRMNN proved practical and robust in a motor test platform trial.
To implement robot programming by demonstration, Wang et al. proposed an implicit interaction technique based on the fusion of forearm surface electromyography (sEMG) and inertial multisource information [10]. An M-DDPG method for modifying assembly parameters was presented, based on the demonstrator's examples and lessons learned, to improve adaptability to diverse assembly components. To improve the generalization performance and accuracy of gesture identification, they proposed an improved PCNN (1D-PCNN) based on one-dimensional convolution and pooling to extract inertial and EMG features. Previous studies found that retailers' prior disclosure of imprecise information flow reduces the supply chain's profitability and costs retailers money. By considering the possibility of manufacturers encroaching and facing production diseconomies or economies, Zhao and Li extended the study of information sharing. Manufacturing costs need not be addressed when retailers expropriate manufacturers and share demand information with producers [11]. Producers may offer retailers a further incentive to increase the accuracy of their demand estimates.
When the manufacturer encroaches and experiences production diseconomies, the retailer benefits from information exchange, and the manufacturer benefits from minimal production only under exceptional conditions. Whether retailers gain more when demand becomes more variable or when demand signals become more accurate has not been examined [6]. There is a tendency in the educational publishing industry to build large stocks for "on-demand manufacturing," but modifying the item can lead to obsolescence problems. He et al. addressed two distinct but related problems [12]. A variety of printed items are forecasted and managed using predictive models, so demand estimates can be more precise and inventory obsolescence can be reduced. Educational publishing merchants also benefit from contracts that account for knowledge asymmetries.
Consequently, profit margins have not been optimized throughout the supply chain, and manufacturer profits are also low. To increase the profitability of the supply chain, the report recommends encouraging merchants to provide accurate data. The suggested approach was validated through an empirical investigation of Taiwan's top education publishers. With the proposed printing choice model, prediction accuracy increases by 3.7% and costs fall by 8.3%. As a result of the contract design, the manufacturer's and supply chain's profitability increase by 0.5% and 2.7%, respectively [7]. However, the initial capital expenditure is excessive [13]. This study discusses multisource information fusion and neural networks. The research has contributed to the advancement of related professions, and data analysis methodologies can provide a great deal of knowledge. However, the use of neural networks to predict production costs has received very little attention, and studying this field requires a thorough implementation of these algorithms.
We present two neural-network-based information fusion techniques that combine neural networks with information fusion systems. To combine the benefits of multisource signals with neural networks, the multisource data processing mechanism must be integrated within the neural network. Using artificial neural networks (ANNs) and convolutional neural networks (CNNs), we demonstrate the superiority of multisource data fusion over single-source data. The neural network does not require any new data to learn; it can assemble information from multiple sources and conduct experiments as it learns.

Related Literature
By measuring the geometric characteristics of a product, dimensional measurement determines whether the product meets its geometric tolerance criteria (form, orientation, profile, runout, size, and location). Dimensional measurements can be obtained by rigorous manual monitoring methods and by automated inspection methods such as coordinate measuring machines (CMMs) and on-machine probing (OMP) [14-16]. Manual inspection techniques can suffer from unpredictable error sources, including operator errors, causing significant measurement errors. Because of its efficiency, versatility, and precision, coordinate metrology has become crucial for industrial dimensional metrology [17, 18]. A meaningful uncertainty statement requires considerable effort because CMM measurements are sensitive to various variables, including random and systematic influences. Coordinate measuring systems (CMSs) can also be used to compare coordinate measurements of workpieces with measurements of calibrated master components that have the same nominal geometry [19-22]. To assess the uncertainty associated with coordinate observations, a significant percentage of the systematic effects associated with the CMS must be modeled. Since comparative coordinate measurements are relative measurements, establishing their traceability route is not easy.
In addition, workpieces inherit part of their measurement uncertainty from the calibration process of the master part. Nevertheless, this uncertainty component is often straightforward to calculate [13, 23-25]. A nonrepeatable fixturing configuration is particularly susceptible to process variations. These can include part misalignments due to rotations of the coordinate frames generated during the mastering and measurement modes. In addition, calibrating a part on a calibrated CMM is often necessary to produce a master component for comparator measurement. CMMs and manual measuring tools are often used in traditional component quality evaluation techniques, creating production bottlenecks and slowing down production. As process control and monitoring methods are developed using artificial intelligence (AI) methods and live monitoring data, there has been a push to minimize non-value-adding activities such as dimensional inspections and to make timely judgments. Machine learning process models have been used to map process variables to product quality criteria, such as surface roughness, as part of Industry 4.0 [26].
Based on material hardness, process parameters, and force data, an artificial neural network (ANN) can predict surface roughness and tool wear in dry hard turning [27]. A single inexpensive accelerometer was used to generate vibration data that could enhance surface quality monitoring in CNC turning [28]. Plaza et al. [29] developed least squares support vector machines (LS-SVMs) based on cutting settings and tool geometry parameters. Huang developed a neuro-fuzzy monitoring system for end-milling operations to predict surface roughness from process parameters and force data. Huang et al. [30] modeled surface roughness by fuzzy logic and regression analysis based on machining parameters. Kovac et al. [31] used a factorial design to predict cutting forces and waviness from feed per tooth, tool diameter, and radial and axial depth of cut during thin-wall component machining. Bolar et al. [32] introduced a variable-parameter drilling approach for multihole components made of difficult-to-cut materials; spindle speed, feed rate, outside corner wear, thrust force, and torque were used to predict hole surface roughness with radial basis function (RBF) networks. Based on vibration and power data, Han et al. [33] proposed a machine-learning-based monitoring system for milling machines and processes. Moore et al. [34] employed Bayesian networks and ANNs to predict surface roughness in high-speed milling from workpiece geometry, material hardness, machining parameters, and cutting forces. In this classification task, Bayesian networks proved easier to interpret and performed better than ANNs.
Correa et al. [35] presented a multisensor fusion decision-theoretic method that combines force, vibration, and acoustic emission (AE) inputs to detect anomalous process drifts in ultraprecision machining. In contrast to standard classification techniques such as ANNs and SVMs, their system identifies ultraprecision machining process drifts with greater accuracy. Beyca et al. [36] developed an adaptive experimental strategy for determining the ideal combination of parameters to maximize the material removal rate using a Bayesian learning technique. Karandikar et al. [37] proposed using vision and sound to monitor the material removal rate during grinding; a prediction model for material removal rate monitoring was developed with a light gradient boosting machine and the best feature subsets. Wang et al. [38] proposed an online tool condition monitoring system based on sensor fusion and machine learning; evaluating several classification algorithms on experimental data, they found that power and sound signals are more informative for forecasting tool condition than displacement and AE signals. In a simulation of a water heater's operation, Nazir and Shao [39] outlined a principle for monitoring systems based on Bayesian networks. Under the assumption that sensors are uncorrelated, Atoui et al. [40] developed a sensor monitoring method based on a linear state-space model that simultaneously estimates measurement noise covariance and state variable probabilities. A quadruple water tank experiment was used to evaluate the variational Bayesian inference procedure by estimating the joint posterior distribution using two different proposal distributions.
In the ramp-up phase of an MMP, Zhao et al. [41] developed a Bayesian monitoring approach based on a linear state-space model to estimate control parameters and set cause-selecting chart limits. Tran et al. proposed two one-sided Shewhart-type charts to account for situations in which the production run is finite; their solution to the quality control problem was based on simulation data from the food industry. To detect minor to moderate shifts in the process mean, Du et al. [42] developed Bayesian posterior predictive exponentially weighted moving average control charts, although no validation has yet been conducted in the manufacturing sector. Riaz et al. [43] employed a lab-scale distillation column to combine the findings of numerous heterogeneous defect detection and identification approaches, overcoming the limitations of the individual approaches. Ghosh et al. [44] merged the findings of various approaches for detecting and identifying defects in industrial processes using a fusion system. Resampling was used as a preprocessing step to improve the performance of the fusion system, and the Dempster-Shafer evidence theory was applied to merge judgments from several models. From categorization to fault assessment, Bayesian and machine learning approaches have been applied to several production phases and industrial processes.
To the best of our knowledge, in-process form and inspection data have never been merged for monitoring dimensional product health. The most common approach to assessing the condition of finish-machined components, specifically their surface metrology features, has relied on machine learning algorithms that monitor only the machining process and do not improve predictions when new data become available. This paper presents an approach based on multisensor fusion to address this deficiency. Data collected from various manufacturing phases are used to develop an intelligent, dimensional product health monitoring system that delivers probabilistic predictions about the final product's status. A Bayesian information fusion technique is developed to update this forecast using fresh measurements, such as OMP: a Bayesian updating procedure combines machine learning knowledge with new information collected from OMP. This paper evaluates the performance of the approach in a case study of an EN24T steel bearing housing component. The manufacturing procedure includes all stages of production: heat treatment, grinding, hardness testing, machining, and in-process and postprocess inspections.

Methods and Materials
3.1. Data Fusion Background. Data fusion has many applications, including surveillance and reconnaissance, environmental monitoring, and environmental hazard identification [2-5]. Several approaches described in the literature make it possible to integrate multiple sensors under heterogeneous data configurations. Multiple approaches to data fusion are being developed because of the numerous sensors involved and the heterogeneous nature of the data. Several fields contributed to these approaches, including machine learning, pattern recognition, and statistical estimation. Traffic engineering has naturally benefited from this extensive literature. A wide range of methodologies can be applied, depending on the application, ranging from simple arithmetic means to more advanced DF approaches. A three-way split might be proposed: (i) data mining engines that use weighted combinations, multivariate statistical analyses, and their most modern versions to evaluate data. Using the arithmetic mean is the simplest way to merge data.
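The two simplest fusion rules named above can be sketched in a few lines. The example below is illustrative, with invented sensor values: a plain arithmetic mean treats every source equally, while a weighted combination (here inverse-variance weights) lets reliable sensors dominate.

```python
# Illustrative sketch of the simplest fusion rules: a plain arithmetic
# mean versus a weighted combination in which more reliable sensors
# (lower variance) receive larger weights.

def arithmetic_mean(readings):
    return sum(readings) / len(readings)

def weighted_fusion(readings, variances):
    weights = [1.0 / v for v in variances]       # inverse-variance weights
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, readings)) / total

readings = [20.0, 22.0, 30.0]      # three sensors observing one quantity
variances = [1.0, 1.0, 25.0]       # the third sensor is much noisier

print(arithmetic_mean(readings))                       # 24.0
print(round(weighted_fusion(readings, variances), 2))  # 21.18
```

The weighted result stays close to the two trustworthy sensors, while the plain mean is pulled toward the noisy outlier; this difference is the basic motivation for the more advanced DF approaches discussed next.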
Although DF approaches have long been used to model complex systems, their use in transportation systems is gaining popularity [18-20]. DF approaches might deliver the expected advantages in the case of road traffic. Although such techniques are feasible and effective, analyzing their feasibility, effectiveness, and utility presents considerable challenges [21-23]. With the introduction of intelligent transportation systems (ITS), DF has become an increasingly popular topic in the traffic engineering literature. DF was first discussed in Sumner's article in the early 1990s [24]. In ITS, DF plays an essential role in enhancing efficiency, and its use in engineering has been discussed in several articles [13, 22, 25].
Data processing methods designed by the Department of Defense (DoD) can now be used to better manage traffic on streets and highways [26-29, 45]. Data fusion in the DoD is organized hierarchically into five levels. Source data is processed at Level 0 as a preliminary step: data can be normalized, formatted, sequenced, compressed, and batch processed [2, 28]. There may even be a method for identifying subobjects or data characteristics to be used in Level 1 processing. In traffic management, Level 1 involves the collection of data from all relevant sources, including real-time point and wide-area traffic flow sensors, transit operators, toll data, cell phone calls, emergency call reports, probe vehicle and roving tow truck messages, and commercial truck transmissions [13, 46]. Based on Level 1 processing, Level 2 processing identifies the likely scenario behind observed data and events using additional sources and databases. This information can be compiled from patrol reports and databases, road layout drawings, local and national weather reports, traffic mix predictions, and dates of construction and special events. Level 3 processing identifies traffic patterns based on the likelihood of traffic events (e.g., congestion, incidents, construction, preplanned special events, fires, or police actions) affecting traffic flows. During Level 4 processing, predictions and evaluations are continually improved, and new sources of information are analyzed to enhance the overall data fusion procedure. A sixth level is sometimes added to address concerns about users' ability to comprehend and act on the conclusions reached by the fusion process. According to the traffic literature, the DF process includes fundamental functions such as aligning input data chronologically or geographically, combining data, and mining data for knowledge extraction. This goal can also be achieved through the fusion of multiple sources of information [30].
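The chronological alignment step named above can be sketched as follows. This is a hypothetical example: the sensor names and readings are invented, and each reading in one stream is simply paired with the nearest-in-time reading of the other stream before fusion.

```python
# Hypothetical sketch of chronological alignment: two sensor streams
# with different timestamps are matched by nearest time before fusion.

def align_nearest(stream_a, stream_b):
    """Pair each (t, value) in stream_a with the nearest-in-time value of stream_b."""
    aligned = []
    for t, va in stream_a:
        _, vb = min(stream_b, key=lambda tv: abs(tv[0] - t))
        aligned.append((t, va, vb))
    return aligned

flow = [(0.0, 12.0), (1.0, 15.0), (2.0, 11.0)]    # traffic-flow sensor
speed = [(0.1, 55.0), (0.9, 48.0), (2.2, 60.0)]   # speed sensor, offset clock

print(align_nearest(flow, speed))
# [(0.0, 12.0, 55.0), (1.0, 15.0, 48.0), (2.0, 11.0, 60.0)]
```

Real Level 0/1 processing would also resample, interpolate, and reject stale readings, but nearest-time matching conveys the core of the alignment function.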

Neural Network.
The neural network is inspired by biological nervous systems, which exploit massive parallelism. Its structure consists of an input layer, one or more hidden layers, and an output layer; interconnected neurons receive information from the preceding layers [47]. Neural networks trained on collected data can recognize patterns, categorize data, and predict future behavior. Comparing neural networks with a group method of data handling technique and linear regression, Ghritlahre and Verma [48] concluded that neural networks produced the lowest error rate. As reported by Ghritlahre et al. [49], neural networks are superior to group method of data handling approaches for predicting the thermal performance of a solar air heater. Ghritlahre and Prasad found that the neural network model had the lowest root mean square error among numerous predictive models. According to Ghritlahre and Prasad [50, 51], radial basis function networks give the best energy-efficiency predictions for solar air heaters. With 14 neurons in one hidden layer, Ghritlahre and Prasad [52] could predict the performance of a solar air heater with a very low error rate.
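The layered structure just described can be sketched as a single forward pass. The snippet below is a minimal illustration, not any model from the cited studies; the weights are random placeholders and the three input features are invented.

```python
# Minimal sketch of the layered structure: an input layer, one hidden
# layer, and an output layer, each neuron receiving the outputs of the
# preceding layer. Weights here are random placeholders.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a single-hidden-layer network."""
    h = relu(x @ w_hidden + b_hidden)   # hidden-layer activations
    return h @ w_out + b_out            # linear output (e.g., a cost estimate)

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])                   # three input features
w_h = rng.normal(size=(3, 4)); b_h = np.zeros(4)
w_o = rng.normal(size=(4, 1)); b_o = np.zeros(1)

y = forward(x, w_h, b_h, w_o, b_o)
print(y.shape)  # (1,)
```

Training would adjust `w_h`, `b_h`, `w_o`, and `b_o` by backpropagation; only the forward structure is shown here.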

Machine Learning and Data Fusion for Industrial Forecasting.
To classify and analyze the industrial prognosis literature, a variety of criteria can be used, including the industrial sector, the data handled by the models, and the asset or process for which prognostic models are built. This research views prognosis as a data-driven approach designed to attain one of the following objectives: (i) Describe: using the data collected in the industrial plant, characterize the investigated use case without assumptions about its origin or significance. Descriptive prognostic models therefore do not depend on any a priori assumptions that might bias their insights, focusing instead on blind, unbiased extraction of added value. Many documented practical applications of industrial prediction rely on clustering algorithms and outlier detection methods. (ii) Predict: determine when and how a failure in monitored equipment will occur and what its consequences will be. Predictive prognostic models typically use historical fault data from which a learning algorithm determines whether a particular asset's data are associated with a particular target variable (such as a probability, a severity, or the point in the process chain where the fault occurs). Supervised machine learning predominates here. (iii) Prescribe: as soon as a plant malfunction alert is received, prescribe optimal actions. A prescriptive prognostic model adjusts the operational parameters and variables of the industrial process to reduce the likelihood of a fault occurring before a predictive model raises an alert. By rerouting assets or allocating human resources for unscheduled repairs, a model from this category minimizes the impact of a confirmed fault on an industry's output. These objectives are often treated as an optimization problem, and optimization solutions dominate this category.
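The descriptive category above, which relies on outlier detection without fault labels, can be illustrated with a simple z-score rule. This is an illustrative sketch: the sensor values and the threshold are invented, and real descriptive models would use richer methods such as clustering.

```python
# Hedged sketch of descriptive prognosis: blind outlier detection on
# plant data via a z-score rule, with no assumptions about fault labels.

def zscore_outliers(values, threshold=3.0):
    """Return indices of readings more than `threshold` std devs from the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    return [i for i, v in enumerate(values) if abs(v - mean) > threshold * std]

sensor = [10.1, 9.9, 10.0, 10.2, 9.8, 25.0, 10.1]   # one gross anomaly
print(zscore_outliers(sensor, threshold=2.0))        # [5]
```

Because no fault history is consulted, the flagged index says only that the reading is unusual, which is exactly the "blind, unbiased" character of descriptive models; predictive models would instead learn what the anomaly means from labeled fault data.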
Despite this categorization, contributions to the industrial prognosis literature cannot always be categorized as purely descriptive, predictive, or prescriptive. Distinct model types are often hybridized to meet multiple objectives in a particular application. In one of the most representative and intuitive examples of this combination, predictive prognosis (e.g., predicting whether a machine will develop a fault) is combined with prescriptive prognosis (adjusting the machine configuration so that faults are less likely to occur). This analytical criterion is now used to review the most recent literature on industrial forecasting. Data-based prognosis has been studied in several industrial areas in recent years, including models and data fusion methods. Figure 1 shows the industrial prognostics scenarios and data-driven methodologies discussed in this study.

Establishment and Quantification of the Index System.
Several factors influence manufacturing costs. In summary, production costs are composed of direct materials, direct labor, and manufacturing overhead. Predicting production costs is primarily guided by factors related to space and time. Because of the intricate interplay of factors across periods and the human factor inherent in historical data, we do not focus much on the time-influencing variables. Among the spatial elements of manufacturing costs, site circumstances and natural disasters play a significant role. To assess the impact of geographical factors on production, the complexity coefficients are computed using the unconfirmed assessment model. Based on the current financial system, the policy elements are computed directly.
In type 1, cost items are evaluated by both spatial and time factors; in type 2, cost items depend mainly on time factors; and in type 3, cost items are determined by national and business financial policies. It therefore becomes essential to accurately describe normal behavior in order to identify deterioration patterns or trends, following the basic structure in Figure 2. Using mathematical algorithms (machine learning models) on training data obtained from the process or asset under investigation, one can describe behavioral patterns of interest. A variety of problems (hypotheses) can then be addressed with new, unseen data (test data), including prediction, classification, and anomaly detection. Modern monitoring systems and intelligent devices produce enormous amounts of data and extra information, making this task particularly challenging.
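Before training on such heterogeneous cost indices, they must be quantified on a common scale. The sketch below illustrates one standard option, min-max normalization; the index name and monthly figures are hypothetical examples, not the paper's actual index set.

```python
# Illustrative sketch: quantifying heterogeneous cost indices on a
# common [0, 1] scale before model training via min-max normalization.

def min_max_normalize(values):
    """Scale a list of raw figures to [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo if hi != lo else 1.0    # guard against constant columns
    return [(v - lo) / span for v in values]

direct_material = [120.0, 150.0, 180.0]    # hypothetical monthly cost figures
print(min_max_normalize(direct_material))  # [0.0, 0.5, 1.0]
```

Z-score standardization is a common alternative when the indices contain outliers, since a single extreme value compresses the rest of a min-max-scaled column.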

Cost and Quality Relationship System and Impact.
To meet industry growth requirements and cost projections, manufacturing cost components will change from uncontrollable to controllable as time passes and the firm expands. Figure 3 shows the current relationship between production cost and quality in the manufacturing industry and its impacting factors.
There is no guiding principle for choosing the number of hidden layers and nodes in an ANN; obtaining an adequate number of hidden layer nodes requires repeating the process several times. The training error for neural networks with different hidden layer nodes is calculated for fixed numbers of training iterations (3,000 and 5,000). Based on the trained prediction model, Figure 4 shows the prediction result for the test sample.

Prediction of Production Costs Using Multisource Information Fusion and Machine Learning
The learning and performance of the ANN are affected by the number of hidden layers and nodes, yet there are no guidelines for choosing them; repeated attempts are needed to find a sufficient number of hidden layer nodes. Training neural networks with various numbers of hidden layer nodes on the training sample set yields the results in Figures 4 and 5, which illustrate the training errors for the same numbers of training iterations. When there are too few or too many nodes, the training error can be substantial, as Figures 4 and 5 show. As the training sample set grows, both the ANN and CNN models reduce their prediction errors, with the CNN performing better than the ANN. The quantitative link between the various cost predictions and the actual production values is analyzed with a neural network of 50-82 nodes. The impact of the various cost categories on the real value of the corresponding expenses is shown in Table 1. The predictions of the two neural networks for each of the six major categories of production costs must be compared to determine which is best. Table 2 shows that the CNN predicts production costs better than the ANN, while Table 3 shows the overall production cost prediction results for the CNN (see Figure 6). After the MATLAB neural network analysis, performance metrics are presented in Figures 7 and 8.
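The trial-and-error selection of hidden layer size described above can be sketched as a loop over candidate sizes that keeps the one with the lowest error. This is a hedged illustration, not the paper's training procedure: for brevity the hidden weights are fixed random features and only the output layer is fit by least squares (an extreme-learning-machine-style shortcut), and the data are synthetic.

```python
# Hedged sketch of hidden-size selection by repeated trials. Hidden
# weights are fixed random features (nested across sizes); only the
# output layer is fit by least squares. Data are synthetic.
import numpy as np

def search_hidden_sizes(x, y, sizes, seed=0):
    """Fit a one-hidden-layer model for each candidate size; return training MSE."""
    rng = np.random.default_rng(seed)
    w_full = rng.normal(size=(x.shape[1], max(sizes)))
    errors = {}
    for n in sizes:
        h = np.tanh(x @ w_full[:, :n])                 # nested random hidden features
        beta, *_ = np.linalg.lstsq(h, y, rcond=None)   # least-squares output weights
        errors[n] = float(np.mean((h @ beta - y) ** 2))
    return errors

rng = np.random.default_rng(1)
x = rng.normal(size=(80, 3))                 # stand-in for cost-driver features
y = np.sin(x[:, 0]) + 0.5 * x[:, 1]          # stand-in for a cost target

sizes = [2, 5, 10, 20]
errors = search_hidden_sizes(x, y, sizes)
best = min(errors, key=errors.get)
print(best)
```

Because the feature columns are nested, training error never increases with size here; in a real search one would instead compare validation error, which does rise again when the network overfits, matching the "too few or too many nodes" behavior reported above.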
The performance value graph trains, verifies, and evaluates input and output data to predict tool wear. Figures 7 and 8 show that the CNN predicts the real value in each of the six cost categories better than the ANN; the CNN's prediction error for the current month's total income is 0.0234. Because of its superior prediction results and more straightforward training procedure, the CNN is better suited than the ANN for combining multisource information. Both types of neural networks, however, misestimate costs such as direct material costs and item consumption prices: in the direct material cost prediction, the ANN has a prediction error of 0.0453 and the CNN of 0.0234. Direct material costs are a source of large errors because they are unpredictable, and the models may not be able to estimate such targets accurately given the increasing complexity of modern technology.

Discussion
An integral part of the Industry 4.0 revolution is the concept of intelligent services and "servitization," which reinvent asset maintenance. To minimize output damage, it is imperative to detect production-line asset failures early, predict why, how, and when they will occur, and respond to them autonomously, including through self-healing capabilities. In the new after-sales industry, factory equipment is maintained in a proactive, intelligent manner rather than a preventive or corrective one. Vertically integrating data monitored at the asset level with service processes in cloud-based back-end systems would make it possible to deliver personalized, prognostic services from the cloud. These cloud-based technologies will reduce unscheduled equipment breakdowns and maintenance costs. As part of integrated production and processes, intelligence will be embedded in the workbench components and in the asset or product itself. This decentralization will increase the importance of addressing emerging distributed computing paradigms, including edge analytics and privacy-aware federated learning, with profound implications for data fusion techniques and prognostic modeling.

Implementing data-based technologies in business requires highly trained and specialized personnel. Because of the manufacturing industry's digital transformation, data scientists, engineers, architects, database administrators, and business analysts are in greater demand. As a result, there is a shortage of professional profiles that can fully leverage all asset information and manufacturing data, which makes it difficult to attract and retain bright professionals in this area. It is hoped that more academic degrees in industrial forecasting will become available and that more staff training courses will be offered to resolve this problem.

Businesses must predict product costs, which in part affect pricing, cost analysis, and management in terms of efficacy and scientific rigor. Fusing data from multiple sources will undoubtedly become essential for controlling and processing sophisticated manufacturing equipment and warfare systems. Achieving the benefits of multisource data fusion requires a method for developing the information fusion system and for correctly evaluating its results. Information fusion is a relatively new field that has only just begun to develop and find application, and future developments in the technology will follow this course. As needed, neural network modules can also be added to enhance tracking and prediction capabilities in actual applications.
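As a concrete illustration of what such a neural network module over fused multisource data might look like, the sketch below concatenates per-source feature vectors (feature-level fusion) and passes them through a small one-hidden-layer network to produce a scalar cost estimate. The source names, feature dimensions, and layer sizes are hypothetical choices for illustration, not taken from the article's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_sources(*feature_vectors):
    """Feature-level fusion: concatenate feature vectors from each data source."""
    return np.concatenate(feature_vectors)

class TinyFusionNet:
    """Minimal one-hidden-layer feedforward network over fused features
    (illustrative only; layer sizes are arbitrary)."""
    def __init__(self, n_in, n_hidden=8):
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0

    def predict(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hidden representation of fused input
        return float(self.W2 @ h + self.b2)  # scalar production-cost estimate

# Hypothetical per-source features: sensor readings, material prices, labour hours.
sensor = np.array([0.2, 0.7])
material = np.array([1.1, 0.4, 0.9])
labour = np.array([0.5])

x = fuse_sources(sensor, material, labour)   # fused 6-dimensional input
net = TinyFusionNet(n_in=x.size)
cost_estimate = net.predict(x)
```

In practice the weights would be fitted to historical cost data, and a convolutional front end could replace the dense layer when the sources have spatial or temporal structure; the sketch only shows where fusion enters the pipeline.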

Conclusions
In this article, we have covered several data fusion methodologies and machine learning algorithms within the context of the Industry 4.0 paradigm. Depending on their primary objective, prognostic schemes can be classified as descriptive, predictive, or prescriptive. The methodologies available within each category have been assessed to provide a well-informed analysis of research activity in this field, focusing mainly on the challenges and on the industries that have implemented the reported approaches. The literature review identifies research trends and directions in data-driven industry forecasting that will capture the research community's attention. The implementation of data-based modeling and fusion raises significant questions and open technical challenges, not only regarding highly imbalanced data, nonstationarity, and heterogeneity of information but also regarding their application in real-world industrial settings. New developments in data-driven prediction, such as those in this study, will undoubtedly help resolve these challenges in the coming years. The implementation of data-based technologies requires highly specialized and technical personnel. In response to the digital transformation of their industries, manufacturers need data scientists, engineers, architects, database administrators, and business analysts. Thus, there is a shortage of talent able to fully leverage asset information and manufacturing data, making it challenging to attract and retain bright professionals in this area. It is hoped that more academic degrees will be offered in industrial forecasting and that more staff training courses will be conducted to resolve this problem.
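As a worked illustration of how forecast accuracy is quantified in studies of this kind, the snippet below computes the relative error of a predicted monthly total cost against its true value. All figures are hypothetical and chosen only to mirror the style of the 0.0234 error reported for CNN's total-income forecast; they are not the article's actual data.

```python
def relative_error(predicted, actual):
    """Relative forecast error: |predicted - actual| / actual."""
    return abs(predicted - actual) / actual

# Hypothetical monthly totals (not taken from the article's tables).
actual_total = 100000.0
ann_pred = 104100.0
cnn_pred = 102340.0

ann_err = relative_error(ann_pred, actual_total)   # 0.041
cnn_err = relative_error(cnn_pred, actual_total)   # 0.0234
```

Comparing such per-category and total relative errors is how one model (here, illustratively, the CNN) is judged more accurate than another.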

Computational Intelligence and Neuroscience

Figure 7: Performance value of the neural network.
Figure 8: The basic structure of the proposed method.

Table 1: The true value of various costs in the production model.

Table 2: Estimated production costs for ANN and CNN.

Table 3: ANN and CNN predictions of total production costs.