DMTC: Optimize Energy Consumption in Dynamic Wireless Sensor Network Based on Fog Computing and Fuzzy Multiple Attribute Decision-Making

Department of Computer Engineering, Islamic Azad University of Hamadan, P.O. Box: 8415683111, Hamedan, Iran
Institute of Port, Coastal and Offshore Engineering, Ocean College, Zhejiang University, Zhoushan 316021, Zhejiang, China
Department of Industrial Engineering, Urmia University of Technology (UUT), P.O. Box: 57166-419, Urmia, Iran
Department of Mechanical Engineering, Urmia University of Technology (UUT), P.O. Box: 57166-419, Urmia, Iran
Department of Electrical Engineering, Urmia University of Technology (UUT), P.O. Box: 57166-419, Urmia, Iran


Introduction
A sensor network consists of many sensor nodes that interact closely with the physical environment, receiving and responding to environmental information through their sensors. The connections between these nodes are wireless. Each node works independently and has specific capabilities and a certain energy level. In some placement methods, several nodes are equipped with higher capabilities, such as a longer radio range, more energy, auxiliary equipment for movement, and a GPS receiver [1]. According to their data collection methods, wireless sensor networks can be divided into two categories. Homogeneous sensor networks include base stations and sensor nodes equipped with the same capabilities (e.g., computing power, capacity, and memory); data collection in these networks is based on the data structure. Heterogeneous sensor networks have a base station and complex sensor nodes equipped with advanced processing and communication capabilities, compared to conventional sensor nodes [2].
Sensor distribution (i.e., the location of sensors in the target area) is one of the leading design issues in wireless sensor networks. The location of a sensor may affect the implementation of system requirements and network performance metrics [3]. Careful placement of the sensors can be an effective optimization tool for achieving the desired design goals. For example, total coverage is directly related to how adequately the sensors are positioned to cover the desired area. The sensors should not be too close to each other, should not overlap, and should not be overused; they also should not be too far apart, to prevent the formation of coverage gaps in the network. A good distribution makes it possible to gather information and communicate more effectively [3,4]. Some distribution methods also use stationary sensors to support dynamic adaptation of sensor locations, making it possible to reconfigure a dynamic distribution and improve network performance to minimize energy consumption [5]. During the design of the network infrastructure, route creation is affected by the sensors' energy limits, because wireless transmission energy grows with the second (and higher) powers of distance [6]. Multihop delivery methods result in less power loss, but they cause problems in topology management and in controlling access to the transmission medium [7]. Therefore, because in most networks the sensors are located randomly, multihop routing cannot always be used [8].
Clustering in network routing can significantly affect the overall scalability, lifespan, and energy efficiency of the system [9]. Hierarchical routing is one of the most efficient ways to reduce energy consumption within a cluster and to reduce the number of messages sent to the base station [10,11]. In contrast, a single-level network may overload the gateways as traffic congestion increases. In addition, a single-level architecture is not scalable for a large set of nodes, because sensors are usually not able to communicate over long distances. Clustering can also stabilize the network topology along routes and reduce overhead and overall topology maintenance costs: the nodes only need to maintain their connections to the CHs and are not affected by changes at the inter-CH level [12]. CHs can also implement an optimized management strategy that improves network performance and battery and network life. A CH can schedule intracluster activities so that nodes switch to sleep mode (low power consumption) and reduce the energy dissipation rate. Nodes can also be scheduled in a rotating order, each with an assigned time for sending and receiving information; as a result, collisions during data retrieval are prevented [13].
One of the main goals of wireless sensor network design is to make data transmission extend the network's lifespan and to prevent connections from failing, through power management methods. Routing protocols in such networks are affected by several challenging factors. Fault tolerance and the ability to self-organize and expand have been the reasons for the success of wireless sensor networks in practical applications [14]. An efficient architecture for distributing information between nodes can compensate for the time lost in filtering abnormal data. Such wireless sensor networks can be built within fog computing, which distributes the computational load between several nodes effectively [15]. The sensor node is first used to acquire the data, which the CHs should then evaluate. The adoption of wireless sensor networks has intensified in recent years due to their extensive applications in a variety of industries. A wireless sensor network [16] is formed by linking a large number of sensor nodes. Prior to its actual application, a designed methodology must be tested. Having a live sensor network environment, however, is not always feasible; in that case, simulation is the only way to test the study before moving on to real-world implementation. To date, a wide range of modeling tools for WSNs are available, some dedicated to wireless sensor networks and others applicable to both wireless and wired networks [16]. The distance between the data center and the data source is the fundamental downside of cloud computing. Fog computing is a cloud-related technology that addresses this issue. It is one of the paradigms of distributed service computing: it makes full use of the diverse computational features of terminal devices, and it has a paravirtualized architecture as well [17]. With strict energy and processing resource constraints, distributed detection is a critical challenge for WSNs.
The appropriate threshold in most detection cases is determined by the noise power, which is subject to considerable variability in practice [18]. Fog computing adds to the power and benefits of cloud computing and services by extending data generation and analysis to the network edge [19]. Real-time location-based services and applications with mobility assistance are feasible because of the physical proximity of users and a high-speed internet connection to the cloud. To promote fog computing, load-balancing approaches are utilized, which come in two forms: static load balancing and dynamic load balancing [19].
Because most WSNs operate in unattended locations where human access and monitoring are nearly impossible, lifetime improvement has always been a critical concern. Clustering is one of the most effective approaches for organizing system operations in a coordinated manner to improve network scalability, reduce energy consumption, and extend network lifetime. During cluster creation, however, most of the prior techniques overload the cluster head. To address this issue, various researchers applied fuzzy logic to decision-making in WSNs [20]. The clustering hierarchy technique is another approach for data transfer in WSNs. This algorithm is one of the most potent ways to increase the energy efficiency of WSNs and to maximize their lifetime. WSNs conserve energy by using hierarchical protocols centered on a clustering hierarchy: data can be collected and transmitted to a base station by nodes having more remaining energy. Nevertheless, earlier clustering hierarchy approaches [21] did not account for duplicated data acquired by nearby or overlapping nodes. Currently used clustering strategies include selecting cluster heads with higher residual energy and rotating cluster heads on a regular basis to spread energy consumption across the nodes in each cluster and lengthen network lifetime. Most earlier algorithms, however, did not take into account the predicted residual energy, which can be used to estimate the energy remaining when selecting a cluster head and performing a round [22].
This study is aimed at working in a computational fog network with a set of inhomogeneous wireless devices. The objective is to provide a computational distribution method that reduces energy consumption in the nodes and satisfies the limitations of the edge delay. These nodes are dynamic and can both examine other nodes and measure their communication links. The network can be used in smart cities or intelligent buildings that take information, such as traffic density or temperature, from environmental sensors and use fog computing. Other contributions include network clustering and routing to reduce energy consumption and extend network life. For this purpose, fuzzy MADM algorithms are used to select optimal CHs. Therefore, the main aims are summarized as the following points:
(i) Provide a computational distribution method in a dynamic WSN
(ii) Perform network routing using fuzzy MADM algorithms
The paper is organized as follows. Introduction presents the main problem statement and the requirements to be satisfied based on fog computing and WSN routing. Related Works provides a brief literature review regarding fog computing. Methods and Materials presents the proposed model and governing equations for both computation distribution and the routing protocol. In Results and Discussion, the final findings and analysis results are illustrated. Finally, Conclusion summarizes the results and provides the future scope and direction of the study.

Related Works
Various methods have been proposed to analyze the spatial and temporal density of routing data; for example, the NMAST method [23] uses the dynamics of neighboring nodes to measure the spatial and temporal density of data. k-pathway dynamic networks can be utilized for visual investigation in applications such as traffic monitoring, public transit planning, and location selection, unlike typical clustering algorithms that need several data-dependent hyperparameters [24]. Research on fog computing has defined a new generation of WSN support used in many aspects of smart cities, for example, in firefighters' emergency systems [25], traffic light control [26], agricultural systems [27], and health monitoring systems [28]. One of the challenges of WSNs is data privacy: gathering information and transferring it to the base station requires proper efforts such as designing security systems. For overcoming this challenge, the fog system is one efficient framework. In this case, an aggregator may be disconnected from the fog server and unable to send data directly. It can, nevertheless, share the encrypted data with an adjacent aggregator in order to send data to the fog server by adding its currently collected data to the encrypted data. The relevant data values may be extracted by the fog server and saved in a local repository, which may then be updated in cloud repositories [29]. Storage, communication, transmission ratio, energy consumption, and resilience are all improved by the fog system [29]. As a result, job allocation and secure deduplication are two of the system's tasks: it detects data and protects against security risks. Sharma and Saini [30] proposed a Multi-Objective based Whale Optimization method for the modeling of a fog layer system for safe data deduplication. Average latency, customer satisfaction, network longevity, energy usage, and security strength all improved as a result of their work [30]. Szynkiewicz et al. [31] developed an energy-aware, secure sensing and computing system centered on static and dynamic clusters and the edge and fog computing paradigms. The aggregated data stored at the edges were transferred through gateways to the base station for analysis. The implementation results enhanced security and offloaded data analysis [31].
The following are some examples of effective fuzzy algorithm applications in WSNs. To model noise power uncertainty, Mohammadi et al. [18] employed the fuzzy hypothesis test (FHT). Furthermore, applying the Neyman-Pearson lemma to the FHT, they presented an optimum censoring strategy. It was demonstrated that the best censoring strategy may be found by comparing the energy of the observed data to a threshold; according to the findings, the threshold is determined by the local communication limitation and the noise uncertainty limitation [18]. Mohammadi et al. [32] looked at a decentralized detection problem for a WSN and utilized the FHT to characterize the noise power uncertainty from a Bayesian perspective. The suggested method was assessed in terms of detection and false alarm probabilities. In the presence of noise power uncertainty, simulations indicate that the suggested detector outperforms both the Anderson-Darling approach and the standard energy detector. Nayak and Vathasavai [20] looked into the pros and cons of a variety of clustering techniques. These algorithms focus on CH efficiency, which should be adaptable and intelligent enough to transfer load across sensor nodes, extending the network lifetime. Menaria et al. [33] introduced a fault-tolerance (FT) technique in WSNs to manage faults that happen during data transmission from the sensor to the sink or base station due to link or node failure. An enhanced quadratic minimum spanning tree technique was used in the model. To increase fault tolerance in WSNs, the revised technique introduced a unique approach to discovering an alternate edge in the spanning tree in place of the broken or failed edge.
In a book chapter, Kaur et al. [17] discussed the various aspects of cloud and fog computing platforms. In addition, the full architectures of both platforms were provided, along with a comparison study. All application management techniques were examined, including resource coordination, distributed application deployment, and distributed data flow. Different load balancing algorithms were described by Singh et al. [19]. In fog computing settings, round-robin load balancing is the simplest and most straightforward load balancing solution. The Source IP Hash load-balancing technique has a critical flaw in that each change might redirect a client to a different server, making it unsuitable for fog networks [19]. El Alami and Najid [21] developed an improved clustering hierarchy methodology for overlapping and nearby nodes based on a sleeping-waking process. As a result, data redundancy was reduced to a minimum, and the network lifespan was increased. Unlike earlier hierarchical routing algorithms, which needed all nodes to gather and send data, the suggested technique only needed the waking nodes to do so. They used the method in both homogeneous and heterogeneous networks. Lee and Cheng [22] suggested a fuzzy-logic-based clustering methodology with an energy prediction extension to extend the lifetime of WSNs. The suggested methodology was found in simulations to be more efficient than previous distributed algorithms. Because edge devices have restricted computing and energy resources, efficient sensor deployment and power management are critical design concerns that must be addressed in order to carry out a significant amount of computation and extend the lifespan of a sensing system to guarantee high-quality monitoring. One of the challenges of an edge-based system is the data volume on the edge devices.
Regarding the exponential increase of data at the edges, reducing this congestion can extend the WSN lifespan and improve power consumption. To overcome this problem, Deng et al. [44] presented a compression method based on fog computing approaches. Their autoregressive analysis method reduced data congestion significantly, in conjunction with an improvement in power consumption. In some designs, mobile sinks work as fog nodes to connect WSNs and cloud systems: data are received from sensor nodes and sent to the cloud system through the fog nodes (sinks) [45]. A summary of methods and research on the use of fog computing in WSNs is provided in Table 1. The presented techniques are based on fog computing, and the main aim is to decrease the computational complexity and the number of dead nodes as well as to increase energy efficiency.
There are several reasons or objectives for using fog computing in WSNs; these ultimately increase productivity. The first is the reduction of latency in the WSN: one of the most significant benefits of fog computing is reduced latency. It is no longer necessary to send data for processing to cloud data centers or base stations, and eliminating this step makes data analysis and processing much faster and more efficient [46]. The second is increased performance: not sending data to cloud computing data centers saves time and also reduces the amount of bandwidth required; this bandwidth can instead be used to communicate with sensors and data centers or base stations [47]. The third is extensive geographical distribution: the use of fog computing, with its decentralization of the network, allows for wider geographical distribution than traditional networking or cloud computing, leading to better quality of service for the end user [48-51]. The fourth is instantaneous analysis: in many environments, the ability to analyze data immediately is essential, and eliminating the inefficiencies and delays of cloud services means that the user can have an accurate and instantaneous data analysis [49,52].

Governing Formulation.
In this paper, linear mathematical programming is used to optimize energy consumption in the presented DMTC system. The objective function for computing is presented as Eq. (1):

$E = E_m + E_t + E_c$, (1)

where $E_m$, $E_t$, and $E_c$ are the energy consumption of the mapping, transfer, and combination stages, respectively, and $n$ denotes the node index. The energy consumed at the mapping level is defined as Eq. (2):

$E_m = \sum_{n=1}^{N} C_n P_n l_n$, (2)

such that $C_n$ is the number of CPU cycles for processing a single bit and $P_n$ is the energy required for the process; therefore, $C_n P_n$ is the amount of energy for processing a single bit at node $n$. $N$ is the number of nodes, $D$ is the distributed data, and $l_n$ is the size of the distributed file.
Moreover, at the transfer level, the energy consumption is given by Eq. (3):

$E_t = T E_s$, (3)

where $T$ is the number of bits for computation and $E_s$ is the shuffle-level energy consumption in the WSN, equal to Eq. (4):

$E_s = \sum_{n=1}^{N} \dfrac{p_n}{B \log_2\!\left(1 + \dfrac{p_n h_n}{\Gamma \sigma^2}\right)}$, (4)

where $p_n$, $h_n$, $B$, $\Gamma$, and $\sigma^2$ are the radio-frequency power of node $n$, the wireless channel gain, the bandwidth, the SNR gap, and the noise power, respectively. The following constraints are imposed on the computation:

$\sum_{n=1}^{N} l_n = D, \qquad \frac{C_n l_n}{F_n} \le \tau_n, \quad n = 1, \ldots, N$, (5)

where $F_n$ is the number of CPU cycles per second in node $n$ and $\tau_n$ is the latency limit of node $n$. According to the mathematical programming, we obtain the minimum value of energy consumption in the WSN system.
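As a rough numerical sketch of the energy model and constraints above, the fragment below evaluates the mapping and transfer terms and checks feasibility; the combination term and the optimization step itself are omitted, and the dictionary field names (`C`, `P`, `l`, `p`, `h`, `F`, `tau`) are hypothetical stand-ins for the paper's symbols, not an exact implementation.

```python
import math

def total_energy(nodes, T, B, gamma, noise):
    """Mapping plus transfer energy for a list of node dictionaries."""
    # Mapping energy: each node spends C*P joules per bit on its share l.
    E_m = sum(n["C"] * n["P"] * n["l"] for n in nodes)
    # Shuffle energy: transmit power divided by a Shannon-style achievable rate.
    E_s = sum(n["p"] / (B * math.log2(1 + n["p"] * n["h"] / (gamma * noise)))
              for n in nodes)
    # Transfer energy: T bits moved at the shuffle-level cost per bit.
    E_t = T * E_s
    return E_m + E_t

def feasible(nodes, D):
    """Constraints: the shares cover the data D and each node meets its latency."""
    covers = abs(sum(n["l"] for n in nodes) - D) < 1e-9
    in_time = all(n["C"] * n["l"] / n["F"] <= n["tau"] for n in nodes)
    return covers and in_time
```

A solver would then search over the shares `l` that minimize `total_energy` subject to `feasible`.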

Clustering and Routing Protocol

In WSN routing, only a small number of nodes must be connected to the base station in order to increase the network lifespan and decrease energy consumption. These nodes are the cluster heads (CHs). Because the nodes are dynamic, the most appropriate nodes should be selected as the CHs. In this section of the study, the Fuzzy Multiple Attribute Decision-Making (MADM) method is used to select the CHs, based on three criteria: concentration, the energy level of each node, and the node's centrality. The properties of the network are as follows:
(i) The base station is far from the sensor nodes and immobile
(ii) All network nodes are heterogeneous and have energy limitations
(iii) Nodes have spatial information, sent to the base station together with the corresponding energy level in the setup phase
(iv) Nodes are dynamic
In this research, routing is based on clustering; however, the choice of CHs depends on multiple parameters. Therefore, unlike previous methods in which the selection of CHs was mainly based on one criterion or a one-sided approach, in the proposed protocol the CHs are chosen based on multiple criteria.
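As a rough illustration of the multicriteria selection, the sketch below ranks nodes by a crisp weighted-sum (simple additive weighting) score over the three criteria named above. It is a simplified stand-in for the fuzzy scoring: the fuzzification step is omitted, and the field names and weights are assumptions rather than the paper's exact procedure.

```python
def select_cluster_heads(nodes, weights, k):
    """Return the indices of the k best CH candidates by weighted-sum score."""
    crits = ["concentration", "energy", "centrality"]
    # Min-max normalize each criterion to [0, 1]; all treated as benefit criteria.
    norm = {}
    for c in crits:
        vals = [n[c] for n in nodes]
        lo, hi = min(vals), max(vals)
        norm[c] = [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in vals]
    # Aggregate the normalized criteria with the given weights.
    scores = [sum(weights[c] * norm[c][i] for c in crits)
              for i in range(len(nodes))]
    ranked = sorted(range(len(nodes)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]
```

In the full method, the normalized matrix would be fuzzified before aggregation instead of being combined crisply as here.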
According to the flowchart of Figure 3, first, the data is randomly distributed between the nodes. Then, the initial connection between the nodes and the sinks is established to load the data of each node into the system. The criteria should be identified using the existing constraints, and their values should be calculated. In the next step, in order to update the data, the connection between the user and the sink is disconnected, and the criteria are updated and measured in the new phase, so that the final selection can be made by the Fuzzy MADM method using the modified data taken from the nodes. In general, the process considered in the flowchart can be described in the following stages.
(i) In the first stage, the establishment of nodes in the field begins, and the neighbor-detection mechanism starts to discover the general network and create an initial routing tree
(ii) In the second stage, the best route from the relay node to the sink is identified
(iii) The final step is the operation stage, in which the criteria are monitored and their values are dynamically changed in response to changes in the status of the network
This method optimizes the lifespan and reduces the error in the network by presenting new constraints and different assumptions. Adding node power consumption as a new constraint can pose different challenges in the simulation of the proposed model. In Figure 4, the first sensor network is a square network with dimensions of 100 × 100 m, with the base station (BS) placed away from the sensors. In addition, all sensor nodes are provided 0.1 J of starting energy; as a result, the network's total starting energy is 10 J. The energy parameters E_fs and E_mp are 10 pJ/bit/m² and 0.0013 pJ/bit/m⁴, respectively. The E_elec and E_DA parameters both have values of 5.5 nJ/bit. Simulation tests for 100 WSN installations were conducted to guarantee the correctness of the results, and the average of the collected findings was employed to offer a comparative description of the procedures. Experiments were run with the number of clusters N ranging from ten to twenty to find the ideal number of clusters; for each value of N, the average energy consumption per cycle is determined. Moreover, the efficiency of optimum computing is studied through mathematical operations. The presented DMTC computing system consists of evenly sharing the load w among the N nodes, without considering the nodes' computing capacities or the channel power to the access point. The parameters used for the simulation are listed in Table 2. Regarding Figure 4, the presented problem in the initial condition consists of 100 dynamic fog nodes with an access point.
The solution area is 100 × 100 m, and the access point is located 50 m above the problem area. Before processing the network, the computation is equally divided among the nodes based on the architecture of Figure 1. The presented method is implemented on different numbers of nodes N. Two computing schemes are considered: an optimistic scheme and a blind scheme.

Results of Presented Distribution Analysis
The highest point of the computational load for the two schemes is calculated as Eq. (6):

$L_{\mathrm{optimistic}} = \sum_{n=1}^{N} w_n, \qquad L_{\mathrm{blind}} = N \min_{n} w_n$, (6)

where $w_n$ is the computing capacity of node $n$.
If we consider each node's capacity as a random value, the computational load is also random. In this condition, the interruption probability is shown in Figure 5(a) for different latency values with 10 and 20 nodes. The findings show that the optimistic distribution among nodes has a lower interruption probability than the blind model. In the optimistic method, the computing load is calculated as the sum of each node's processing capacity, whereas in the blind method, the load value equals N times the minimum node capacity. The results of the distribution method in Figure 5(a) show that as the number of nodes rises, the total system interruption decreases, which is one of the advantages of this method. Another advantage is the remarkably low energy consumption of the optimistic approach shown in Figure 5(b) compared to the blind one. The process is done for 100 nodes with a one-second latency.
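The two load definitions compared above reduce to one line each; `capacities` is a hypothetical list of per-node processing capacities, used here only to make the contrast concrete.

```python
def optimistic_load(capacities):
    """Optimistic scheme: total load is the sum of each node's own capacity."""
    return sum(capacities)

def blind_load(capacities):
    """Blind scheme: every node is treated as the slowest one, so the
    load equals N times the minimum capacity."""
    return len(capacities) * min(capacities)
```

By construction, the blind load never exceeds the optimistic load, which is why the blind scheme wastes the capacity of the faster nodes.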
Regarding Eq. (1), the total energy consumption in the presented system is constructed from the three terms E_m, E_t, and E_c, the energy consumption of the mapping, transfer, and combination stages, respectively. The decomposition of the total energy of Figure 5(b) into these three factors is depicted in Figure 5(c). Based on the results, a high percentage of the energy belongs to E_m and E_c, for mapping and composition, respectively. With an increasing number of nodes, the mapping energy decreased. Regarding Figure 5(d), with rising latency, the energy consumption of the mapping stage is reduced; thus, another advantage of the presented method is that a slower process leads to reduced energy use. In the other parts of the paper, we used the Fuzzy MADM method for routing the wireless sensor network based on the presented distribution algorithm. Regarding Figure 4, the initial network consists of 100 fog nodes and one access point for connection. The computing load is randomly distributed between the nodes based on the methods mentioned above. For determining the CHs based on Fuzzy MADM, first, a decision matrix is constructed. The number of rows of the matrix is the number of nodes N, and the columns correspond to three numerical decision criteria: C_1, the distance between each node and the access point; C_2, the number of nodes in the adjacency of each node; and C_3, the remaining energy of each node. In the next step, the matrix is standardized to the range between 0 and 1. We used five linguistic values (very low, low, medium, high, and very high) for the fuzzification of the matrix based on an adaptive neuro-fuzzy system. The fuzzified standardized criteria, C_1 and C_3 using triangular membership functions and C_2 using second-order Gaussian functions, are depicted in Figure 6. The equations for the energy required to transmit information in WSNs comply with the wireless communication laws as follows:

$E_{Tx}(l, d) = l E_{elec} + l \varepsilon_{fs} d^2$ for $d < d_0$, $\qquad E_{Tx}(l, d) = l E_{elec} + l \varepsilon_{mp} d^4$ for $d \ge d_0$, $\qquad E_{Rx}(l) = l E_{elec}$,

where $l$ is the message length in bits, $d$ is the transmission distance, and $d_0 = \sqrt{\varepsilon_{fs} / \varepsilon_{mp}}$.

Results of the Clustering Process
The last steps are determining the CHs, calculating the energy consumption for sending information from the nodes to the CHs and from the CHs to the access point, and allocating the nodes to clusters based on the minimum distance from each node to the CHs. The total energy consumption is calculated as the energy consumed in data transmission, mapping, and composition by each node during network execution. The routing is done using the optimistic and blind methods, and the results are illustrated in Figure 7. Figure 7(a) shows the total energy consumption of the network with the optimistic scheme for computational load distribution for N = 20, 40, 60, 80, and 100 nodes. For every run of the optimistic scheme, the computational load L_optimistic (see Eq. (6)) is identical. The energy consumption until all the nodes are dead shows that a network with many nodes has lower energy consumption, whereas the network with 20 nodes used more energy.
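The minimum-distance allocation described in this step can be sketched as follows; `positions` is a hypothetical list of (x, y) coordinates indexed by node id, and `ch_ids` are the indices chosen as cluster heads.

```python
def assign_to_clusters(positions, ch_ids):
    """Attach each non-CH node to the cluster head at minimum Euclidean distance."""
    clusters = {ch: [] for ch in ch_ids}
    for i, (x, y) in enumerate(positions):
        if i in clusters:
            continue  # cluster heads are not members of other clusters
        nearest = min(ch_ids,
                      key=lambda ch: (x - positions[ch][0]) ** 2
                                     + (y - positions[ch][1]) ** 2)
        clusters[nearest].append(i)
    return clusters
```

Squared distance is enough for the comparison, so the square root is skipped.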
Due to the optimal formation of clusters using fuzzy logic and the fuzzy selection of CHs, long-distance transmission in the network is further reduced, so the CHs show low energy consumption at each sensor node. This is one of the advantages of the optimistic method mentioned in the previous process. When the remaining energy of a sensor node in a network hits zero, the node is called dead. The operational capacity of the network diminishes as the number of dead nodes grows; as a result, sensor node mortality has a direct impact on network operation. Therefore, based on Figure 7(b), the endurance of the optimistic method is reduced with an increasing number of nodes: while the networks with 40, 60, 80, and 100 nodes reach complete node death, only 20% of the nodes in the 20-node case are dead. Also, as shown in Figure 7(c), the number of packets sent to the access point decreases with an increasing number of nodes. On the other hand, in the blind method, according to Figure 7(d), energy consumption rises with an increasing number of nodes, and the maximum energy consumption belongs to the 100-node case.
In this case, the nodes' death is completed in almost the same round and with an identical percentage, and the numbers of sent packets are approximately the same. A comparison with previous research is shown in Table 3. Based on the results, node mortality in the Fuzzy MADM method occurs later than in the other methods, which shows the reliability of the presented distributions based on the network's long lifespan. The optimistic scheme enhanced network endurance and made it competitive with other protocols. Based on the results of the complexity analysis, the presented method is processed in less time than the other methods.
In the complexity comparison, considering the SPIN method as a baseline, the presented optimistic and blind methods were at 28% and 48%, respectively.

Conclusion
Practical engineering of data distribution between nodes in a wireless sensor network can compensate for the time lost in an irregular information channel. These wireless sensor systems can be built within a fog computing structure, distributing the computational load among several nodes effectively. This investigation is aimed at working in a computational fog system with many inhomogeneous wireless nodes. The goal is to give a computational distribution strategy that results in reduced energy consumption in the network and satisfies the constraints of the edge latency. The nodes are dynamic and can both examine other nodes and measure their communication links. In this paper, we presented the DMTC distribution method for a dynamic wireless sensor network. In this system, one access point plays the base station role, and the nodes are considered fog computing subsystems. The computational load is divided between the fog nodes with two distribution models, optimistic and blind. In the optimistic scheme, the computing load is distributed randomly over the nodes, and the total load is the sum of each node's process. In the blind model, the load value equals N times the minimum value of the fog nodes' computation. The findings show that the optimistic distribution among nodes has a lower interruption probability than the blind model. Also, as the number of nodes grows, the total system interruption drops, which is one of the benefits of the presented approach. Another efficiency gain is the low energy consumption of the optimistic method. In addition, the largest contribution to the energy belongs to the mapping and composition stages, and with a rising number of fog nodes, the mapping energy is reduced. Moreover, with growing latency, the energy consumption of the mapping stage drops, so a slower process consumes less energy.
In the next step, the distribution system was implemented with a routing and clustering technique using Fuzzy MADM. Choosing suitable cluster heads can significantly reduce energy consumption and increase the lifespan of the WSN. The implementation of the routing method on the optimistic and blind schemes revealed that, in the optimistic approach, large networks consume less energy than small ones. Also, energy consumption drops with clustering and with choosing cluster heads. Because the nodes' mortality rate influences WSN efficiency, network endurance decreases as the number of nodes increases; in the blind method, too, the efficiency of the network is reduced with an increasing number of nodes. In conclusion, the optimistic scheme is proper for an extensive network, whereas the blind method is better for a small network.
Fog node resources may be virtualized and distributed to several users. Multitenant support in fog resources and the scheduling of compute jobs based on their QoS needs have not been thoroughly addressed in the available literature; future study can be directed toward addressing this gap. The development of a real-world testbed for testing the operation of fog-based rules is typically quite expensive and not scalable in many circumstances. As a result, many academics are looking for an effective toolbox for fog simulation to conduct preliminary evaluations of fog computing systems. Nevertheless, there are just a few fog simulators available right now, and future research might include the construction of a more efficient simulator for fog computing.

Data Availability
In this paper, a randomly generated dataset is used, and the parameter values are extracted from published articles.

Disclosure
The funding sources had no involvement in the study design, collection, analysis, or interpretation of data, writing of the manuscript, or in the decision to submit the manuscript for publication.