IETIF: Intelligent Energy-Aware Task Scheduling Technique in IoT/Fog Networks

Nowadays, with the advent of communication technologies such as the internet of things (IoT), a large volume of data is produced that needs to be processed in real time. Fog computing is an appropriate solution to address the requirements of different types of IoT applications. In most cases, IoT applications consist of a set of dependent tasks that can be processed separately in a heterogeneous fog environment. Scheduling these tasks in a fog environment is an NP-hard problem that needs a vast amount of time and computation resources to solve, making it infeasible for real-time applications. In addition, reducing response time and energy consumption in fog computing is an essential issue that should be taken into account in task scheduling algorithms. To address these challenges, we propose a multiobjective task scheduling model that jointly improves energy efficiency and response time. To solve the model, we also propose an intelligent solution named IETIF, which combines and leverages the benefits of the simulated annealing and NSGA-III algorithms. Simulation results show that IETIF outperforms state-of-the-art methods in terms of energy consumption, response time, and speedup.


Introduction
Emerging technologies, such as the internet of things (IoT), have led to massive data production. The use of traditional and standard methods for processing various forms of data has been challenged by the abrupt increase in the volume of generated data [1,2]. Many IoT applications, such as smart homes [3] and health care [4], generate a great deal of data through a wide variety of sensors. These data necessitate latency-aware computations for real-time processing [5,6].
Cloud computing can help with information security, fast processing, dynamic access, and cost saving [7,8]. However, it is inefficient to process IoT requests in the cloud, especially for real-time IoT applications, because transferring data to the cloud introduces considerable delay. As a result of delays in data transfer and processing in the cloud layer, system performance is reduced. To address these challenges, Bonomi et al. [9] provided the first definition of fog computing. In this model, fog devices are close to the user and are responsible for computing and storing data in a middle layer [10]. This solution reduces system latency and increases speed because IoT devices connect directly to fog computing devices.
Fog has many advantages but also introduces additional challenges that need to be addressed [5,11]. Task scheduling is one of the most important challenges in the fog. A task scheduling algorithm is a mechanism for allocating tasks to system resources. Factors such as the heterogeneity of fog resources, dynamic changes, and resource restrictions [12] make task scheduling a central concern. Since task scheduling is an NP-complete problem, novel approaches are required to find near-optimal solutions [13][14][15]; ill-suited scheduling algorithms can cause hardware inefficiencies or slow down applications [16].
To address the mentioned challenges, a method for scheduling tasks in the fog environment is presented in this research. The proposed method hybridizes an improved NSGA-III with the simulated annealing (SA) algorithm. The primary innovations of this research are as follows: (1) generating and weighting DAG graphs based on the network topology and transmission delay, and improving task scheduling under unexpected network changes; (2) jointly optimizing energy usage and response time, taking into account the impact of communication delay on system response time; (3) generating the initial population intelligently and improving the operators of the NSGA-III algorithm so that only valid candidate solutions are produced; and (4) calculating energy based on the cloud computing energy consumption model and using the DVFS algorithm to reduce energy consumption.
Simulation results demonstrate that the suggested approach outperforms the RandomHEFT, HEFT, IWO-CA, and IKH-EFT algorithms. The paper is organized as follows: the "Related work" section gives an outline of prior findings. The scheduling model and problem formulation are explained in the "System model" section. The "NSGA-III and SA" section describes the methods employed in the proposed algorithm, which is based on the nondominated sorting genetic algorithm and the SA algorithm. "IETIF" is described in the fifth section. The findings of the experiments and simulations are presented in the "Evaluation" section, and conclusions are drawn in the "Conclusion" section.

Related Work
Recently, fog computing has attracted much attention due to its ability to perform computations at the edge of the network, close to the user. Due to resource constraints, IoT node mobility, and service quality limitations, scheduling has become a complex challenge for distributing resources correctly in the fog environment [17,18]. Many applications, such as intelligent transportation, emergency services, online gaming, telecommunications, and digital signal processing, require fast processing, low latency, and high reliability [19,20]. Therefore, a suitable scheduling algorithm is needed for resource allocation in the fog environment. The problem of task scheduling in heterogeneous fog-cloud computing systems has been extensively studied: some works concentrate on reducing execution time, while others consider reliability, energy consumption, and other factors.
Mao et al. [16] presented a combination of time-aware and energy-aware algorithms for task scheduling in heterogeneous computational systems. According to the consumers' requirements, the algorithm reduces either energy consumption or completion time; however, its load balancing still has problems. Hao et al. [21] introduced ASSD, which considers the time limit and the energy consumption of resources at various voltages to accelerate parallel tasks. This method considers two sorts of deadlines for completing work: soft and hard. In addition, ASSD considers the system workload and then determines the voltage and frequency accordingly. The simulation results show that not only is energy saved, but the number of completed tasks is also increased. Azizi et al. [22] introduced MDAF as an efficient method for scheduling and allocating services based on QoS criteria. MDAF performs time-sensitive operations as close to the client as possible. The simulation results reveal that the algorithm dramatically improves service delay and implementation cost. However, energy efficiency and resource utilization are not addressed in MDAF. Jamil et al. [15] developed an approach that reduces latency and makes better use of fog resources. Although the SJF-based method minimizes average waiting time and network usage, it suffers from starvation.
Hassan et al. [13] developed MinRE for allocating services according to QoS criteria, power consumption, and resource usage. In MinRE, services are divided into two categories, critical and noncritical, and tasks are then scheduled based on the service type with the aim of reducing energy consumption. It should be emphasized that resource cost and the interdependence between services are not considered in this method. Joint PT is a scheduling policy based on priority, power, and traffic indicators [23]. The fat-tree topology is employed in this method, and an analysis of the findings demonstrates that it can reduce total network usage, energy consumption, and resource waste. FFPTS combines PSO and fuzzy approaches to improve scheduling based on the computing capacity of resources and user task requirements. In addition, latency and effective network resource utilization are taken into account in FFPTS. It is also a mobility-aware strategy, but execution priority is not taken into account [24]. Javanmardi et al. [25] introduced a security-driven task scheduling approach called FUPE, whose main objective is to stop TCP/SYN flood attacks. To this end, FUPE uses a fuzzy system to identify dangerous requests and avoid allocating resources to them.
Wang et al. [26] developed I-FASC, which uses a metaheuristic algorithm for scheduling. The simulation findings show improved convergence speed, workload balance, and completion time. It should be emphasized that many practical issues remain; for example, the energy consumption of fog nodes is not taken into consideration when calculating task processing time and optimizing workload. LBP-ACS is a combination of the LBPA and ACS algorithms proposed to solve the task scheduling problem in the fog computing environment [27]. In this algorithm, the priority of a task and its deadline are considered, and task priorities are also used to address delays. To minimize energy consumption, an ant colony optimization algorithm is used. However, this method offers no optimal way to schedule interdependent tasks.
TCaS uses a genetic algorithm (GA) for cost- and time-aware task scheduling. It is worth noting that this method does not address transfer cost, energy consumption, or user satisfaction [28]. Xu et al. [29] presented IPSO, which schedules tasks with the goal of lowering costs and increasing task efficiency. The simulation results suggest that the proposed scheduling algorithm is more cost-effective than the baseline method and can outperform it. Khaledian et al. [30] presented a method based on the krill herd algorithm for scheduling tasks in a fog environment, which takes the financial cost into account in addition to energy and time. They performed a very comprehensive simulation for different conditions and showed that their method reduces energy consumption and response time. Chen et al. [31] explore extending computation offloading beyond fifth-generation networks by combining wireless communications and multiaccess edge computing (MEC). To address challenges in MEC systems, a distributed learning framework is introduced, treating computation offloading as a multiagent Markov decision process. A case study showcases the potential of the online distributed reinforcement learning algorithm within this framework for resource orchestration and optimization in computation offloading scenarios. Wang et al.
[32] explore the role of mobile edge computing in enabling 6G applications and present the ESTMP timing algorithm. ESTMP is an energy-efficient algorithm that minimizes program execution time while meeting energy constraints. This algorithm is especially suitable for time-sensitive mobile applications, optimizing resource use and reducing energy consumption. The simulation results confirm the significant advantages of the proposed algorithm in terms of program duration, resource utilization, and energy consumption. Table 1 examines the mentioned research from different aspects and shows their important features. Table 1 highlights that reducing energy consumption is a priority in the reviewed studies. However, it is worth noting that dynamic voltage and frequency scaling (DVFS), a key technique for energy reduction, is not widely utilized. Interestingly, many researchers focus on minimizing response time to reduce cost and enhance customer satisfaction. It is important to recognize that real-time processing is a requirement in many IoT applications, emphasizing the significance of quality of service. In light of these considerations, our objective is to address the critical processing requirements in IoT networks: we aim to decrease energy consumption, improve quality of service, and reduce delays across the different components of the system architecture. To accomplish this, we utilize a multiobjective scheduler that simultaneously considers delay, energy, and time.

System Model
This section first describes the system architecture, followed by the problem formulation.
3.1. System Architecture. The architecture used is a well-known three-layer structure consisting of a cloud layer, a fog layer, and an IoT device layer [33,34].
IoT layer: this segment includes many IoT devices, such as smart wearables, self-driving cars, security sensors, smart home appliances, and industrial devices, which require real-time processing [35].
Fog layer: fog computing allows end users to run applications and store data with low latency and high reliability. This layer contains heterogeneous resources, is geographically close to the end user, and has a wide geographical distribution [36].
Cloud layer: it contains very powerful computing and storage resources and does not have the same geographic extent as the fog layer. Therefore, it incurs higher cost and more delay than the fog layer [37].
Figure 1 shows the system architecture. First, IoT devices send requests to the fog layer as a job. Each job can then be divided into several smaller tasks that are interdependent or independent. The broker is the most essential component of the fog layer: it receives requests and allocates tasks to resources based on information about the processing resources, such as processing cost, computing power, and other characteristics. It should be noted that in some situations, user requests are offloaded to the cloud layer for processing, which leads to increased delay and cost.
In the fog environment, tasks are expressed in the form of a workflow, and the task schedule is shown as a directed acyclic graph (DAG). Each node represents a task, and the edges of the graph represent the precedence relationships between work units. An entry task has no predecessor, and an exit task has no successor. Tasks and edges have weights in this graph, used to determine computational and communication costs. A simple DAG is shown in Figure 2: it contains 11 tasks and the workflow dependencies between them. According to the graph, t0 represents the entry task and t10 represents the exit task. Also, if t0 and t1 both run on the same host, the transfer cost (or transfer time) is zero, but if they run on separate hosts, the transfer cost is 3. We try to present a model that correctly captures the real-world system and can achieve more accurate, close-to-reality results. Therefore, we consider all aspects that affect energy consumption and response time.
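The DAG model above can be sketched in code. This is a minimal illustration (the `Workflow` class and all names are assumptions, not the paper's implementation); it captures the rule that the transfer cost between dependent tasks is zero when they share a host:

```python
class Workflow:
    def __init__(self):
        self.comp_cost = {}   # task -> computation cost (node weight)
        self.edges = {}       # (parent, child) -> transfer cost (edge weight)

    def add_task(self, task, cost):
        self.comp_cost[task] = cost

    def add_dependency(self, parent, child, transfer_cost):
        self.edges[(parent, child)] = transfer_cost

    def transfer_cost(self, parent, child, host_of):
        # Zero if both tasks are mapped to the same host (as in Figure 2).
        if host_of[parent] == host_of[child]:
            return 0
        return self.edges[(parent, child)]

wf = Workflow()
wf.add_task("t0", 5)
wf.add_task("t1", 4)
wf.add_dependency("t0", "t1", 3)

print(wf.transfer_cost("t0", "t1", {"t0": "h1", "t1": "h1"}))  # 0: same host
print(wf.transfer_cost("t0", "t1", {"t0": "h1", "t1": "h2"}))  # 3: different hosts
```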
3.2.1. Response Time. The execution time corresponds to the makespan of the last task, and its value is calculated using Equation (2).
The EFT value represents the earliest finish time, which is computed by Equation (3).
The variable t_ij indicates the amount of time the i-th task runs on the j-th host, and the variable AST(t_i, h_k) shows the actual start time of the i-th task on host h_k; likewise, the AFT variable indicates the actual finish time, determined using Equation (4).

3.2.2. Transmission Delay. In prior work, the transmission delay is predetermined by the DAG. In contrast, in this method, the transmission delay is determined by the network structure and the source and destination locations. The network topology, transmission costs, and the energy consumption of the network equipment in the fog layer are shown in Figure 3.
The transmission cost or delay is determined dynamically for each solution, based on the three-level network topology, using Equation (5). If task t_i needs to migrate, the transfer delay is obtained from the following relation; otherwise, its value is equal to zero:

TD(t_i) = l_t × d_t, if t_i needs to migrate; 0 otherwise. (5)

Here, l_t denotes the size of the task, and d_t is determined based on the distance between the source and destination machines: d_t equals d_1 if the origin and destination machines are linked to the same edge switch, d_2 if the link between the two devices passes through an aggregation switch, and d_3 if the core switch must be traversed. Equation (6) summarizes the calculation of d_t between machines j and k:

d_t = d_1, if j, k are connected to the same edge SW; d_2, if j, k are connected by an aggregation SW; d_3, if j, k are connected by the core SW. (6)
The amount of network delay is shown in Equation (7).
3.2.3. Energy Model. E(h_j) tells how much energy machine h_j consumes. The variable E_idle denotes the amount of energy consumed while the physical machine is turned on but idle, whereas E_busy represents the amount of energy spent by the host when fully utilized. The variable U_CPU_j represents the machine's utilization, as estimated by Equation (9).
If task i runs on machine j, U_CPU_j represents the ratio of the million instructions per second (MIPS) required by task i to the CPU capacity of machine j. We modify the utilization formula as Equation (10) to take into account the task's processing time and the processor frequency.
where PT denotes the processing time of task i on machine j and VF indicates the voltage consumed by the CPU at frequency f. The overall energy cost of task migration is computed using Equation (11) if migration is required; otherwise, the migration cost is zero.
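The linear utilization-based power model described above (Equations (8)-(9)) can be sketched as follows; the function names and the sample idle/busy energy figures are illustrative assumptions:

```python
def host_energy(e_idle, e_busy, u_cpu):
    # Linear power model: idle baseline plus the utilization-proportional
    # share of the busy-idle gap.
    return e_idle + (e_busy - e_idle) * u_cpu

def utilization(task_mips, host_capacity_mips):
    # U_CPU_j: ratio of the MIPS a task needs to the host's CPU capacity.
    return task_mips / host_capacity_mips

u = utilization(500, 2000)          # 0.25
print(host_energy(70.0, 250.0, u))  # 70 + 180 * 0.25 = 115.0
```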
l_t is the size of the task, and e_t is determined using Equation (12) based on the locations of the two hosts:

e_t = e_1, if j, k are connected to the same edge SW; e_2, if j, k are connected by an aggregation SW; e_3, if j, k are connected by the core SW. (12)

Therefore, the total energy consumption is calculated using Equation (13).

3.2.4. Fitness Function. As aforementioned, the suggested method solves a multiobjective optimization problem, and any of the solutions on the Pareto front could be the answer. The difficulty is selecting the best solution among the available options. We use Equation (14) to normalize and weigh each objective and then choose the best solution.
The significance coefficients of each objective are represented by the variables α and β. In addition, the data are standardized with the normalization function in Equation (15) so that each objective contributes equally.
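A minimal sketch of the weighted, normalized selection in Equations (14)-(15), assuming a min-max normalization (the paper's exact form may differ; the bounds and sample values are illustrative):

```python
def normalize(value, vmin, vmax):
    # Min-max normalization so each objective contributes on the same scale.
    if vmax == vmin:
        return 0.0
    return (value - vmin) / (vmax - vmin)

def fitness(energy, time, e_bounds, t_bounds, alpha=0.5, beta=0.5):
    # Weighted sum of normalized objectives; alpha and beta are the
    # significance coefficients for energy and response time.
    return (alpha * normalize(energy, *e_bounds)
            + beta * normalize(time, *t_bounds))

# Lower fitness = better: pick the Pareto solution with the smallest score.
score = fitness(energy=120, time=30, e_bounds=(100, 200), t_bounds=(20, 60))
print(round(score, 3))  # 0.225
```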

NSGA-III and SA
IETIF uses a combination of the NSGA-III and SA multiobjective algorithms and improves their operators to solve the scheduling problem; each is briefly explained below.

4.1. NSGA-III Algorithm. To solve multiobjective problems, various evolutionary algorithms have been developed, including nondominated sorting genetic algorithms. Building on NSGA-II [39], the NSGA-III algorithm employs a fixed number of reference points. The use of these reference points spreads the population along the Pareto front and diversifies the solutions. Multiobjective optimization problems usually involve several separate objective functions to be maximized or minimized at the same time. It is worth noting that most of these objective functions conflict, so improving one of them worsens another [40]. Therefore, a set of optimal answers is obtained, called the Pareto front. The selected solutions on the Pareto front are not dominated by any other solution and are scattered across the problem space by the reference points. Equation (16) is used to calculate the total number of reference points.
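Assuming the paper follows the standard Das-Dennis construction that NSGA-III uses for its reference points, the count in Equation (16) is the binomial coefficient C(M + p − 1, p) for M objectives and p divisions per objective:

```python
from math import comb

def num_reference_points(m_objectives, p_divisions):
    # Das-Dennis construction used by NSGA-III: the number of uniformly
    # placed reference points is C(M + p - 1, p).
    return comb(m_objectives + p_divisions - 1, p_divisions)

print(num_reference_points(2, 4))   # C(5, 4)  = 5 points for 2 objectives
print(num_reference_points(3, 12))  # C(14,12) = 91, the common 3-objective setting
```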
4.2. Simulated Annealing Algorithm. The SA method starts with an initial solution, generates a new solution in its neighborhood, and moves to it if it is better than the current answer. Otherwise, it accepts that answer as the current solution with probability exp(−ΔE/T), where ΔE is the difference between the objective values of the candidate and the current answer and T is a temperature parameter. Several repetitions are performed at each temperature, and then the temperature is slowly decreased. In the initial stage, the temperature is set high, so there is a higher probability of accepting worse solutions; in the final stages, worse answers are rarely accepted, and thus the algorithm converges toward a global optimum. The main advantage of the SA algorithm is that it avoids local optima: occasionally accepting worse answers increases the chance of exploring the problem space.
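The SA acceptance rule described above can be sketched as follows; the cost values and temperatures are illustrative:

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature):
    # Always accept improvements; accept a worse candidate with
    # probability exp(-dE / T), which shrinks as T cools.
    delta = candidate_cost - current_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

random.seed(0)
print(sa_accept(10.0, 8.0, 5.0))  # True: an improvement is always taken
print(math.exp(-2.0 / 100.0))     # ~0.98: at high T, worse moves are likely accepted
print(math.exp(-2.0 / 0.1))       # ~2e-9: at low T, worse moves are rejected
```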

IETIF
IETIF is a hybrid multiobjective algorithm that consists of three main phases, which we will examine in the following.

5.1. Primary Population Production. The first phase initializes the population: IETIF replaces random generation of the initial population with an intelligent approach, so that every initial solution is a feasible one. Each solution is represented by an array whose length equals the number of tasks. Tasks must be completed in the order in which they appear in the array; in other words, the index of the array represents the task's priority, and the value of the element is the task number. Table 2 shows one candidate solution for the task graph of Figure 2.
Initial solutions must satisfy the execution priority constraint because the workflow is DAG-based and the order of execution is critical. To address this issue, the node height values in the DAG are used to construct the initial population. The minimum height of each DAG node is calculated using Equation (17): the height of the entry node is zero, and the height of its children is one. In the same way, the height of each child is one unit greater than the height of its parents; if the parents have different heights, the greater parent height is used to calculate the node's height.
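Equation (17)'s height rule can be sketched as a small recursive computation; the `parents` mapping and the diamond-shaped example DAG are illustrative:

```python
def node_heights(parents):
    # parents: task -> list of predecessor tasks; entry nodes have none.
    # height(entry) = 0; height(v) = 1 + max height over v's parents.
    heights = {}

    def height(v):
        if v not in heights:
            ps = parents[v]
            heights[v] = 0 if not ps else 1 + max(height(p) for p in ps)
        return heights[v]

    for v in parents:
        height(v)
    return heights

# Diamond DAG: t0 -> {t1, t2} -> t3
h = node_heights({"t0": [], "t1": ["t0"], "t2": ["t0"], "t3": ["t1", "t2"]})
print(h)  # {'t0': 0, 't1': 1, 't2': 1, 't3': 2}
```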
We use three operators to construct the initial population: level arrangement, deep arrangement, and hybrid.

5.1.1. Level Arrangement. In this scenario, the tasks are sorted in ascending order of their height value: the task with the smallest height is at the top of the list, and the task with the largest height is at the bottom. The tasks with the same height are picked, and a permutation of them is added to the list, level by level from lower to higher. This is repeated until all tasks have been placed. This operator ensures that tasks at lower heights always appear earlier than tasks at higher heights. It resembles a breadth-first search, and each run yields a different result.
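A sketch of the level arrangement operator, grouping tasks by the heights computed above and permuting each level independently (all names are illustrative):

```python
import random
from collections import defaultdict

def level_arrangement(heights):
    # Emit a random permutation of each height level, from the smallest
    # height to the largest, so no task ever precedes one of its ancestors.
    levels = defaultdict(list)
    for task, h in heights.items():
        levels[h].append(task)
    order = []
    for h in sorted(levels):
        group = levels[h][:]
        random.shuffle(group)  # each call yields a different valid order
        order.extend(group)
    return order

random.seed(1)
print(level_arrangement({"t0": 0, "t1": 1, "t2": 1, "t3": 2}))
# e.g. ['t0', 't2', 't1', 't3']: 't0' first, 't3' last, middle level shuffled
```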

5.1.2. Deep Arrangement. In this scenario, the task arrangement does not strictly follow height. In Figure 2, for example, the fourth task could possibly be accomplished before the second one, a state the level arrangement operator will never create. We therefore present the deep arrangement operator to examine the problem space in more depth. This operator acts like a depth-first search, returning a different but valid result each time it is used. The entry task is chosen first, and all of its children are added to the candidate list. Then, one node of the candidate list is selected at random. Among the children of the selected node, those with no dependence on nodes that have not yet been visited become candidates for the next step. This process continues until all nodes are listed.
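A sketch of the deep arrangement operator as a randomized topological ordering; the candidate-set bookkeeping here is a simplification of the description above, and the example DAG is illustrative:

```python
import random

def deep_arrangement(parents):
    # Randomized topological ordering: a task becomes a candidate once all
    # of its parents are already placed, and candidates are drawn at random.
    placed = []
    remaining = set(parents)
    while remaining:
        ready = [t for t in remaining
                 if all(p not in remaining for p in parents[t])]
        pick = random.choice(ready)
        placed.append(pick)
        remaining.remove(pick)
    return placed

random.seed(2)
dag = {"t0": [], "t1": ["t0"], "t2": ["t0"], "t3": ["t1"], "t4": ["t2", "t3"]}
order = deep_arrangement(dag)
print(order)  # a valid order, e.g. ['t0', 't1', 't3', 't2', 't4']
# unlike level arrangement, 't3' (height 2) may precede 't2' (height 1)
```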

5.1.3. Combined. This mode combines the deep and level searches and generates solutions using both methods. The operators can be applied at equal or random rates to generate the initial population.
5.1.4. NSGA-III and SA Combined Algorithm. To produce new solutions, we use the SA algorithm's neighborhood operators in addition to the mutation operator. As a result, the new solution obtained after applying each of these operators remains a feasible (executable) solution.
5.1.5. Reversion. In this operator, a task is chosen randomly, followed by all tasks at the same level, and the order of the selected tasks is reversed in the new solution. Assume that q is a proposed solution and Task 4 is chosen; because the third, fourth, fifth, and sixth tasks are all on the same level, the sequence in which they appear in the new list is reversed. Table 3 shows the output of this operator.
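A sketch of the reversion operator; the heights and the sample solution mirror the example above, where tasks 3-6 share a level (all names are illustrative):

```python
def reversion(solution, heights, task):
    # Reverse the order of all tasks that share the chosen task's height.
    level = heights[task]
    idxs = [i for i, t in enumerate(solution) if heights[t] == level]
    picked = [solution[i] for i in idxs]
    out = solution[:]
    for i, t in zip(idxs, reversed(picked)):
        out[i] = t
    return out

heights = {"t0": 0, "t3": 1, "t4": 1, "t5": 1, "t6": 1, "t10": 2}
q = ["t0", "t3", "t4", "t5", "t6", "t10"]
print(reversion(q, heights, "t4"))  # ['t0', 't6', 't5', 't4', 't3', 't10']
```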
5.1.6. Insertion. In this strategy, a task i_1 is chosen randomly first. A second task i_2 is selected from among all tasks at the same height as i_1. Then a random number r between 0 and 1 determines the move: if r ≥ 0.5, i_2 is moved ahead of i_1; if r < 0.5, i_1 is moved after i_2. If q is a solution and Tasks 3 and 6 are chosen randomly, Table 4 shows the insert operator's result for r < 0.5 and r ≥ 0.5. Since r is chosen uniformly at random, both moves are equally likely.
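A sketch of the insertion operator with the two moves selected by r (the sample solution is illustrative):

```python
import random

def insertion(solution, i1, i2, r=None):
    # Move one of two same-height tasks next to the other: with r >= 0.5,
    # task i2 is re-inserted just before i1; otherwise i1 is re-inserted
    # just after i2.
    r = random.random() if r is None else r
    out = solution[:]
    if r >= 0.5:
        out.remove(i2)
        out.insert(out.index(i1), i2)
    else:
        out.remove(i1)
        out.insert(out.index(i2) + 1, i1)
    return out

q = ["t0", "t3", "t4", "t5", "t6", "t10"]
print(insertion(q, "t3", "t6", r=0.7))  # ['t0', 't6', 't3', 't4', 't5', 't10']
print(insertion(q, "t3", "t6", r=0.2))  # ['t0', 't4', 't5', 't6', 't3', 't10']
```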
5.1.7. Replacement. In this strategy, a task i_1 is chosen randomly first. Then all tasks at the same height as the selected task are identified, and a second task i_2 is chosen from among them. The element in position i_1 is moved to position i_2, and the element in position i_2 is moved to position i_1. Suppose t_9 is chosen; since t_8 and t_7 are at the same level as t_9, one of them is chosen at random (t_7) and swapped with t_9. Table 5 shows the result after the move.
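A sketch of the replacement operator, swapping the chosen task with a random same-height peer (names are illustrative):

```python
import random

def replacement(solution, heights, task):
    # Swap the chosen task with a randomly picked task of the same height.
    level = heights[task]
    peers = [t for t in solution if t != task and heights[t] == level]
    other = random.choice(peers)
    out = solution[:]
    i, j = out.index(task), out.index(other)
    out[i], out[j] = out[j], out[i]
    return out

random.seed(3)
heights = {"t0": 0, "t7": 1, "t8": 1, "t9": 1, "t10": 2}
q = ["t0", "t7", "t8", "t9", "t10"]
print(replacement(q, heights, "t9"))  # 't9' swapped with 't7' or 't8'
```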
5.1.8. Mutation. In this procedure, a node i_1 is chosen randomly first, and its parent nodes and children are identified. The i_1 node is then moved to just after its last parent or just before its first child, using a random number. Table 6 depicts the result of the mutation if t_5 is selected.
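A sketch of the mutation operator; moving the task anywhere between its last parent and first child keeps the solution feasible (the `parents`/`children` maps are illustrative):

```python
import random

def mutation(solution, parents, children, task):
    # Move the chosen task to just after its last parent or just before
    # its first child, picked at random; either placement preserves the
    # precedence constraint.
    out = [t for t in solution if t != task]
    p_idx = max((out.index(p) for p in parents[task]), default=-1)
    c_idx = min((out.index(c) for c in children[task]), default=len(out))
    target = p_idx + 1 if random.random() < 0.5 else c_idx
    out.insert(target, task)
    return out

random.seed(4)
parents = {"t5": ["t1"]}
children = {"t5": ["t9"]}
q = ["t0", "t1", "t5", "t8", "t9"]
print(mutation(q, parents, children, "t5"))  # 't5' lands between 't1' and 't9'
```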
5.1.9. DVFS. Dynamic voltage and frequency scaling (DVFS) is one of the most important and widely used dynamic techniques for controlling the heat generated by circuits and reducing energy consumption in microprocessors. In this technique, the processor's voltage and frequency are changed dynamically to reduce power usage, which can dramatically cut energy consumption. The primary purpose here is to adjust the voltage and frequency of the CPUs at run time: for each task, we try the execution frequencies in ascending order and update the completion time to identify the lowest frequency that still allows the task to finish within its time limits [41,42]. Using the DVFS approach, the best solution available on the Pareto front is chosen.
In other words, for each solution on the Pareto front, the gaps between tasks are determined first. A gap occurs when a machine has finished its previous task but cannot yet start a new one because, due to the dependencies in the DAG, it must wait for the new task's parents to finish executing. During such gaps the machine can run at a lower frequency; reducing the frequency reduces energy consumption while increasing the running time. The frequency must be chosen so that the task's execution time does not exceed the length of the gap. After computing the available gaps across all machines, the energy saving obtained by lowering the frequency is calculated, and the solution with the largest energy saving is selected.
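The gap-driven frequency choice can be sketched as follows; the inverse-frequency time-scaling model and the frequency set are illustrative assumptions, not the paper's DVFS tables:

```python
def lowest_safe_frequency(task_work, gap_length, frequencies):
    # Pick the lowest available frequency whose stretched execution time
    # still fits inside the idle gap (execution time scales as 1/f here).
    base_time = task_work            # time at full frequency f = 1.0
    for f in sorted(frequencies):    # ascending: try the slowest first
        if base_time / f <= gap_length:
            return f
    return max(frequencies)          # no slack: run at full speed

freqs = [0.4, 0.6, 0.8, 1.0]
print(lowest_safe_frequency(task_work=4.0, gap_length=8.0, frequencies=freqs))  # 0.6
print(lowest_safe_frequency(task_work=4.0, gap_length=4.0, frequencies=freqs))  # 1.0
```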
For each solution, one of the Reversion, Insertion, Replacement, or Mutation operators is randomly chosen and applied. These steps are iterated a fixed number of times or until convergence. Finally, among the optimal solutions that share an equal makespan, the final solution is the one that consumes the least energy, determined by applying the DVFS algorithm in Lines 29-36.

Evaluation
In this section, the performance of IETIF is evaluated in terms of energy consumption, response time, speedup, efficiency, and SLR. IETIF is compared to recent methods: HEFT-U, HEFT-D, HEFT-L [43], RandomHEFT [44], IWO-CA [42], and IKH-EFT [30]. Random DAGs are created in the experiments to examine the performance of the algorithms under various scenarios. An Intel Core i7 CPU and 16 GB of memory were used to run the MATLAB implementations. The simulation conditions are given in Table 7. The communication-to-computation ratio (CCR) parameter is used to create distinct DAG datasets for analyzing the proposed method in various scenarios; we build two datasets, with CCR values of one and two. In addition, the generated applications include a variety of randomly generated computation and communication costs. The dataset properties are listed in Table 8. The evaluation criteria are explored in the sections that follow.

6.2. Speedup. The ratio of sequential execution time to parallel execution time is known as speedup. The sequential execution time is calculated by allocating all tasks to the single processor that minimizes the task graph's total execution time, as shown in Equation (18).
where CP is the critical path, i.e., the longest path from the entry node to the exit node in the application graph, and C_comp(t_i, p_j) is the computational cost of the i-th task on the j-th host. Since the critical path serves as a lower bound for the makespan, the SLR cannot be less than 1. The denominator may be smaller than the actual critical path because this formulation ignores communication costs and thus produces shorter critical path lengths than the true critical path length.

6.5. Results and Analysis. All algorithms were run 10 times, and the averages are reported in this section.
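The speedup and SLR metrics can be sketched as follows (the cost-matrix layout and sample numbers are illustrative assumptions):

```python
def speedup(comp_cost, makespan):
    # comp_cost[task][host]: computation cost matrix.
    # Sequential time: all tasks on the single host minimizing total time.
    hosts = next(iter(comp_cost.values())).keys()
    seq = min(sum(comp_cost[t][h] for t in comp_cost) for h in hosts)
    return seq / makespan

def slr(comp_cost, critical_path, makespan):
    # Schedule length ratio: makespan over the sum, along the critical
    # path, of each task's minimum computation cost (so SLR >= 1).
    lower_bound = sum(min(comp_cost[t].values()) for t in critical_path)
    return makespan / lower_bound

costs = {"t0": {"h1": 4, "h2": 6}, "t1": {"h1": 3, "h2": 2}}
print(speedup(costs, 5.0))            # min(7, 8) / 5 = 1.4
print(slr(costs, ["t0", "t1"], 9.0))  # 9 / (4 + 2) = 1.5
```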
One of the most important criteria considered in most studies is response time. Thanks to the multiobjective algorithm, the response time and energy criteria are considered simultaneously, which improves the performance of IETIF. The results for CCR = 1 and CCR = 2 are given below.
Figures 4 and 5 illustrate that our approach improves response time compared to the other algorithms. In addition, as the number of tasks entering the system increases, IETIF's response-time advantage grows, enhancing overall system performance. IETIF considers transmission delay and makespan simultaneously, so it performs better than the other methods under different conditions.
Figures 6 and 7 show the energy consumption of all methods. Comparing the simulation results, it is clear that IETIF outperforms the other approaches: there is a substantial difference between the energy required by the suggested technique and that of the other algorithms. As the number of tasks submitted to the system increases, IETIF remains superior in terms of energy consumption. IETIF is an energy-aware approach and uses DVFS, which improves its efficiency in reducing energy consumption.
The speedup parameter specifies how quickly tasks are completed once they enter the system. According to the results shown in Figures 8 and 9, our technique is faster than the other algorithms. When the CCR is set to 1, for example, IETIF improves the speedup parameter by about 7% and 15% compared to the IKH-EFT and IWO-CA methods, respectively; when CCR equals 2, the improvement rates are 11% and 24%, respectively. Thanks to its intelligently produced initial population, IETIF is able to create diverse solutions. In terms of efficiency, when CCR = 1 the proposed approach outperforms the IKH-EFT and IWO-CA methods by almost 4% and 6%, respectively; when CCR is 2, the rates are 4% and 7%, respectively. Because IETIF uses the deep arrangement operator along with the other operators, it explores the problem space more thoroughly, and this broader exploration enhances its efficiency.
Figures 12 and 13 show the SLR results; the lower the value, the better. In SLR, the denominator is the minimum computation cost of the critical path tasks, and no algorithm can produce a makespan below this denominator. As a result, the algorithm with the lowest SLR is the best. The results show that IETIF outperforms the other methods under the SLR criterion. Notably, as the number of tasks entering the system increases, the SLR obtained by the suggested method diverges increasingly from that of the other algorithms.

Conclusion
One of the critical aims of network, cloud, and fog computing is to save energy and reduce response time. In this paper, we presented a method for task scheduling in an IoT-fog environment. The recommended solution employs a combination of the NSGA-III and SA algorithms: the use of reference points in NSGA-III spreads the population along the Pareto front and diversifies the solutions, while SA avoids getting stuck in local optima. IETIF also employs the DVFS algorithm, one of the most important and widely used methods for managing circuit heat and reducing energy consumption in microprocessors. The performance of IETIF was assessed in terms of energy consumption, response time, speedup, SLR, and efficiency, and the algorithm was compared to HEFT-U, HEFT-D, HEFT-L, RandomHEFT, IWO-CA, and IKH-EFT. The simulation results confirm that IETIF outperforms these algorithms. The main factors in IETIF's improvement are the intelligent construction of the initial population and the use of operators that, while searching the entire problem space, generate only solutions that satisfy the problem constraints.

3.2. Problem Formulation. In this section, we explain the objective function based on the fog model and structure. The objective function tries to reduce response time and energy consumption simultaneously. Therefore, the response time and the energy model in the fog are investigated.


TABLE 1: Comparison between related work.

TABLE 7: Conditions for simulation using the studied algorithms.