Computation Offloading Strategy for IoT Using Improved Particle Swarm Algorithm in Edge Computing

To address the high energy consumption and time delay of offloading strategies in traditional edge computing, a computation offloading strategy for the Internet of Things (IoT) using an improved Particle Swarm Optimization (PSO) in edge computing is proposed. First, a system model and an optimization objective function are constructed based on the communication model for uplink transmission and the multiuser personalized computation task load model, while considering constraints from multiple aspects. Then, the PSO is used to update the positions of particles by encoding them and calculating their fitness values to find the optimal task offloading strategy, which greatly reduces the energy consumption during task allocation in the system. Finally, simulations are conducted to compare the proposed method with two other algorithms in terms of average time delay and energy consumption under different numbers of user mobile devices and data transmission rates. The simulation results show that the average time delay and energy consumption of the proposed method are the smallest in all cases. The average delay and energy consumption are 0.205 s and 0.2 J, respectively, when the number of user mobile devices is 80, which is better than the two comparison algorithms. Therefore, the proposed method can reduce the task execution delay with less energy consumption.


Introduction
In recent years, with the rapid development of the mobile Internet and wireless communication technology, human society has entered the 5G era [1]. Mobile smart devices are widely used in many fields, providing great convenience to all walks of life [2,3]. However, it is difficult for existing technologies or individual hardware devices to provide lower-delay, higher-bandwidth network communication to these devices [5,6]. In addition, the large computing energy consumption of mobile smart terminal devices places higher demands on battery performance [7,8].
Therefore, the problems associated with the massive computation and energy consumption of smart devices have been a focus of recent research [9,10]. Mobile edge computing provides a new approach to these problems and is the most forward-looking solution [11,12]. It allows terminal users to offload tasks to servers for execution, which reduces energy consumption and time delay [13]. In [14], a constrained multiobjective optimization task offloading model with corresponding algorithms was proposed for the problem of allocating tasks from edge clients to edge servers as well as to other edge clients in mobile edge computing, taking the minimum energy consumption and processing delay as the objective function. However, this approach does not consider the balance between task delay and network energy consumption. Ren and Wang proposed a method for quality-of-service (QoS) prediction in the mobile edge computing environment [15]. It identifies similar users for each user through clustering analysis and applies the data provided by similar users to predict the QoS when the edge server accessed by the user is switched, but the energy minimization problem is not taken into consideration. An ARIMA-BP-based selective offloading (ABSO) strategy for mobile edge computing was given by Zhao and Zhou, which can minimize the energy consumption of mobile devices while satisfying latency requirements [16]. However, this method does not take into account the user experience and QoS. In [17], a novel mobility- and dependency-aware QoS monitoring method for mobile edge environments was proposed to address the bias between monitoring results and real results caused by the dependency between user mobility and the QoS value, but this approach does not yield significant improvements in communication time delay.
As the sudden task offloading of mobile users when they are located in hotspots may lead to overloading of several edge servers, a load balancing algorithm for mobile devices in edge cloud computing environments based on genetic algorithms was given by Lim and Lee [18]. Nevertheless, this algorithm does not consider the minimization of the task latency and the network energy consumption. In [19], the reliability-aware offloading and the minimization of energy consumption and delay were studied. A task-merging strategy based on mobile program component graph with the fundamental constraint of valuable offloading was proposed. Meanwhile, a fast algorithm was developed for the hybrid problem to minimize the computation complexity in program partitioning. However, the safety interruption problem is not taken into consideration when the user devices are energy-constrained. To solve the problem of limited computational resources in edge computing architectures, Shi et al. studied the problem of cross-server computation offloading and the collaboration between multiple edge servers for multitask mobile edge computing and proposed a greedy approximation algorithm, which can greatly reduce the overall energy consumption [20]. However, this method does not give a specific approach to improve the QoS of users.
Based on the above analysis, to address the high energy consumption and time delay of offloading strategies in traditional edge computing, a computation offloading strategy for IoT using an improved PSO in edge computing is proposed. There are two main ideas in the proposed method. First, the overall model of the mobile edge computing system and the corresponding objective function are constructed by establishing the communication model and the computation task load model. Second, the individual particles are encoded by introducing the improved particle swarm algorithm, which improves the algorithm's performance in finding the optimal solution compared with traditional computation offloading strategies for IoT.

Model of the System.
The task offloading system model of mobile edge computing in a multiuser scenario is analyzed in this section.
Assume that the total number of user mobile devices in a defined region is U; a user mobile device is indexed by u, where u ∈ {1, 2, 3, ..., U}. Every mobile device has a corresponding task that needs to be performed, and the focus of each mobile device differs (some user mobile devices focus on energy consumption while others focus on time delay, etc.). A base station is located in the defined area, and server devices of mobile edge computing are placed there. The individual user mobile devices within the area can communicate with a server device of mobile edge computing through a radio access network and offload their computation tasks to it for execution. The total number of channels in the base station is set to V, and each user mobile device can connect to the server devices of mobile edge computing through only one channel.
In this paper, it is assumed that a user mobile device does not change its channel once it is selected, and that the computational resources of the server devices of mobile edge computing are capped, whereas the central server has unlimited computational resources. Moreover, one server device of mobile edge computing can serve at most A user mobile devices during the same time period. When the number of computation tasks goes beyond the limit of a server device, the excess computation tasks are offloaded through the backbone network to the remote central cloud server for execution. The basic architecture of mobile edge computing is shown in Figure 1.
As Figure 1 shows, the computation task of every mobile device requires three decision steps in the offloading process:
Decision 1. On the user mobile device side, determine whether the computation task is performed locally or offloaded to a server device of mobile edge computing. If the task is to be offloaded, proceed to Decision 2.
Decision 2. On the mobile edge computing server side, determine whether the computational resources are sufficient. If they are not, proceed to Decision 3.
Decision 3. When the computational resources of the mobile edge computing server are insufficient, decide whether the task should continue to wait in the queue of the current server or be offloaded to the remote central server for execution.
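The three decision steps above can be sketched as a small routine; the function and its cost arguments (`local_cost`, `edge_cost`, and so on) are illustrative names for this sketch, not part of the paper's model:

```python
def decide_offload(local_cost, edge_cost, edge_has_capacity, queue_cost, cloud_cost):
    """Return where a task should run: 'local', 'edge', 'queue', or 'cloud'."""
    # Decision 1: run locally if that is no more costly than offloading.
    if local_cost <= edge_cost:
        return "local"
    # Decision 2: offload to the edge server if it has free resources.
    if edge_has_capacity:
        return "edge"
    # Decision 3: otherwise wait in the edge queue or go to the central cloud,
    # whichever is cheaper.
    return "queue" if queue_cost <= cloud_cost else "cloud"
```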

Communication Model. In the system of mobile edge computing, it is assumed that each user mobile device in the specific area communicates through uplink transmission channels that are orthogonal to each other; that is, the user mobile devices do not interfere with each other on the same frequency during task offloading. Denoting the transmission power of mobile device u as P_u, the uplink rate R_u of this mobile device is shown in the following equation:

R_u = S_u log2(1 + P_u z_u / δ_u^2),    (1)

where S_u represents the uplink bandwidth of the user mobile device u and B = Σ_{u=1}^{U} S_u represents the total uplink bandwidth of the system. z_u denotes the channel gain coefficient of the uplink between the user mobile device u and the base station, and δ_u^2 is the noise power of that uplink.
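As a minimal sketch, the uplink rate above can be computed directly from the Shannon formula; the function name and example values are assumptions for illustration:

```python
import math

def uplink_rate(S_u, P_u, z_u, noise_power):
    """Shannon-capacity uplink rate: R_u = S_u * log2(1 + P_u * z_u / delta_u^2)."""
    return S_u * math.log2(1 + P_u * z_u / noise_power)

# Example: 1 Hz of bandwidth, SNR of 3 gives a rate of log2(4) = 2 bit/s.
r = uplink_rate(1.0, 3.0, 1.0, 1.0)
```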

Computation Task Model.
This section demonstrates the process of simplifying the tasks by constructing a computation task model. In general, a computation task includes three main elements: (1) the volume of data required for the computation task: the data mentioned here mainly consist of the program code, input parameters, and so on; if the computation task is to be offloaded to the server device of mobile edge computing, these data need to be uploaded to the server via the mobile device's transmitter unit module; (2) the computing capacity required for the task: it is usually denoted by the number of CPU cycles; (3) the result data of the computation task: the result data need to be downloaded from the server device of mobile edge computing to the mobile device if the task is offloaded. Thus, a computation task can be represented by a ternary containing these three elements, which can be written as

Q_v = (D_v, C_v, R_v),    (2)

where Q_v denotes the computation task, D_v denotes the volume of data required for the computation task, C_v indicates the computing capacity required for the task, and R_v denotes the result data of the computation task. The user mobile device can obtain these three elements (D_v, C_v, and R_v) of the computation task through program call graph technology. Meanwhile, to meet the personalized needs of each user, various application types and corresponding data are used to analyze the delay and energy consumption when the computation task is completed locally and on the cloud, respectively.
Thus, the delay and energy consumption load model for multiple users can be constructed.
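The ternary task representation Q_v = (D_v, C_v, R_v) can be mirrored in a small data structure; the `Task` class name and the example values are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Ternary representation Q_v = (D_v, C_v, R_v) of a computation task."""
    D_v: float  # volume of input data (bits) to upload if offloaded
    C_v: float  # required computing capacity (CPU cycles)
    R_v: float  # size of the result data to download

# A hypothetical task: 5 Mb of input, 10^9 CPU cycles, 10 kb of results.
q = Task(D_v=5e6, C_v=1e9, R_v=1e4)
```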

Personalized Computation Task Load Model.
The personalized computation task load model for multiple users is constructed in this section. From the above analysis, it must first be clarified whether the computation task of each user mobile device is to be executed locally or not. The two cases are analyzed as follows.
Case 1: the computation task of user mobile device u is executed locally. Assume that the computing capacity of user u, i.e., the number of CPU cycles per second, is J_u1 and the number of CPU cycles required to execute the computation task is j_u1. Thus, the time t_u1 required for local execution of the task is

t_u1 = j_u1 / J_u1.    (3)

Hence, the energy consumption E_u1 for local execution of the computation task can be formulated as

E_u1 = λ j_u1 J_u1^2,    (4)

where λ is the energy consumption coefficient, whose value is related to the chip structure of the user mobile device; λ is a constant and is usually set as λ = 10^-26. Therefore, the total overhead W_u1 when the computation task of user mobile device u is executed locally can be written as

W_u1 = α_u1 E_u1 + α_u2 t_u1,    (5)

where α_u1 and α_u2 represent the trade-off coefficients of the energy consumption and the time delay for task execution, respectively, when the user makes the offloading decision. They must satisfy the constraint shown in the following equation:

α_u1 + α_u2 = 1, α_u1, α_u2 ∈ [0, 1].    (6)
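A minimal sketch of the local-execution load model follows, assuming the common dynamic-power CPU energy model E_u1 = λ j_u1 J_u1^2; the function names are illustrative:

```python
LAMBDA = 1e-26  # energy consumption coefficient, chip-dependent (set to 10^-26)

def local_time(j_u1, J_u1):
    """t_u1 = j_u1 / J_u1: required CPU cycles over the device's cycle frequency."""
    return j_u1 / J_u1

def local_energy(j_u1, J_u1, lam=LAMBDA):
    """E_u1 = lam * j_u1 * J_u1^2 (assumed dynamic CPU energy model)."""
    return lam * j_u1 * J_u1 ** 2

def local_overhead(j_u1, J_u1, a1, a2):
    """W_u1 = a1 * E_u1 + a2 * t_u1, with the trade-off weights summing to 1."""
    assert abs(a1 + a2 - 1.0) < 1e-9
    return a1 * local_energy(j_u1, J_u1) + a2 * local_time(j_u1, J_u1)
```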
It can be seen in equation (5) that the total overhead consists of two components: the energy consumption and the time delay incurred during local task execution. When α_u1 is larger, the residual energy of the user mobile device is lower and more attention is paid to energy consumption when making offloading decisions. When α_u2 is larger, the computation task of the user mobile device is delay sensitive and more attention is paid to the task execution delay. In addition, users can dynamically adjust and balance these concerns according to their specific situations.
Case 2: the computation task of user u is offloaded to the edge server for execution. Let the processing delay of the task at the server device of mobile edge computing be t_u2(P_u, J_u2), which can be formulated as

t_u2(P_u, J_u2) = t_u2(1)(P_u) + t_u2(2)(J_u2),    (7)

where t_u2(1)(P_u) denotes the time delay caused by uploading the input data of the computation task to the mobile edge computing server via the uplink and t_u2(2)(J_u2) denotes the time delay caused by executing the computation task at the mobile edge computing server. t_u2(1)(P_u) can be calculated as

t_u2(1)(P_u) = D_v / (S_u log2(1 + P_u σ_u)),    (8)

where σ_u = z_u / δ_u^2. When a task is uploaded to a server device of mobile edge computing, computational resources J_u2 are allocated, and the time delay t_u2(2)(J_u2) of the task execution can be calculated as

t_u2(2)(J_u2) = C_v / J_u2.    (9)

The energy consumption when transferring the computation task of user mobile device u to the mobile edge computing server can be written as

E_u2 = (P_u / β) t_u2(1)(P_u),    (10)

where β is the efficiency of the device's transmission power amplifier. The total overhead W_u2 when the computation task of user mobile device u is offloaded to a server device of mobile edge computing can be formulated as

W_u2 = α_u1 E_u2 + α_u2 t_u2(P_u, J_u2).    (11)

As shown in equation (11), the total overhead consists of two components: the energy consumption incurred by task transmission and the time delay of execution on the remote server device.
Through the analysis of the above task offloading process, it can be seen that the impact of task offloading on the energy consumption and delay of the user's mobile device is the main consideration. As the computing capacity of the mobile edge computing server is much larger than that of the user mobile devices, the energy consumption incurred by executing the computation tasks on the edge server can be neglected.
In addition, considering that the volume of the result data is generally small when the computation task is executed on the mobile edge computing server, the energy consumption and time delay required when returning execution results to the user mobile device can be ignored.
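The offloaded-execution overhead W_u2 can be sketched as follows, combining the upload delay, the server execution delay, and the transmission energy; the function signature is an assumption for illustration:

```python
import math

def edge_overhead(D_v, C_v, S_u, P_u, sigma_u, J_u2, beta, a1, a2):
    """W_u2 = a1 * E_u2 + a2 * (t_up + t_exec) for an offloaded task."""
    # Upload delay t_u2(1): input data over the achievable uplink rate.
    t_up = D_v / (S_u * math.log2(1 + P_u * sigma_u))
    # Server execution delay t_u2(2): required cycles over allocated resources.
    t_exec = C_v / J_u2
    # Transmission energy E_u2, scaled by the power-amplifier efficiency beta.
    E_u2 = (P_u / beta) * t_up
    return a1 * E_u2 + a2 * (t_up + t_exec)
```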

Optimization Objectives. Based on the aforementioned analysis, the total overhead W_u of the user mobile device u during task offloading can be written as

W_u = (1 − x_u) W_u1 + x_u W_u2.    (12)

Here, the objective is to minimize the total overhead of the user mobile devices during the task offloading process. The optimal offloading strategy X_O = {x_1, x_2, x_3, ..., x_U}, the optimal uplink power allocation P_O = {P_1, P_2, P_3, ..., P_U}, and the optimal mobile edge computing resource allocation strategy M_O = {m_1, m_2, m_3, ..., m_U} are those that minimize the total W_u. Hence, the optimization objective function for the computation task offloading of user mobile devices in a specific area can be formulated as

min_{X, P, M} Σ_{u=1}^{U} W_u,    (13)

subject to the following constraints:

C1: x_u ∈ {0, 1}, ∀u ∈ U;
C2: 0 < P_u ≤ P_max, ∀u ∈ U_2;
C3: Σ_{u∈U_2} J_u2 ≤ J_max;
C4: J_u2 ≥ 0, ∀u ∈ U_2;
C5: |U_2| ≤ V,    (14)

where x_u is the offloading decision of user u. When x_u = 1, user mobile device u offloads its task to the server device of mobile edge computing for execution; when x_u = 0, it executes the task locally. P_max represents the maximum transmission power, J_max represents the maximum computational resources, and U_2 represents the set of user mobile devices that offload their tasks to the server device of mobile edge computing. Constraint C1 restricts the users' offloading decisions. Constraint C2 requires that the uplink power of a user mobile device during offloading cannot exceed its maximum transmission power. Constraint C3 requires that the computational resources allocated to the offloading user mobile devices do not exceed the maximum computational resources available in the mobile edge computing server. Constraint C4 indicates that the computational resources allocated to a user mobile device cannot be negative. Constraint C5 represents the bandwidth limit of the system: the number of user mobile devices allowed to upload data at the same time in the specific area cannot exceed the number of channels V.
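The total overhead W_u = (1 − x_u) W_u1 + x_u W_u2 and the system-wide objective (its sum over all devices) can be sketched as follows; the helper names are illustrative:

```python
def total_overhead(x_u, W_u1, W_u2):
    """W_u = (1 - x_u) * W_u1 + x_u * W_u2, with x_u in {0, 1}."""
    return (1 - x_u) * W_u1 + x_u * W_u2

def system_overhead(decisions, W1, W2):
    """Objective value: sum of per-device overheads for a decision vector."""
    return sum(total_overhead(x, w1, w2) for x, w1, w2 in zip(decisions, W1, W2))
```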

Computational Resource Allocation Strategy Using Improved PSO
It is clear that the numerous constraints in the objective function make it much more difficult to obtain the optimal computation offloading strategy. Therefore, an improved PSO is introduced to find an optimal task offloading strategy.

Particle Coding. The particles in the particle swarm denote the mobile edge computing servers to which the current computation tasks are to be offloaded, and the offloading decision vector X = {x_1, x_2, x_3, ..., x_u} corresponding to each particle represents the execution point for these computation tasks. The size of the particle swarm is G, the total number of computation tasks currently selected for local execution is L, and the total number of mobile edge computing servers in the given area is Y.
The set of all tasks is denoted as R = {r_1, r_2, r_3, ..., r_v}. To formulate different tasks mathematically, we assume that the mobile device of user u has a computation task r_i that needs to be executed. r_i is related to three variables and can be denoted as r_i = r_i(I_i, ρ_i, E_c), where I_i indicates the input data volume, ρ_i is the computational density, and E_c indicates the energy consumption for executing the computation task. The cycle frequencies of all servers are represented by the set T = {t_1, t_2, t_3, ..., t_n}, which contains the cycle frequency of each server in the system. The maximum transmission rate can be calculated from the channel gain matrix Z, a v × n matrix in which the diagonal elements Z_{i,i} represent a channel gain of 0 and the off-diagonal elements Z_{i,j} (1 ≤ i ≤ v, 1 ≤ j ≤ n, i ≠ j) represent the channel gain required for transmitting the computation task of user mobile device i to server j.
All individual particles in the particle population are integer coded. Each element of an individual particle can take any integer from [1, n] during encoding. For example, for a task set containing 5 computation tasks, R = {r_1, r_2, r_3, r_4, r_5}, a particle encoded as [1, 2, 0, 1, 2] characterizes where the different computation tasks are executed. The element value of tasks r_1 and r_4 is 1, which indicates that these tasks will be offloaded to server 1 for execution. The value of tasks r_2 and r_5 is 2, which indicates that they will be offloaded to server 2 for execution. The value of task r_3 is 0, denoting that this task will be executed locally. The elements x_i in the offloading decision vector take values from [0, n]; x_i = k (1 ≤ k ≤ n) means that the computation task will be offloaded to the kth server for execution. The moving velocity of the particles in the particle swarm is denoted as A = {a_1, a_2, a_3, ..., a_u}, indicating how the current computation tasks move among servers. The moving velocity of a particle is encoded with the same dimension as the size of the task set and is initialized as an integer; rounding operations are also applied during the update process. For example, for a task set containing 5 computation tasks, R = {r_1, r_2, r_3, r_4, r_5}, a velocity encoded as [4, 1, 2, 0, 3] characterizes how the execution servers of the different computation tasks shift. The element value for task r_2 is 1, which means that the task is moved to the server one position down from the original server. The element value for task r_5 is 3, which indicates that this task is moved to the server three positions down from the original server. The element value for task r_4 is 0, denoting that this task stays on the original server.
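One integer-coded position and velocity update with the rounding and clamping described above can be sketched as follows; the inertia and learning coefficients (w, c1, c2) are typical PSO defaults, not values from the paper:

```python
import random

def update_particle(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, n_servers=2):
    """One integer-coded PSO step: round the velocity, clamp positions to [0, n].
    Position k in [1, n] means 'run on server k'; 0 means 'run locally'."""
    new_pos, new_vel = [], []
    for x, v, p, g in zip(pos, vel, pbest, gbest):
        # Standard PSO velocity update, rounded to keep integer coding.
        v_new = round(w * v + c1 * random.random() * (p - x)
                            + c2 * random.random() * (g - x))
        # Clamp the new position to the valid range of server indices.
        x_new = min(max(x + v_new, 0), n_servers)
        new_pos.append(x_new)
        new_vel.append(v_new)
    return new_pos, new_vel
```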
In the iterative computation of the PSO, the optimal position of each individual particle is denoted as D = {d_o1, d_o2, d_o3, ..., d_on}, and the optimal position found among all particles is denoted as F = {f_o1, f_o2, f_o3, ..., f_on}. The set D indicates the allocation for which the total overhead of the system is minimized for each individual particle, and the set F represents the allocation for which the total overhead of the system is minimized over all particles.

The Fitness Function. The fitness value of a particle represents the total overhead of the system when the tasks are sent to different servers. It is calculated from the total time delay C_u of executing the offloaded tasks together with a penalty term with coefficient ζ that is applied when the transmission energy consumption E_u of the uth computation task exceeds the maximum energy consumption E_max.
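A hedged sketch of the penalized fitness follows: the exact penalty form is not given explicitly in the text, so the sum-of-delays-plus-penalty shape below is an assumption using the named quantities C_u, ζ, E_u, and E_max:

```python
def fitness(delays, energies, zeta, E_max):
    """Assumed penalized fitness: total delay plus a penalty of zeta per unit
    by which each task's transmission energy E_u exceeds E_max."""
    total = sum(delays)
    total += zeta * sum(max(0.0, e - E_max) for e in energies)
    return total
```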

Computation Offloading Process.
In the u-dimensional discrete binary particle swarm problem, the velocity and position of the ith particle are denoted by the u-dimensional vectors X_i = {x_i1, x_i2, x_i3, ..., x_iu} and Y_i = {y_i1, y_i2, y_i3, ..., y_iu}, respectively. Therefore, in the task offloading optimization problem, the positions of the particles represent the decision variables for the execution position of each computation task. The overall flow of the task offloading strategy for IoT in edge computing using the improved PSO is shown in Figure 2. The detailed process of computation task offloading based on the discrete binary PSO is as follows: (1) Initialize the particle swarm: all particles in the swarm are assigned decision variables in a random way. The objective function values of the particles and the optimal positions of the individual particles D_O and of the particle swarm F_O are calculated according to equation (13).

Wireless Communications and Mobile Computing
(2) Set the maximum number of iterations; update the velocities and positions of all particles and calculate the energy consumption E_u and time delay C_u corresponding to the new decision variables. If the time consumed to execute a task is larger than the time delay threshold, the decision variable cannot meet the requirement and the particle position is discarded; otherwise, the requirement is satisfied. Then the optimal positions of the individual particles D_O and of the particle swarm F_O are updated according to the objective function values. (3) The relatively optimal decision variables and energy consumption are obtained when the iteration completes.
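The three steps above can be sketched as a compact discrete-PSO loop; `cost` stands in for the objective of equation (13), and all parameter values (swarm size, iteration count, coefficients) are illustrative defaults, not the paper's settings:

```python
import random

def pso_offload(n_tasks, n_servers, cost, swarm=20, iters=50, seed=1):
    """Minimal discrete-PSO search over offloading decisions.
    Each position element is in [0, n_servers], with 0 = local execution."""
    rng = random.Random(seed)
    # Step 1: random initial positions, zero velocities, personal/global bests.
    X = [[rng.randint(0, n_servers) for _ in range(n_tasks)] for _ in range(swarm)]
    V = [[0] * n_tasks for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal best positions D_O
    Pf = [cost(x) for x in X]
    g = min(range(swarm), key=lambda i: Pf[i])  # global best F_O
    G, Gf = P[g][:], Pf[g]
    # Step 2: iterate, rounding velocities and clamping positions to valid ids.
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n_tasks):
                V[i][d] = round(0.7 * V[i][d]
                                + 1.5 * rng.random() * (P[i][d] - X[i][d])
                                + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], 0), n_servers)
            f = cost(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < Gf:
                    G, Gf = X[i][:], f
    # Step 3: return the best decision vector found and its overhead.
    return G, Gf
```

With `cost=sum`, the loop simply prefers local execution (all zeros), which makes the convergence behavior easy to inspect.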

Simulation Environment and Parameter Settings.
When there are multiple user mobile devices in a specific area, the server should be located near the base station and all user mobile devices should be within the coverage area. As shown in Table 1, basic simulation parameters of channel are set according to the 3GPP standard.

Analysis of Simulation Results. First, the convergence performance of the proposed computation offloading algorithm for IoT using the improved particle swarm algorithm in edge computing is analyzed. The number of users in the defined region is set to 80, the number of servers is 15, and the size of each computation task is 5 Mb. The simulated total task delay under different numbers of iterations is shown in Figure 3. As Figure 3 shows, the proposed method converges quickly after 20 iterations, and the total time delay of the system remains almost the same when the number of iterations exceeds 20, indicating that the global optimal solution is obtained. Hence, the proposed method has strong global optimization capability and searching performance in the early stage of computation and can also perform fast global search in the later stage. The proposed method reduces the total system time delay from about 0.275 s to about 0.205 s, a 25.45% decrease, and greatly improves the overall performance of the computation offloading strategy. Then, a detailed analysis of the effect of different crossover and mutation rates on convergence is carried out. We set the total number of users in the region to 80. Figure 4 depicts the relationship between the total overhead of the user mobile devices and the number of iterations for three fixed crossover and mutation probabilities and for adaptive crossover and mutation probabilities. As shown in Figure 4, fixed crossover and mutation probabilities, whatever their values, cause the algorithm to fall into the local optimal solution trap. In addition, fixed crossover and mutation probabilities prolong the searching process and consume more time.
In contrast, adaptive crossover and mutation probabilities adjust dynamically as the fitness value changes during the search and do not sink into the local optimal solution trap, which greatly reduces the time needed to reach the global optimal solution and improves calculation speed.
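One common way to realize such adaptive probabilities is to interpolate between a maximum and a minimum probability according to how an individual's fitness compares with the swarm average and best; the schedule below is an assumption, since the text only states that the probabilities adapt:

```python
def adaptive_prob(p_max, p_min, f, f_avg, f_best):
    """Adaptive crossover/mutation probability for a minimization problem:
    keep p_max for below-average individuals (f >= f_avg, explore strongly)
    and shrink toward p_min as f approaches the swarm best f_best."""
    if f >= f_avg:
        return p_max
    span = max(f_avg - f_best, 1e-12)  # guard against division by zero
    return p_min + (p_max - p_min) * (f - f_best) / span
```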

Comparative Analysis.
A comparative analysis of the delay and energy consumption performance of the proposed method and the algorithms proposed in [16] and [20] under the same conditions is provided. Simulations are conducted for the average time delay of the different algorithms with different numbers of user mobile devices. The simulation results are illustrated in Figure 5.
In Figure 5, the average time delay of all three algorithms is proportional to the number of user mobile devices. As the number of user mobile devices rises, the average delay and its overall growth rate for the proposed method are the smallest, and the average time delay is only 0.205 s when the total number of user mobile devices is 80. The proposed computational resource allocation strategy is built on the communication model for uplink transmission and the multiuser personalized computation task load model, from which the system model and objective function are constructed. The objective function is constrained in multiple aspects, which improves the computational efficiency of the system model to a certain extent.
The algorithm proposed in [20] has a larger average time delay whose variation rate is low as the number of users grows, because its time delay performance is sacrificed to save more energy; in addition, the balance between delay and network energy consumption is not considered. The delay of the algorithm proposed in [16] is smaller with fewer user mobile devices, but its growth rate is large as the number of users rises, because the number of tasks selected for local execution increases significantly when there are many user mobile devices. Simulations of the energy consumption of the different algorithms at various transmission rates are conducted; the resulting curves are shown in Figure 6.
In Figure 6, the energy consumption of computation tasks executed locally is independent of the data transmission rate, because no data need to be transmitted when a computation task is executed locally.
The energy consumption of all three algorithms decreases as the data transmission rate increases, which indicates that the cost of task offloading drops as the transmission rate rises. In addition, the energy consumption of the proposed method decreases at the fastest rate and is the smallest at any given transmission rate.
This is because the energy consumption and delay decrease as the transmission rate increases when data are transmitted between users and servers. The proposed task offloading strategy can perform more fine-grained and frequent offloading operations thanks to the introduction of PSO. Figure 7 shows the comparison of the energy consumption of each algorithm with different numbers of user mobile devices.
As depicted in Figure 7, the energy consumption of all algorithms grows as the number of users rises, while the energy consumption of the proposed method is the smallest and has the lowest growth rate. Its energy consumption is 0.2 J when the number of users is 80. Moreover, the energy consumption of local execution is the largest. The better capability of the proposed strategy is due to the introduction of PSO, which greatly improves searching efficiency and results in a lower proportion of locally executed tasks while still considering the time delay, thus reducing the system energy consumption.

Conclusion
To address the problems of traditional computation offloading strategies in IoT environments, a computation offloading strategy for IoT using an improved particle swarm algorithm in edge computing is proposed. The overall system model is constructed, and the efficiency of searching for the optimal solution is improved by introducing the improved PSO. Based on the particle swarm algorithm, the optimal task offloading strategy is obtained by encoding the particles and calculating the fitness values so as to update the positions of the particles, which minimizes the energy consumption produced by system task allocation. The proposed strategy is analyzed through simulation experiments. The simulations show that, compared with the other two algorithms, the method achieves the least average time delay and energy consumption for task execution under different situations. Meanwhile, the growth rates of the average delay and energy consumption are the smallest as the number of users rises, and the energy consumption drops most rapidly as the transmission rate rises. Hence, the proposed method achieves the best overall system performance. Future work will focus on the overall capability of the proposed strategy and on improvements for situations in which user mobile devices can move within a specific area or between different areas.
Data Availability
The data included in this paper are available from the corresponding author upon request without any restriction.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.