User Energy Efficiency Fairness Algorithms for Task Offloading in Cloud-Edge Networks

Key Laboratory of Information and Communication Systems, Ministry of Information Industry, Beijing Information Science and Technology University, Beijing 100101, China
School of Information and Communication Engineering, Beijing Information Science and Technology University, Beijing 100101, China
Key Laboratory of Modern Measurement and Control Technology, Ministry of Education, Beijing Information Science and Technology University, Beijing 100101, China


Introduction
With the rapid advances in mobile communication, several new applications such as virtual/augmented reality, face recognition, and automated driving have been developed. These applications demand ultra-low delay and high reliability from mobile networks [1][2][3]. Although these applications make many tasks more convenient, the volume of data, and the energy consumed to transmit it, has grown at an astronomical rate.
This has placed a heavy processing burden on network communication equipment. Cloud-edge networks can support comprehensive applications through communication, computing, and resource collaboration, efficiently mitigating this burden [4][5][6][7][8][9][10]. Research on cloud-edge networks is therefore of great significance, and a large body of work has appeared in recent years. Combining cloud-edge networks with mobile edge computing (MEC) [11][12][13][14][15] can fulfill the networking requirements of communication, computing, and business processing, providing an effective support system for novel network applications. With MEC, users on the cloud-edge network can upload computing tasks with high computing requirements and energy consumption to edge nodes. Because edge computing nodes are very close to users, data transmission pressure and link congestion are effectively alleviated, while the network bandwidth demand and the transmission delay of data computing and storage are significantly reduced [16][17][18].
Considerable progress has already been made on mobile edge computing for cloud network-edge network applications. Previous research combined edge nodes and mobile devices, where the mobile devices carry computing tasks, into an edge computing system [19]; considering whether the frequency of edge access points is adjustable, different methods were used to obtain the offloading decision. A user-cooperative cloud network-edge MEC model was also proposed, whose primary objective is minimizing task processing delay to reach the optimal task offloading decision [20]. Stochastic optimization theory was applied to the computational offloading problem to obtain the optimal offloading decision [21]. A data cache in the MEC model was considered, and the offloading decision was obtained by minimizing the average task processing delay [22]. These studies, however, focused only on optimizing the task offloading decision and did not study resource allocation.
In the cloud-edge network, the dense deployment of edge servers consumes considerable resources, and resource allocation has been studied extensively. For the single-cell scenario, a task offloading and resource allocation scheme was proposed to minimize the energy consumption of a system with wireless power supply capability [23]. Taking user delay as a constraint, an access control and computing resource allocation algorithm was proposed to minimize terminal energy consumption [24]. A fairness-aware communication and computing resource allocation scheme was proposed, with the maximum loss among users minimized as the objective function [25]. In the multicell scenario, where a user can offload tasks to multiple edge nodes, terminal task allocation was optimized to minimize the weighted sum of delay and terminal energy consumption [26]. The offloading decision, communication resource allocation (uplink and downlink bandwidth), and computing resource allocation of each user were jointly optimized to minimize the weighted energy consumption and delay loss of all users under user delay constraints [27]. The joint scheduling of tasks and resources for multiple edge servers in the cloud network was also analyzed [28].
Although these works consider resource management in the cloud network-edge fusion network and load balancing between edge nodes, they ignore the coupling between the user offloading decision and resource allocation; a joint treatment of the two is lacking.
In the cloud-edge network, the interaction between computing and communication makes the resource optimization problem complex and difficult to solve. Moreover, the network contains multiple edge nodes with vastly different communication and computing capabilities and loads. In a cloud network-edge fusion network, node cooperation is essential for improving network efficiency and performance and for realizing efficient utilization and sharing of resources. This study therefore develops a cooperative cloud network-edge fusion network that takes the max-min user energy efficiency as the objective function and jointly considers user offloading decisions and resource allocation.
The primary contributions of this paper are summarized as follows: (i) An edge network model is built for the collaborative cloud network, and the fairness of user energy efficiency is analyzed based on the max-min criterion. (ii) The user energy efficiency fairness optimization problem posed in the cloud network edge-end fusion network model is a mixed-integer nonconvex fractional programming problem, which is hard to solve. Using generalized fractional programming theory, relaxation variables, and equivalent substitutions, we convert it into a convex optimization problem and propose CEEF- and ADMM-based energy efficiency fairness algorithms to solve it. (iii) Simulations evaluate the performance of the proposed CEEF and ADMM energy efficiency fairness algorithms and verify their ability to guarantee user energy efficiency fairness.

System Model
In this section, a system model for cloud-edge network systems, including the network, communication, and computation models, is presented. As shown in Figure 1, each user terminal has a computing task to process. The task is described by Q_i = (L_i, K_i), where L_i is the data size of the task and K_i is the number of CPU cycles required to complete it. The user terminal can process the task locally or offload it to a MEC server or the cloud server. When the user terminal sends the task to a MEC server, that server can process the task itself, forward it to another MEC server with richer computing resources, or further offload it to the cloud server. Table 1 summarizes the notation used in this paper.

Network Model.
Binary variables x_i, y_i,m, z_i ∈ {0, 1} are defined, where x_i = 1 indicates that the task of user terminal i is processed locally and x_i = 0 that it is not. Similarly, y_i,m = 1 indicates that the task of user terminal i is processed by MEC server m and y_i,m = 0 that it is not. Finally, z_i = 1 indicates that the task of user terminal i is processed by the cloud server and z_i = 0 that it is not.
The offloading decision made by the user terminal for its computing task must satisfy the constraint

x_i + Σ_m y_i,m + z_i = 1,

which means that the computing task of each user terminal is processed in exactly one of three ways: locally, on a MEC server, or on the cloud server. That is, for user terminal i, exactly one binary variable equals 1.
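To make the constraint concrete, a minimal Python sketch (with illustrative function names, not code from the paper) can enumerate the feasible decision vectors (x_i, y_i,1, ..., y_i,M, z_i):

```python
from itertools import product

def valid_decisions(num_mec):
    """Enumerate offloading decisions (x_i, y_i1..y_iM, z_i) satisfying
    x_i + sum_m y_im + z_i == 1, i.e., exactly one processing site per task."""
    for bits in product((0, 1), repeat=num_mec + 2):
        if sum(bits) == 1:
            yield bits

# With M = 2 MEC servers there are exactly M + 2 = 4 feasible decisions.
print(len(list(valid_decisions(2))))  # -> 4
```

Each feasible vector has a single 1: local, one of the M MEC servers, or the cloud.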

Communication Model.
In the cloud-edge network model, when user terminal i sends a task to MEC server m, the channel gain is expressed as

h_i,m = g_i,m d_i,m^(-α),

where g_i,m is the channel power gain coefficient when user terminal i sends tasks to MEC server m, d_i,m is the distance between user terminal i and MEC server m, and α is the path loss factor. Assuming the user moves very little during offloading, g_i,m can be treated as a constant. The uplink transmission rate between i and m is

r_i,m = B log2(1 + p_i h_i,m / σ²),

where B is the available spectrum bandwidth, p_i is the uplink transmission power of user terminal i, and σ² is the noise power.
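These two formulas can be sketched directly in code. The snippet is illustrative: all numeric values (gain coefficient, distance, path loss factor, noise power) are assumptions rather than the paper's parameters.

```python
import math

def channel_gain(g_im, d_im, alpha):
    # h_{i,m} = g_{i,m} * d_{i,m}^(-alpha): power gain falls off with distance
    return g_im * d_im ** (-alpha)

def uplink_rate(bandwidth_hz, p_i, h_im, noise_power):
    # Shannon capacity of the uplink between user i and MEC server m (bits/s)
    return bandwidth_hz * math.log2(1.0 + p_i * h_im / noise_power)

# Illustrative numbers only (bandwidth matches the simulation section)
h = channel_gain(g_im=1e-3, d_im=200.0, alpha=3.0)
r = uplink_rate(bandwidth_hz=15e6, p_i=0.2, h_im=h, noise_power=1e-13)
```

Raising p_i increases r_i,m only logarithmically, which is why energy efficiency later degrades at high transmission power.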

Computation Model.
The local computing capacity of i is denoted f_i^L, and the power consumed during local computing is P_i^L. Therefore, when i processes task Q_i locally, the computation delay can be expressed as

T_i^L = K_i / f_i^L, (4)

and the energy consumed can be expressed as

E_i^L = P_i^L T_i^L = P_i^L K_i / f_i^L.

When i sends the task to m for processing, the computing capacity of m is denoted F_m and the computing resources allocated to i are denoted f_i,m. When user terminal i sends a task, the transmission delay can be expressed as

T_i,m^tr = L_i / r_i,m,

and the energy consumed by i during transmission can be expressed as

E_i,m^tr = p_i T_i,m^tr = p_i L_i / r_i,m.

Accordingly, the computation delay of m for task processing can be expressed as

T_i,m^c = K_i / f_i,m.

When i offloads the task to m, m can choose to process the task itself or send it to another MEC server with richer computing resources. The term T_m,k denotes the average round-trip time of task forwarding between MEC server m and another MEC server k; when m = k, T_m,k = 0. Therefore, when i transmits a task to m for processing, the total delay is divided into three parts: the transmission delay

Figure 1: Cloud edge networks model.

Mobile Information Systems
between i and m, the forwarding delay between m and k, and the computation delay after k receives the task. The total delay can therefore be expressed as

T_i,m = T_i,m^tr + T_m,k + K_i / f_i,k,

where f_i,k is the computing resource allocated to i at the serving server k. When i offloads a task to the cloud server for processing, the cloud computing capacity allocated to it is denoted f_i^c and the average round-trip time of task transmission between m and the cloud server is denoted T^c, so the processing delay of the task in the cloud server can be expressed as

T_i^proc = K_i / f_i^c. (10)

The size of the returned result is considerably smaller than that of the input data, so the transmission delay of the result is negligible. When the cloud server processes the task of i, the total delay of the entire process can be expressed as

T_i^C = T_i,m^tr + T^c + T_i^proc.

To sum up, the total delay of user terminal i in task processing can be expressed as

T_i = x_i T_i^L + Σ_m y_i,m T_i,m + z_i T_i^C,

and the total energy consumption can be expressed as

E_i = x_i E_i^L + Σ_m y_i,m E_i,m^tr + z_i E_i,m^tr,

since tasks offloaded to either a MEC server or the cloud incur the uplink transmission energy of user i.
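The three delay/energy branches above can be collected into small helper functions. This is a hedged sketch: the function names are invented, the computing capacities match the simulation section, and the task size, uplink rate, and round-trip times are illustrative assumptions.

```python
def local_cost(K_i, f_local, p_local):
    """Delay and energy when terminal i computes the task itself."""
    t = K_i / f_local                  # K_i CPU cycles at f_local cycles/s
    return t, p_local * t

def mec_cost(L_i, K_i, rate, p_tx, f_k, t_mk):
    """Delay and terminal-side energy when the task is offloaded to MEC
    server m and possibly forwarded to server k (t_mk = m<->k round trip,
    0 when m == k)."""
    t_tx = L_i / rate                  # uplink transmission delay
    return t_tx + t_mk + K_i / f_k, p_tx * t_tx

def cloud_cost(L_i, K_i, rate, p_tx, f_cloud, t_c):
    """Delay and terminal-side energy when the task is relayed on to the
    cloud server (t_c = MEC<->cloud round trip)."""
    t_tx = L_i / rate
    return t_tx + t_c + K_i / f_cloud, p_tx * t_tx

# Capacities 0.6/5/10 Gcycles/s from the simulation; task and link are invented.
t_loc, e_loc = local_cost(K_i=1e9, f_local=0.6e9, p_local=0.5)
t_mec, e_mec = mec_cost(L_i=2e5, K_i=1e9, rate=5e7, p_tx=0.2, f_k=5e9, t_mk=0.01)
t_cld, e_cld = cloud_cost(L_i=2e5, K_i=1e9, rate=5e7, p_tx=0.2, f_cloud=10e9, t_c=0.05)
```

For this compute-heavy task, both offloading options beat local processing in delay and terminal energy, which is the regime where offloading pays off.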

User Energy Efficiency Fairness Resource Allocation Algorithm
This section first formulates the joint resource allocation and task offloading optimization problem based on the max-min criterion and then solves it using the CEEF and ADMM algorithms, respectively.

Problem Formulation.
The max-min criterion is an effective means of ensuring fairness among all users; hence, it is used to construct the joint resource allocation and task offloading optimization problem. The offloading decision vector of i is ψ_i = {x_i, y_i,1, y_i,2, ..., y_i,M, z_i}, and the joint optimization problem is expressed as follows:

max_{ψ, f} min_i η_i (14)
s.t. T_i ≤ T_i^max, ∀i, (15)
Σ_i y_i,m f_i,m ≤ F_m, ∀m, (16)
x_i + Σ_m y_i,m + z_i = 1, ∀i, (17)
x_i, y_i,m, z_i ∈ {0, 1}, (18)

where η_i denotes the energy efficiency of user i (the ratio of its achieved rate to its energy consumption) and T_i^max is its delay requirement. Equation (14) is the objective of joint resource allocation and task offloading; constraint (15) captures the user terminal's delay requirement during task offloading; constraint (16) ensures that the computing resources allocated at a base station do not exceed its maximum computing capacity; and constraints (17) and (18) ensure that exactly one node is selected to compute each task.
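The max-min objective can be made concrete with a toy exhaustive search over offloading decisions. This is not the CEEF algorithm (exhaustive search is exponential in the number of users), and the per-user efficiencies are invented numbers; it only illustrates what "maximize the minimum user energy efficiency" selects.

```python
from itertools import product

# Illustrative per-user energy efficiencies for each processing site
# (invented numbers, no particular units).
ee_table = {
    "u1": {"local": 2.0, "mec": 5.0, "cloud": 3.0},
    "u2": {"local": 4.0, "mec": 1.0, "cloud": 3.5},
}

def max_min_assignment(users, table, options):
    """Pick the joint offloading decision maximizing the worst user's EE."""
    best_val, best_choice = float("-inf"), None
    for choice in product(options, repeat=len(users)):
        worst = min(table[u][d] for u, d in zip(users, choice))
        if worst > best_val:
            best_val, best_choice = worst, choice
    return best_val, best_choice

val, choice = max_min_assignment(["u1", "u2"], ee_table, ("local", "mec", "cloud"))
# val == 4.0: u1 offloads to the MEC server (5.0), u2 stays local (4.0)
```

Note that maximizing *total* efficiency would tolerate a very poor worst user; the max-min criterion explicitly protects that user, which is the fairness property Figure 6 later demonstrates.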

Solutions Using the CEEF Algorithm.
The user energy efficiency fairness optimization problem based on the max-min criterion is a mixed-integer nonconvex fractional programming problem. First, the problem is transformed into an equivalent mixed-integer nonconvex subtractive optimization problem using generalized fractional programming theory. Let Q denote the value of the optimization problem, that is, the max-min energy efficiency. Using generalized fractional programming theory and introducing a relaxation variable θ, the problem can be transformed into

max θ s.t. R_i − Q E_i ≥ θ, ∀i,

together with the original constraints, where R_i and E_i are the numerator (rate) and denominator (energy) of user i's energy efficiency. As x_i, y_i,m, z_i are binary variables, they are relaxed to continuous values in [0, 1], and the optimization problem is modified accordingly. The resulting objective function (26) is linear, constraints (28)-(31) are linear, and the total delay of the user terminal in (27) can be rewritten accordingly, so the problem becomes convex, where F_min is a positive constant close to 0. To solve this problem, the CEEF algorithm is proposed; its steps are shown in Algorithm 1.
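The generalized fractional programming (Dinkelbach-style) transformation can be illustrated on a toy single-ratio problem. The exhaustive inner search below is a stand-in for the convex subproblem CEEF actually solves after relaxation, and the ratio being maximized is invented for the example.

```python
def dinkelbach(candidates, N, D, tol=1e-9, max_iter=100):
    """Dinkelbach iteration for max_s N(s)/D(s), with D(s) > 0, over a
    finite candidate set: iterate on Q and solve the parametric
    subtractive problem max_s [N(s) - Q*D(s)]."""
    Q = 0.0
    s_star = candidates[0]
    for _ in range(max_iter):
        s_star = max(candidates, key=lambda s: N(s) - Q * D(s))
        F = N(s_star) - Q * D(s_star)
        if abs(F) < tol:            # F(Q) reaches 0 at the optimal ratio
            break
        Q = N(s_star) / D(s_star)   # tighten the ratio estimate
    return Q, s_star

# toy ratio: maximize (2s + 1) / (s^2 + 1) on a coarse grid
grid = [i * 0.25 for i in range(13)]
Q_opt, s_opt = dinkelbach(grid, N=lambda s: 2 * s + 1, D=lambda s: s * s + 1)
# Q_opt -> 1.6 on this grid (attained at s = 0.5 and s = 0.75)
```

Each iteration replaces the fractional objective with a subtractive one at the current Q, which is exactly the structure that lets the relaxed problem be solved as a convex program.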
As the offloading decision variables were relaxed to continuous values, x_i, y_i,m, z_i need to be mapped back to binary values after the optimal solution is obtained. A standard recovery scheme sets the largest of the relaxed variables {x_i, y_i,1, ..., y_i,M, z_i} to 1 and the others to 0. All other variables are recovered similarly.

Solutions Using the ADMM Algorithm.
The ADMM algorithm is a standard method for solving large-scale optimization problems. Combined with CEEF, it can efficiently achieve user energy efficiency fairness.
First, the optimization variables x_i, y_i,m, and z_i are copied into local variables x̃_i, ỹ_i,m, and z̃_i, with consensus constraints requiring each local copy to equal its global counterpart, which yields the problem in equation (33). Equation (33) can be equivalently modified, and the augmented Lagrangian function of the resulting problem (equation (43)) can then be expressed as follows:

where λ_i, μ_i,m, and γ_i denote the Lagrange multipliers and ρ > 0 is the augmented Lagrangian parameter, a constant that controls the convergence of the ADMM iterations. To simplify the solution for the target variables, the Lagrange multipliers are rescaled, and the augmented Lagrangian is equivalently transformed. When solving the problem with ADMM, the variables are updated over multiple iterations; let the superscript (t) denote the value of a variable at the t-th iteration. The first step updates the local variables; this problem decomposes into parallel subproblems, each solved separately by convex optimization. The second step updates the global variable s by solving an unconstrained quadratic convex problem (59); setting its derivative to zero yields the optimal global variable in closed form. The third step updates the Lagrange multipliers.
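The local-update / global-update / dual-update pattern can be illustrated with a scaled-form consensus ADMM on a toy quadratic problem (minimize Σ_i (x − a_i)², whose optimum is the mean of a). This is an analogue of the paper's local/global split, not its implementation; all names and parameters are illustrative.

```python
def consensus_admm(a, rho=1.0, iters=200):
    """Scaled-form consensus ADMM for min_x sum_i (x - a_i)^2.
    Each agent keeps a local copy x_i that must agree with the global
    variable s; the known optimum is mean(a)."""
    n = len(a)
    x = [0.0] * n   # local variables (one per agent)
    s = 0.0         # global consensus variable
    u = [0.0] * n   # scaled dual variables (multipliers divided by rho)
    for _ in range(iters):
        # step 1: local updates, each solvable in closed form in parallel
        x = [(2 * ai + rho * (s - ui)) / (2 + rho) for ai, ui in zip(a, u)]
        # step 2: global update = average of (local + dual), the closed-form
        # minimizer of the quadratic coupling term
        s = sum(xi + ui for xi, ui in zip(x, u)) / n
        # step 3: dual update drives the consensus gap x_i - s to zero
        u = [ui + xi - s for ui, xi in zip(u, x)]
    return s

print(consensus_admm([1.0, 2.0, 6.0]))  # converges toward the mean, 3.0
```

The parameter ρ plays the same role as in the paper: it weights the consensus penalty and governs the convergence speed of the iterations.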

Initialization:
Set t = 1, Q^(0) = 0, ε = 0.01, t_max = 25. Substitute Q^(0) = 0 into the problem to obtain an initial feasible solution.
Iteration:
while |Q^(t) - Q^(t-1)| > ε and t ≤ t_max do
(1) Substitute Q^(t-1) into the problem and update the feasible solution;
(2) Solve the problem with the updated feasible solution and update Q^(t); set t = t + 1.
end while
ALGORITHM 1: CEEF algorithm.

The fourth step verifies the iteration termination condition: the iterations stop once the primal and dual residuals fall below the thresholds ε_pri > 0 and ε_dual > 0, respectively. The resulting ADMM-based energy efficiency fairness algorithm is shown in Algorithm 2.
As the offloading decision variables were relaxed to continuous values, x_i, y_i,m, z_i again need to be mapped back to binary values after the optimal solution is obtained, using the same recovery scheme as for CEEF; all other variables are recovered in the same way.
Complexity Analysis: when the CEEF algorithm is used to solve the problem, the complexity is O(I^2(M + 1)). For the proposed ADMM-based distributed algorithm, the local variable update has computational complexity O(I^3), the global variable update O(I(M + 1)), and the Lagrange multiplier update O(I). Therefore, the computational complexity of each ADMM iteration is O(I^3 + I(M + 1)).

Simulation Results
In this section, the proposed algorithms are compared through simulation experiments. The simulation consists of 3 base stations and 27 randomly placed users. Each base station covers a radius of 500 m, and the users are randomly distributed in the overlapping coverage area. The transmission link bandwidth is 15 MHz, the noise power is σ² = −143 dBW, and the path loss model is h = 128.1 + 37.6 log10(d), where d is the distance between the MEC server and the user terminal. The computing capacities of the user terminal, base station, and cloud server are 0.6 Gcycles/s, 5 Gcycles/s, and 10 Gcycles/s, respectively. For comparison, a noncooperative scheme [29] is simulated in addition to CEEF and ADMM. In the noncooperative scheme, computing tasks can be processed by the user terminal, a MEC server, or the cloud server, but cooperation between MEC servers is not considered: tasks cannot be forwarded to other MEC servers, and the max-min user energy efficiency cannot be achieved by optimizing resource allocation. Figure 2 shows the convergence of the CEEF, ADMM, and noncooperative algorithms; the abscissa is the number of iterations, and the ordinate is the energy efficiency. As the number of iterations increases, all three schemes stabilize at their respective values, verifying the convergence of the algorithms. The performance of the CEEF and ADMM algorithms is 30.76% higher than that of the noncooperative scheme.
This is because computing tasks in the noncooperative scheme cannot be forwarded among MEC servers, which increases energy consumption and decreases energy efficiency. Figure 3 shows the impact of the task's required computation on user energy efficiency for the CEEF, ADMM, and noncooperative algorithms when the transmission power of the user terminal is 0.2 W and 0.5 W; the abscissa is the computation required by the task, and the ordinate is the energy efficiency. The energy efficiency of all three schemes decreases gradually as the required computation increases, because more computation consumes more energy. The figure also shows that the energy efficiency of the CEEF and ADMM algorithms follows a consistent trend and exceeds that of the noncooperative scheme; the cooperative schemes therefore outperform the noncooperative scheme under varied computational loads. Figure 4 shows the impact of the task data size on user energy efficiency under the CEEF, ADMM, and noncooperative algorithms for transmission powers of 0.2 W and 0.5 W; the abscissa is the data size of the task, and the ordinate is the energy efficiency. As the data size increases, the efficiency of all three algorithms gradually decreases, because larger tasks consume more energy in transmission and computation.
The figure also shows that the energy efficiency of the CEEF and ADMM algorithms follows a consistent trend and exceeds that of the noncooperative scheme, so the CEEF and ADMM algorithms outperform the noncooperative scheme across data sizes. Figure 5 shows the impact of the user terminal's transmission power on user energy efficiency for the CEEF, ADMM, and noncooperative algorithms when the task data size is 0.2 Mb and 0.5 Mb; the abscissa is the transmission power, and the ordinate is the energy efficiency. As the transmission power increases, the energy efficiency of all three schemes gradually decreases: the energy consumed for transmission grows faster than the transmission rate, reducing energy efficiency. Again, the CEEF and ADMM algorithms follow a consistent trend and exceed the noncooperative scheme, so they perform better across transmission powers. Figure 6 compares the results with and without considering fairness; the two cases differ only in the optimization objective. Without fairness, the objective is to maximize the total energy efficiency, that is, the ratio of the total user rate to the total user energy consumption. The figure shows that without fairness, the energy efficiency gap between the best and worst users is large. We conclude that the max-min user energy efficiency objective considered in this paper better ensures fair access to resources for all users.
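As a reproducibility note on the setup, the link budget behind these curves can be sketched as follows. One assumption is hedged explicitly: the quoted path loss expression matches the 3GPP model in which d is measured in kilometres.

```python
import math

def linear_gain(d_km):
    """Linear channel gain from the simulation's path loss model
    h = 128.1 + 37.6*log10(d). Assumption: d is in kilometres, as in
    the 3GPP model this expression matches."""
    pl_db = 128.1 + 37.6 * math.log10(d_km)
    return 10 ** (-pl_db / 10)

NOISE_W = 10 ** (-143 / 10)   # sigma^2 = -143 dBW converted to watts

def uplink_snr(p_tx_w, d_km):
    """Received SNR for a terminal transmitting at p_tx_w watts."""
    return p_tx_w * linear_gain(d_km) / NOISE_W
```

At the cell edge (d = 0.5 km) the SNR is still comfortably positive for the 0.2 W and 0.5 W powers used in the figures, consistent with offloading remaining feasible throughout the coverage area.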

Conclusion
This paper develops a cloud network-edge network model and analyzes the fairness of user energy efficiency based on the max-min criterion. The user energy efficiency fairness optimization problem posed in this model is a mixed-integer nonconvex fractional programming problem, which is difficult to solve. Through generalized fractional programming theory, relaxation variables, and equivalent substitutions, the problem is transformed into a convex optimization problem, and CEEF- and ADMM-based energy efficiency fairness algorithms are proposed to solve it. Simulations test the proposed algorithms and verify that they can guarantee user energy efficiency fairness.

Data Availability
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.