New Application Task Offloading Algorithms for Edge, Fog, and Cloud Computing Paradigms

In the last few years, we have seen an exponential increase in the number of computation-intensive applications, which has made the fog and cloud computing paradigms popular among smart-chip-embedded mobile devices. These devices can partially offload computation tasks either to the fog system or to the cloud system. In this study, we design a new task offloading scheme that considers the challenges of future edge, fog, and cloud computing paradigms. To provide an effective solution to the task offloading problem, we focus on two cooperative bargaining game solutions—the Tempered Aspirations Bargaining Solution (TABS) and the Gupta-Livne Bargaining Solution (GLBS). To maximize application service quality, the proper bargaining solution must be selected for each situation. In the proposed scheme, the TABS method is used for time-sensitive offloading services, and the GLBS method is applied to ensure computation-oriented offloading services. The primary advantage of our bargaining-based approach is that it provides an axiom-based strategic solution to the task offloading problem while dynamically responding to the current network environment. Extensive simulation studies are conducted to demonstrate the effectiveness of the proposed scheme, and superior performance over existing schemes is observed. Finally, we outline prime directions for future work and potential research issues.


Introduction
Currently, billions of smart devices connect to the Internet in the form of the Internet of Things (IoT). IoT is a worldwide network based on standard communication protocols and a novel paradigm with access to wireless communication systems. It applies various technologies to provide promising fifth-generation (5G) service applications. Meanwhile, the evolution of 5G networks is becoming a major driving force for the growth of IoT. To connect billions of smart devices, 5G-based IoT infrastructure is expected to provide extended coverage, higher throughput, lower latency, and massive connection density. However, the management of such different kinds of control criteria is cumbersome and challenging for traditional network infrastructures that rely on conventional computing paradigms [1,2].
Despite the advances in the capacity of smart devices, mobile hardware is still resource-poor compared to system server hardware. Constrained by battery life, storage limitations, computation capacity, and wireless bandwidth scarcity, resource-poor mobile devices have difficulty supporting content-rich or computation-intensive applications such as real-time image processing for video games, augmented reality, and location-based services. Cloud computing has been introduced as a promising paradigm to overcome this difficulty. By employing the cloud computing method, computing, data storage, and mass information processing can be offloaded to the cloud servers while ensuring the reliability and availability of the application services. This new paradigm is termed the Cloud of Things (CoT), which helps in creating an extended portfolio of future network architectures [3,4].
CoT offers an efficient computing model where system resources can be shared as services through IoT. However, connecting to the remote cloud server causes communication latency, and the cloud cannot easily respond in real time to frequent network dynamics; this undermines the expected advantages of CoT. Usually, mobile devices can no longer afford to wait for the varying response time of a cloud-based computation service, especially under stringent demands on tolerated delay. Therefore, the rising tide is driving toward a new technology. Fog computing is a solution that subdues the shortcomings of cloud computing. It is a highly distributed platform with fog computing nodes, such as cloudlets, located at the edge of the Internet. As mobility-enhanced small-scale cloud datacenters, the main purpose of cloudlets is to arbitrate resource-intensive and interactive mobile applications with lower latency. This new architectural structure, called the Fog-of-Things (FoT), extends the CoT paradigm to leverage recent developments in future networks [5,6].
Initially, IoT devices were simply developed to collect and send data for analysis but lacked the system elements to perform complex computations on-site. However, recent advancements in embedded systems-on-a-chip have significantly increased the number of intelligent devices that possess some resources to partially run computation-intensive applications [2]. This trend has extended the potential of IoT and paves the way for a new paradigm, called the Edge-of-Things (EoT). Actually, there is a high possibility that the CoT and FoT paradigms will encounter more challenges in relation to network dynamics, resulting in a high overhead in network response time and leading to time latency and traffic burden. To avoid these problems while achieving efficient resource utilization, the EoT paradigm may become necessary in future network services [7].
While the FoT and EoT paradigms have some similarities, there is a major difference. Both paradigms involve pushing intelligence and processing capabilities down closer to where services originate. Therefore, they share similar objectives: (i) to reduce the amount of data sent to the cloud, (ii) to decrease network and Internet latency, and (iii) to improve system response time in remote mission-critical applications. However, the key difference between FoT and EoT is exactly where intelligence and computing power are placed. FoT pushes intelligence down to the local area network level of the network architecture, processing data in a fog node or IoT gateway. This approach can achieve a number of benefits, including on-demand service, resource pooling, and virtualization. Metaphorically speaking, fog computing sits between physical things and cloud computing, just as in nature, where fog exists between the ground and clouds. Contrary to FoT, EoT pushes the intelligence, processing power, and communication capabilities of an edge gateway or appliance directly into devices. To ensure Quality of Experience (QoE) in terms of latency, bandwidth, and security, applications running on the EoT paradigm perform actions locally before connecting to the cloud, thus reducing network overhead issues as well as security and privacy issues. Therefore, EoT can bring new benefits such as early data resolution; responsive management on the edge; and improved latency, robustness, and security. However, due to cost and energy consumption issues, edge devices typically have limited capacities [7,8].
Fortunately, the CoT, FoT, and EoT paradigms are not incompatible in nature; in fact, they compensate for each other's limitations. More importantly, the future network concept is the convergence of the CoT, FoT, and EoT paradigms; this has inspired us to seek a joint solution to maximize the performance of future networks. In this study, we propose a new task offloading control scheme by considering the merits of the CoT, FoT, and EoT paradigms. Based on the combined design of different paradigm operations, our integrated approach can obtain a synergy effect while attaining an appropriate performance balance. However, it is extremely challenging to combine the CoT, FoT, and EoT paradigms into a holistic scheme. Therefore, a new solution concept is required.
Since the 1950s, game theory has been used to study strategic interactions. Whenever the choices made by two or more individuals have an effect on each other's gains or losses, and hence their actions, the interaction between them is game-theoretic in nature. In recent years, there has been a remarkable increase in work at the interface of game theory and many academic research fields, from economics to computer science. In particular, game theory has been playing an increasingly visible role in network management, in areas such as resource management, routing mechanisms, power control, and traffic modeling. There is a major reason for this: the Internet calls for the analysis and design of systems that span multiple entities with diverging information and interests. Game theory, for all its limitations, is by far the most developed theory of such interactions [9].
1.1. Motivation. The aim of this study is to propose a novel task offloading control scheme for a hierarchical future network system. To tackle the task offloading problem in mixed edge-fog-cloud computing, we employ the CoT, FoT, and EoT paradigms, and jointly consider the combination of mobile devices, cloudlets, and a cloud system. They need to coexist and synthetically complement each other to meet the diverse requirements of future networks. To investigate the strategic interactions among cloud, fog, and edge computing paradigms, we formulate mobile device/cloudlet/cloud-connected cooperative games, and adopt the Tempered Aspirations Bargaining Solution (TABS) and the Gupta-Livne Bargaining Solution (GLBS). Both are based on bargaining solution guidelines, and each individual mobile device and its corresponding cloudlet and cloud server work cooperatively to negotiate their conflicting interests while guaranteeing fairness and efficiency.
The main challenge of our game-based task offloading approach is to retain generality for future networks. Future networks will certainly adopt new computing paradigms, and a three-layer hierarchical network system may grow considerably more complex. Therefore, the CoT, FoT, and EoT paradigms could be replaced by new computing models. To adapt to these dynamics, our proposed task offloading control scheme is not fixed to specific computing paradigms but is designed to be dynamic and flexible, adaptively responding to new future network infrastructures. This is the main advantage of our proposed scheme over traditional task offloading schemes.

1.2. Major Contributions.
To fulfill the promised advantages of three-layer hierarchical network platforms, several technical issues and challenges should be addressed. In this study, our work addresses the task offloading problem by adopting TABS and GLBS. To model the interactions among mobile devices, cloudlets, and a cloud system, we design a new cooperative bargaining game process. Using two different bargaining solutions, the proposed scheme effectively allocates the hierarchical network resources in a fair-efficient manner. With self-adaptability and real-time effectiveness, a well-balanced solution can be obtained while leveraging the full synergy of the CoT, FoT, and EoT paradigms. In summary, the contributions of this paper are as follows:

(i) Employing the CoT, FoT, and EoT paradigms: motivated by future IoT environments, we assume a three-layer hierarchical network system employing the CoT, FoT, and EoT paradigms. Depending on their different computing characteristics, they work together toward an appropriate network performance.

(ii) Computation-intensive task offloading based on GLBS: according to GLBS, a computation-intensive task is offloaded to the fog and cloud servers. This approach exploits the potential benefit gained from its delay-tolerant characteristics.

(iii) Time-sensitive task offloading based on TABS: based on TABS, a time-sensitive task is offloaded to the fog and cloud servers. This approach can maximize the expected payoff obtained from its delay-sensitive characteristics.

(iv) Joint design to leverage synergistic and complementary features: we explore the interaction of the GLBS and TABS methods to balance contradictory requirements. The main idea of our approach lies in its responsiveness to the reciprocal combination of different bargaining solutions.

(v) Reciprocal negotiation and self-adaptability: from the viewpoint of practical operations, the main features of our bargaining-based task offloading scheme are reciprocal negotiation and self-adaptability. Under dynamic hierarchical network environments, these characteristics are generic and applicable to real-world operations while ensuring a fair-efficient solution.

(vi) Performance analysis: the major challenge of our proposed scheme is to strike an appropriate performance balance fairly and efficiently. A numerical simulation study shows that a timely and effective solution is dynamically obtained based on the joint bargaining solutions.

Beyond the feasible combination of optimality and practicality, the possible advantages of our approach include adaptability, flexibility, and responsiveness to current network system conditions. To the best of our knowledge, little research has been conducted on bargaining-based task offloading algorithms for future hierarchical network systems.
1.3. Organization. The remainder of this article is organized as follows. In Section 2, related research on cloud and fog computing-based task offloading problems is discussed. In Section 3, we provide a three-layer hierarchical network system infrastructure for the task offloading problem and formulate two cooperative bargaining game models for different kinds of application services. Then, we design our proposed scheme aiming at maximizing the system performance. We also provide the primary steps of the proposed scheme for readers' convenience. In Section 4, we evaluate the performance of our proposed scheme through extensive simulations. Finally, concluding remarks are drawn in Section 5 along with future work.

Related Work
Cloud, fog, and edge computing mechanisms, which are kinds of Internet-based paradigms, have attracted great attention in a large body of literature. In [10], the Fair and Energy-Minimized Task Offloading (FEMTO) algorithm is proposed based on a fairness scheduling metric that takes three important characteristics into consideration: the task offloading energy consumption, the fog node's historical average energy, and the fog node priority. Based on the fairness scheduling metric, the FEMTO algorithm determines the task offloading solution, including the target fog node, the terminal node transmission power, and the subtask size, in a fair and energy-minimized manner. Finally, extensive simulations are carried out in a fog-enabled IoT network to investigate the performance of the proposed FEMTO algorithm [10].
The article [11] studies the problem of dynamic offloading and resource allocation with prediction in a fog computing system with multiple tiers. By formulating it as a stochastic network optimization problem, the Predictive Offloading and Resource Allocation (PORA) algorithm is developed. The PORA algorithm exploits predictive offloading to minimize power consumption with a queue stability guarantee. Theoretical analysis and simulation results show that the PORA algorithm incurs near-optimal power consumption with a guarantee of queue stability. Furthermore, it requires only a mild amount of predictive information to achieve a notable latency reduction, even with prediction errors [11].
Yousefpour et al. introduced a general framework for IoT-fog-cloud applications and proposed a delay-minimizing collaboration and offloading policy for fog-capable devices that is aimed at reducing the service delay for IoT applications [12]. The authors developed an analytical model to evaluate their policy and showed how the proposed framework helps to reduce IoT service delay. In contrast to the existing schemes, their proposed policy considers IoT-to-cloud and fog-to-cloud interactions and also employs fog-to-fog communications to reduce the service delay by sharing load. For load sharing, it considers not only the queue length but also different request types that have various processing times [12].
The authors in [13] designed a more efficient and secure cloud storage scheme based on fog computing. Part of the computing and storage work is offloaded to the fog servers, and the Reed-Solomon code is also introduced to protect the privacy of users. Therefore, data privacy can be guaranteed. To decrease the communication cost and reduce latency, they developed a differential synchronization algorithm, which provides a feasible solution but increases the workload on the users' devices and the cloud server. By offloading part of this work to the fog server, the efficiency of the entire process can be improved. Finally, the experimental results show that their architecture is feasible and has better performance than the other methods [13].
The Joint User equipment and Fog Optimization (JUFO) scheme is designed to minimize the energy consumption of the user's equipment and fog system based on the priority distribution of cloud tasks while maintaining service time constraints [14]. It is based on the popularity distribution of cloud tasks and an energy consumption model. A network system consisting of a user's equipment, a fog server, and a remote cloud server is considered, where the user's equipment sends requests for cloud services, and the fog server and the remote cloud server process the requested service. In order to solve the optimization problem, the energy consumption and service time of each network component are mathematically modeled. The advantage of the JUFO scheme comes from using the profile of each cloud task in the optimized fog server offloading control scheme. Simulation results show that the JUFO scheme can provide significant savings in energy consumption while supporting real-time service requirements in regions with heavy workloads [14].
The authors in [15] proposed the Joint Radio and Computational Resource Allocation (JRCRA) scheme. The JRCRA scheme investigates a joint radio and computational resource allocation problem to optimize the system performance and improve user satisfaction. By communicating with the users, cloud providers try to find suitable fog nodes for offloading users' computation tasks, together with the assignment of a radio spectrum, to satisfy users' requirements. With the objective of optimizing the users' satisfaction, they formulate this joint resource allocation as a mixed-integer nonlinear programming problem. The interactions among the IoT users, service providers, and fog nodes are modeled based on the matching game framework, and the transmission quality, service latency, and maximum power requirement are effectively addressed. Through the simulation results, they confirm that their proposed approach achieves distributive, close-to-optimal performance from both the users' perspective and the system's view [15].
The Hierarchical Fog-Cloud Computation Offloading (HFCCO) scheme in [16] focuses on the allocation of fog computing resources to IoT users in a hierarchical computing paradigm including fog and remote cloud computing services. The major goal of this scheme is to determine the offloading decision for each task arriving at the IoT users, where each user is interested in maximizing its own QoE. Utilizing a potential game model, the HFCCO scheme proves the existence of a pure Nash Equilibrium (NE) and develops an algorithm to obtain the NE. To mitigate the time complexity of obtaining the NE, a near-optimal resource allocation algorithm is also provided and shown to reach an ε-NE in polynomial time. Numerical analysis shows that the IoT users can obtain a higher QoE, and the computation time of delay-sensitive IoT applications is reduced significantly when utilizing the computing resources of fog nodes. These results demonstrate the ability of fog nodes to provide low-latency computing services in IoT systems [16].
In [17], the Fog-Cloud Optimal Workload Allocation (FCOWA) scheme is proposed for the tradeoff between power consumption and transmission delay in the fog-cloud computing system. To provide a systematic framework of computation and communication codesign in the fog-cloud computing system, the FCOWA scheme models the power consumption function and delay function of each part of the fog-cloud computing system and formulates the workload allocation problem. This problem can be decomposed into three subproblems of three corresponding subsystems, which are solved via existing optimization techniques. Extensive simulations show that the fog computing mechanism can significantly improve the performance of the cloud computing mechanism while sacrificing modest computation resources to save communication bandwidth and reduce transmission latency [17].
Chen et al. developed a novel traffic-flow prediction algorithm that is based on long short-term memory with an attention mechanism to train mobile-traffic data in a single-site mode [18]. The proposed algorithm is capable of effectively predicting the peak value of the traffic flow. This predicted peak value is sent to a remote cloud. At the remote cloud, resources are dispatched and allocated dynamically based on traffic adaptation using a cognitive engine and an intelligent mobile-traffic module to balance the network load. For a multisite case, they also presented an intelligent IoT-based mobile-traffic prediction-and-control architecture capable of dynamically dispatching communication and computing resources. With the support of the cognitive engine and mobile-traffic control modules, the mobile-traffic flow for the entire network is predicted and controlled intelligently [18].
The paper [19] proposes an intelligent task offloading scheme, called the iTask-Offloading scheme, for a cloud-edge collaborative system. The architecture of iTask-Offloading includes the local device layer, the edge cloud layer, the remote cloud layer, and the cognitive engine; it not only recognizes the resources of the local device, the edge cloud, and the remote cloud, but it also understands the task of the intelligent application. The iTask-Offloading scheme is designed to combine the cognitive engine with the traditional cloud-edge collaborative system, and it provides fine-grained task offloading for the separability of intelligent application tasks to enable personalized task offloading. Finally, a real testbed is built to show that the iTask-Offloading scheme has less latency than traditional cloud computing [19].
In [20], the authors proposed a new Edge Cognitive Computing (ECC) architecture that deploys cognitive computing at the edge of the network to provide dynamic and elastic storage and computing services. In addition, they proposed an ECC-based dynamic cognitive service-migration mechanism that considers both the elastic allocation of the cognitive computing services and user mobility, to provide a mobility-aware dynamic service-adjustment scheme. Finally, they developed an ECC-based test platform to evaluate system performance; the results effectively demonstrate that edge cognitive computing realizes the cognitive information cycle for human-centered reasonable resource distribution and optimization [20].
Chen and Hao investigated the task offloading problem in an ultradense network aiming to minimize the delay while saving the battery life of a user's equipment [21]. Specifically, they formulated a task offloading problem as a mixed integer nonlinear program and transformed this optimization problem into two subproblems, i.e., a task placement subproblem and a resource allocation subproblem. Based on the solution of the two subproblems, they proposed an efficient offloading scheme. Simulation results have shown that their proposed scheme is more efficient compared to the random and uniform computation offloading schemes [21].
The paper [22] proposes a new mobile cloudlet-assisted service mode named Opportunistic task Scheduling over Co-located Clouds (OSCC), which achieves flexible cost-delay tradeoffs between the conventional remote cloud service mode and the mobile cloudlet service mode. Then, this work performs detailed analytic studies for the OSCC mode and solves the energy minimization problem by compromising between the remote cloud mode, the mobile cloudlet mode, and the OSCC mode. In addition, this study introduces two different kinds of task allocation schemes, i.e., dynamic allocation and static allocation. Under both the mobile cloudlet mode and the OSCC mode, dynamic allocation exhibits lower cost than static allocation [22].

The Bargaining-Game-Based Task Offloading Algorithms

In this section, we describe the three-layer hierarchical network architecture based on the CoT, FoT, and EoT paradigms. It presents the different emerging technologies, which can be combined to approximate the optimal system performance. According to the cooperative game approach, we can obtain an effective bargaining solution while adapting to fast-changing future network environments.

3.1. Hierarchical Network Architecture for Task Offloading Services. In this study, we consider a future network system with a hierarchical computing structure and discuss the functional capabilities of different computing paradigms along with their physical properties. The main objective of the hierarchical architecture is to provide a better QoE for end users. Edge devices may either perform their tasks locally or offload them to computing servers, which are the cloudlets in close proximity and the remote cloud server. In our proposed scheme, we address the task offloading problem according to cooperative bargaining models, which are formulated through the cooperation, coordination, and collaboration of the device, the cloudlet, and the cloud server. As shown in Figure 1, we assume a three-layer hierarchical network system comprised of multiple IoT devices, such as smart phones, surveillance cameras, personal digital assistants, laptops, and on-board units, denoted as the set of EoT devices D = {D_1, D_2, ⋯, D_n}. Each D_i (1 ≤ i ≤ n) generates different application service requests {A_1, A_2, A_3, ⋯} and may offload certain amounts of computing tasks to the fog nodes, denoted as the set of cloudlets F = {CL_1, CL_2, ⋯, CL_m}, and one cloud server (ℂ). D_i (1 ≤ i ≤ n), CL_j (1 ≤ j ≤ m), and ℂ have their computation power capacities, i.e., P_{D_i}, P_{CL_j}, and P_ℂ, respectively, which are consumed by a monotonically increasing function of the computation amount. In reality, the P_{D_i}, P_{CL_j}, and P_ℂ resources are limited and contended. When many computation-intensive applications are executed, these resources become exhausted rapidly. Due to this resource scarcity, it is impossible to guarantee all applications' needs. To maximize the overall system performance, it is necessary to effectively utilize these computation resources for different application requests.
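To make the capacity model concrete, the following minimal sketch treats each of D_i, CL_j, and ℂ as a node whose computation power is consumed as it accepts offloaded computation. The class, names, and capacity values are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of the three-layer capacity model: each node's
# computation power (P_D, P_CL, P_C) is drained by the computation
# amount it accepts; names and capacities are assumed for illustration.

class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # total computation power
        self.load = 0.0           # currently consumed power

    def can_accept(self, amount):
        # Consumed power grows monotonically with accepted computation.
        return self.load + amount <= self.capacity

    def accept(self, amount):
        if not self.can_accept(amount):
            raise RuntimeError(f"{self.name}: computation power exhausted")
        self.load += amount

device = Node("D_1", capacity=10.0)      # EoT device
cloudlet = Node("CL_1", capacity=100.0)  # FoT cloudlet
cloud = Node("C", capacity=10000.0)      # CoT cloud server

cloudlet.accept(40.0)
print(cloudlet.can_accept(70.0))  # False: 40 + 70 exceeds 100
```

Such a capacity check is where the resource scarcity described above surfaces: once many computation-intensive tasks are accepted, further requests must be redirected to another layer.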
Despite the obvious advantages of using offloading services to process IoT applications, the future network system still suffers from degraded QoE caused by service delays. Different application services not only require different computation intensities but also have different delay sensitivities. Since the future network system covers a large geographical area from the edge device (D) to the central cloud (ℂ), the communication delay should be taken into account. According to the required QoE, various application services can be categorized into two classes: computation-intensive applications and delay-sensitive applications. To make offloading decisions, we must consider the required QoE. Therefore, the resource management strategy becomes a key factor in enhancing future network system performance while ensuring service constraints.
To tackle the future network task offloading problem, we adopt two cooperative bargaining solutions: the Tempered Aspirations Bargaining Solution (TABS) and the Gupta-Livne Bargaining Solution (GLBS) [23]. Each individual mobile device offloads its application task (A) while partitioning the computation amount (Γ_A) into three parts, i.e., P^D_A, P^CL_A, and P^ℂ_A; they are assigned to its own device D, the corresponding CL, and ℂ, respectively. To adaptively partition Γ_A, the main ideas of TABS and GLBS are applied. Based on the two bargaining solutions, we can obtain various benefits in a fair-efficient way.
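As a hypothetical illustration of the partitioning described above, the sketch below splits a task's computation amount Γ_A into the three parts P^D_A, P^CL_A, and P^ℂ_A in proportion to a given bargaining outcome. The proportional rule and all numeric values are illustrative assumptions, not the scheme's actual bargaining computation:

```python
# Hypothetical illustration: split a task's computation amount Gamma_A
# into device/cloudlet/cloud parts (P^D_A, P^CL_A, P^C_A) in proportion
# to a bargaining outcome. The proportional rule is an assumption here.

def partition_task(gamma_a, shares):
    """gamma_a: total computation amount; shares: per-player weights."""
    total = sum(shares)
    return tuple(gamma_a * s / total for s in shares)

# Example outcome favoring the cloudlet and the cloud server.
p_d, p_cl, p_c = partition_task(gamma_a=900.0, shares=(1.0, 3.0, 5.0))
print(p_d, p_cl, p_c)  # 100.0 300.0 500.0
```

Whatever rule produces the shares, the three parts always sum back to Γ_A, so the task is fully covered by the three layers.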

3.2. Tempered Aspirations and Gupta-Livne Bargaining Solutions. Let N be the set of potential bargainers, and let ℝ, ℝ_+, and ℝ_{++} denote the sets of all, nonnegative, and positive real numbers, respectively. ℝ^n is the n-fold Cartesian product of ℝ. We use conventional notation for the comparison of vectors: x ≥ y means that x_i ≥ y_i for all 1 ≤ i ≤ n; x > y indicates that x ≥ y and x ≠ y; and x ≫ y means x_i > y_i for all 1 ≤ i ≤ n. Let co(A) denote the convex hull of a set A in ℝ^n; it is mathematically expressed as co(A) = {z ∈ ℝ^n | z = (α × x) + ((1 − α) × y), x, y ∈ A and α ∈ [0, 1]}. Let cch(A) denote the convex and comprehensive hull of A, cch(A) = {y ∈ ℝ^n | y ≤ z, z ∈ co(A)}. If N has more than one member, for every x ∈ ℝ^n and every i ∈ N, define x_{−i} = x_{N∖{i}} [23,24]. A disagreement point (d) is a vector d = (d_1, ⋯, d_n) that is expected to be the result if bargainers cannot reach an agreement. A bargaining problem for N is a pair (S, d) such that S is a bargaining set for N, d ∈ S, and there exists an x ∈ S satisfying x ≫ d. The aspiration vector a(S, x) is defined componentwise by a_i(S, x) = max{y_i | y ∈ S and y ≥ x} for each i ∈ N [23].
The ideal point of the problem (S, d) represents the bargainers' expectations before the bargaining negotiation, and it is defined by a(S, d). Denote the family of all bargaining problems for N by Σ^N. The reference point r ∈ S satisfies r ≥ d. A solution concept on Σ^N is a function ϕ that associates with each triple (S, d, r) ∈ Σ^N a unique outcome ϕ(S, d, r) ∈ S [23].
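The aspiration (ideal) point can be computed directly when the bargaining set is approximated by a finite sample of payoff vectors. The following hedged sketch, with illustrative data, applies the componentwise rule a_i(S, x) = max{y_i | y ∈ S, y ≥ x}:

```python
# Sketch: aspiration (ideal) point a(S, x) of a bargaining problem,
# computed over a finite sample of the bargaining set S (an assumption
# made for illustration): a_i(S, x) = max{ y_i | y in S, y >= x }.

def aspiration_point(S, x):
    """S: list of payoff vectors (tuples); x: dominated point (e.g., d or r)."""
    feasible = [y for y in S if all(y_i >= x_i for y_i, x_i in zip(y, x))]
    return tuple(max(y[i] for y in feasible) for i in range(len(x)))

# Two-bargainer example with a sampled bargaining set.
S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.8, 0.2)]
print(aspiration_point(S, (0.0, 0.0)))  # (1.0, 1.0): ideal point a(S, d)
print(aspiration_point(S, (0.5, 0.0)))  # (1.0, 0.5): tempered by a reference
```

Evaluating the same rule at d yields the ideal point, while evaluating it at the reference point r yields the tempered aspirations used by TABS below.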
In 2011, P. V. Balakrishnan et al. proposed a new bargaining solution, called the Tempered Aspirations Bargaining Solution (TABS). With the reference point (r), TABS is defined for every (S, d, r) ∈ Σ^N as the unique weakly Pareto-optimal point of S on the segment joining the disagreement point d and the tempered aspiration point a(S, r) [23]. If a bargaining problem is translated so that the disagreement point is at the origin, TABS is the only point along the frontier of S proportional to the aspiration vector a(S, r). TABS can be axiomatically characterized using the following axioms: Weak Pareto-Optimality, Symmetry, Scale Invariance, r-Restricted S-Monotonicity, Irrelevance of Trivial Reference Points, and S-Continuity [23].
(i) Weak Pareto-Optimality (WPO): for every bargaining set S, define its Pareto-optimal set as PO(S) = {y ∈ S | x > y implies x ∉ S}. Similarly, its weak Pareto-optimal set is defined as WPO(S) = {y ∈ S | x ≫ y implies x ∉ S}. For every (S, d, r) ∈ Σ^N, ϕ(S, d, r) ∈ WPO(S).

In 1988, Gupta and Livne proposed another bargaining solution, called the Gupta-Livne Bargaining Solution (GLBS). This solution is "dual" to TABS in the sense that it exchanges the roles played by the reference and disagreement points. In the Gupta-Livne approach, the disagreement point d has no role to play as a threat in the bargain; it serves only to form the aspirations of the players through the construction of the ideal aspiration point. For every (S, d, r) ∈ Σ^N, the GLBS is defined as the unique weakly Pareto-optimal point of S on the segment joining the reference point r and the ideal point a(S, d) [23]. The main difference between TABS and GLBS is the role of the disagreement point d. In TABS, d is used as a reference vector from which proportional payoffs are measured; in GLBS, it is used only to set the ideal aspiration point. Both solution concepts are illustrated in Figure 2 [23].
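To illustrate the geometric difference between the two solutions, the following sketch computes TABS and GLBS for a simple two-bargainer set S = {(x, y) : x² + y² ≤ 1, x, y ≥ 0}. The quarter-disk set, point values, and helper names are assumptions chosen for illustration, not the bargaining sets used later in the paper; TABS traces the line from d toward a(S, r), while GLBS traces the line from r toward a(S, d):

```python
# Hedged sketch: TABS vs. GLBS on an assumed two-bargainer set
# S = {(x, y) : x^2 + y^2 <= 1, x, y >= 0}, whose weak Pareto frontier
# is the arc x^2 + y^2 = 1. Helper names are illustrative, not from [23].
import math

def aspiration(p):
    # Ideal point a(S, p) for this S: maximize each coordinate over
    # points of S that dominate p.
    return (math.sqrt(1.0 - p[1] ** 2), math.sqrt(1.0 - p[0] ** 2))

def frontier_point(start, target):
    # Intersect the ray from `start` toward `target` with the frontier
    # x^2 + y^2 = 1 (positive root of a quadratic in the ray parameter).
    dx, dy = target[0] - start[0], target[1] - start[1]
    a = dx * dx + dy * dy
    b = 2.0 * (start[0] * dx + start[1] * dy)
    c = start[0] ** 2 + start[1] ** 2 - 1.0
    t = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return (start[0] + t * dx, start[1] + t * dy)

d = (0.0, 0.0)  # disagreement point
r = (0.2, 0.5)  # reference point with r >= d, r in S

tabs = frontier_point(d, aspiration(r))  # line from d toward a(S, r)
glbs = frontier_point(r, aspiration(d))  # line from r toward a(S, d)
print(tabs, glbs)  # two distinct weak Pareto-optimal points
```

With d at the origin, the TABS point is proportional to a(S, r), matching the proportionality property stated above, while GLBS lands at a different frontier point because it starts from r and aims at the ideal point a(S, d).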

3.3. The Proposed Application Task Offloading Algorithms. In this study, we design two bargaining games for task offloading services. First, the idea of TABS is adopted to implement the time-sensitive application offloading algorithm. To fair-efficiently offload the task computation, the offload decision process for A^{D_i}_k is formulated as a cooperative bargaining game G_TABS = {{D_i, CL_j, ℂ}, {P_{D_i}, P_{CL_j}, P_ℂ}, ⋯}:

(i) Players: in G_TABS, a smart device D_i ∈ D, the corresponding cloudlet CL_j ∈ F, and the cloud server ℂ are assumed as the game players {D_i, CL_j, ℂ} that process the task offloading service.

(ii) Computation powers of players: D_i, CL_j, and ℂ have the computation powers P_{D_i}, P_{CL_j}, and P_ℂ, respectively; they are assumed to be the total CPU capacities of the game players.

(iii) Application task: A^{D_i}_k is generated from D_i, and the total computation amount of A^{D_i}_k is Γ_{A_k}.

(iv) Strategies: each player has a finite computation capacity. The set of strategies for each player consists of its discrete computation power levels.

To quantify service satisfaction, the utility functions of the players in TABS are derived in terms of χ_{D_i}, χ_{CL_j}, and χ_ℂ, the computation amounts assigned to D_i, CL_j, and ℂ, respectively. ψ_{D_i}, ψ_{CL_j}, and ψ_ℂ are coefficient parameters representing the QoE of the D_i, CL_j, and ℂ computation services, respectively, and β_{D_i}, β_{CL_j}, and β_ℂ are the current computation loads of D_i, CL_j, and ℂ, respectively. In the developed bargaining game, each player is a member of a team willing to compromise with the other players. According to their utility functions and expected payoffs, the team players make a collective decision to obtain a total optimal solution. In G_TABS, the reference point r_{D_i,CL_j,ℂ}(S, x) is defined in terms of φ_D, φ_CL, and φ_ℂ, the control factors that decide the reference point values of D_i, CL_j, and ℂ, respectively, and m_{A_k}, the minimum computation capacity for the A_k task offloading service.
In G_TABS, the aspiration point of TABS, i.e., a_{D_i,CL_j,ℂ}(S, x), is defined as follows: Based on the disagreement point d as a starting point, the line (L) toward the aspiration point a_{D_i,CL_j,ℂ}(S, x) is defined as follows: Simply, TABS can be viewed as a weak Pareto-optimal solution located in S as well as on the line L in (7). Geometrically, TABS is the intersection point (U_{D_i}(χ*_{D_i}), U_{CL_j}(χ*_{CL_j}), U_ℂ(χ*_ℂ)) between the bargaining set S and the line L; therefore, TABS must satisfy (8).

Second, the idea of GLBS is adopted to develop the computation-intensive application offloading algorithm. To adaptively offload the delay-tolerant task computation, the offload decision process for A^k_{D_i} is formulated as another cooperative game model G_GLBS = {{D_i, CL_j, ℂ}, {P_{D_i}, P_{CL_j}, P_ℂ}, …}. In the G_GLBS game, only the utility functions and aspiration points are defined differently; the other game elements are the same as in G_TABS. In G_GLBS, a_{D_i,CL_j,ℂ}(S, x) is dynamically calculated according to (1), and the utility functions of D_i, CL_j, and ℂ for the task A^k_{D_i} can be derived as follows: where ω_{CL_j} and ω_ℂ are the communication delay factors of CL_j and ℂ, respectively, σ is the system's basic time unit for the task offloading service, and Τ_{A_k} is the time deadline of A_k. Based on the reference point r_{D_i,CL_j,ℂ}(S, x) as a starting point, the line (L) toward the aspiration point a_{D_i,CL_j,ℂ}(S, x) is defined as follows: Simply, GLBS can be viewed as a weak Pareto-optimal solution located in S as well as on the line L in (12). Geometrically, GLBS is the intersection point (U_{D_i}(χ*_{D_i}), U_{CL_j}(χ*_{CL_j}), U_ℂ(χ*_ℂ)) between the bargaining set S and the line L; therefore, GLBS must satisfy (13).

3.4. Main Steps of Proposed Task Offloading Algorithm. In this study, we design a novel task offloading scheme for different kinds of applications, which can be categorized into two classes according to the required QoE: computation-intensive or time-sensitive applications.
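Both solutions can be computed the same way once their starting points are fixed: walk along the ray from the starting point (the disagreement point d for TABS, the reference point r for GLBS) toward the aspiration point until the boundary of the bargaining set is reached. The following is a minimal sketch under the assumption of a convex, comprehensive bargaining set given by a membership test; the toy set and the point values are illustrative, not the paper's utility model.

```python
def ray_boundary_point(start, aspiration, feasible, iters=60):
    """Bisect on t to find the farthest point of the segment
    start + t * (aspiration - start), t in [0, 1], that still lies in the
    bargaining set. For a comprehensive convex set, membership is monotone
    along the ray, so bisection converges to the boundary intersection."""
    point = lambda t: tuple(s + t * (a - s) for s, a in zip(start, aspiration))
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if feasible(point(mid)):
            lo = mid   # still inside S: move outward along the ray
        else:
            hi = mid   # outside S: pull back toward the start point
    return point(lo)

# Toy two-player bargaining set S = {u >= 0 : u1 + u2 <= 1} (illustrative).
feasible = lambda u: u[0] + u[1] <= 1.0 and min(u) >= 0.0
d = (0.1, 0.2)   # disagreement point (assumed values)
a = (0.9, 1.0)   # tempered aspiration point (assumed values)
tabs_point = ray_boundary_point(d, a, feasible)   # ≈ (0.45, 0.55)
```

The same routine yields the GLBS point by passing the reference point r instead of d as `start`, reflecting the "dual" roles of the two points described above.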
Different types of application services over future network systems not only require different QoE but also need different network control strategies. Based on these application characteristics, we dynamically select the most suitable bargaining solution to address the task offloading problem. In the proposed scheme, the basic concepts of TABS and GLBS are adopted to distribute the computation amount of each application task. Computation-intensive but delay-tolerant applications can ultimately be executed without offloading services. Therefore, it is reasonable that the task offloading bargaining solution is measured from the reference point as a starting point; GLBS is suitable for these services. For time-sensitive and delay-constrained applications, the service is worthless if the time deadlines cannot be met. Therefore, it is appropriate that the task offloading bargaining solution is measured from the disagreement point as a starting point; TABS is appropriate for these services. By a sophisticated combination of these two bargaining solutions, our cooperative game-based approach approximates a well-balanced performance among conflicting requirements. The primary steps of the proposed scheme are described as follows and are illustrated in Figure 3:

Step 1. Control parameters and system factors are determined by the simulation scenario in Section 4 and Table 1.
Step 2. At each time period, individual mobile devices D generate application tasks; different kinds of applications are equally generated.
Step 3. If a computation-intensive application A is generated, the GLBS is used to process the task offloading service. According to (1), (3), (9)-(12), and (13), the computation amount Γ_A of the application task is effectively distributed to D, CL, and ℂ.
Step 4. If a time-sensitive application A is generated, the TABS is used to process the task offloading service. According to (2), (4)-(7), and (8), the computation amount Γ_A of the application task is dynamically distributed to D, CL, and ℂ.
Step 5. Based on the interactive process, the current computation loads of the device, the cloudlet, and the cloud server, i.e., β^C_D, β^C_CL, and β^C_ℂ, respectively, are monitored in a real-time online manner. This information is used to calculate the utility function of each game player.
Step 6. The system constantly self-monitors the current network situation. If a new task offloading service is requested, a new bargaining process is retriggered; the system proceeds to Step 3 for the next game iteration.
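Steps 3 and 4 amount to a simple dispatch on the application class. In the sketch below, the splitting ratios inside the stub solvers are placeholders for illustration only, not the actual bargaining outcomes of (8) and (13).

```python
def solve_tabs(task):
    """Stand-in for the TABS outcome (8): a placeholder split that keeps
    more work near the device to respect the time deadline (assumed ratios)."""
    g = task["gamma"]   # total computation amount of the task
    return {"device": 0.2 * g, "cloudlet": 0.5 * g, "cloud": 0.3 * g}

def solve_glbs(task):
    """Stand-in for the GLBS outcome (13): a placeholder split that pushes
    heavy computation toward the cloud (assumed ratios)."""
    g = task["gamma"]
    return {"device": 0.1 * g, "cloudlet": 0.3 * g, "cloud": 0.6 * g}

def offload(task):
    """Steps 3-4: select the bargaining solution by application class."""
    solver = solve_glbs if task["cls"] == "computation-intensive" else solve_tabs
    return solver(task)

split = offload({"cls": "computation-intensive", "gamma": 100.0})
```

Whatever the solvers return, the dispatch guarantees that the full computation amount Γ_A is distributed across the three tiers.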

Simulation Setup.
In this section, we evaluate the performance of our proposed protocol and compare it with that of the JRCRA [15], HFCCO [16], and FCOWA [17] schemes. To ensure a fair comparison, the following assumptions and system scenario are used:

(i) The simulated hierarchical network system consists of 10 cloudlets (m = 10) and 100 mobile devices (n = 100).
(ii) New application requests arrive according to a Poisson process with rate ρ; the offered load is varied from 0 to 3.0.

(iii) Mobile devices are distributed randomly over the network coverage area, and we assume the absence of physical obstacles in the experiments.

(iv) For the mobile device, cloudlet, and cloud computation capacities, i.e., P_D, P_CL, and P_ℂ, we assume CPU computing powers of 5 GHz, 100 GHz, and 1000 GHz, respectively.

(v) Each mobile device selects its closest cloudlet for the task offloading service.

(vi) We assume that 10% of P_D, P_CL, and P_ℂ may be consumed to sustain the basic operations of a mobile device, a cloudlet, and the cloud server, respectively.

(vii) Computation-intensive applications and time-sensitive applications are equally generated.

(viii) To reduce the computation complexity, the computation amount is specified in terms of the basic computation unit, i.e., m, where one m is the minimum computation capacity (e.g., 100 MHz) for the offloading service. Therefore, for practical implementations, the computation amount distribution is negotiated discretely in multiples of one m.

To demonstrate the validity of our proposed method, we measured the task delay-out probability, the normalized throughput of edge devices, and the fairness of edge devices with respect to their payoffs. Table 1 shows the control parameters and system factors used in the simulation.
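Assumptions (ii) and (vii) can be reproduced with a small workload generator; the parameter values below are illustrative.

```python
import random

def generate_tasks(rho, horizon, seed=0):
    """Generate application tasks with exponential inter-arrival times,
    i.e., a Poisson process of rate rho (assumption (ii)). Each task is
    computation-intensive or time-sensitive with equal probability
    (assumption (vii))."""
    rng = random.Random(seed)   # seeded for reproducible simulation runs
    t, tasks = 0.0, []
    while True:
        t += rng.expovariate(rho)   # exponential inter-arrival gap
        if t >= horizon:
            return tasks
        cls = rng.choice(["computation-intensive", "time-sensitive"])
        tasks.append({"time": t, "cls": cls})

tasks = generate_tasks(rho=2.0, horizon=1000.0)
```

With rate ρ = 2.0 over a horizon of 1000 time units, roughly 2000 tasks are generated, split about evenly between the two application classes.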

Results and Discussion
In Figure 4, we evaluate the task delay-out probability of the four methods. As a criterion of QoE assessment, the task delay-out probability measures how many application tasks fail to meet their time delay constraints; it is a key performance evaluation factor for future network operation. The failure ratio of all schemes increases with the task request rate. This is reasonable, since higher task request rates exhaust the system resources, which makes the task delay-out probability increase. However, we observe a considerable performance advantage for the proposed scheme. Our bargaining-based approach shares the future network system resources fairly and efficiently to improve the service quality. Therefore, we can maintain a stable performance superiority under different application load intensities.
The normalized throughput of edge devices, displayed in Figure 5, represents the resource efficiency of the hierarchical network system; this is another main criterion for the performance evaluation. As can be observed, the performance trend of all schemes is similar. Typically, a higher system throughput increases the network capacity, which is more profitable for the system operator. In the proposed scheme, each smart device adaptively offloads its tasks to the fog node and the cloud server based on a proper bargaining solution. In particular, we explore the reciprocal combination of the GLBS and TABS methods to balance contradictory requirements. Under dynamic network system environments, the advantages of our approach include adaptability, flexibility, and responsiveness to the current network system conditions. Therefore, we can effectively manage the three-layer hierarchical network system resources while satisfying the desirable features defined as axioms of the selected bargaining solution. For this reason, we can distribute the system resources to increase the throughput of mobile edge devices more effectively than the existing JRCRA, HFCCO, and FCOWA schemes.

Figure 6 depicts the fairness among edge devices. Fairness is a prominent issue for the operation of traffic-intensive networks, and it is analogous to social welfare in the resource allocation problem. Especially under heavy application load environments, fairness is a highly desirable property for the different edge devices. To characterize the fairness notion, we follow Jain's fairness index [25], which has been frequently used to measure fairness in network management. In the proposed scheme, we adopt the basic ideas of TABS and GLBS and share the system resources fairly while satisfying their fairness-oriented axioms. Therefore, in our proposed scheme, the actual outcome is fairly dealt out among the individual edge devices. As shown in Figure 6, the profit-sharing fairness of our approach is distinctly better than that of the existing schemes, which are designed as lopsided, one-way methods and do not effectively consider the fairness issue.
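For reference, Jain's fairness index [25] used in Figure 6 is (Σx_i)² / (n · Σx_i²); the payoff vectors below are illustrative.

```python
def jain_index(payoffs):
    """Jain's fairness index [25]: (sum x)^2 / (n * sum x^2). It equals 1
    when every device receives the same payoff and falls toward 1/n as a
    single device captures everything."""
    n = len(payoffs)
    s, sq = sum(payoffs), sum(x * x for x in payoffs)
    return (s * s) / (n * sq) if sq > 0 else 1.0  # all-zero vector treated as fair

fair = jain_index([5, 5, 5, 5])    # perfectly even payoffs -> 1.0
skew = jain_index([10, 0, 0, 0])   # one device takes all -> 1/n = 0.25
```

The index is scale-independent, which is why it is a convenient way to compare the payoff distributions of schemes with different absolute throughputs.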
The simulation results shown in Figures 4-6 demonstrate that the proposed scheme can attain an appropriate performance balance. In contrast, the JRCRA [15], HFCCO [16], and FCOWA [17] schemes cannot offer this outcome under widely different network application request situations.

Summary and Conclusions
In this paper, we investigate the application task offloading problem based on the edge, fog, and cloud computing paradigms. According to the 3-tier network hierarchy, i.e., the mobile device-cloudlet-cloud infrastructure, the task offloading problem is formulated and addressed using the cooperative bargaining game concept. In particular, we apply the TABS and GLBS methods to effectively offload the computation amount of each application task. By jointly considering the computation intensity and delay sensitivity, we adaptively select the most suitable bargaining method in an intelligent manner. For the evolution of future network application services, our bargaining-game-based approach is attractive and appropriate for operating real-world network systems. Performance evaluations are presented to illustrate the effectiveness of the proposed scheme and to demonstrate its superior performance over the existing JRCRA, HFCCO, and FCOWA schemes.
In the future, we would like to consider privacy issues, such as differential privacy, during the task offloading operation. Further, we will investigate mobile device mobility to better adapt to dynamic network environments; in that case, the required information exchange and communication overhead need to be carefully investigated. In addition, we will extend the scenario from a single cloudlet (fog node) to multiple cloudlets when an individual application task is offloaded. For this future work, interference management, control overhead, and load balancing will be considered.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The author, Sungwook Kim, declares that there are no competing interests regarding the publication of this paper.

Authors' Contributions
The author, Sungwook Kim, is the sole contributor to this research work.