Intelligent Computation Offloading for IoT Applications in Scalable Edge Computing Using Artificial Bee Colony Optimization

Most IoT-based smart systems require low latency and crisp response times for their applications. Achieving this high Quality of Service (QoS) becomes quite challenging when computationally intensive tasks are offloaded to the cloud for execution. Edge computing therein plays an important role by introducing low network latency, quick response


Introduction
Internet of Things (IoT) is reshaping the technological landscape of traditional systems. The concept of the IoT-based smart system is making its way from dreams to reality [1]. Smart city, smart healthcare, and smart industry have grasped the researchers' attention at a monumental scale [2][3][4][5]. However, smart systems generate a high volume of data in a short period of time and create several challenges such as data management, security, storage, and energy consumption [6]. In addition, the applications pertaining to these IoT-based systems are resource-constrained and require an edge-based architecture, whose idea is to deploy the edge as a micro data center that has the potential to provide cloud-like services, even in the absence of the cloud. In contrast, fog computing provides computing, storage, and other services through intermediate nodes such as routers and gateways, which are resource-limited. Edge computing utilizes the computation offloading concept, where resource-constrained IoT devices hand compute-intensive tasks over to the edge server, which executes the tasks and sends the results back to the IoT devices. Computation offloading not only saves energy in IoT devices but also extends the lifetime of these devices [11]. Figure 1 shows the edge computing architecture for IoT.
One way to achieve the required high QoS is via computation offloading, an application of edge computing [12]. However, computation offloading is a rather complex job, which enfolds the complexity of task scheduling, partitioning, migration, and latency. In addition, offloading has a vital role in edge discovery, as well as in selecting an appropriate edge node for computation. Current IoT-based smart systems use large-scale sensors that generate a huge amount of data at the IoT deployment layer. Rapid processing of the generated data is essential; therefore, computation offloading for resource-constrained devices is quite significant. The objective of this research is to scale the edge server for delay-sensitive tasks that demand stringent QoS requirements. A computation offloading technique selects a task generated by IoT devices at the IoT layer and offloads it to the edge server for execution. However, computation offloading at a large scale creates congestion on the edge server, which lowers QoS. Therefore, a resource scheduling mechanism for load balancing over edge servers is required to ensure effective utilization of the edge resources, while considering the communication cost and response time of the tasks. Moreover, computation offloading is a nontrivial, challenging, and NP-hard problem, whose complexity grows with the number of offloading tasks. Several studies proposed greedy algorithms to tackle the computation offloading problem [13,14], but the problem still grows exponentially and has become very challenging for traditional greedy algorithms. Therefore, in this paper, we propose an Artificial Bee Colony-(ABC-) based computation offloading algorithm that effectively and seamlessly performs the computation offloading process.
The major contributions of this study are as follows: (i) We devised a classical three-tier framework by integrating edge and cloud to simulate the computation offloading process under strict energy and latency constraints for delay-sensitive tasks. The cloud is used in conjunction with the edge server due to its higher computing power; in the proposed framework, the inclusion of the cloud further scales the edge server efficiently. (ii) To effectively balance the workload over edge servers, we propose an ABC optimization technique based on swarm intelligence, where the objective function is set to achieve the minimum computation cost and low latency for the offloaded tasks. (iii) A computation offloading algorithm based on ABC is implemented for seamless computation offloading. The results exhibit that the proposed technique shows notable improvement in reducing the response time and in efficient load balancing over edge nodes, compared to Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Round-Robin (RR) Scheduling. The rest of the paper is structured as follows. Section 2 presents a detailed overview of the related work. Section 3 describes the edge-cloud integration framework, which spans the system model, and presents the computation offloading algorithm based on Artificial Bee Colony. Section 4 presents the results and discussion of the proposed technique. Finally, Section 5 concludes this research study.

Related Work
Smart systems are built using a large number of IoT devices, which generate a huge amount of data in a short period of time. The generated data is sent to the cloud for aggregation, analytics, and computation [15][16][17]. Computation offloading to the cloud decreases the computation load on IoT devices. However, it produces excessive delay that violates the QoS requirements of real-time systems and applications [18]. Furthermore, it leads to inefficient use of resources and unnecessarily overloads back-haul links [19]. Edge computing brings computation, storage, and other services into the close proximity of users, thereby meeting the strict QoS requirements of delay-sensitive applications.
Computation offloading has largely attracted researchers' attention, and a number of studies have been conducted in the edge computing domain. In [20], a three-tier edge-cloud integration framework was deployed to reduce the energy consumption and latency of IoT devices. To identify the best edge server for computation offloading, the expected offloading delay and the propagation delay of different edge servers are considered to determine a threshold. If the expected delay is below the threshold, the request is accepted for offloading; otherwise, the task is given to the next feasible edge server. The edge server is further connected to the cloud for scalability, where the task is offloaded to the cloud if the edge server reaches its maximum capacity. This work produces desirable results, but it does not consider the communication cost.
In [21], the authors focused on the scheduling problem of computation offloading over edge servers placed in the proximity of users. The offloading and computation decisions are made by the IoT device, while considering battery life and response time for better quality of experience (QoE). They formulated the offloading problem as mixed-integer nonlinear programming (MINLP) and solved it using a branch-and-bound reinforcement learning technique. The proposed solution does not consider load balancing over the edge, which leads to a scalability issue.

Complexity
A distributed and scalable framework for wearable IoT devices is proposed in [22]. This system considered metrics such as response time, bandwidth, storage, and the number of tasks successfully completed for effective resource provisioning using recurrent learning. The response time is reduced by introducing a control layer between the cloud and IoT layers, which generalizes the edge computing model but lacks a practical computation offloading environment. A dynamic computation offloading algorithm based on a stochastic optimization technique is presented to deal with the computation offloading problem [23]. The problem is further divided into subproblems to achieve a minimum cost of the offloading process. The authors reduced the cost but did not consider load balancing over multiple edge servers while migrating tasks. Existing computation offloading frameworks comprising cloud, edge, and IoT layers have been devised for seamless computation offloading [24][25][26].
These studies focused on reducing the communication overhead, response time, and bandwidth using algorithms such as Lyapunov optimization, Deep Supervised Learning (DSP), and Discrete Particle Swarm Optimization (DPSO). These frameworks have a well-defined objective of achieving low latency and minimizing the energy consumption of IoT devices. However, they focus on the edge-cloud framework rather than the computation offloading process. A load balancing framework based on directed graph partitioning algorithms is proposed to balance load over edge nodes for in-network flexible resource provisioning and allocation [27]. The devised algorithm is inappropriate under dynamic workload conditions. A cloud-edge integration architecture is introduced to deal with the scheduling problem of bag-of-tasks applications using Modified Particle Swarm Optimization (MPSO) [28]. A similar problem is handled using a Genetic Algorithm (GA) [29]. The main goals of these studies were to reduce the operating cost and remote processing time of the task. However, these studies have not focused on the communication cost and latency incurred by the computation offloading process.
Several studies use different clustering techniques to protect the edge server from bottlenecks and cope with the scalability issue. For instance, a CNN-based fused tile partitioning (FTP) is presented to distribute the workload over edge servers [30]. A PSO-based multiclustering technique in a semiautonomous edge-IoT environment is proposed with the aim of reducing processing and communication delays while distributing the load over edge servers [31]. A graph-based edge clustering technique and a software-defined network-(SDN-) based multicluster overlay (MCO) are utilized to optimize the task size, the number of servers, the required communication channels, and the best channel allocation for effective load distribution and scaling of the edge server [32,33]. However, these studies produce additional communication overhead and add more latency while making the computation offloading decision. The existing studies reveal that the computation offloading process is very complex and challenging. It consumes extra energy and incurs latency through the intercommunication between devices and servers [5,6]. A single device is responsible for making computation offloading decisions, which consumes more energy and causes fast battery drain. In addition, existing edge servers make computation offloading decisions independently in the IoT environment [34][35][36][37], where an edge server is overwhelmed with many requests, creating congestion over the edge server and giving rise to a scalability issue. The existing approaches do not attain the high QoS requirements of IoT applications; rather, they reflect a trade-off between QoS and the scale of offloading requests.

In our work, we design dynamic and decentralized task execution through computation offloading. To accomplish the above-mentioned objective, a three-tier edge-cloud integration framework is designed for a successful computation offloading process. One of the major advantages of the proposed layered architecture is robust service discovery. The IoT system is built from a large number of devices and servers, where searching for the right resource for an IoT device is quite challenging. A social Internet of Things (SIoT) clustering approach [38] is deployed at the IoT layer that performs the tasks of aggregation and resource management.
The SIoT not only controls the number of offloading tasks sent to the edge server, protecting it from bottlenecks, but also creates an association between the offloading task and the resource allocation. This association finds the right resource for executing a particular task, hence reducing the latency of the offloaded task.
We propose an ABC optimization technique that balances the workload over the edge servers, provides the right resource for the offloading device, and exploits low-latency interconnections between IoT devices and servers. The proposed framework can be effectively utilized for IoT devices, where the task is executed under strict energy constraints with the required latency to meet high QoS requirements, which is very unlikely with the traditional cloud. Finally, a computation offloading algorithm based on ABC is proposed to provide an efficient computation offloading facility that searches for new resources for task execution. The objective function measures the network latency and execution time of the task to achieve the minimum service time. A detailed discussion of the ABC algorithm and the computation offloading technique is provided in Section 3.

Edge-Cloud Integration Framework for Computation Offloading
In this section, we briefly describe the system model. In addition, we provide a detailed overview of the Artificial Bee Colony optimization technique and a novel ABC-based computation offloading algorithm.

System Model.
There are three layers in the IoT-based edge infrastructure, as shown in Figure 2. The first layer is the IoT layer, where a large number of sensors are connected to a LAN in clusters. These clusters are connected to the second layer, called the edge layer, which contains multiple edge nodes.
These nodes perform basic analytics on the data received from the sensors. However, the computational and storage capabilities of these nodes are limited. The third layer is the cloud layer, which has powerful computing and storage resources to perform heavy analytics and large-scale/long-term data storage. The master node of the second layer is responsible for deciding whether or not to offload a task to the cloud layer. Table 1 lists the symbols and notations used in the system model. The IoT layer has N nodes S_1, S_2, ..., S_N, where each sensor/mote S_i operates at frequency λ_i. The edge layer has M nodes Eg_1, Eg_2, ..., Eg_M. If the computation job is performed in the IoT node without offloading it to the edge or cloud, then the service time of the job is

ST_n = T_i^com / C_sen, (1)

where ST_n is the service time of the task, T_i^com is the computation required to complete task i, and C_sen is the computational capability of the mote (IoT sensor node). The communication cost between the mote and the edge node depends on the transfer capacity and network latency:

Comm_me = NL_me + D_m / B_me, (2)

where NL_me is the network latency between mote and edge, D_m is the data generated by the mote/sensor, and B_me is the bandwidth between the mote and the edge. If the computation is performed in the IoT node, the energy E_m required for the computation of the job is

E_m = ST_n × C_mote, (3)

where C_mote is the energy consumed by the mote's processing units per unit time. The service time of the job offloaded to the edge node is

ST_e = T_i^com / C_EdgeNode + Comm_me, (4)

where C_EdgeNode is the clock frequency of the edge node. The task offloaded to the edge node also requires energy for its completion, which is calculated using equation (5). The service time for offloading the job to the cloud is

ST_c = T_i^com / C_cloud + Comm_me + EdgeCloudComm, (6)

where C_cloud denotes the computational capability of the cloud server and EdgeCloudComm is composed of the cloud latency and the time required to send data from the edge node to the cloud:

EdgeCloudComm = NL_EC + D_m / EC_BW, (7)

where NL_EC is the network latency between the edge node and the cloud, and EC_BW is the bandwidth between edge and cloud. The energy consumption of the task offloaded to the cloud is given by equation (8), and the total communication cost for offloading the job to the cloud from the IoT node is the sum of the two hops:

Comm_total = Comm_me + EdgeCloudComm. (9)
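The cost model above can be sketched in code. The following is a minimal illustration of the reconstructed equations (1), (2), (4), (6), and (7); all numerical parameter values below are invented for illustration and do not come from the paper.

```python
# Hedged sketch of the system-model cost equations; parameter values are
# illustrative assumptions, not taken from the paper.

def local_service_time(t_com, c_sen):
    """Eq. (1): service time when the task runs on the mote itself."""
    return t_com / c_sen

def mote_edge_comm(nl_me, d_m, b_me):
    """Eq. (2): latency plus transfer time between mote and edge."""
    return nl_me + d_m / b_me

def edge_service_time(t_com, c_edge, comm_me):
    """Eq. (4): execution on the edge node plus mote-edge communication."""
    return t_com / c_edge + comm_me

def edge_cloud_comm(nl_ec, d_m, ec_bw):
    """Eq. (7): latency plus transfer time between edge and cloud."""
    return nl_ec + d_m / ec_bw

def cloud_service_time(t_com, c_cloud, comm_me, comm_ec):
    """Eq. (6): execution on the cloud plus both communication hops."""
    return t_com / c_cloud + comm_me + comm_ec

# Illustrative numbers: a 200 MI task, mote at 50 MIPS, edge at 2000 MIPS,
# cloud at 10000 MIPS, 1 MB payload.
t_com, d_m = 200.0, 1.0
st_local = local_service_time(t_com, 50.0)                       # 4.0 s
comm_me = mote_edge_comm(0.005, d_m, 10.0)                       # 0.105 s
st_edge = edge_service_time(t_com, 2000.0, comm_me)              # 0.205 s
comm_ec = edge_cloud_comm(0.05, d_m, 100.0)                      # 0.06 s
st_cloud = cloud_service_time(t_com, 10000.0, comm_me, comm_ec)  # 0.185 s
```

Under these assumed values, offloading to the edge or cloud is far faster than local execution, which is the trade-off the objective function below arbitrates.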

Objective Function.
It is important to handle two main decisions: whether to offload the task to the edge or to the cloud. Therefore, the objective function is used to minimize the energy consumption (E) and the service time delay (ST) of each offloading scheme. The first decision variable determines whether the job is offloaded to the edge. Similarly, the edge might have limited resources to fulfil the computational requirements of the job; therefore, it may decide to further offload the task to the cloud, which is captured by a second decision variable. For the ABC optimization algorithm, ST_i and E_i are required to be normalized. For all K jobs, the total energy consumption E_total is calculated using equation (15), and the total delay for offloading the tasks to the cloud is computed similarly by summing ST_i over all jobs. Therefore, the objective function is

min Σ_r [δ · ST(l_j, job_r) + (1 − δ) · E(l_j, job_r)], (17)

where ST is the service time and E is the energy consumption of the offloading scheme, δ ∈ [0, 1] is a weight to prioritize the elements of the objective function, and E(l_j, job_r) is the energy consumption of task job_r executed by the node at level l_j. The objective function has the following constraint on the edge layer:
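The normalization and the weighted objective can be sketched as follows. Min-max normalization is an assumption here (the paper's normalization equations are not reproduced), and the candidate values are invented for illustration.

```python
# Sketch of the weighted objective; min-max normalization and all numeric
# values are assumptions for illustration.

def min_max(values):
    """Normalize a list of values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def objective(st_norm, e_norm, delta):
    """Weighted sum of normalized service time and energy, delta in [0, 1]."""
    return [delta * st + (1 - delta) * e for st, e in zip(st_norm, e_norm)]

st = [0.2, 0.5, 1.0]   # service times of candidate offloading schemes (s)
en = [3.0, 2.0, 1.0]   # energy consumptions of the same schemes (J)
scores = objective(min_max(st), min_max(en), delta=0.5)
best = scores.index(min(scores))   # index of the cheapest scheme
```

With δ = 0.5 the scheme that balances both costs wins; pushing δ toward 1 prioritizes service time, toward 0 prioritizes energy.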

where Eg_cap is the load capacity of edge node j and H_k is the arithmetic mean of the response time if it were the only edge node serving all the offloaded jobs.

Artificial Bee Colony (ABC) Algorithm.
The ABC algorithm was proposed by Dervis Karaboga [39]. It is a swarm-intelligence-based optimization algorithm, which contains three types of bees. The first type is the scout bees, which search for new sources of food randomly, thus ensuring exploration. The second type is the onlooker bees, which choose a food source by observing the dance of the employed bees. The third type is the employed bees, which are linked to a food source, thus ensuring exploitation. Scout bees and onlooker bees are not linked to any specific food source; therefore, they are usually called unemployed bees. A general outline of the ABC algorithm is shown in Algorithm 1.
In this section, we present the ABC algorithm for computation offloading at the IoT edge. The objective function measures the service time (network latency plus the time required for job completion) and the energy consumption of the solution provided by the optimization algorithm. The main purpose is to minimize the objective function, i.e., to search for the minimum computational cost and job latency. There are three decision variables: s_i, t_i, and ξ (which contains a list of jobs). The input to the algorithm is a set of jobs and nodes.

Initialization Phase.
The population (nodes) is represented by vectors x_n, which are initialized by the bees using the following equation:

x_{n,j} = lb_j + rand(0, 1) × (ub_j − lb_j), (19)

where ub_j and lb_j are the upper and lower bounds of parameter j, respectively, and rand(0, 1) is a uniform random number.
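The initialization step can be sketched directly; the bound values here are arbitrary placeholders.

```python
import random

# Sketch of the ABC initialization step: each parameter j of a candidate
# solution is drawn uniformly between its lower and upper bound.

def init_solution(lb, ub, rng=random.random):
    """Return one random solution vector within the given bounds."""
    return [lb[j] + rng() * (ub[j] - lb[j]) for j in range(len(lb))]

lb, ub = [0.0, 0.0], [10.0, 5.0]   # illustrative bounds
x = init_solution(lb, ub)
```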

Employed Bee Phase
(1) New Solution. The employed bee searches for a new node (y_{m,i}) with more resources in a neighbourhood. The new neighbour node is found using the following equation:

y_{m,i} = x_{n,j} + τ_{n,j} × (x_{n,j} − x_{p,j}), (20)

where τ_{n,j} is a function that generates a random number between −1 and 1 and x_{p,j} is a randomly chosen node in the neighbourhood.
(2) Greedy Selection. The fitness of the new node y_{m,i} is calculated. If its fitness is higher than that of x_{n,j}, then y_{m,i} replaces x_{n,j} and is memorized.
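The neighbourhood move of equation (20) can be sketched as follows; the helper name `neighbour` and the example vectors are assumptions for illustration.

```python
import random

# Sketch of the employed-bee neighbourhood search (eq. (20)): perturb one
# randomly chosen dimension of the current solution relative to a randomly
# chosen partner solution.

def neighbour(x, population, rng=random):
    """Return a candidate solution near x, per the ABC update rule."""
    p = rng.choice([s for s in population if s is not x])  # random partner
    j = rng.randrange(len(x))                              # random dimension
    tau = rng.uniform(-1.0, 1.0)                           # tau in [-1, 1]
    y = list(x)
    y[j] = x[j] + tau * (x[j] - p[j])
    return y

pop = [[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]]
y = neighbour(pop[0], pop)
```

Greedy selection then keeps `y` only if its fitness exceeds that of the current solution.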

Onlooker Bee Phase
(1) Probability Calculation Based on Fitness. Onlooker bees choose a node probabilistically based on the information provided by the employed bees. An onlooker bee chooses node x_n with probability p_n, as shown in the following equation:

p_n = F_n / Σ_m F_m, (21)

where F_n is the fitness of node x_n, which is computed using equation (22). (2) New Solution for Onlooker Bee. Once a node is chosen by the onlooker bee probabilistically, a new neighbourhood node y_{m,i} is determined using equation (20).
(3) Greedy Selection. At this stage, y_{m,i} and x_{n,j} are compared. If the new neighbourhood node y_{m,i} has a higher fitness value, then y_{m,i} replaces x_{n,j} and is memorized.
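The fitness-proportional selection can be sketched as follows. The paper's fitness equation is not reproduced, so the standard ABC fitness transform is assumed here (higher fitness for lower objective values).

```python
# Sketch of onlooker-bee selection. The fitness transform below is the
# standard one from Karaboga's ABC and is an assumption, since the paper's
# equation is not reproduced here.

def fitness(obj):
    """Map a minimized objective value to a fitness (larger is better)."""
    return 1.0 / (1.0 + obj) if obj >= 0 else 1.0 + abs(obj)

def selection_probs(objectives):
    """Eq. (21): probability of each node, proportional to its fitness."""
    fits = [fitness(o) for o in objectives]
    total = sum(fits)
    return [f / total for f in fits]

probs = selection_probs([0.0, 1.0, 3.0])  # lower objective -> higher prob
```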

Scout Bee Phase.
Scout bees ensure exploration and choose a node randomly. An employed bee becomes a scout bee if it fails to improve its solution within a limit (a fixed number of trials). Figure 3 and Algorithm 2 present the flowchart and the algorithm of the ABC-based computation offloading technique, respectively.
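The phases above can be combined into an end-to-end sketch. The toy objective below (assign K jobs to M edge servers so that the heaviest server load is minimized) and all sizes are illustrative assumptions; it is a sketch of the ABC loop, not the paper's Algorithm 2.

```python
import random

# End-to-end sketch of the ABC loop (initialization, employed, onlooker, and
# scout phases) on a toy load-balancing objective. All parameters and the
# objective itself are assumptions for illustration.

def abc_offload(jobs, m, colony=20, limit=10, cycles=200, seed=1):
    rng = random.Random(seed)

    def cost(assign):                      # makespan-style load objective
        loads = [0.0] * m
        for job, srv in zip(jobs, assign):
            loads[srv] += job
        return max(loads)

    # Initialization phase: random job-to-server assignments.
    pop = [[rng.randrange(m) for _ in jobs] for _ in range(colony)]
    trials = [0] * colony
    best = min(pop, key=cost)
    for _ in range(cycles):
        for phase in ("employed", "onlooker"):
            for i in range(colony):
                if phase == "onlooker":    # fitness-proportional choice
                    fits = [1.0 / (1.0 + cost(s)) for s in pop]
                    i = rng.choices(range(colony), weights=fits)[0]
                cand = list(pop[i])        # neighbourhood move: reassign
                cand[rng.randrange(len(jobs))] = rng.randrange(m)
                if cost(cand) < cost(pop[i]):   # greedy selection
                    pop[i], trials[i] = cand, 0
                else:
                    trials[i] += 1
        for i in range(colony):            # scout phase: abandon stale sources
            if trials[i] > limit:
                pop[i] = [rng.randrange(m) for _ in jobs]
                trials[i] = 0
        best = min(best, min(pop, key=cost), key=cost)  # memorize best
    return best, cost(best)

jobs = [5, 3, 8, 2, 7, 4, 6, 1]            # illustrative task sizes
assign, makespan = abc_offload(jobs, m=3)  # 3 edge servers
```

With a total load of 36 over 3 servers, the best achievable makespan is 12; the loop converges to or very near that bound.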

Results and Discussion
In this section, we discuss the results achieved using the proposed framework. A three-tier edge-cloud integration framework is proposed: IoT devices are placed at Tier-1, where data are generated from multiple devices in a multitasking manner; Tier-2 comprises the edge servers; and Tier-3 includes a resource-rich cloud. The hierarchical representation of the proposed framework helps to utilize the resources efficiently and to distinguish the responsibility of each tier. A simulation-based edge-cloud integration test-bed is designed using MATLAB. The simulation setup not only provides the opportunity to conduct the experiment in a controlled environment using a preferred set of parameters but also allows us to repeat the experiment under different scenarios and constraints. Thereby, we have evaluated the performance of the proposed Artificial Bee Colony computation offloading algorithm against metaheuristic algorithms such as Particle Swarm Optimization and Ant Colony Optimization, as well as Round-Robin Scheduling [10]. A list of parameters with their corresponding values, acquired by conducting several preliminary experiments, is provided in Table 2.
The metrics for the evaluation of the proposed offloading algorithm are the degree of imbalance and the standard deviation of the response time, observed under the load of the edge nodes. The degree of imbalance among edge nodes is calculated using equation (24). The IoT devices at Tier-1 generate the tasks selected for computation offloading to the edge server. This approach allows the IoT devices to save energy and makes them capable of handling compute-intensive tasks. The edge tier is placed between the IoT tier and the cloud; it takes the data load generated by IoT devices and executes the offloaded tasks using a number of edge servers. Each server runs several virtual machine (VM) instances that ensure the successful execution of the offloaded tasks, where each VM handles a different class of IoT applications. The cloud is a resource-rich solution for IoT and is connected to the edge server through the Internet to achieve scalability of the edge server. The performance of computation offloading based on the ABC algorithm is evaluated using the three-tier edge-cloud framework for seamless and successful task offloading between IoT, edge, and cloud. In this experiment, we have considered the response time, standard deviation, and degree of imbalance as the set of evaluation parameters.
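These two metrics can be sketched in code. The paper's equation (24) is not reproduced here, so the common definition of degree of imbalance, (max − min) / mean of the per-server loads, is assumed, alongside the standard deviation of response times.

```python
import statistics

# Sketch of the evaluation metrics. The degree-of-imbalance formula is an
# assumed common definition, not necessarily the paper's eq. (24).

def degree_of_imbalance(loads):
    """(max - min) / mean of per-server loads; 0 means perfectly balanced."""
    return (max(loads) - min(loads)) / statistics.mean(loads)

def response_time_std(times):
    """Population standard deviation of the observed response times."""
    return statistics.pstdev(times)

di = degree_of_imbalance([90.0, 100.0, 110.0])
```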
Using these parameters, we tested the performance of the ABC-based computation offloading algorithm under different scenarios. The degree of imbalance reflects the inequality among multiple edge servers, and the standard deviation of the response time exhibits the load balance between edge servers. The degree of imbalance is expressed using equation (24). Figure 4 shows the performance of the ABC task offloading algorithm in scenario 1, where the number of edge servers Eg_n is 3, the total number of IoT nodes M_n is 250, and the primary server rate is 100. Two classes of applications, delay-sensitive and delay-tolerant, are generated from the IoT nodes. It has been observed that the proposed ABC task offloading algorithm outperforms its counterparts, the ACO, PSO, and RR algorithms, by keeping the response time well below the defined latency threshold. The ABC algorithm satisfies the QoS requirements of both delay-sensitive and delay-tolerant applications. However, the RR scheduler degrades in performance by violating the latency requirement of 100 ms. In scenario 2, the experimental parameters are changed by increasing the number of edge servers Eg_n to 5 and the server rate to 300, with a gradual increase in the number of IoT devices. In Figure 5, the proposed ABC task offloading algorithm is compared with the ACO, PSO, and RR schedulers. The achieved results show that the proposed algorithm maintains a low response time even with an increasing number of computation offloading requests from IoT devices.
In Figure 6, we present the performance of the ABC algorithm. It is witnessed that the proposed algorithm maintains a lower response time for the offloaded tasks in a single run.

Algorithm 1: ABC algorithm.
(1) Step 1: Initialization phase
(2) repeat
(3) Step 2: Employed bees' phase
(4) Step 3: Onlooker bees' phase
(5) Step 4: Scout bees' phase
(6) Step 5: Memorize the best solution achieved so far
(7) until the maximum cycle number is reached
(8) Output the best solution identified
We recorded the average response time of the bees. The ABC algorithm minimizes the objective function by discovering the minimum computational cost and low latency for each offloaded task. The probabilistic selection and memorization of fitness values reach the minimum fitness value as early as the 11th iteration. The proposed ABC algorithm explores the search space quickly, resulting in faster convergence than ACO and PSO toward the best possible solution. In addition, it achieves the best value in a short period of time.
In scenario 4, we changed the set of parameters by introducing three different classes of applications using three different types of sensors. Each sensor generates a different data rate according to its task, with the number of edge servers Eg_n = 8, a server rate of 500, and the number of IoT nodes M_n = 2000.
In Figure 7, the results exhibit that the ABC task offloading algorithm maintains a low response time, because the onlooker bee probabilistically selects and memorizes the successful node while looking for other probabilistic solutions, and outperforms the RR, PSO, and ACO algorithms. Figure 8 reflects the behaviour of the proposed ABC algorithm in terms of the standard deviation. The standard deviation is the variation between the average response times of all the tasks that are offloaded for remote execution. As the number of IoT devices increases, the proposed algorithm shows improved performance in the standard deviation. However, the standard deviation grows sharply once the number exceeds 1750 IoT devices, which is due to the inherent scalability issue of edge computing.
These results are achieved using the same parameters as in scenario 4. Figure 9 exhibits the degree of imbalance of the offloaded tasks over the edge servers while increasing the number of offloading tasks from the IoT layer under scenario 4. It is observed that the proposed ABC algorithm minimizes the objective function and produces low values, reflecting that the workload is effectively distributed among the edge servers.

Conclusion and Future Work
A smart city is designed using a large number of IoT devices. These devices are resource-limited while their applications are resource-intensive and require high QoS. Moreover, they produce a large amount of data in a short period of time. The cloud can be a feasible solution, but its inherent longer latency makes it nonviable. Edge computing is a potential solution that resides in the close proximity of users. It offers low latency, high bandwidth, crisp response, and reliability for resource-limited IoT devices. Therefore, in this paper, we proposed a three-tier edge-cloud integration architecture and utilized a classical computation offloading technique for a seamless task offloading process. A metaheuristic, nature-inspired ABC optimization technique is used to balance the workload over edge servers while considering the network latency and service rate of the edge servers. The numerical results exhibit that the proposed ABC algorithm produces a notable improvement in the response time of IoT applications. In addition, the results show that the workload over edge servers is managed effectively, which scales the edge server to accommodate the maximum number of offloaded tasks.
In the future, we aim to extend this work to LTE and 5G communication architectures using multiobjective optimization, including the communication cost and energy consumption cost.

Data Availability
The data used to support the findings of this study are included within the article. The simulation software used and its details are described in the Results section.

Disclosure
Mohammad Babar and Farman Ali are the co-first authors.

Conflicts of Interest
The authors declare that they have no conflicts of interest.

Authors' Contributions
Mohammad Babar and Farman Ali contributed equally to this work.