Mobile Cloud Computing is one of today's most disruptive computation paradigms due to its effects on the performance of mobile computing and on the development of the Internet of Things. It enhances the capabilities of devices by outsourcing their workload to external computing platforms deployed along the network, such as cloud servers, cloudlets, or other edge platforms. The research described in this work presents a computational model of a multilayer architecture for increasing the performance of devices under the Mobile Cloud Computing paradigm. The main novelty of this work lies in defining a comprehensive model in which all the computing platforms available along the network layers are involved in outsourcing the application workload. This proposal generalizes the Mobile Cloud Computing paradigm and makes it possible to handle the complexity of scheduling tasks in such complex scenarios. The behaviour of the model and its ability to generalize the paradigm are exemplified through simulations. The results show higher flexibility for making offloading decisions.
The Cloud Computing paradigm is one of the most disruptive technology advances of our times. It has made Information Technology (IT) resources available to the general public through the Internet. In this way, any business, organization, or individual user can access computing infrastructures and services for a fee. The progress of communication technologies has also contributed to this end: boosting the bandwidth, and hence the speed, of all connections has made it possible to handle more traffic. In addition, improvements in the management of cloud centres, thanks to virtualization and server consolidation methods, allow a rapid response to changing application demands [
The application of this paradigm to mobile computation has led to the Mobile Cloud Computing (MCC) concept [
However, this promise of providing new resources beyond mobile computing capabilities by accessing cloud servers can lead to uncertainty and delays in response times due to the unpredictability of network operations through the Internet. Further, some applications cannot afford to wait for the results of offloading to be returned to the source. In general, the cloud lacks the versatility to adapt to specific task requirements and to Internet operating conditions. Thus, the user-perceived quality is highly variable and depends on both the application’s degree of interactivity and the network’s end-to-end latency. There is a need to improve the ability of the MCC paradigm to meet the performance requirements of heterogeneous devices in dynamic environments where the available infrastructure can change.
Motivated by these problems, the
The main contributions of this work are as follows: a study of the complexity and key issues of distributing the processing load among several computing layers along the network, together with a review of the architectures and network layers available to perform this offloading process; and the proposal of a general framework for a multilayer architecture that formalizes the processing of an application and the implications of the distributed configurations for performance and communication requirements.
The
This paper is organized as follows: in Section
The concept of MCC has evolved in recent times with the goal of increasing performance by offloading the workload outside the mobile devices. As a result, the outsourcing options along the communication network have multiplied, as shown in Figure
Outsourcing options deployed along the communication network.
These outsourcing options are deployed at different layers of the network forming a pool of computing platforms that are available depending on the application context.
At the top, at the network core, the
Next, the
The same principle of cloudlets is applied to the
These last two layers (cloudlet and MEC) can be enabled in a dynamic way to deliver computing support to the cloud servers. In this way, they are not replacing but complementing the Cloud Computing Paradigm by providing a flexible computing power when necessary [
Finally, a recent trend to develop the model of MCC is the Fog Computing Paradigm. The
Realizing the vision of a multilayer offloading architecture for implementing the MCC paradigm is a challenging task because of the complexity of handling the multiple aspects involved, especially those concerning performance evaluation and task scheduling. In this regard, the motivation for offloading varies depending on the user requirements, device configuration, or application constraints. Moreover, the performance aspects can be of different natures, such as power consumption, time delay, the monetary cost of using external services, and network usage.
There is a need to design suitable frameworks that formalize the performance components in a homogeneous way in order to decide when and where to outsource the processing load. In this subsection, recent frameworks of the MCC paradigm are reviewed in order to establish the state of the art on this issue.
There are several existing frameworks and architectures in the literature for outsourcing the tasks of the application workload to other computing platforms. The areas of application of these techniques cover many sectors and disciplines. In general, they develop the IoT and Cyber-Physical Systems (CPS) paradigms, which have experienced great growth in recent years. The devices involved are a heterogeneous set consisting mainly of sensors, actuators, and embedded systems. In addition, mobile phones and other mobile devices, such as wearables, are also involved in this kind of application.
As an overview of the MCC proposals, they aim to carry out collaborative work to distribute the processing load and meet the application requirements [
In general terms, offloading from the “things” or mobile devices to a cloud, edge, or cloudlet server produces a significant increase in performance. It is clear that the slower the devices are, the sharper these differences become. However, the offloading criteria can consider other aspects such as network usage or the monetary cost of external computing services [
The key parts of a framework for handling the offloading process are the architecture that defines the available outsourcing platforms, the decision method on where and when to offload, and the communication model that defines how to perform this process.
Regarding the decision method for offloading, it is generally considered as a scheduler that decides when and where to offload, taking into account the application constraints, the tasks’ features, and the performance aspects of the available computing platforms. Its main aim is to allocate the tasks to the available computing platforms in a way that minimizes the total cost and load. The optimal scheduling scheme belongs to the NP-complete problem set and, therefore, traditional strategies cannot be applied in a suitable way for MCC applications.
To address this issue, many approaches have been proposed based on different mathematical techniques and algorithms. In recent approaches, the scheduling method can be formulated as a constrained optimization problem where a suboptimal solution is usually the best choice. These methods can be used for multitask [
In most cases, the offloading decisions are mainly focused on energy optimization [
The frameworks can be implemented as a set of procedures, methods, and tools. These components can be part of the device itself or installed in a middleware layer in an external supervising device [
Recent frameworks for MCC.
| Framework | Main components |
---|---|
Collaborative Working Architecture [ | General architecture and scheduler. |
Multilayered scheduling [ | Tasks specification, scheduling method. |
Cuckoo [ | Programming model and integration tools. |
Data storage framework [ | Architecture, Database Management, and File Repository Model. |
Flexible Framework [ | General architecture and Scheduling method. |
Cyber-Manufacturing [ | Architecture, communication protocol and analysis. |
Federated IoT services [ | Management, problem formulation, and heuristic for tasks allocation over 5G. |
VM migration framework [ | Smart precopy live migration approach. |
Cloudlet in MCC [ | Architecture, Stochastic performance modelling. |
Resource usage optimization [ | Architecture, Resource usage, and performance evaluation modelling. |
Mobile code offloading [ | Architecture, offloading methods in Java and push notifications. |
QoS Aware Computation Offloading [ | Problem formulation, optimal offloading decision process. |
Adaptive MCC framework [ | Application partitioning, offloading decision algorithm. |
Edge Computing Framework [ | Communication and Computation Models. |
Distributed computational model [ | Resource utilization specification, management system. OT: CPS nodes, Cloud |
Scheduling internet of things applications [ | Scheduling method, performance metrics. |
IoT and Cloud Computing Integration [ | Architecture and components. |
Context-aware computation offloading [ | Design pattern and estimation model. |
Edge-Fog cloud [ | Method for distributing the processing tasks. OT: Edge and fog nodes |
Framework for code offloading [ | Architecture and offloading decision-making engine. |
After reviewing the frameworks for offloading mobile computation, three main findings can be drawn that justify our proposed model for designing multilayer MCC architectures:

- The general objective of existing MCC architectural approaches is to improve the overall application functioning. However, in most cases, they consider neither multiple options for offloading the work nor the intermediate network infrastructure.
- There are numerous frameworks on how to distribute the computation of the applications to perform partial remote execution. They are mainly focused on minimizing the energy consumption of devices, increasing the performance, and maximizing the overall QoS. However, they do not consider heterogeneous performance metrics such as monetary cost or global network traffic.
- The existing works in the literature do not provide a formal framework to formalize the overall offloading process; rather, they report numerical results for their reference architectures. Other works have insufficient depth, or they focus on specific issues of each layer separately.
The research presented in this article pursues the same objectives as those mentioned above. To the best of our knowledge, this is the first study to provide, from a holistic perspective, a comprehensive model that analyses the technical aspects and includes all known computing layers of the network. This model will allow us to handle the complexity of the offloading process of MCC systems.
Next section introduces the proposed multilayer architecture and presents a formal basis for modelling the offloading process in the MCC paradigm.
This architecture design promotes adaptation to changing environments and enables dynamic scaling of computational power, able to handle a variable and/or intensive application workload more effectively than existing proposals. The network layers considered for offloading computation can be those described in Section
This approach attempts to identify methods to obtain the best results and performance using the deployed network infrastructure and local processing capabilities. Multiple design configurations can be supported. The mobile devices and connected “things” can be heterogeneous and have different data processing capabilities. The middle infrastructure layers can be deployed by stakeholders to improve the execution of their mobile or IoT applications and extend them to more potential customers, for example, advanced multimedia games and complex financial apps. These layers can be equipped with specialized hardware components for accelerating complex calculations, including custom cloudlets with GPUs, DSPs, and cryptographic coprocessors.
Below, the main technical characteristics of the MCC paradigm are introduced together with the implications and advantages provided for them by the proposed model.
The
The
The desirable
The
The
Finally, the
Table
Technical characteristics of offloading methods and frameworks.
| Characteristic | Options | Benefit |
---|---|---|
| Static, dynamic | Exploit the available nearest resources |
| Static, dynamic | Leverage the potential of the available resources |
| Module, bundle, subroutine, process, thread, class, component, method | Provide flexibility to the application needs |
| Client-server communication, virtualization, mobile agents | Support heterogeneous infrastructures |
| Adaptive schemes | Awareness of available layers |
| Resource monitoring and profiling, parametric analysis, stochastic methods | Decide where to perform the computations |
The descriptions and recommendations regarding the operation of the proposed architecture reveal that there continue to be important challenges that must be addressed to leverage the available infrastructure at multiple layers of the network. In this regard, this work introduces basic ideas and notes on specific research issues for implementing a multilayer architecture.
The next subsection describes the formal framework of the architecture and the elements involved in the distributed computing.
This subsection describes the general aspects of the multilayer distributed architecture for outsourcing the processing load using the combination of computing layers of the available infrastructure.
According to the stated working hypothesis, the main idea behind the multilayer architecture for computation offloading is to offer a set of options for performing the computations at different network layers where infrastructure is available. As a rule, it is advisable to perform the processing as close as possible to where the data are acquired in order to reduce the delay and the global network traffic. However, the final decision on where to offload each task depends on many other aspects, including application requirements, device configuration, user preferences, size of input data, and pricing. The result is a flexible and scalable model where the computations can be performed on a variety of platforms and computing layers. The formulation introduced in this subsection is used to describe the contributions of the proposed architecture in providing flexibility for the processing requirements of IoT applications. Important notations and expressions used in this paper are provided in Tables
Summary of Notations.
| Notation | Description |
---|---|
| Application Task |
| Set of Application tasks ( |
| Set of available computing layers ( |
| Computing platform |
| Computing platform of layer k |
| Computing platform of layer base |
| Set of computing platf. of layer j ( |
| Set of the available upper platforms of device |
| Set of performance aspects ( |
| Sequence of platforms on which the application is processed ( |
| List of platforms that meets the minimum computing costs for the |
| List of platforms that meet the minimum communication costs for the |
| Saturation value in calculation cost |
| Volume of data generated by task |
| Necessary data to be moved for computing the task |
Summary of Formulations.
| Formulation | Description |
---|---|
| Processing cost of the task |
| Minimum processing cost of the application for the |
| Communication cost to move the task |
| Minimum communication cost of the execution of the application for the |
| Execution cost of the application for the |
| Minimum execution cost of the application for the |
| Minimum communication cost through the distributed architecture for the minimum processing cost ( |
| Volume of data to be moved for computing the whole application ( |
| Volume of data to be moved through the distributed architecture to run the application within the minimum cost. ( |
First, we consider a granularity unit for offloading the application task. This unit can be one of those mentioned in Table
Let
These tasks can be processed at different computing layers and platforms. Let
The number of network processing layers depends on numerous aspects such as execution environment, available infrastructure, and configuration options. In all these cases,
Each layer has a set of computing platforms, which can be heterogeneous with different processing capabilities and abilities according to their characteristics. Thus, layer
The different layers of the network can be deployed in sequence or in a parallel configuration, where each computing platform of a layer can execute the services of the upper layers and provide services to several elements of the lower layers.
For a specific device (
The list of platforms described in Expression (
Several expressions can be introduced into the proposed general framework to define the behaviour of the multilayer architecture related to performance and resource consumption issues. In this section, the execution cost and data communication are analysed. The execution cost can be any of the different performance aspects that are involved in the execution of a task in a device. In this manner, let
For each of these aspects, the overall execution cost (E) of an application in this distributed infrastructure consists of two main components as indicated in Expression (
The cost expressions are a function of time because the execution conditions can change at any moment, depending on the workload currently being processed on each platform, on other processes that may be executing simultaneously, and on the network traffic situation.
Related to the first aspect (a), the processing capability of each computing platform can be in a range from zero to extremely high. Further, there may be platforms with specific capabilities that provide services to many applications and allow the acceleration of the processing of specific types of tasks. For example, GPUs can be installed on the cloudlet servers to accelerate multimedia algorithms. In this manner, the granularity of this calculation is the cost of computing each task. For each task
The workload of a platform can vary during the day depending on the number of devices simultaneously connected and other features. This fact is common in the intermediate platforms and in the cloud infrastructure because they collect data from different elements of the lower levels. However, it is normal that they have redundant computing elements that perform massive parallel processing. If a platform cannot compute a task, it will be assigned a cost of the saturation value
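This per-task cost table can be sketched in a few lines of code. The platform names, task identifiers, and cost figures below are illustrative assumptions, not values from the paper; the point is the use of a saturation value to mark tasks a platform cannot currently compute:

```python
# Illustrative processing-cost table. S is the saturation value assigned
# when a platform cannot compute a task at the current time.
S = float("inf")

# proc_cost[platform][task]: cost in one performance aspect
# (seconds, joules, money, ... depending on the aspect considered)
proc_cost = {
    "device":   {"t1": 8.0, "t2": 12.0, "t3": S},   # t3 exceeds the device's capability
    "cloudlet": {"t1": 2.0, "t2": 3.0,  "t3": 4.0},
    "cloud":    {"t1": 0.5, "t2": 0.8,  "t3": 1.0},
}

def processing_cost(task, platform):
    """Cost of computing `task` on `platform`; S if it cannot run there now."""
    return proc_cost.get(platform, {}).get(task, S)
```

In practice, such a table would be time dependent and refreshed as conditions change, as discussed later in the text.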
In addition to computational cost, the framework considers the communication cost along the architecture, that is, the cost of moving the tasks between the platforms and layers as well as the data they require. The following expression indicates the communication cost to move task
This expression is also a function of time and can have a variable result at any time according to different aspects such as network congestion, device connectivity, bandwidth availability, and pricing. The cost of moving the data in the same processing element is null; that is, executing the entire application on the same platform processor has no communication cost. Further, as in the case of the computational cost, if it is not possible to move the data to the
The communication cost can be any performance aspect of the set
Normally, the necessary input data of a task corresponds with the generated output data from the previous task:
The data flow and connectivity of the computing platforms define a graph for sharing and distributing the application workload. At the base of the graph are the mobile/embedded devices; the upper side is formed by the cloud computing servers. Between these two sides, several intermediate computing platforms can be installed. This infrastructure allows advanced applications to be executed and improves the overall performance.
In this architecture, tasks can be moved based on the offloading configuration and execution costs. The many possible options allow flexible implementation of the applications to optimize any of the system parameters, including minimizing the response time, reducing the data flow through the communication network, minimizing the energy consumption of the devices, increasing the processing throughput of the cloud system, and minimizing the monetary cost of using external resources. Hence, the proposed multilayer architecture allows the design of numerous configurations for the execution of tasks depending on the type of application, execution restrictions, or operating conditions considering the above aspects.
The sequence of platforms on which the application executes is defined by the vector
Then, the execution cost of an application can be obtained expanding Expression (
Similarly, the amount of data to be moved for computing the entire application is obtained from the next expression:
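The two cost components can be combined in a short sketch. All names, cost figures, and data volumes below are assumptions for illustration: the application runs as a sequence of tasks, each assigned to a platform in the vector, and communication cost is paid whenever the data must cross to a different platform.

```python
S = float("inf")  # saturation value for infeasible computations or links

# Illustrative per-task processing costs per platform (e.g., seconds)
proc_cost = {
    "device":   {"t1": 8.0, "t2": 12.0},
    "cloudlet": {"t1": 2.0, "t2": 3.0},
}

# Illustrative per-MB transfer cost between platforms
link_cost = {("device", "cloudlet"): 0.05, ("cloudlet", "device"): 0.05}

def net(src, dst):
    """Cost of moving one MB from src to dst; free within the same platform."""
    return 0.0 if src == dst else link_cost.get((src, dst), S)

def execution_cost(tasks, data_mb, assignment, base="device"):
    """Total cost of an assignment vector: processing plus communication.

    data_mb[i] is the input data volume task i needs, produced on the
    previous platform (or acquired at the base device for the first task).
    """
    total, prev = 0.0, base
    for task, d, plat in zip(tasks, data_mb, assignment):
        total += proc_cost[plat].get(task, S) + net(prev, plat) * d
        prev = plat
    return total

# Running both tasks on the cloudlet: one upload of t1's 10 MB input,
# then t2 runs in place with no further transfer.
cost = execution_cost(["t1", "t2"], [10.0, 2.0], ["cloudlet", "cloudlet"])
```

This mirrors the structure of the cost expressions in the text: the same assignment vector drives both the processing term and the communication term.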
From the previous expressions, calculations can be made to optimize the processing according to the configuration criteria of the architecture. Consequently, a more appropriate sequence of platforms for outsourcing is obtained. This information can guide the scheduling methods and the offloading strategy to achieve the best performance.
The following expression obtains the minimum cost of the execution of the application considering the cost of the platforms and cost of data movement along the communication network:
The cost of one of the components of Expression (
Note that Expressions (
The list of platforms that meets the minimum cost is defined as follows. Let
Then, considering the minimum processing cost for one performance aspect
Finally, the following expression obtains the amount of data to be moved through the distributed architecture to execute the application within the minimum cost:
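On small instances, these minimum-cost expressions can be evaluated by exhaustive search over all assignment vectors, as in the sketch below (platform names and cost figures are illustrative assumptions). The search space grows as the number of platforms raised to the number of tasks, which illustrates why the NP-complete general case requires heuristics instead.

```python
from itertools import product

S = float("inf")
proc = {"device":   {"t1": 8.0, "t2": 12.0},
        "cloudlet": {"t1": 2.0, "t2": 3.0}}
link = {("device", "cloudlet"): 0.05, ("cloudlet", "device"): 0.05}

def cost_of(v, tasks, data_mb):
    """Processing plus communication cost of assignment vector v."""
    total, prev = 0.0, "device"   # the application starts at the base device
    for task, d, plat in zip(tasks, data_mb, v):
        move = 0.0 if prev == plat else link.get((prev, plat), S)
        total += proc[plat].get(task, S) + move * d
        prev = plat
    return total

def min_cost(tasks, data_mb, platforms):
    """Exhaustive search over |platforms|^|tasks| assignments (tiny instances only)."""
    return min((cost_of(v, tasks, data_mb), v)
               for v in product(platforms, repeat=len(tasks)))

best_cost, best_v = min_cost(["t1", "t2"], [10.0, 2.0], ["device", "cloudlet"])
```

With these assumed figures the minimum keeps both tasks together on the cloudlet, since splitting them would pay the transfer cost twice.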
It is notable that the information regarding computational and communication costs is not fixed and it is time dependent for some of the performance aspects. Thus, the computing platforms of the architecture can follow different strategies to derive the correct decisions regarding offloading tasks to another platform or layer.
First, it can use prediction techniques based on historical data. In this manner, the devices can know the estimated performance of the available platforms quickly. A possible implementation can be a look-up table that stores the performance data for each interesting aspect. The tasks can be clustered to ensure manageable table sizes. For example, the similarity criteria can be any indicator of the type of specialized processing required, such as integer, floating point, multimedia, or cryptographic. This feature should be noted in the programming stage of the application to ease the offloading process. Moreover, this data must be updated after each operation.
Secondly, there are methods for periodically probing the current costs. Light threads can be launched at the beginning of processing to request the performance conditions of the architecture.
A combination of the above methods can be made to obtain more accurate information regarding the execution context of the architecture.
In any case, an embedded middleware layer could be necessary to take charge of this job. This layer is already playing an increasingly important role in the edge computing paradigm to perform discovery and other broker services [
This section describes how the proposed multilayer computational model can handle real-world scenarios in order to find the best execution cost (
In these scenarios, the IoT layer is composed of mobile devices such as smartphones, tablet PCs, and smartwatches. These heterogeneous devices might have different computing and communication capabilities. In addition, users might configure the devices according to their preferences and pricing plans.
The example application consists of an Augmented Reality (AR) system for Smart Cities which enables users to move freely through the modelled environment of the city using their mobile devices. This technology recognizes what the user is doing and then enhances it. AR systems for Smart Cities have great potential for all involved [
In this example, the scheduling algorithm of the middleware could be based on prediction techniques and, for example, the performance estimations could be like those shown in Table
Performance estimations.
| | |||
---|---|---|---|---|
(A) BATTERY SAVING | (B) MONETARY COST | (C) REAL-TIME | ||
| | | ||
| | | | |
Fog nodes | 1 | 0.2143 | 250 | |
Cloudlet | 1 | 0.256 | 0.4286 | 52 |
MEC server | 1.56 | 0.128 | 0.7812 | 52 |
Cloud server | | | 2.2177 | 50 |
The general definition of these contexts is that the user device is configured to save battery power. Battery consumption is a performance aspect only for battery-powered devices such as mobile devices or wireless sensors. Thus, it only applies to IoT devices. In addition, this is a static feature. That is
This means that the processing cost of the outsourcing platforms does not matter for this configuration, since the only important thing is energy saving. Thus, under this configuration, the application workload will be outsourced whenever and wherever possible. However, the communication costs for moving the tasks to another computing platform are not zero, since data communication consumes battery. Therefore, in this scenario, the computation of a task (
This consumption depends on where the data is moved. Generally, communication costs through a wireless local network are lower than through a telecommunication network such as LTE [
As can be seen in Table
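This battery-saving decision rule can be sketched as follows (all energy figures are illustrative assumptions): since remote processing costs the device nothing under this configuration, a task is offloaded whenever the radio energy needed to move its data is below the energy of computing it locally.

```python
# Assumed radio energy per MB transferred; WLAN is cheaper than LTE,
# consistent with the observation in the text.
energy_per_mb = {"wlan": 0.10, "lte": 0.45}   # joules per MB (illustrative)

def offload_saves_battery(local_energy_j, data_mb, radio):
    """True if sending `data_mb` over `radio` costs less battery than local compute."""
    return energy_per_mb[radio] * data_mb < local_energy_j

# A 5 MB task costing 2 J locally: worth offloading over WLAN, not over LTE.
offload_saves_battery(2.0, 5.0, "wlan")
```

The same task can thus flip between "offload" and "compute locally" purely depending on which radio interface is currently available.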
The general definition of this context is that the user device does not have enough performance to run the AR application. It is only used for displaying the results and, therefore, it has to outsource the processing load. In such a context, it is configured to save processing expenses when using an external computing platform.
This approach is similar to the previous one but, in this case, the device is unable to compute any task; that is
This scenario supposes that using outsourcing resources has a monetary cost under the
However, in order to know the processing cost, dynamic monitoring is needed. For example, Table
Generally, cloudlet and MEC prices should be higher than cloud prices, since these layers cannot take advantage of economies of scale. In addition, they are limited to a more restricted area and thus have fewer competitors for outsourcing. From another point of view, the cloudlet layer can be deployed and owned by organizations, in which case it should have no cost for their users.
Regarding the Net function, the devices might have a communication cost. It should also be taken into account when deciding whether to offload through the local wireless network or the telecommunication network, since, in many cases, the former has no cost.
The general definition of these contexts is to minimize the computing delay of the application in order to meet quality constraints of demanding real-time AR algorithms.
In this scenario, the cost matrix is highly dynamic, since it depends on the current workload of each computer. In addition, the data must be moved to the target computing platform, and therefore the communication costs are responsible for a relevant part of the total delay [
As can be noted from the above data, the communication costs increase with the distance to the outsourcing platforms. In addition, the fog and cloudlet layers can be in the same Local Area Network (LAN) as the device or in a very close one. Of course, these costs depend on the communication technology used; with 5G technology, the delay of the MEC and cloud platforms will decrease significantly.
Regarding computing delay, in general, the computing costs of the intermediate computing platforms and the cloud server are significantly lower than those of the device itself. In this way, a decreasing computing cost must occur for most application tasks:
This consideration is quite frequent in environments designed for outsourcing of tasks from mobile devices or connected “things.”
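The trade-off in this real-time scenario, where computing delay falls with higher layers while communication delay grows, can be sketched with a greedy per-task rule (all delay figures are illustrative assumptions). Picking the platform that minimizes total delay per task is a cheap alternative to the NP-complete optimal schedule:

```python
# Real-time sketch: choose, per task, the platform minimizing computing
# delay plus the delay of moving the task's input data to it.

def pick_platform(compute_ms, link_ms_per_mb, data_mb):
    """Return the platform with minimum total delay for one task."""
    return min(compute_ms,
               key=lambda p: compute_ms[p] + link_ms_per_mb.get(p, 0.0) * data_mb)

# Illustrative figures: higher layers compute faster but are farther away.
compute_ms = {"device": 400.0, "fog": 120.0, "mec": 60.0, "cloud": 40.0}
link_ms_per_mb = {"device": 0.0, "fog": 2.0, "mec": 8.0, "cloud": 30.0}

pick_platform(compute_ms, link_ms_per_mb, 5.0)
```

With small inputs the intermediate layers win; as the data volume grows, the balance shifts back toward local processing, matching the recommendation to process large data volumes close to the source.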
Table
Some applications are better suited than others to make the most of this model. Certainly, applications containing intensive computing tasks are better candidates for outsourcing; moreover, tasks that require or produce a large volume of data should be processed as close as possible to the data sources to minimize communication costs.
MCC is a recent paradigm with disruptive implications for mobile computing and for the development of advanced IoT and CPS applications. This paradigm promotes the QoS of devices and “things” by outsourcing their computation tasks to external computing platforms. The majority of proposals in MCC take into account only the cloud layer for outsourcing the application workload. Other computing infrastructures hosted at different network layers (cloudlets, fog nodes, etc.) have also been introduced recently.
This work generalizes the idea of MCC paradigm to multiple computing platforms at different network layers. The proposed model considers not only the cloud layer, but also other network computing layers such as fog computing, Mobile Edge Computing, and cloudlet computing platforms.
To the best of our knowledge, this paper is the first work that extends the MCC paradigm to the available computing layers by considering them in a comprehensive and integrated fashion within the distributed architecture. To this end, a general framework of a multilayer network architecture is described. The proposed model formalizes the processing of the application workload and the implications of the distributed configurations for performance and communication needs. This model offers a versatile approach where different performance aspects can be considered within the same framework (time delay, power consumption, money, etc.). In this way, the proposal analyses, from a holistic perspective, the technical aspects involved in the Mobile Cloud Computing paradigm, taking into account the contributions and results of the most recent works on these topics.
This framework enables decision making regarding the scheduling and outsourcing of application tasks based on current and historical information about the systems. Several examples of these features have been described in three scenarios where different performance aspects and computing platforms are involved. Each of them is conditioned by user preferences or device configurations. The results show the versatility of the model to represent all the elements involved.
In the future, this research can be extended to cover the remaining challenges around this paradigm, for example, (a) the specification of the network topology and link capacity of the computing platforms along the network; (b) the definition of a middleware layer to drive the outsourcing process; (c) the introduction of discovery and broker services, and a suitable decision method for outsourcing; and (d) the design of mechanisms for collecting and disseminating the performance information, and the evaluation of the associated cost of doing so. In any case, the model can be extended from the proposal of this work by adding new modules and features.
The available data can be found in the references and web pages cited in the document.
The authors declare that they have no conflicts of interest.
This work was supported by the Spanish Research Agency (AEI) and the European Regional Development Fund (ERDF), under Project CloudDriver4Industry TIN2017-89266-R, and by the Conselleria de Educación, Investigación, Cultura y Deporte, of the Community of Valencia, Spain, within the program of support for research under Project AICO/2017/134.