Research Article Intelligent Scheduling Method Supporting Stadium Sharing



Introduction
Fitness has been gaining popularity in China over the past decade [1,2]. One of the main problems encountered in its promotion is the lack of stadiums [3]. As an important guarantee for the development of sports, sharing stadium resources between colleges or high/middle schools and society has become the direction of development under the condition of insufficient stadium resources; it can effectively alleviate the contradiction between citizens' growing demand for fitness and the limited social stadium resources. Therefore, it is necessary to construct a resource-sharing model in which college or school stadiums open to the society and to ensure that these stadium resources remain open sustainably.
Taking Beijing as an example, there are about 1,000 stadiums of different sizes in the whole city, some free and some charged, and enabling citizens to book a suitable stadium by phone or computer according to their own needs is the most urgent problem to address. The booking of a shared stadium can be regarded as task scheduling in mobile edge computing (MEC). Phones and computers are often deployed at the edge of the network [4][5][6]. Task offloading refers to the transfer of computing tasks from resource-constrained phones or computers to external platforms.
Mobile edge computing (MEC) is in effect a cloud for real-time and personalized services, offering an IT service environment and cloud computing capabilities. This environment is characterized by proximity, low latency, and high bandwidth; it also exposes real-time context information that applications can leverage to create value [7,8].
Given the limited resources and insufficient flexibility of the edge network, it is necessary to introduce a resource management and task scheduling mechanism to manage the edge network as a whole. A reasonable resource allocation and scheduling scheme is designed to make full use of edge resources, reduce latency, and improve the user experience of booking stadiums. The design of the scheme needs to consider many factors, such as the number of users and the distribution and quantity of computing resources, so as to maximize the utilization of resources at the edge. The optimized network latency makes it easier for citizens to book the stadium most suitable for them, bringing a better user experience.
Accordingly, the contributions of this study are summarized as follows. (i) The offloading latency is modeled after analyzing the task computing latency and the cloud computing latency. (ii) A joint scheduling of service caching and task algorithm is proposed, which divides the problem into two relatively easy subproblems: service caching and joint task scheduling. The rest of this study is organized as follows. Related work is reviewed in Section 2. The modeling of resource allocation and task scheduling is studied in Section 3. In Section 4, the joint scheduling of service caching and task algorithm is proposed. The experimental results are shown in Section 5, and Section 6 concludes this study.

Related Work
Given problems such as the limited resources and insufficient elasticity of the edge network, a resource allocation and task scheduling mechanism should be introduced to manage the edge network as a whole. Currently, there are a number of studies on resource allocation and task scheduling.
In order to cope with the insufficient processing capacity and limited resources of terminal devices, computation offloading is introduced in MEC. MEC offloading refers to the user end offloading computing tasks to the MEC network to overcome the device's shortcomings in resource storage, computing performance, and energy efficiency. Many resource allocation strategies for MEC have been proposed. In [9], two resource allocation mechanisms were proposed: one is an auction-based mechanism, which generates an envy-free allocation, and the other is a linear-program-based mechanism, which provides no guarantee of envy-freeness. To mitigate the impact of limited caching capacity on task offloading, an efficient convex programming method was designed in [10]. In [11], a game-based distribution scheme was presented to jointly and dynamically allocate the resources required for offloading in the mobile system. In [12], a power transmission-scheduling scheme for nodes in the industrial internet of things was proposed to lower the charging cost. In [13], joint partial task offloading and resource allocation for MEC with energy harvesting was proposed, taking the MEC network architecture into account. Computing tasks can be offloaded to the edge server, but the mobility of users cannot be predicted; in [7], an effective solution based on a relaxation-and-rounding strategy was proposed to handle the seamless migration problem while accounting for user mobility. To perform offloaded tasks securely, the Markov model was used in [14] to reduce the risk, and a secure mechanism was additionally proposed to lower the economic cost. Two of the most important aspects of MEC are the security of task offloading and the connectivity of users; in [15], the constraints on offloading rate and latency were studied to derive the optimal solution for MEC.
The power consumed by an edge server is considerable; in [16], tasks were offloaded by searching for the highest reward in order to cope with the limited power of the edge server.
Task scheduling has been studied for a long time in cloud computing. Scholars usually optimize resource allocation and user task scheduling to reduce latency. With the development of MEC and its combination with cellular networks, how to schedule the MEC tasks submitted by users has become an urgent problem to be solved.
There are also some other methods for task scheduling in MEC. Based on the combination and strong coupling of resource allocation, a joint resource allocation and task scheduling method was proposed in [17] to divide the problem into two subproblems, which could optimize the coupling in MEC. Considering the cooperation of convolutional neural networks (CNNs) and MEC, the task scheduling problem in [18] was divided into two problems: resource allocation and CNN task scheduling. To highlight the latency of task scheduling, the problem was transformed into a minimum-latency problem. It is necessary to transmit data in some special environments; in [19], a task scheduling strategy was proposed for transmitting encrypted data. In [20], a dynamic programming energy-saving task scheduling algorithm was proposed to minimize energy under a latency constraint. In [21], an MEC server was deployed in each area of the internet of vehicles, and the authors proposed that computing tasks could be scheduled to the MEC server through wireless networks. In [22], an energy-based task scheduling method was proposed to optimize power consumption. In [23], a novel secure framework was proposed to offload computing tasks to the MEC server with low latency and low power consumption. In [24], a stochastic integer nonlinear programming method was proposed to schedule computation-intensive tasks.

Modeling of Resource Allocation and Task Scheduling
Users book stadiums on phones or computers and offload the computing tasks to edge servers. The latency of task offloading can be divided into two parts: communication latency and computing latency. In the mobile edge network scenario, wireless transmission latency is not the focus of this study and can be regarded as a constant. When the task of booking a shared stadium arrives at the base station, if the directly connected edge server is able to process the task, there is no transmission latency. If the task needs to be sent to other edge servers in the edge domain for processing, the related transmission latency must be added. Thus, the problem of booking a shared stadium is transformed into one of service caching and booking-task scheduling.

Edge Caching.
In this study, a task can be performed only if the allocated resources exceed a certain threshold Th_m. A decision has to be made about which types of services to cache so that the average latency is lowest. The decision variable a_mn is introduced to indicate whether service m runs on edge server n. The symbols used in this study are shown in Table 1.

Edge Computing Latency.
Computing latency refers to the time required for a shared-stadium booking task to be processed on the edge server. The computing latency of a task is affected by multiple dimensions of computing resources at the same time; only the CPU frequency is considered in this study. Let 1/λ denote the parameter of the negative exponential service-time distribution, so that λ is the service rate. It is assumed that, once the CPU frequency allocated to a service exceeds the resource threshold Th_m, the service rate increases linearly with the CPU frequency. For example, if the coefficient of service m is μ_m and the CPU frequency allocated on edge server n is ESf_mn, then the service rate is λ_mn = μ_m × ESf_mn. A simple M/M/1 queuing model can then be used to model the computing latency of tasks [25].
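As a concrete illustration of this model, the sketch below (with hypothetical numbers; the function `mm1_latency` and its parameter names are ours, not the paper's) computes the mean M/M/1 sojourn time 1/(λ_mn − δ_mn) for one service queue:

```python
def mm1_latency(mu_m, esf_mn, delta_mn):
    """Mean sojourn time of an M/M/1 queue: 1 / (service rate - arrival rate).

    mu_m     : service-rate coefficient of service m (tasks/s per GHz, assumed units)
    esf_mn   : CPU frequency allocated to service m on edge server n (GHz)
    delta_mn : task arrival rate of service m at edge server n (tasks/s)
    """
    service_rate = mu_m * esf_mn  # lambda_mn = mu_m * ESf_mn
    if service_rate <= delta_mn:
        # The queue is unstable unless the service rate exceeds the arrival rate
        # (this is exactly constraint (i) of the optimization model).
        raise ValueError("unstable queue: service rate must exceed arrival rate")
    return 1.0 / (service_rate - delta_mn)

# Hypothetical numbers: service rate 20 * 2.0 = 40 tasks/s vs. 30 tasks/s arrivals.
latency = mm1_latency(mu_m=20.0, esf_mn=2.0, delta_mn=30.0)  # 1/(40-30) = 0.1 s
```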

Cloud Computing Latency.
Due to the limitation of edge resources and the existence of bursty traffic, the edge cannot always process users' offloaded booking tasks in time. A booking task may therefore be sent to a remote data center for execution, and the resulting latency is called cloud computing latency in this study. The communication latency experienced by cloud computing is much larger than the inner-domain communication latency l_m in the edge network. It is worth noting that cloud latency changes over time, but over a short period it can be assumed that the edge-to-cloud communication latency is a constant. The cloud data center has nearly infinite computing resources, so its computing latency is smaller than that of the edge and relatively fixed. Therefore, for each service m, a constant L_m can be used to represent the cloud computing latency within a period.

Optimization Model.
The offloading latency is modeled after analyzing the three kinds of latency. The purpose of this study is to optimize the latency performance of the edge network by adjusting the allocation of stadium resources and the scheduling of shared-stadium booking tasks. The latency of each user of each service is a random variable, and a weighted average with weights ω_m can be used to characterize the performance of the entire edge.
When a directly connected user of an edge server frequently uses a service, the resources of the related service can be increased by raising ESf_mn. When there are many user requests for multiple services on an edge server, it must be decided which services and what proportion of requests will be offloaded, and where the offloaded tasks will be performed. The variable e_mn represents the load adjustment for service m on edge server n. If e_mn is positive, the load of the same service on other base stations in the domain needs to be offloaded to the local server. Otherwise, a certain amount of user requests needs to be transferred to other nodes for computing, which may be other edge servers or cloud data centers. Once e_mn is determined, each node forwards user task requests to the controller according to the ratio η_mn = e_mn/δ_mn, and the controller then decides the destination node.
On the basis of the M/M/1 model in queuing theory, the computing latency of service m on edge server n is defined as follows.
l^c_mn = 1 / (a_mn × μ_m × ESf_mn − (δ_mn + e_mn)).  (1)

Even for the same service, each edge server has its own queue, so l^c_mn differs considerably between servers. Load balancing is required according to the number of users to be processed. The load proportion of each edge server is defined as follows:
η^c_mn = (δ_mn + e_mn) / task_m.  (2)
So the average computing latency of service m at the edge is defined as follows:

Σ_{n∈Q} l^c_mn × η^c_mn.  (3)

In addition to the booking tasks performed at the edge, some booking tasks are offloaded to the cloud for execution, and the proportion of booking tasks performed on the cloud is determined by the adjustment e_mn of the arrival rate at each edge server. The latency introduced during inner-domain transmission is calculated similarly. Therefore, the cloud and inner-domain forwarding ratios can be defined as follows:

η^Cloud_m = −(1/task_m) Σ_{n∈Q} e_mn,  η^l_m = (1/task_m) Σ_{n∈Q} max(e_mn, 0).  (4)

Table 1: Symbols used in this study.
C: total CPU frequency of an edge server
δ_mn (m ∈ P, n ∈ Q): direct request arrival rate of service m on edge server n
μ_m (m ∈ P): service rate coefficient of service m
L_m (m ∈ P): cloud computing latency of service m
l_m (m ∈ P): inner-domain forwarding latency of service m
Th_m (m ∈ P): minimum resources for service m to operate
ω_m (m ∈ P): weight of service m
task_m (m ∈ P): total task arrival rate of service m in the edge domain
a_mn (m ∈ P, n ∈ Q): cache decision variable
e_mn (m ∈ P, n ∈ Q): arrival rate adjustment decision variable
ESf_mn (m ∈ P, n ∈ Q): resource allocation decision variable

To sum up, for a certain service m, the average latency can be defined as follows:

l^a_m = Σ_{n∈Q} l^c_mn × η^c_mn + η^Cloud_m × L_m + η^l_m × l_m.  (6)

By performing a weighted average with the service weights, the complete expression of the average latency in the edge domain can be defined as follows:

L = Σ_{m∈P} ω_m × l^a_m = Σ_{m∈P} Σ_{n∈Q} (ω_m/task_m) × (δ_mn + e_mn) / (a_mn × μ_m × ESf_mn − (δ_mn + e_mn)) + Σ_{m∈P} ω_m × (η^Cloud_m × L_m + η^l_m × l_m).  (7)
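The domain-wide average latency above can be evaluated numerically as in the following sketch (the function name, the dict-based data layout, and the treatment of the forwarding ratio η^l_m are our illustrative assumptions, not the paper's implementation):

```python
def average_latency(omega, task, delta, e, a, mu, esf, L_cloud, l_fwd):
    """Weighted average latency over the edge domain.

    Per-service inputs (omega, task, mu, L_cloud, l_fwd) are dicts m -> value;
    per-(m, n) inputs (delta, e, a, esf) are dicts (m, n) -> value.
    Queues are assumed stable (service rate above the adjusted arrival rate).
    """
    servers = {n for (_, n) in delta}
    total = 0.0
    for m in omega:
        edge = 0.0
        for n in servers:
            lam = delta[(m, n)] + e[(m, n)]      # adjusted arrival rate
            if a[(m, n)] and lam > 0:
                rate = mu[m] * esf[(m, n)]       # M/M/1 service rate
                # l_c * eta_c for this queue
                edge += (lam / task[m]) / (rate - lam)
        # Ratios of work leaving the edge (to the cloud) and forwarded in-domain
        eta_cloud = -sum(e[(m, n)] for n in servers) / task[m]
        eta_l = sum(max(e[(m, n)], 0.0) for n in servers) / task[m]
        total += omega[m] * (edge + eta_cloud * L_cloud[m] + eta_l * l_fwd[m])
    return total
```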
There are certain constraints on the decision-making, which are summarized as follows.
(i) In [26], the convergence method divides the population into multiple clusters by using a clustering strategy. To ensure the stability of each computing queue, the service rate should be higher than the request arrival rate, which ensures that shared-stadium booking tasks are performed quickly and improves user experience; then, a_mn × μ_m × ESf_mn > δ_mn + e_mn, ∀m ∈ P, ∀n ∈ Q.  (8)
(ii) An edge server can forward at most its directly connected tasks, so the service arrival rate cannot be less than zero after adjustment; then, δ_mn + e_mn ≥ 0, ∀m ∈ P, ∀n ∈ Q.  (9)
(iii) Only internal demands are processed in the edge domain, and the remaining demands are handled by the cloud. Therefore, the sum of the service arrival rate adjustments should be less than or equal to zero; then, Σ_{n∈Q} e_mn ≤ 0, ∀m ∈ P.  (10)
(iv) The total resources allocated to the different services on the same edge server cannot exceed the server's resource limit; then, Σ_{m∈P} a_mn × ESf_mn ≤ C, ∀n ∈ Q.  (11)
(v) The CPU frequency allocated to a service should be greater than or equal to the minimum threshold; then, ESf_mn ≥ Th_m, ∀m ∈ P, ∀n ∈ Q.  (12)
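The five constraints can be checked mechanically. The sketch below is an illustrative feasibility test; the function and variable names are ours, and applying constraints (i) and (v) only where a service is actually cached (a_mn = 1) is an assumption about the model's intent:

```python
def feasible(a, esf, e, delta, mu, th, C, services, servers):
    """Check constraints (i)-(v). th[m] is the minimum CPU threshold Th_m,
    C the per-server CPU budget; dict layouts as in the latency model."""
    eps = 1e-9
    for m in services:
        # (iii) net arrival-rate adjustment per service must not exceed zero
        if sum(e[(m, n)] for n in servers) > eps:
            return False
        for n in servers:
            lam = delta[(m, n)] + e[(m, n)]
            if lam < -eps:                                  # (ii) adjusted rate >= 0
                return False
            if a[(m, n)]:
                if mu[m] * esf[(m, n)] <= lam:              # (i) queue stability
                    return False
                if esf[(m, n)] < th[m] - eps:               # (v) minimum allocation
                    return False
    for n in servers:
        # (iv) per-server capacity
        if sum(a[(m, n)] * esf[(m, n)] for m in services) > C + eps:
            return False
    return True
```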

Joint Decision Algorithm
The intelligent scheduling problem supporting stadium sharing is a mixed-integer nonlinear program, and a globally optimal solution cannot be found in polynomial time. In this study, a joint scheduling of service caching and task algorithm is proposed, which divides the problem into two relatively easy subproblems: service caching and joint task scheduling. The service caching subproblem decides the decision variable a_mn; a greedy strategy is proposed to determine which services should be cached on each edge server. The joint task scheduling subproblem then further optimizes resource allocation and task scheduling on the basis of the determined service caching.

Service Caching Subproblem.
The service caching subproblem is a zero-one programming problem that is tightly coupled with the subsequent subproblem. When the number of edge servers managed by the controller is large, the running time of a conventional solver increases rapidly and the problem becomes difficult to solve. Therefore, this study presents a greedy strategy to determine the service caching on each edge server, which greatly reduces the solution time while avoiding excessive performance loss.
Caching the right services at the edge is essential so that more of the booking tasks that users offload can be performed at the edge rather than in the cloud. Therefore, the task arrival rate of a service is an important reference index. In addition, different tasks have different latency tolerances, and tasks with lower tolerance and higher weight should be satisfied first. To balance these two points, the "cache priority index" is introduced, as shown in equation (13). On each edge server, services are ranked by this priority index and allocated computing resources according to their minimum operating requirements until the server's resources are exhausted. For each service to which computing resources are allocated, the service caching decision variable a_mn is set to 1:

cpi_mn = ω_m × δ_mn, ∀m ∈ P, ∀n ∈ Q.  (13)
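A minimal sketch of this greedy placement, assuming the ranking uses cpi_mn = ω_m × δ_mn from equation (13) (the function name and data layout are illustrative):

```python
def greedy_service_caching(omega, delta, th, C, services, servers):
    """Greedy cache placement: on each server, rank services by the cache
    priority index cpi_mn = omega_m * delta_mn and admit them in descending
    order while the minimum resource requirement Th_m still fits."""
    a = {}
    for n in servers:
        order = sorted(services,
                       key=lambda m: omega[m] * delta[(m, n)],
                       reverse=True)
        surplus = C                      # remaining CPU budget on server n
        for m in order:
            if surplus >= th[m]:
                a[(m, n)] = 1            # cache service m on server n
                surplus -= th[m]
            else:
                a[(m, n)] = 0
    return a
```

Each service admitted here receives only its minimum Th_m; any leftover CPU is distributed by the joint scheduling step described next.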

Joint Scheduling of Service Caching and Task.
Once the caching types are determined, there may be a resource surplus to be allocated among the services cached on the edge server. The scheme for allocating computing resources affects the waiting time of the service queues. To optimize the latency, it is also necessary to determine the proportion of computing tasks run at the edge and in the cloud. Therefore, two decisions must be made. (i) The computing resources to be allocated to each service are determined by ESf_mn. (ii) The proportion of tasks to be adjusted is determined by e_mn.
The two decisions cannot be made separately; the results must be obtained in a single optimization process. After the service caching decision variable a_mn is determined, the original objective function is transformed from a mixed-integer nonlinear programming problem into a general constrained nonlinear programming problem. The decision procedure, Algorithm 1, can be summarized as follows.
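Algorithm 1 initializes a logarithmic barrier lb = 0.01, which suggests an interior-point-style treatment of this continuous subproblem. The sketch below illustrates the idea on a toy instance with two cached services sharing one server's CPU budget; the coordinate-descent solver and all names are our illustrative assumptions, not the paper's method:

```python
import math

def barrier_objective(x, delta, mu, C, lb=0.01):
    """Sum of two M/M/1 latencies under a shared CPU budget C, plus
    logarithmic barrier terms keeping the iterate strictly feasible."""
    f1, f2 = x
    slack = C - f1 - f2             # capacity slack, constraint (iv)
    s1 = mu[0] * f1 - delta[0]      # stability slack of queue 1, constraint (i)
    s2 = mu[1] * f2 - delta[1]
    if min(slack, s1, s2) <= 0:
        return float("inf")         # outside the interior of the feasible region
    latency = 1.0 / s1 + 1.0 / s2
    return latency - lb * (math.log(slack) + math.log(s1) + math.log(s2))

def solve_allocation(delta, mu, C, lb=0.01, iters=200):
    """Crude coordinate descent on the barrier objective (illustrative only)."""
    x = [0.45 * C, 0.45 * C]        # start strictly inside the feasible region
    step = C / 10.0
    best = barrier_objective(x, delta, mu, C, lb)
    for _ in range(iters):
        improved = False
        for i in (0, 1):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                val = barrier_objective(y, delta, mu, C, lb)
                if val < best:
                    x, best = y, val
                    improved = True
        if not improved:
            step *= 0.5             # refine once no move helps
            if step < 1e-6:
                break
    return x

# The more heavily loaded service should end up with more CPU.
f = solve_allocation(delta=[30.0, 10.0], mu=[20.0, 20.0], C=4.0)
```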

Performance Metrics.
The algorithm in this study is proposed to reduce the latency of shared-stadium booking requests and improve user experience. The following performance metrics are used. (i) The average latency of users in the system, which is also the direct optimization target of the algorithm. (ii) The satisfaction rate of quality of experience (QoE), which reflects whether taking the average latency as the optimization goal is reasonable. The simulation parameter settings are shown in Table 2.
A QoE latency limit is set in each edge decision sample. When a user's latency is smaller than the QoE latency limit, the QoE requirement is considered to be met. The average latency of each queue can be obtained through the queuing model, so this study directly takes whether the average latency is below the QoE latency limit as the criterion for whether users' QoE is satisfied. Where machine learning meets congestion control, the performance of reinforcement-learning-based congestion control algorithms was explored in [27].
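This per-queue criterion can be turned into a satisfaction rate by weighting each queue by the requests it serves. The sketch below is our reading of the metric (the function name and data layout are assumptions; the 0.5 s default mirrors the limit used in the experiments):

```python
def qoe_satisfaction_rate(queue_latencies, queue_loads, qoe_limit=0.5):
    """Fraction of user requests whose queue's average latency is within the
    QoE limit. Each queue counts in proportion to the requests it serves."""
    served = sum(load
                 for lat, load in zip(queue_latencies, queue_loads)
                 if lat <= qoe_limit)
    total = sum(queue_loads)
    return served / total if total else 1.0
```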
Edge load refers to the computation offloading requests generated by users in the edge domain within a unit of time, that is, the arrival rate of requests. In this study, a benchmark value is designed for the load size, which expresses the edge load relative to the edge's capacity, so as to remove the influence of the number of service types p and the number of edge servers q. It is assumed that each edge server has a computing resource of Cf (such as the base frequency of the CPU). Figure 1 shows the comparison of the effect of performing edge cooperation in the edge domain or not.

Result Analysis.
Algorithm 1: Joint scheduling of service caching and task.
Input: the number of iterations iN
Output: a_mn, ESf_mn, e_mn
(01) Initialization: logarithmic barrier lb = 0.01
(02) Initialization: a_mn = 0, ∀m ∈ P, ∀n ∈ Q
(03) for edge server n ∈ Q do
(04)   for service m ∈ P do
(05)     compute the service cache priority index cpi_mn = ω_m × δ_mn
(06)   end for
(07)   rank cpi_n in descending order
(08)   resource surplus rs = C
(09)   for service m in the cache order co_m do
(10)     if rs ≥ Th_m then
(11)       a_mn = 1
(12)       rs = rs − Th_m
(13)     end if
(14)   end for
(15) end for
(16) …

Three classical resource allocation and task scheduling algorithms from recent years are selected for comparison: MEC-based resource management and task scheduling (MEC-RMTS) [28], coalitional game-based cooperative offloading (CGCO) [29], and the multiservice task computing offload algorithm (MTCOA) [30]. The MEC-RMTS framework is used for efficient task offloading in the internet of things, CGCO is a cooperative offloading algorithm based on the coalitional game, and MTCOA solves the multiservice task offloading problem. The data in the figures are measured with p = 10 and q = 20. Each edge server makes its service caching and scheduling decisions separately.
It can be seen from Figure 1 that the proposed algorithm, with its edge cooperation idea, achieves a lower average system latency at lower load levels. This is because using idle edge servers to provide computing services for neighboring nodes improves the overall performance of the system. As the edge load increases, the average latency of the three baselines gradually increases, which indicates that the edge resources are unable to process the edge load and more tasks must be transferred to the cloud for execution. Edge cooperation improves the stress resistance of the edge system. Through analysis of the results, it is found that the proposed algorithm with edge cooperation reduces latency by 70% on average compared with the algorithm without edge cooperation. Figure 2 shows the proportion of computing tasks transferred to the cloud for execution, called the cloud offloading ratio. It can be seen from Figure 2 that the proposed algorithm significantly reduces the cloud offloading ratio under low and medium pressure, thereby achieving a lower average latency. Under heavy load, edge cooperation can no longer reduce latency and degenerates into the uncooperative mode. Figure 3 shows the proportion of tasks that are transmitted but not yet run in the above process. More network transmission may introduce more transmission overhead, but it is also more conducive to resource concentration. It can be seen from Figure 3 that the proposed algorithm transfers more tasks a second time to relieve the pressure of local task execution. Figure 4 shows the QoE satisfaction rate of users under the different scheduling algorithms (the latency limit is 0.5 s). As can be seen from Figure 4, the proposed algorithm guarantees a better QoE satisfaction rate under low pressure.
Whichever scheduling algorithm is used, the QoE satisfaction rate of users decreases as the edge load pressure increases. However, in every load range, the proposed algorithm achieves better performance, which also verifies that it meets the design goal of improving the edge's resistance to peak stress.

Conclusions
National fitness is imperative, but a major problem encountered in its implementation is the lack of stadiums. Sharing stadium resources between colleges or schools and society has become the direction of change under the current shortage of stadium resources, which can effectively alleviate the contradiction between the growing demand for fitness and the limited social stadium resources. In this study, the characteristics of edge resource allocation and task scheduling are analyzed, and a joint scheduling of service caching and task algorithm is proposed. According to the minimum resource allocation requirements of service caching, the decision-making problem of the edge network is modeled as a joint resource allocation and task scheduling model, and the resource allocation and task scheduling schemes are obtained in a single optimization process. Experimental results demonstrate that the proposed algorithm can effectively integrate and utilize edge resources and improve both latency and QoE. Currently, there are few studies on collaborative task scheduling between mobile devices and MEC servers. The algorithm proposed in this study is for tasks that cannot be decomposed, that is, each computing task can only be performed either locally or entirely on the MEC server. However, many mobile applications can in practice be decomposed at a fine granularity. For such computing tasks, some subtasks can be offloaded to the MEC server for execution while others are executed locally. More detailed offloading decisions need to be formulated, which can further reduce task latency.

Data Availability
The data used to support the findings of the study are included within the article.

Conflicts of Interest
The author declares no conflicts of interest.