Deploying GIS Services into the Edge: A Study from Performance Evaluation and Optimization Viewpoint

Geographic information system (GIS) is an integrated collection of computer software and data used to view and manage information about geographic places, analyze spatial relationships, and model spatial processes. With the growing popularity and wide application of GIS, performance has become a critical requirement, especially for mobile GIS services. To address this challenge, this paper optimizes the performance of GIS services by deploying them into an edge computing architecture, an emerging computational model that enables efficient offloading of service requests to edge servers, reducing the communication latency between end-users and GIS servers deployed in the cloud. Stochastic models describing the dynamics of GIS services in an edge computing architecture are presented, together with the corresponding quantitative analyses of their performance attributes. Furthermore, an optimization problem is formulated for service deployment in such an architecture, and a heuristic approach for obtaining near-optimal performance is designed. Simulation experiments based on real-life GIS performance data are conducted to validate the effectiveness of the approach presented in this paper.


Introduction
Geographic information system (GIS) has become a key technology for capturing, storing, analyzing, and displaying spatial data [1]. In order to provision GIS services with high Quality of Service (QoS), the performance of the system is a critical issue [2]. In recent years, several research works have been dedicated to optimizing the performance of GIS services from different aspects [2][3][4].
Edge computing is an emerging technique for optimizing computing systems by performing data processing at the edge of the network, near the source of the original data [5]. It pushes applications, data, and services away from centralized points (i.e., the cloud) to the logical extremes of a network; thus, the communication latency for processing user requests can be significantly reduced [6,7], while fault-tolerance [8], privacy [9][10][11][12], and security [13] are enhanced. With an edge computing architecture, the performance as well as the scalability of GIS systems can be dramatically improved [14].
Although there have been some research studies focusing on improving the QoS of GIS services by applying edge computing techniques, few of them paid attention to the performance evaluation issue. There is a lack of analytical approaches for evaluating and optimizing the performance of GIS systems that can quantitatively indicate the impact of deploying GIS services into systems with the edge computing paradigm. It is quite challenging to capture the dynamics of GIS systems, especially after constructing them with an edge computing architecture, since the introduction of the edge layer greatly complicates task scheduling and request processing. Furthermore, whether to dispatch a request to the near-end edge servers or the far-end cloud servers to obtain the optimal QoS remains largely unexplored.
In this paper, we make an attempt at filling this gap by presenting a performance evaluation and optimization study of GIS services deployed in an edge computing architecture. A theoretical model for capturing the dynamics of edge computing systems running GIS services is presented, and its corresponding quantitative analysis is conducted. With the analytical results, an optimization problem is formulated and a service deployment scheme is designed for obtaining near-optimal performance of GIS services. With performance data generated from real-world GIS systems, simulation experiments are conducted to validate the effectiveness of the approach. The remainder of this paper is organized as follows. In Section 2, we discuss the related work most pertinent to this paper. In Section 3, we present a theoretical model for formulating GIS systems with an edge computing architecture, and provide a quantitative analysis of the model. In Section 4, we formulate an optimization problem and design a performance optimization approach. In Section 5, we conduct real-life data based experiments to validate the efficacy of our scheme. Finally, we conclude the paper in Section 6.

Performance Evaluation.
A straightforward approach to performance evaluation is to obtain the performance metrics by direct measurement. Due to the dynamics of the system and environments, a series of experimental measurements is commonly required, and statistical techniques are applied to handle the original measurement data. Truong and Karan [15] designed a mobile application for performance measurement and studied the impact of performance and data quality for mobile edge cloud systems. Morabito et al. [16] constructed a real testbed to evaluate container-based solutions in IoT environments at the network edge, and analyzed the power and resource consumption for performance evaluation. Chen and Kunz [17] combined measurement and emulation and designed a network emulator for performance evaluation of optimal protocols. Qi et al. [18] collected data from 18,478 real-world APIs and 6,146 real-world apps, and designed a data-driven approach for web service recommendation. Baptista et al. [19] deployed a web-based GIS and used two datasets as the benchmark to evaluate the performance of several optimization techniques in Web GIS.
Although the measurement-based approaches are effective in performance evaluation, their overhead is so expensive that, especially in the design phase of a computing system, one may not be able to afford implementing all the feasible schemes for comparison in reality [20]. Therefore, an alternative type of approach has emerged, which applies theoretical models to formulate a system and then provides quantitative analysis by solving the models. With significantly lower overhead, the model-based approaches are able to evaluate the performance of schemes before their implementation, making them increasingly popular in system design and improvement. Wang et al. [21] applied queueing theory to formulate an edge computing system, based on which a near-optimal offloading scheme for the Internet of Vehicles was designed. Ni et al. [22] generalized Petri net models and conducted performance evaluation of resource allocation strategies in edge computing environments. Li et al. [23] presented a performance estimation approach using an M/M/k queueing model in Internet of Things (IoT) environments, which further helped to explore the optimal QoS-aware service composition scheme.

Performance Optimization.
Performance optimization is commonly based on the evaluation results and is used to optimize the performance of a system by designing new policies, selecting the best candidate, or enhancing existing ones. One popular way is to collect the performance data of the policies, by either measurement-based or model-based approaches, and search for the optimal one. Due to an extremely large search space, such search-based optimization approaches may suffer from search-space explosion, and thus how to search for the optimal solution with high efficiency has become a hot topic. Mebrek et al. [24] considered the QoS and energy consumption in edge computing for IoT, formulated a constrained optimization problem, and designed an evolutionary algorithm-based approach for searching the feasible solutions. Wu et al. [25] designed a service composition scheme for mobile edge computing systems by combining simulated annealing and a genetic algorithm. Zhang et al. [26] used neural network models for search-based optimization and designed a proactive video push scheme for reducing bandwidth consumption in hybrid CDN-P2P VoD systems. Xu et al. [27] designed a multiobjective evolutionary algorithm based on decomposition for adaptive computation offloading for edge computing in the 5G-envisioned Internet of Connected Vehicles (IoCV).
Another feasible way is to build a mathematical model illustrating the relationships between the system parameters and the performance metrics, based on which optimization problems can be formulated and optimal policies can be obtained. Zhang et al. [28] presented a graph-based model for service composition and designed an optimization approach of service composition with QoS correlations. Mao et al. [29] formulated the resource management as a Markov decision process, and further applied deep reinforcement learning to construct an optimization algorithm. Chen et al. [30] applied queueing theory to capture the dynamics in the mobile edge computing environment, formulated a stochastic optimization problem and designed an energy-efficient task offloading and frequency scaling scheme for mobile devices.

Summary.
Although there have been several cutting-edge research works dedicated to performance evaluation and optimization for edge computing systems, this topic remains largely unexplored in geographic information systems. Since the existing literature has shown that edge computing is able to improve the performance of computing systems, especially for real-time services, we believe that a comprehensive study on the performance evaluation and optimization of GIS services deployed in an edge computing architecture will have theoretical reference and practical value for the design, management, and improvement of geographic information systems.
Previously, we have conducted some research works on the topic of model-based performance evaluation and optimization in edge computing service systems. We have applied a queueing network model to the performance evaluation of IoT services deployed in the edge computing paradigm [31], and further put forward a simulation-based optimization approach for efficient service selection [32]. With queueing theory, we also proposed a multiqueue approach to energy-efficient task scheduling for sensor hubs in IoT using the Lyapunov optimization technique [33]. In [34], we investigated the task scheduling and resource management problem and designed an equivalent linear programming problem which could be efficiently and elegantly solved at polynomial computational complexity. In addition, we have explored generalized stochastic Petri net models for model-based performance evaluation and search-based optimization for both performance and reliability metrics [35]. However, performance modeling, analysis, and optimization meet with new challenges in the context of GIS, due to the characteristics of different task arrivals and service procedures.
This paper is our first attempt to study the model-based evaluation and optimization issue for GIS services.

Analytical Model for Performance Evaluation
In this section, we apply queueing theory to construct an analytical model for performance evaluation of GIS services in the edge computing paradigm. We first present the atomic queueing model of a GIS server and then propose a queueing network model for evaluating the overall performance of an edge computing system. The quantitative analyses of the performance metrics are also presented by solving the models mathematically. The main notations and definitions used in the following discussions are provided in Table 1.

Queueing Model of a GIS Server.
An atomic service represents a type of relationship-based interaction or activity between the service provider and the service consumer to achieve a certain business goal or solution objective [36]. In a GIS system, there are a number of atomic services that provide different functionalities. For example, users upload requests to view satellite pictures of a certain area, sensors upload the temperature, humidity, and other data of a certain area in real time, and servers analyze and process large amounts of existing data. Due to the difference in the amount of calculation, services with a small amount of calculation can usually be completed on local devices, while services with heavy computational workloads should be deployed on more powerful edge servers. The dynamic behavior of atomic services includes the following three basic parts. First, a request arrives at the service node to complete a specific task according to its needs. These requests can be simple requests from users, routine sensing tasks from sensors, or complex data analyses in data centers. Second, because the resources on the service node are not unlimited, requests sometimes have to wait in the queue until the service is available. If the current queue is empty, an incoming request is processed by the service immediately without waiting in line.
Third, after the request is processed, it leaves the system.
In a real-life GIS service system, a single server can handle a number of different types of services, and the capacity of each queue should be finite. Thus, we consider a multiqueue, finite-capacity, single-server queueing model, where each queue deals specifically with tasks of the same priority.
It has been shown that task arrivals above the session level in distributed systems can be well formulated by a Poisson distribution [37]. Moreover, the measured data indicate that the service times of the GIS system follow a general distribution. Therefore, we formulate a GIS server as a q-M/G/1/K_i queueing model [38].
We consider a scenario consisting of a set Q of q (|Q| = q) queues. Each queue q_i, where i ∈ Q = {1, 2, ..., q}, deals specifically with tasks of the same priority and is connected to the same server. Tasks arrive at q_i according to an independent Poisson process with rate λ_i and are processed by the server with a generally distributed service time with rate μ_i. The order in which the server accesses the queues is determined by the queue selection rule (QSR), or queue scheduler. To facilitate our analysis, we define the state of the multiqueue model as a q-tuple array x = [n_1, n_2, ..., n_q], where n_i ∈ [0, K_i] represents the number of tasks currently in q_i.
With this description, the state vector x clearly describes the current occupation of each queue. Furthermore, we introduce a secondary variable s to denote the queue currently being served. In this sense, the augmented form [x; s] ∈ R^{q+1}, with s ∈ {1, 2, ..., q}, gives a more compact representation. Figure 1 illustrates an example of the queueing model where [x; s] = [3, 0, 2; 1].
Since the service time follows a general distribution, the memoryless property of state evolution in traditional Markovian queueing models does not hold. To facilitate the analysis, we choose as observation times the moments at which a task has just completed its service. At these points, the Markovian property is retained, and the arrival and service processes restart. For the sake of distinction, [x; s*] (s with a superscript *) is used to denote the state observed at a departure moment. The corresponding state probabilities of [x; s] and [x; s*] are denoted as p_{x;s} and π_{x;s*}, respectively.
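To make the state representation concrete, the following sketch (hypothetical capacity values; plain Python) enumerates the feasible states [x; s] of a small multiqueue model, using s = 0 to mark the null state:

```python
from itertools import product

def feasible_states(capacities):
    """Enumerate the feasible states [x; s] of the multiqueue model.

    capacities: per-queue capacities K_i (the values below are hypothetical).
    A state is (n_1, ..., n_q, s), where n_i is the number of tasks in
    queue i and s is the 1-based index of the queue currently being served;
    s = 0 marks the null (empty) state."""
    q = len(capacities)
    states = []
    for x in product(*(range(k + 1) for k in capacities)):
        if sum(x) == 0:
            states.append(x + (0,))              # null state: nothing to serve
        else:
            for s in range(1, q + 1):
                if x[s - 1] > 0:                 # the served queue must be non-empty
                    states.append(x + (s,))
    return states

# q = 3 queues with capacities K = [3, 2, 2]; contains e.g. the state [3, 0, 2; 1]
states = feasible_states([3, 2, 2])
```

Enumerating the feasible states this way also yields N_{q,π}, the dimension of the state transition matrix used later.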

Queue Transition Probability (QTP).
Considering the state [x; s*], transitions into this state come either (i) from an arbitrary state [x; r*] or (ii) from the null state [0, 0, ..., 0; r*], and the QTP differs between the two cases. In Case (i), the QTP is related to the queue selection rule (QSR). For example, if the QSR is FCFS (first-come-first-served), the corresponding queue transition probability is given by equation (1).

However, in Case (ii), the QTP depends only on the task arrival rates, as represented by equation (2). Since the QTP in this case is merely related to the task arrival rates, the QSR is ignored in equation (2); for convenience, we do not label the QSR unless it must be used.

Task Arrival Probability (TAP).
The TAP of k tasks arriving during the service interval in the M/G/1/∞ model is represented as

α_k = ∫_0^∞ e^(−λt) · (λt)^k / k! · b(t) dt,   (3)

where b(t) is the probability density function (PDF) of the service time. When solving the multiqueue model, α_k extends naturally to the multiqueue TAP

α_{l_1, l_2, ..., l_q; s} = ∫_0^∞ [ ∏_{k=1}^{q} e^(−λ_k t) · (λ_k t)^{l_k} / l_k! ] · b_s(t) dt,   (4)

where α_{l_1, l_2, ..., l_q; s} is the joint probability of l_k tasks arriving in q_k, for each k, during a service interval of q_s, and b_s(t) is the corresponding service-time PDF. More specifically, the limited capacity of each queue must be taken into account. In the q-M/G/1/K_i case, equation (4) needs to be modified further: since there are already n_i tasks in q_i, the maximum number of additional tasks allowed by q_i is K_i − n_i, so arrivals beyond the capacity are aggregated as Σ_{m_i = K_i − n_i}^{∞} α_{l_1, ..., m_i, ..., l_q; s}. Furthermore, assuming that queues q_{k+1} to q_q are completely filled with tasks, α_{l_1, l_2, ..., l_q; s} is formulated by equation (5).
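As a hedged numerical sketch of the single-queue TAP integral, the snippet below evaluates α_k by quadrature; the exponential service PDF is an assumption chosen because it has the closed form α_k = μλ^k/(λ+μ)^{k+1} against which the integration can be checked, and t_max/steps are accuracy knobs, not part of the model:

```python
import math

def tap(k, lam, b, t_max=50.0, steps=20000):
    """Task Arrival Probability: the chance of exactly k Poisson(lam)
    arrivals during one service interval with service-time PDF b(t), i.e.
        alpha_k = integral_0^inf e^{-lam*t} (lam*t)^k / k! * b(t) dt,
    approximated here by a composite trapezoid rule on [0, t_max]."""
    h = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid end-point weights
        total += w * math.exp(-lam * t) * (lam * t) ** k / math.factorial(k) * b(t)
    return total * h

# Exponential service with rate mu: closed form is mu * lam^k / (lam + mu)^(k + 1)
lam, mu = 1.0, 2.0
b_exp = lambda t: mu * math.exp(-mu * t)
alpha0 = tap(0, lam, b_exp)   # should be close to 2/3 for lam = 1, mu = 2
```

Any other service-time PDF b(t) (e.g., deterministic or Erlang) can be dropped in without changing the routine, which is the point of the general-distribution model.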

State Transition Equations (STEs).
After the QTP and TAP have been solved, the state probability π_{x;s*} of [x; s*] satisfies the following STE governing the dynamics of the queueing system, whose right-hand side consists of two terms: the first term, built from π_{0,...,0;r*}, β_{0→s}, and α_{n_1,...,n_q;s}, is the probability of transitioning from the null state to [x; s*], while the second term is the probability of transitioning from [x; r*] to [x; s*] (equation (6)). Based on the above formulation, the STEs composed of all feasible states can be expressed more concisely in matrix-vector form:

π = π A_π,   (7)

where N_{q,π} is the number of all feasible states, π is the row vector aggregating the π_{x;s*}, and the state transition matrix A_π ∈ R^{N_{q,π} × N_{q,π}} consists of products of QTP and TAP.
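The matrix-vector form π = πA_π can be solved with any stationary-distribution routine; the following generic power-iteration sketch illustrates the computation on a toy 2×2 chain, not the paper's actual A_π (which would be assembled from QTP × TAP products):

```python
def stationary_distribution(A, iters=10000, tol=1e-12):
    """Solve pi = pi A for a row-stochastic matrix A by power iteration.
    A is a plain list of rows; in the paper's setting, A would be the
    transition matrix A_pi over all feasible states [x; s*]."""
    n = len(A)
    pi = [1.0 / n] * n                        # uniform starting guess
    for _ in range(iters):
        new = [sum(pi[i] * A[i][j] for i in range(n)) for j in range(n)]
        if max(abs(new[j] - pi[j]) for j in range(n)) < tol:
            pi = new
            break
        pi = new
    total = sum(pi)                           # renormalize against drift
    return [p / total for p in pi]

# toy 2-state chain; its stationary distribution is (5/6, 1/6)
pi = stationary_distribution([[0.9, 0.1], [0.5, 0.5]])
```

For large state spaces, a sparse linear solver would replace the dense iteration, but the fixed-point structure is the same.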

State Balance Equations (SBEs).
Based on the QTP, TAP, and π, the SBEs can be set up, from which the state probability p_{x;s} of [x; s] is easily solved. According to the fact that task flows must be conserved in the equilibrium status, the SBEs are expressed case by case, from the null state up to (iii) the arbitrary state probability p_{x;s} with s ≠ 0, where ψ_s = Σ_{∀n_i} π_{n_1,...,n_s,...,n_q;s*}. Several performance measures can then be obtained. For example, the average queue length L_s can be calculated by

L_s = Σ_{m>0} m · P_{m;s},

where P_{m;s} is the probability that there are m tasks in q_s and can be expressed as P_{m;s} = Σ_{i=1}^{q} Σ_{n_s = m} p_{n_1,...,n_s,...,n_q;i}, m > 0.
In equation (13), λ_eff = λ(1 − P_{K_s;s}), where P_{K_s;s} is the probability that q_s is completely filled with tasks.
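A minimal sketch of these two measures, using a toy marginal distribution P_{m;s} (the probability values are illustrative, not taken from the paper):

```python
def mean_queue_length(P):
    """L_s = sum_{m>0} m * P_{m;s}, where P[m] is the marginal probability
    of m tasks in queue s."""
    return sum(m * pm for m, pm in enumerate(P))

def effective_arrival_rate(lam, p_full):
    """lambda_eff = lambda * (1 - P_{K_s;s}): arrivals that find the queue
    completely filled are blocked, so only the unblocked fraction enters."""
    return lam * (1.0 - p_full)

P = [0.5, 0.3, 0.2]                            # P_{0;s}, P_{1;s}, P_{2;s} with K_s = 2
L = mean_queue_length(P)                       # 0*0.5 + 1*0.3 + 2*0.2 = 0.7
lam_eff = effective_arrival_rate(2.0, P[-1])   # 2.0 * (1 - 0.2) = 1.6
```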

Queueing Network Model of an Edge Computing System.
With the rapid development of the Internet and its applications, a single server can no longer meet the needs of the vast majority of users and has been replaced by two-tier or even multitier server architectures. Therefore, we introduce an edge server into the GIS system to provide a higher quality of service.

All users, sensors, and other individuals that can send requests are called terminals. In the GIS system, the edge server covers all the task requests of the terminals. We define t_i as the i-th (i ∈ T = {1, 2, ..., T}) terminal covered by the edge server E.
A terminal can run multiple applications concurrently, and each application may contain many different tasks. We use a set H (|H| = H) to collect all types of tasks of all terminals in T, and h_j (j ∈ H = {1, 2, ..., H}) denotes the j-th type of task.
Each h_j is profiled by a 3-tuple [q_j, s_j, c_j], characterized as follows: (i) q_j, the size of the task offloading request (including h_j's necessary description and parameters) sent by a terminal to the edge server; (ii) s_j, the size of the task offloading response (including h_j's execution result) received by a terminal from the edge server; (iii) c_j, the amount of h_j's computation. During its running period, t_i generates h_j with probability p_{i,j} (p_{i,j} ∈ [0, 1], Σ_{j∈H} p_{i,j} = 1). We then use h_{i,j} to denote an h_j generated by t_i. The total task generation rate of t_i is defined as λ_i.
There are two ways to complete h_{i,j}: (i) executing it locally, or (ii) offloading it remotely. On one hand, if h_{i,j} is executed by t_i locally, considerable time and energy may be consumed due to the low computing capability of t_i. On the other hand, if h_{i,j} is offloaded to the edge server, it incurs the time and energy costs of data transfer between t_i and the edge server, although it benefits from the edge server's powerful computing resources. This tradeoff is carefully balanced by an approach for obtaining global optimality, which is discussed in the next section.
We define α = {α_{i,j,k} | i ∈ T, j ∈ H, k ∈ {0, 1}} as the selection probabilities expressing whether a terminal executes a task locally or offloads it to the edge server. For h_{i,j}, the value of α_{i,j,k} represents (i) the probability that h_{i,j} is offloaded from t_i to the edge server, if k = 1; or (ii) the probability that h_{i,j} is executed by t_i, if k = 0. We thus have α_{i,j,0} + α_{i,j,1} = 1.
So far, we are able to model the tasks generated by each terminal using the q-M/G/1/K_i model. For convenience, we denote the task h_{i,j} executed by t_i as h^T_{i,j} and the task h_{i,j} offloaded to the edge server as h^Edge_{i,j}. The task arrival rate λ^T_{i,j} of h^T_{i,j} can then be expressed as

λ^T_{i,j} = λ_i · p_{i,j} · α_{i,j,0}.

Similarly, the task arrival rate λ^Edge_{i,j} of h^Edge_{i,j} can be expressed as

λ^Edge_{i,j} = λ_i · p_{i,j} · α_{i,j,1}.

We then assume that the service rate of terminal t_i is μ_i and the service rate of the edge server E is μ_Edge. With μ_i, μ_Edge, and the amount of h_j's computation c_j, the service rate μ^T_{i,j} of each task h^T_{i,j} is easily obtained as μ^T_{i,j} = μ_i / c_j, and similarly, the service rate μ^Edge_{i,j} of each task h^Edge_{i,j} is given by μ^Edge_{i,j} = μ_Edge / c_j. Note that μ_i − Σ_{j∈H} λ_i p_{i,j} α_{i,j,0} c_j > 0 for i ∈ T and μ_Edge − Σ_{i∈T} Σ_{j∈H} λ_i p_{i,j} α_{i,j,1} c_j > 0 are hard constraints, meaning the service rate must be greater than the task arrival rate to keep the queues stable.
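The rate-splitting equations and the stability check above can be sketched as follows, for a single terminal; all numeric values are hypothetical:

```python
def split_rates(lam, p, alpha1, mu_term, mu_edge, c):
    """Per-task-type arrival and service rates for one terminal.
    Symbols follow the model:
      lam      - terminal's total task generation rate lambda_i
      p[j]     - probability that a generated task is of type j
      alpha1[j]- probability that a type-j task is offloaded (alpha_{i,j,1})
      c[j]     - computation amount c_j of type j
      mu_term  - terminal service rate mu_i; mu_edge - edge rate mu_Edge
    Returns (local arrival rates, edge arrival rates, local service rates,
    edge service rates), one entry per task type."""
    local = [lam * p[j] * (1.0 - alpha1[j]) for j in range(len(p))]
    edge = [lam * p[j] * alpha1[j] for j in range(len(p))]
    mu_l = [mu_term / cj for cj in c]       # mu^T_{i,j} = mu_i / c_j
    mu_e = [mu_edge / cj for cj in c]       # mu^Edge_{i,j} = mu_Edge / c_j
    # hard stability constraint: mu_i - sum_j lam p_j alpha_{j,0} c_j > 0
    assert mu_term - sum(local[j] * c[j] for j in range(len(c))) > 0
    return local, edge, mu_l, mu_e

local, edge, mu_l, mu_e = split_rates(
    lam=2.0, p=[0.5, 0.5], alpha1=[0.5, 1.0],
    mu_term=10.0, mu_edge=100.0, c=[1.0, 2.0])
```

The edge-side stability constraint spans all terminals, so it would be checked once over the aggregated edge arrival rates rather than inside this per-terminal helper.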
The response time of each task can then be obtained from the queueing analysis above. In addition, the processing delays of the sending and receiving operations themselves are so tiny that they can be ignored, but the transmission time of offloaded tasks on both the terminals and the edge server must be accounted for. We define r^{T→Edge}_i as the uplink data transmission rate from t_i to the edge server. The transmission delay of h_{i,j}'s offloading request from t_i to the edge server is then given by t^{T→Edge}_{i,j} = q_j / r^{T→Edge}_i. Similarly, with r^{Edge→T}_i denoting the downlink data transmission rate from the edge server to t_i, the downlink transmission delay is t^{Edge→T}_{i,j} = s_j / r^{Edge→T}_i.

Energy Consumption Analysis.
In recent years, energy consumption has become a research hotspot in edge computing [39][40][41]. How to provide better services that meet users' quality-of-service needs, while reducing the energy consumption of the systems and the operating cost of the services, is one of the most important issues. Different from [41], we consider not only the energy consumption of the mobile terminals but also that of the edge server. In the GIS system, the energy consumption includes two aspects, i.e., task execution and task transmission. We define the energy consumption caused by executing h_{i,j} at t_i and at the edge server as e^T_{i,j} and e^Edge_{i,j}, respectively, which can be expressed as

e^T_{i,j} = c_j · ξ_i,   e^Edge_{i,j} = c_j · ξ_Edge,

where ξ_i and ξ_Edge are the energy consumed per unit of computation at t_i and at the edge server, respectively. Considering the uplink data transmission process from t_i to the edge server, the transmission energy consumption of t_i is e^{T→Edge}_{i,j} = ω_i · t^{T→Edge}_{i,j}, where ω_i is the transmission energy consumption per unit time of t_i. Similarly, the transmission energy consumption of the edge server is e^{Edge→T}_{i,j} = ω_Edge · t^{Edge→T}_{i,j}, where ω_Edge is the transmission energy consumption per unit time of the edge server. The energy used by t_i to receive an offloading response is so low that it can be ignored. So far, we have obtained the tasks' response times and the energy consumption of task execution and transmission.
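A small helper summarizing the per-task energy terms above (all parameter values are hypothetical; receive-side energy at the terminal is ignored, as in the model):

```python
def task_energy(c, xi_term, xi_edge, q, s, r_up, r_down, w_term, w_edge, offload):
    """Per-task energy, following the model's terms:
      c               - computation amount c_j
      xi_term/xi_edge - energy per unit computation at terminal / edge server
      q, s            - offloading request / response sizes
      r_up, r_down    - uplink / downlink transmission rates
      w_term/w_edge   - transmission energy per unit time at terminal / edge
      offload         - True if the task is offloaded to the edge server"""
    if not offload:
        return c * xi_term                 # local execution: e^T = c_j * xi_i
    exec_e = c * xi_edge                   # edge execution: e^Edge = c_j * xi_Edge
    tx_e = w_term * (q / r_up)             # terminal's uplink transmission energy
    rx_e = w_edge * (s / r_down)           # edge server's downlink transmission energy
    return exec_e + tx_e + rx_e

e_local = task_energy(4.0, 0.5, 0.1, 8.0, 4.0, 2.0, 4.0, 0.5, 0.25, offload=False)
e_off = task_energy(4.0, 0.5, 0.1, 8.0, 4.0, 2.0, 4.0, 0.5, 0.25, offload=True)
```

With these illustrative numbers, local execution costs 2.0 units while offloading costs 0.4 + 2.0 + 0.25 units, showing how the uplink term can dominate for large request sizes.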

Utility Function.
With the time and energy consumption of each part, we can build the corresponding utility function. The total time consumed in executing a task includes two cases: (i) the task is executed by the terminal, with time consumption t^T_{i,j}; and (ii) the task is offloaded, with time consumption composed of the uplink transmission of the offloading request, the execution at the edge server, and the downlink transmission of the response. Weighting the two cases by the selection probabilities yields the total time consumption t_{i,j} for executing h_{i,j}. The total energy consumed is obtained analogously: in Case (i) it is e^T_{i,j}, while in Case (ii) it comprises the edge execution energy and the transmission energy terms, giving the total energy consumption e_{i,j} (equation (25)). In general, the total time and energy consumption of the GIS system are easily obtained as Σ_{i∈T} Σ_{j∈H} t_{i,j} and Σ_{i∈T} Σ_{j∈H} e_{i,j}, respectively. Therefore, a utility function can be built to evaluate the overall benefit of the GIS system. We normalize the energy and time consumption, and the utility function is defined as

U = τ · (t̄ − Σ_{i∈T} Σ_{j∈H} t_{i,j}) / t̄ + (1 − τ) · (ē − Σ_{i∈T} Σ_{j∈H} e_{i,j}) / ē,

where τ ∈ [0, 1] is the balance factor between energy consumption and time consumption, and t̄ = Σ_{i∈T} Σ_{j∈H} t^T_{i,j} and ē = Σ_{i∈T} Σ_{j∈H} e^T_{i,j} are the total time and energy consumption when all tasks are executed on the terminals without offloading, respectively. Note that the closer τ is to 1, the more weight we put on time consumption; conversely, the closer τ is to 0, the more attention we pay to energy consumption. Therefore, τ should be set properly by the system manager to balance the tradeoff between performance and energy consumption according to the requirements of real-life scenarios.
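Assuming the normalized-gain reading of the utility definition above (one plausible reconstruction, since the original equation did not survive extraction), a minimal sketch is:

```python
def utility(tau, total_time, total_energy, t_base, e_base):
    """Normalized utility: tau weights the relative time saving against the
    relative energy saving, both measured against the all-local (no
    offloading) baselines t_base and e_base."""
    time_gain = (t_base - total_time) / t_base
    energy_gain = (e_base - total_energy) / e_base
    return tau * time_gain + (1.0 - tau) * energy_gain

# a policy that cuts time by 20% and energy by 40% versus all-local execution
u = utility(0.5, total_time=80.0, total_energy=60.0, t_base=100.0, e_base=100.0)
```

Under this form, U = 0 for the all-local baseline and U grows toward 1 as offloading saves more time and energy, which matches the role of τ described above.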

Optimization Problem Formulation.
With all the analytical results presented in the above sections, we formulate the optimization problem in GIS systems as follows: maximize the utility function over the selection probabilities α, subject to constraints (28)-(31). Constraints (28) and (29) are the hard stability constraints of each terminal and of the edge server, respectively, which keep the queueing system stable. Constraint (30) specifies the value range of α_{i,j,k}, i ∈ T, j ∈ H, k ∈ {0, 1}. Constraint (31) requires that, for each task, the probabilities of offloading to the edge server and of local execution sum to 1.

Optimization Approach.
Due to the complexity of the utility function, we propose a heuristic algorithm based on the differential evolution (DE) algorithm [42,43], which has good convergence properties with few control variables. DE is a parallel direct search method that utilizes NP D-dimensional parameter vectors x_{i,G}, i = 0, 1, ..., NP − 1, as a population for each generation G. The DE algorithm includes the following four parts.

Initialization.
As shown in Algorithm 1, if the system is unknown, the initial population is chosen randomly.

Mutation.
The core idea of DE is a new scheme for generating trial parameter vectors, called mutation. DE generates new parameter vectors by adding the weighted difference vector between two individuals, scaled by the parameter F, to a third individual. For each vector x_{i,G} (i = 0, 1, 2, ..., NP − 1), a perturbed vector v_{i,G+1} is generated according to Algorithm 2, with r_1, r_2, r_3 ∈ [0, NP − 1] and i ≠ r_1 ≠ r_2 ≠ r_3. F ∈ (0, 2) is a real, constant factor that controls the amplification of the differential variation (x_{r_2,G} − x_{r_3,G}).

Crossover.
In order to improve the diversity of the perturbed parameter vectors, crossover is introduced. To this end, the trial vector u_{i,G+1} is formed by taking its j-th parameter from v_{i,G+1} for j = ⟨n⟩_D, ⟨n + 1⟩_D, ..., ⟨n + L − 1⟩_D, and from x_{i,G} otherwise. The acute brackets ⟨·⟩_D denote the modulo function with modulus D. The starting index n ∈ [0, D − 1] in equation (36) is a randomly chosen integer. The integer L, which represents the number of parameters to be exchanged, is drawn from [1, D] with probability Pr(L ≥ ν) = CR^{ν−1}, where CR ∈ [0, 1] is the crossover probability. The random decisions for both n and L are made anew for each crossover. The crossover procedure is presented in Algorithm 3.

Selection.
In order to decide whether the new vector u_{i,G+1} becomes an individual in the population of generation G + 1, it is compared to x_{i,G}. If u_{i,G+1} yields a larger objective function value (the utility function in equation (34)) than x_{i,G}, then x_{i,G+1} is set to u_{i,G+1}; otherwise, x_{i,G+1} retains x_{i,G}. In addition, the best parameter vector x_{best,G} is recorded for every generation G in order to keep track of the progress made during the optimization process. The selection scheme is formally presented in Algorithm 4.
Based on the above four parts, Algorithm 5 gives the main program of the DE algorithm, which provides an approach to deploying the GIS services in the edge computing system. Near-optimal solutions maximizing the utility function while satisfying the constraints can be obtained efficiently.
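As a hedged illustration of the four parts, the following is a compact DE sketch in Python; it uses the common binomial crossover rather than the exponential variant described above, and a toy objective stands in for the constrained utility function:

```python
import random

def differential_evolution(obj, dim, bounds, NP=30, F=0.5, CR=0.9, gens=200, seed=1):
    """Compact DE sketch covering the four parts: random initialization,
    mutation v = x_r1 + F * (x_r2 - x_r3), binomial crossover with rate CR,
    and greedy selection that keeps a trial vector only if it improves the
    objective (maximization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # initialization: a random population of NP D-dimensional vectors
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(NP)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(NP):
            # mutation: three distinct individuals, all different from i
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # crossover: mix v into x_i, forcing at least one component of v
            jrand = rng.randrange(dim)
            u = [min(hi, max(lo, v[d])) if (rng.random() < CR or d == jrand)
                 else pop[i][d] for d in range(dim)]
            # selection: the better of the trial and target vectors survives
            fu = obj(u)
            if fu > fit[i]:
                pop[i], fit[i] = u, fu
    best = max(range(NP), key=lambda i: fit[i])
    return pop[best], fit[best]

# toy stand-in for the utility: maximized (value 0) at 0.6 in every dimension
x_best, f_best = differential_evolution(
    lambda x: -sum((xi - 0.6) ** 2 for xi in x), dim=4, bounds=(0.0, 1.0))
```

Clamping trial vectors to the bounds corresponds to constraint (30); the stability and normalization constraints of the actual problem would be enforced via repair or penalty terms in the objective.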

Experimental Setup.
We conduct experiments based on data collected from a real-world GIS system that has been deployed in production, providing real-time street-view mapping services. The services are a kind of virtual reality service that provides end-users a 360-degree panoramic view of cities, streets, and other details. All the original data of the mapping services were collected from the real world by cars equipped with 3-dimensional laser scanners, global navigation satellite systems (GNSS), inertial measurement units (IMU), and panoramic cameras. These original data are stored in cloud data centers and processed by GIS servers. Upon the arrival of a task requesting a mapping service at a certain location, the task is first analyzed and initialized, and then divided into several subtasks to be processed on a few cluster nodes in parallel. Each cluster node processes only a part of the original mapping data and, after completing the data processing, returns the results to the centralized server for task convergence. The workflow of the GIS services is illustrated in Figure 2.
There are five nodes in our GIS system. The centralized main server is equipped with an 8-core Intel Ice Lake CPU working at a maximum frequency of 4.7 GHz and 16 GB of memory. Each cluster node has a CPU with 4 Intel Kaby Lake cores at a maximum frequency of 3.8 GHz, as well as 16 GB or 8 GB of memory. The performance data were collected from this GIS system during its service procedures for real-world users. We use the data to initialize system parameters such as service rates and the basic system architecture. Other parameters that we are not able to obtain from the system are set empirically, as shown in Table 2. We then apply our approach to analyze the impact of deploying the GIS services into the edge computing architecture on the performance attributes, and validate our analytical results. During the experiments, we also tune some system parameters to illustrate the effectiveness of our approach.

Experimental Results.
In order to verify the applicability of the strategy, extensive simulation experiments are carried out to evaluate its efficacy. The simulation results demonstrate that the optimization approach based on the DE algorithm performs well in both utility function value and calculation time across different scenarios.

Efficacy Analysis.
Although the DE algorithm cannot guarantee global optimality, the simulation experiments show that it has a strong global search ability. As shown in Figure 3, we illustrate the average utility values of the population and their optimal values, which shows that our algorithm converges at about the 300th generation. We then increase the dimension of the decision space by increasing the number of terminals to 50. As shown in Figure 4, the algorithm converges at about the 900th generation, and the results are very close to the global optimal solutions.
With the further increase of the dimension of decision space by increasing the number of tasks in each terminal to 50, we find that the results converge over 1000th generations in Figure 5. e experimental results shown in Figures 3 to 5 validate that our approach performs well in solving largescale optimization problems.
It is well known that when the scale of the problem is small, the problem can be solved accurately by traditional optimization algorithms. However, as the scale of the problem increases, the number of feasible solutions grows exponentially, which leads to a combinatorial explosion of the search space. We therefore analyze the calculation time of our algorithm for different dimensions of the decision space. Figure 6 shows that the computing time increases linearly with the number of terminals, where H = 10 and T increases from 5 to 24. Similarly, Figure 7 shows that the computing time increases linearly with the number of tasks per terminal, where T = 10 and H increases from 10 to 19. The experimental results demonstrate that the DE algorithm is efficient in solving large-scale problems.
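The core DE loop behind these results can be sketched as follows. This is a generic DE/rand/1/bin scheme only, assuming a continuous decision vector in [0, 1]^dim; the paper's Algorithm 5 additionally encodes and repairs discrete deployment decisions, which is omitted here.

```python
import random


def differential_evolution(utility, dim, pop_size=20, F=0.5, CR=0.9,
                           generations=300):
    """Maximize `utility` over [0, 1]^dim with a minimal DE/rand/1/bin loop."""
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    fit = [utility(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct vectors other than the target.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # at least one gene from the mutant
            # Binomial crossover, clipped back into the [0, 1] box.
            trial = [
                min(1.0, max(0.0, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (random.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            # Greedy selection: the trial replaces the target if not worse.
            f = utility(trial)
            if f >= fit[i]:
                pop[i], fit[i] = trial, f
    best = max(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For a simple unimodal test utility such as `lambda v: -sum((vi - 0.5) ** 2 for vi in v)`, the loop drives the best fitness close to its optimum of 0 within a few hundred generations, mirroring the convergence behavior reported above.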

Comparison Analysis.
Since there is no existing well-developed service deployment optimization scheme for GIS services, we compare our approach with three straightforward approaches that are widely applied in practice. The first is the random scheduling algorithm, which usually performs well in load balancing. The second is a fixed algorithm, in which 50% of tasks are offloaded to the edge server. The third is a greedy algorithm, in which tasks are offloaded to the edge server as long as resources are available there.
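The three baselines can be sketched as follows. The function names and the scalar resource-demand model are illustrative assumptions, not the exact implementations used in the experiments.

```python
import random


def random_offload(tasks):
    # Random scheduling: each task goes to the edge with probability 0.5.
    return [random.random() < 0.5 for _ in tasks]


def fixed_offload(tasks, fraction=0.5):
    # Fixed algorithm: offload a fixed fraction (here 50%) of the tasks.
    k = int(len(tasks) * fraction)
    return [i < k for i in range(len(tasks))]


def greedy_offload(tasks, edge_capacity):
    # Greedy algorithm: offload every task while edge resources remain.
    decisions, used = [], 0
    for demand in tasks:
        if used + demand <= edge_capacity:
            decisions.append(True)
            used += demand
        else:
            decisions.append(False)
    return decisions
```

Each function returns a per-task boolean offloading decision (True = run on the edge server, False = run on the terminal), making the three policies directly comparable under the same utility function.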
We first tune the number of terminals T from 5 to 24, with fixed values H = 10 and τ = 0.5; the experimental results are shown in Figure 8. With the increase of T, the workload of the GIS system increases at both the terminals and the edge server. Meanwhile, the time consumption and energy consumption increase, so the utility function value decreases. Figure 8 also illustrates that our approach performs 50% better than the random algorithm, the fixed algorithm, and the greedy algorithm in terms of utility value.
We then tune the parameter H, the number of tasks that can be executed in each terminal, from 10 to 19; the empirical results are shown in Figure 9. We reach a similar conclusion: the scheme presented in this paper is 50% better than the random approach.
Finally, we discuss the impact of the balance factor τ, which trades off the weight between energy consumption and time consumption. The experimental results are shown in Figure 10. With the increase of τ, we place more weight on optimizing the response time. In this scenario, introducing the edge computing layer brings dramatic benefits because of its additional computational capability. Since our algorithm is able to fully utilize the edge layer and optimize the global utility function, the utility values obtained by our DE approach grow increasingly higher than those of random scheduling as τ increases.
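The role of τ can be illustrated with a generic weighted-sum utility; the paper's exact utility definition is not reproduced here, and the normalization constants below are assumptions for the sketch.

```python
def utility(time_cost, energy_cost, tau=0.5, t_max=1.0, e_max=1.0):
    # Balance factor tau trades off response time against energy:
    # larger tau places more weight on optimizing the response time.
    # Costs are normalized by t_max / e_max; higher utility is better.
    return -(tau * time_cost / t_max + (1.0 - tau) * energy_cost / e_max)
```

Under this form, a deployment with low response time but high energy use scores better as τ grows, which matches the trend in Figure 10: the edge layer's extra computational capability pays off most when response time dominates the objective.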

Conclusion
As GIS services become increasingly popular in daily life, their performance has drawn more and more attention.
Deploying GIS services into an edge computing architecture is an effective way to improve performance. This paper conducts a quantitative study on the performance evaluation and optimization issues in deploying GIS services into the edge. Queueing models are presented for formulating the GIS services, and their corresponding analyses are provided in detail. Based on the analytical results, a heuristic approach is designed for obtaining a near-optimal service deployment. Experiments based on a dataset collected from a real-life GIS service system are conducted, and the efficacy of the approach is validated. This work is expected to provide a theoretical reference for the evaluation and optimization of edge computing GIS systems.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.