A Game-Theoretic Scheme for Parked Vehicle-Assisted MEC Computation Offloading

By ooading computation tasks, multi-access edge computing (MEC) supports diverse services and reduces delay and energy consumption of mobile devices (MDs). However, limited resources of edge servers may be the bottleneck for task computing in high-density scenarios. To address this challenge, by leveraging the underutilized resources of parked vehicles to execute tasks, we propose a parked vehicle-assisted multi-access edge computing (PV-assisted MEC) architecture, which enables MEC servers to expand their capability exibly. To achieve ecient ooading, we propose a PV-assisted MEC ooading scheme in a multi-MD environment. We design a game-based distributed algorithm to minimize the overhead of MDs and further reduce the burden on the MEC server. Simulation results show that compared with the commonMEC system, our scheme can reduce the burden on the MEC server by 5% and the ooading overhead by 17%.


Introduction
With the improvement of mobile devices' capabilities and the ever-increasing interest in mobile applications, delay-sensitive and computation-intensive mobile applications have been emerging and drawing significant attention, spanning technologies such as augmented reality, speech-to-text conversion, image processing, and interactive online games. However, due to the scarcity of resources, mobile devices are usually unable to meet these massive computing demands. The solution to this problem lies in improving the communication infrastructure through computation offloading [1][2][3]. Multi-access edge computing (MEC) is regarded as a key technology and architectural concept for improving computation offloading efficiency. MEC aims at extending cloud computing capabilities to the edge, so that mobile devices (MDs) can offload tasks to nearby network edge servers [4]. For example, video streams and images collected through sensors or cameras mounted on vehicles must be processed in real time to detect surrounding objects, recognize traffic lights, etc., to ensure the safety of autonomous driving. However, vehicles do not have the capacity to process large amounts of images and videos instantly, so tasks are offloaded to edge servers for processing, reducing the incidence of traffic accidents. Computation offloading technology in MEC not only overcomes the shortage of computing capabilities on mobile terminals but also avoids the huge latency caused by transferring tasks to the cloud [5,6]. However, existing MEC servers tend to have lightweight computing resources due to cost constraints, which means they are still not well equipped to handle the ever-growing task demands.
Scholars have studied the problem that it is difficult for a single MEC server to meet the strict latency requirements of MDs. The authors in [5] proposed a tiered offloading framework for edge computing, which utilizes nearby backup computing servers to make up for insufficient MEC server resources. Guo and Liu [2] proposed a cloud-MEC collaborative computation offloading scheme with centralized cloud and multi-access edge computing over a Fi-Wi network architecture. In addition, idle resources in unmanned aerial vehicles (UAVs) have been used as a supplement to the edge computing server to provide effective resource utilization and reliable service [7].
With the rapid development of the automotive industry, vehicles are equipped with an ever-increasing amount of communication and computing resources. Several works have focused on vehicle-assisted edge networks, which improve network service quality by leveraging idle resources in vehicles: idle resources are used to compute tasks on behalf of the edge network when vehicles with idle resources approach vehicles carrying computation tasks. In daily life, 70% of personal vehicles are parked for an average of more than 20 hours per day [8]. These parked vehicles have a lot of idle computing, storage, and communication resources, as well as plenty of energy. Therefore, utilizing these idle resources is a promising way to improve network efficiency.
The use of parked vehicles to support network services has two advantages that cannot be ignored. On the one hand, parked vehicles are relatively stable in terms of communication. A moving vehicle may change its position frequently, which may cause the connection between the vehicle and the server to become unstable and affect the efficiency of task execution. In contrast, parked vehicles may remain stationary for long periods of time. On the other hand, parked vehicles involved in task offloading indirectly extend the service area of VEC. Outside the coverage of roadside units (RSUs), parked vehicles can serve as static nodes and service infrastructure, alleviating the shortage of edge server resources and supporting interconnection between vehicles and servers [9].
In this work, unlike existing computation offloading studies, we focus on reducing MDs' delay and energy consumption to improve quality of service (QoS). In addition, the parked vehicles that can be used as service nodes in this work include not only those parked centrally in parking lots but also those parked scattered along the roadside, where legally permitted. We focus on the design of a parked vehicle-assisted MEC architecture and the corresponding efficient computation offloading scheme. The main contributions of this study are as follows: (i) A parked vehicle-assisted multi-access edge computing (PV-assisted MEC) architecture is presented, in which nearby parked vehicles can help extend the service capabilities of the MEC system.
(ii) The offloading decision problem is formulated as a noncooperative game. A game-based PV-assisted task offloading algorithm (GPTOA) is proposed, which decides whether each MD should offload and, if so, to which channel of the MEC server or to which PV.
(iii) Simulation results show that the GPTOA not only effectively reduces the burden on the MEC server but also achieves significant performance improvements in terms of offloading overhead. The rest of this study is organized as follows. First, related works are discussed in Section 2. Second, the PV-assisted MEC architecture is described in Section 3. Next, Section 4 presents the system model. After that, Section 5 formulates the task offloading problem and proposes a game-based PV-assisted task offloading algorithm. Extensive simulation results are provided in Section 6, followed by conclusions in Section 7.

Related Work
There are a number of studies focusing on mobile applications in MEC. Most of these focused on processing data and improving service quality [10][11][12][13][14][15]. Zhang et al. [16] considered the load balancing of computation resources on edge servers and the highly dynamic nature of vehicular networks, which led them to introduce fiber-wireless (Fi-Wi) technology to enhance the vehicular edge computing network (VECN). Then, they used a game theory-based nearest task offloading algorithm and an approximate load balancing task offloading algorithm to solve the delay minimization problem. Cheng et al. [17] proposed a method to predict Wi-Fi offload potential and access costs by jointly considering user satisfaction, offload performance, and mobile network operators' revenues. The results showed that this scheme can improve the average utility of users and reduce service latency. Chen et al. [18] showed that it is NP-hard to find the centralized optimum for task offloading in MEC with the goal of minimizing the overall computation overhead. Hence, they adopted a game-theoretic approach for achieving efficient offloading in a distributed manner. The recent advent of vehicle-to-everything (V2X) communication technology makes vehicles an important network resource for improving network performance. Ding et al. [19] used cognitive radio (CR) router-enabled vehicles to transmit data to the desired location. Feng [20] proposed the hybrid vehicle edge cloud (HVC) framework, which made it possible to share available resources with neighboring vehicles through vehicle-to-vehicle (V2V) communication. Zhang et al. [21] investigated the effectiveness of computation offloading strategies for vehicle-to-infrastructure (V2I) and V2V communication modes. They proposed an efficient predictive combination-mode offloading scheme that adaptively offloaded tasks to the MEC servers via direct uploading or predictive relay transmissions. Huang et al. [22] introduced the concept of the vehicle neighbor group (VNG), which made it convenient to share similar services through V2V communication. Considering the similarity of tasks and the computational capability of vehicles, Qiao et al. [23] divided vehicles into a task computing sub-cloudlet and a task offloading sub-cloudlet. Based on the two sub-cloudlets, they proposed a collaborative task offloading scheme that can effectively reduce the number of similar tasks transferred to MEC servers.
Furthermore, certain existing works focused on exploring ways to leverage the communication, storage, and computation capacity of parked vehicles, in which vehicles became service nodes for computation offloading. Liu et al. [24] proposed a vehicular edge computing network architecture in which vehicles act as edge servers to compute tasks. A problem with the objective of maximizing the long-term utility of the VEC network was presented in the study, modeled as a Markov decision process, and solved using two reinforcement learning methods. Huang et al. [25] modeled the relationship between users, the MEC server, and the parking lot as a Stackelberg game. They presented a sub-gradient-based iterative algorithm to determine the workload distribution among parked vehicles and minimize the overall cost to the users. Li et al. [26] proposed a three-stage contract-Stackelberg offloading incentive mechanism to maximize the utility of vehicles, operators, and parking lot agents. Han et al. [27] proposed a dynamic pricing strategy that minimizes the average cost of the MEC system under service quality constraints by continuously adjusting the price according to the current system state. By introducing parking lots as agents, many existing studies focused on utilizing the communication and computation capabilities of parked vehicles. The benefits and costs of parked vehicles and the costs to service users were taken into account. However, in addition to the vehicles parked centrally in parking lots, the computing and communication channel resources of vehicles scattered along the roadside are not negligible. Moreover, in most cases, the quality of user experience should be prioritized. Therefore, in this study, based on the research work proposed in [18], we propose a PV-assisted MEC architecture to enhance the MEC network, in which parked vehicles can serve MDs directly.
In addition, we propose a game-based task offloading algorithm to minimize the delay and energy consumption for service users.

Parked Vehicle-Assisted Multi-Access Edge Computing Architecture
With the advent of smart cars, more and more cars can be awakened to perform tasks even when parked. For example, when a parked Tesla car is in sentry mode or dog mode, some of its safety-related features keep working. With the continuing development of artificial intelligence, we believe that cars will become increasingly intelligent. In the future, parked cars may support modes that provide services to other vehicles. The research presented in this study is conducted on this premise. Although aspects such as incentives, communication costs, security, and scheduling should be considered if the onboard computers of parked vehicles are to be used for edge computing, the focus of this study is the computation offloading strategy. Therefore, these aspects are not considered here but should be taken into account in future studies to pursue a more complete solution.
A representative PV-assisted MEC service scenario is illustrated in Figure 1. There are a large number of MDs and parked vehicles running computationally intensive and delay-sensitive mobile applications. However, lightweight MEC servers and limited bandwidth resources are insufficient for these applications. Idle resources in parked vehicles can be used to relieve the pressure on the MEC. However, due to "selfishness," not all parked vehicles are willing to provide resources. We assume that some parked vehicles can be recruited through certain incentives, such as extended parking opportunities or reduced parking fees. In addition, we assume that the MEC system can certify recruited parked vehicles to ensure the security of the service and can update and monitor the available resources of these parked vehicles in real time to improve resource utilization. We refer to these recruited, certified parked vehicles as PVs. In summary, both MEC servers and PVs can provide services to MDs. Figure 2 illustrates a representative PV-assisted MEC network architecture. Based on the original vehicular edge computing architecture, we move the vehicles capable of providing services from the device layer to the MEC layer to enable utilization of parked vehicles' resources and allow them to provide services directly to MDs.
(1) Cloud Layer. The first layer provides centralized cloud computing services and management functions such as critical or complex event handling, key data backup, and information authentication. The PV-assisted MEC architecture employs a software-defined network (SDN) controller to program, manipulate, and configure the network in a logically centralized way.
(2) Edge Cloud Layer (MEC Layer). The second layer consists of edge network access devices (e.g., RSUs and base stations (BSs)) and data service devices (e.g., MEC servers and PVs). Edge network access devices are used for communication among edge facilities or between layers. MEC servers with lightweight storage and computing capabilities are deployed on edge network access devices. MEC servers are responsible for collecting service status information from themselves and from PV service nodes parked in the coverage area of the RSU. Based on this information, MEC servers can process or assign the tasks submitted by MDs. By moving PVs from the mobile device layer to the MEC layer, the service capacity can be improved and bandwidth consumption can be reduced.
(3) Mobile Device Layer. The third layer consists of mobile devices requesting services, such as vehicles, smartphones, tablets, and laptops. MDs request services by connecting to BSs via the cellular network. The MEC server and parked vehicles can provide services to terminal devices via the cellular network or V2X. Here, V2X may be a link via the cellular network or a link via dedicated short-range communications (DSRC). Note that, as a special kind of mobile device, vehicles are divided into two categories in this study: PVs and others. The former are located at the MEC layer as service providers, while the latter are located at the mobile device layer as service requesters. Figure 3 illustrates the communication procedure between the MD, the MEC server, and PVs. First, when an MD generates a task, it sends a task request to the MEC server. Second, through iterative negotiation between the MEC server and the MDs, the task allocation result is calculated based on the status of the MDs and the MEC server. Then, the MEC server returns the task allocation result to the MD. When the task allocation result indicates that the task should be offloaded to a PV, the MEC server also needs to notify the relevant PV (dotted arrow). Third, the MD sends the task input data to the MEC server (solid line) or the PV (dotted line) according to the task allocation information. Fourth, the MEC server (solid line) or PV (dotted line) processes the task and then returns the result to the MD. Finally, the MD obtains the result and sends service satisfaction information back to the MEC server to reward the specific PV.

Network Model.
We assign a unique identifier to each task and record the characteristics of tasks, such as traffic size and computation workload, in a globally shared feature table T = {T_1, . . . , T_M}. Without loss of generality, we assume that each MD generates only one task in a time period and that tasks cannot be further divided. Here, d_i denotes the size of the task generated by MD i, and c_i denotes the computation resources required by this task. We assume the existence of a wireless BS through which any MD can offload its computation task to a nearby MEC server (MS). Each wireless BS has C orthogonal frequency channels, denoted as C = {1, 2, . . . , C}. Besides, in the coverage area of a BS, there is a set of PVs, denoted by P = {C + 1, . . . , C + P}. We consider a quasistatic scenario where the status of MDs, PVs, channels, and the MEC server remains unchanged for a given time period, whereas in different time periods, the status may change. For simplicity, we ignore the cost of establishing secure connections during transmissions. We denote s_i, i ∈ M, as the selection decision variable. As shown in (1), s_i = 0 denotes that MD i executes its task locally, and s_i > 0 denotes that MD i chooses to offload this task. When s_i = j, j ∈ C indicates that MD i will offload task T_i to the MEC server via channel j, while j ∈ P indicates that the task will be executed by PV j. Let s = (s_1, s_2, . . . , s_M) denote the set of selection decisions for all MDs. For ease of reference, we list the key notations used in this study in Table 1.
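As an illustration, the task feature table and the selection decision variable can be sketched in a few lines of Python; the names `Task`, `CHANNELS`, and `PVS` and all numeric values are ours, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class Task:
    d: float  # input data size d_i (e.g., in bits)
    c: float  # required computation c_i (CPU cycles)

M, C, P = 3, 2, 2                    # number of MDs, channels, and PVs
CHANNELS = range(1, C + 1)           # s_i in 1..C: offload to MS via channel s_i
PVS = range(C + 1, C + P + 1)        # s_i in C+1..C+P: offload to PV s_i

tasks = [Task(d=1e6, c=1e9) for _ in range(M)]   # globally shared feature table
s = [0] * M                          # s_i = 0: MD i executes its task locally

s[0], s[1] = 1, C + 1                # MD 0 -> MEC channel 1, MD 1 -> first PV
assert s[0] in CHANNELS and s[1] in PVS and s[2] == 0
```

Encoding local execution, every channel, and every PV in one integer per MD is what lets the decision vector s be searched and exchanged compactly in the game that follows.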

Communication Model.
In this section, we define the transmission rate for offloading. It is assumed that each mobile device is equipped with a single antenna that can transmit data for one task at a time. When many MDs offload their tasks to the same MEC server, severe wireless channel interference may occur. Therefore, wireless channel conditions should be considered during transmission. If MD i chooses to offload its task to the MS via a wireless channel, the data transmission rate for T_i can be expressed as follows:

r_i^MS(s) = W_M log_2(1 + q_i h_i^MS / (ϖ_0 + Σ_{k ∈ M\{i}: s_k = s_i} q_k h_k^MS)).    (2)

Here, W_M is the channel bandwidth, q_i and h_i^MS are the transmission power and channel gain of MD i to the MS via the nearby BS, respectively, ϖ_0 is the background noise power, and the summation term is the wireless channel interference generated by other MDs using the same channel.
MD i and PV j can communicate with each other only if the distance between them is less than a certain distance d_max^V2V. We assume that any PV can serve only one MD during the computation offloading period. Therefore, there are no channel conflicts between MDs when tasks are offloaded to PVs. When MD i offloads its task T_i to a PV j that is not occupied by other MDs, the data transmission rate can be expressed as follows:

r_{i,j}^PV = W_P log_2(1 + q_i h_{i,j} / ϖ_0),    (3)

where W_P is the bandwidth between MD and PV and h_{i,j} is the channel gain between MD i and PV j.
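A minimal sketch of the two rate models, assuming the standard Shannon-capacity form described above; function names and numeric values are illustrative only:

```python
import math

def rate_to_ms(i, s, q, h, W_M=20e6, noise=1e-13):
    # Rate of MD i on channel s[i] toward the MEC server; MDs that picked
    # the same channel contribute interference to the denominator.
    interference = sum(q[k] * h[k] for k in range(len(s))
                      if k != i and s[k] == s[i])
    return W_M * math.log2(1 + q[i] * h[i] / (noise + interference))

def rate_to_pv(q_i, h_ij, W_P=10e6, noise=1e-13):
    # Each PV serves at most one MD, so the V2V link is interference-free.
    return W_P * math.log2(1 + q_i * h_ij / noise)

q, h = [0.4, 0.4], [1e-6, 1e-6]
r_alone = rate_to_ms(0, [1, 2], q, h)    # the two MDs use different channels
r_shared = rate_to_ms(0, [1, 1], q, h)   # both MDs crowd onto channel 1
assert r_alone > r_shared                # co-channel interference lowers the rate
```

The last assertion captures the coupling that drives the whole game: an MD's achievable rate, and hence its overhead, depends on how many other MDs chose the same channel.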

Computation Model of Mobile Devices.
We use f_i^Loc to denote the computational power of MD i. Thus, the delay of the locally executed task T_i can be expressed as follows:

T_i^Loc = c_i / f_i^Loc.    (4)

Similar to [28], we assume that the power consumption of a certain MD is proportional to the cube of its computational power, with an energy consumption coefficient μ that is related to the chip's hardware architecture. The device's energy consumption for local execution can then be expressed as follows:

E_i^Loc = μ c_i (f_i^Loc)^2.    (5)

Considering that MDs are usually energy and delay sensitive, we define parameters α_i and β_i (α_i, β_i ∈ [0, 1], α_i + β_i = 1) as the weights for delay and energy in the computation of the overhead for MD i, respectively. MDs tend to save time (larger α_i) when tasks are delay sensitive, and they tend to save energy (larger β_i) when batteries are low.
Thus, the overhead of local execution can be expressed as follows:

K_i^Loc = α_i T_i^Loc + β_i E_i^Loc.    (6)
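The local-execution cost above can be sketched as follows. This is a hedged reading of the model (power proportional to the cube of the CPU speed, so energy = μ·c_i·f²); the coefficient value is illustrative:

```python
def local_overhead(c_i, f_loc, alpha, mu=1e-27):
    # Delay: required cycles over CPU speed.
    t_loc = c_i / f_loc
    # Energy: power ~ mu * f^3 times execution time c_i / f = mu * c_i * f^2.
    e_loc = mu * c_i * f_loc ** 2
    return alpha * t_loc + (1 - alpha) * e_loc

# A purely delay-sensitive MD (alpha = 1) running 1 Gcycles at 1 GHz:
assert abs(local_overhead(1e9, 1e9, alpha=1.0) - 1.0) < 1e-9
# A faster CPU helps when delay dominates but hurts when energy dominates:
assert local_overhead(1e9, 2e9, 1.0) < local_overhead(1e9, 1e9, 1.0)
assert local_overhead(1e9, 2e9, 0.0) > local_overhead(1e9, 1e9, 0.0)
```

The two final assertions illustrate why the weights α_i and β_i matter: the same hardware choice can raise or lower the overhead depending on whether the MD is delay sensitive or battery constrained.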

Computation Model of MEC Server.
For most mobile applications, such as fingerprint, face, or iris recognition and sensor data processing, the size of the computation result is much smaller than the size of the input data, so we ignore the transmission time of computation results. Therefore, the delay for offloading task T_i to the MEC server MS can be divided into two parts, data uploading time and task execution time, expressed as follows:

T_i^MS(s) = d_i / r_i^MS(s) + c_i / f_i^MS,    (7)

where f_i^MS is the computing capability that the MS allocates to T_i. Usually, the MEC server has a sufficient power supply, so the energy consumption on the MEC server can be ignored. From the MD's perspective, the energy consumption of offloading a task to the MS comes from transmitting data over the wireless network and can be expressed as follows:

E_i^MS(s) = q_i d_i / r_i^MS(s).    (8)

Thus, the overhead for offloading task T_i to the MEC server can be expressed as follows:

K_i^MS(s) = α_i T_i^MS(s) + β_i E_i^MS(s).    (9)

Computation Model of Parked Vehicles.
Let f_j^PV denote the computing resource allocated to task T_i by PV j. The delay for offloading task T_i to PV j (j ∈ P) can be expressed as follows:

T_{i,j}^PV = d_i / r_{i,j}^PV + c_i / f_j^PV.    (10)

Similarly, the energy consumption on PV j is ignored (it will be considered in future work), and the energy consumption of the MD for offloading task T_i to PV j can be expressed as follows:

E_{i,j}^PV = q_i d_i / r_{i,j}^PV.    (11)

Thus, the overhead of MD i for offloading task T_i to PV j can be expressed as follows:

K_{i,j}^PV = α_i T_{i,j}^PV + β_i E_{i,j}^PV.    (12)
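Both offloading targets share the same cost structure, upload delay plus remote execution, with the MD spending energy only on transmission. A sketch with illustrative values (the function name and numbers are ours):

```python
def offload_overhead(d_i, c_i, rate, f_remote, q_i, alpha):
    # Holds for both the MEC server and a PV; only `rate` and `f_remote`
    # differ. Result-download time is ignored, as in the model.
    t = d_i / rate + c_i / f_remote      # upload delay + remote execution
    e = q_i * d_i / rate                 # MD energy: transmission only
    return alpha * t + (1 - alpha) * e

# MS: faster CPU but a congested channel; PV: slower CPU, clean V2V link.
k_ms = offload_overhead(1e6, 1e9, rate=5e6, f_remote=1e10, q_i=0.4, alpha=0.5)
k_pv = offload_overhead(1e6, 1e9, rate=2e7, f_remote=5e9, q_i=0.4, alpha=0.5)
assert k_pv < k_ms   # under congestion, the PV can be the cheaper choice
```

The comparison shows why PVs are worth recruiting at all: a PV's weaker CPU can still win once channel interference at the MS drives the upload term up.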

Problem Formulation.
According to Section 4, the overhead of task T_i can be expressed as follows:

K_i = I{s_i = 0} K_i^Loc + I{s_i ∈ C} K_i^MS(s) + I{s_i ∈ P} K_{i,s_i}^PV,    (13)

where I{E} = 1 if the event E is true and I{E} = 0 otherwise. There are 1 + C + P choices available for each task. Delay and energy consumption may vary depending on the offloading strategy. Therefore, the overall goal is to minimize the total overhead of all MDs:

(P): min_s Σ_{i∈M} K_i, s.t. (C1)-(C4).

There are four constraints for problem (P). Constraint (C1) is that every task should be executed. Constraint (C2) is that each PV serves at most one MD. Constraint (C3) is that MD i and PV j can communicate only when they are close enough to each other. Similarly, constraint (C4) is that MD i and the MEC server can communicate only when they are close enough to each other. The task set T can be divided into three mutually exclusive subsets by the selection decisions: T = T_Loc ∪ T_MS ∪ T_PV, where T_Loc contains the tasks processed locally, T_MS the tasks offloaded to the MS, and T_PV the tasks offloaded to some PV.
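The decision-induced partition of the task set and constraint (C2) can be expressed directly; this is a sketch, and the function names are ours:

```python
def partition(s, C):
    # Split task indices by selection decision: 0 -> local (T_Loc),
    # 1..C -> MEC server (T_MS), above C -> some PV (T_PV).
    t_loc = [i for i, si in enumerate(s) if si == 0]
    t_ms = [i for i, si in enumerate(s) if 1 <= si <= C]
    t_pv = [i for i, si in enumerate(s) if si > C]
    return t_loc, t_ms, t_pv

def feasible_c2(s, C):
    # Constraint (C2): no PV index appears twice in the decision vector.
    pv_choices = [si for si in s if si > C]
    return len(pv_choices) == len(set(pv_choices))

# With C = 3 channels: task 0 local, tasks 1 and 3 on the MS, task 2 on a PV.
assert partition([0, 2, 5, 1], C=3) == ([0], [1, 3], [2])
assert feasible_c2([0, 4, 5], C=3) and not feasible_c2([4, 4, 0], C=3)
```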
By incorporating PVs as extra service providers for computation offloading, the problem proposed in this study is essentially a generalization of that proposed in [18]. However, it has been shown in [18] that the centralized optimization problem for minimizing the system-wide computation overhead is NP-hard. Therefore, with PVs as additional computation offloading providers, the problem proposed in this study is also NP-hard and difficult to solve. Similar to [18], the centralized cost-minimizing problem for PV-assisted MEC computation offloading can be transformed into a distributed computation offloading decision problem among mobile device users. In the computation offloading process, each MD wants to reduce its overhead as much as possible, in which case the overhead function of mobile device i can be defined as follows:

K_i(s_i, s_{-i}) = I{s_i = 0} K_i^Loc + I{s_i ∈ C} K_i^MS(s) + I{s_i ∈ P} K_{i,s_i}^PV,    (16)

where s_{-i} denotes the selection decisions of all MDs other than MD i. Problem (P′) can be formulated as a noncooperative game G = (M, {S_i}_{i∈M}, {K_i}_{i∈M}) with finite players, where M is the set of players, S_i is the set of selection decisions for player/MD i, and the overhead function K_i(s_i, s_{-i}) is the cost function to be minimized by each MD i.
In the next subsection, we will analyze the existence of Nash equilibrium in the PV-assisted MEC computation offloading game.

Nash Equilibrium Analysis.
Here is the definition of the important concept of Nash equilibrium [29].

Definition 1.
A selection decision set s* = (s_1*, . . . , s_M*) is a Nash equilibrium of the PV-assisted MEC computation offloading game if, at the equilibrium s*, no MD can further reduce its overhead by unilaterally changing its selection decision, i.e.,

K_i(s_i*, s_{-i}*) ≤ K_i(s_i, s_{-i}*), ∀s_i ∈ S_i, i ∈ M.

To study the existence of a Nash equilibrium, we will first introduce the concept of a potential game [30].

Definition 2.
A game is said to be an ordinal potential game if the incentive of all players to change their strategy can be expressed using a single global function, called the potential function Φ(s), such that, for all i ∈ M, K_i(s_i′, s_{-i}) < K_i(s_i, s_{-i}) implies Φ(s_i′, s_{-i}) < Φ(s_i, s_{-i}). An important feature of a finite ordinal potential game is that it always has a Nash equilibrium and the finite improvement property. In other words, if finite players start with an arbitrary strategy profile and iteratively deviate to their unique best replies in each period, the process terminates in an NE after a finite number of steps. Next, before giving a detailed proof that the PV-assisted MEC computation offloading game is an ordinal potential game, we present the following lemma. According to equations (7) to (9), the condition K_i^MS < K_i^Loc is equivalent to a threshold inequality; that is, according to (2), we then have c_i(s) < T_i^ML in condition (C_m). According to equations (7) to (12), the condition K_i^MS < K_i^PV is likewise equivalent to a threshold inequality; then, according to (2), we can get c_i(s) < T_i^MP in condition (C_m).
For condition C p: the proof is straightforward and is omitted here.
Based on Lemma 1, we will show that the PV-assisted MEC computation offloading game is a potential game with the potential function as follows:

Theorem 1. The PV-assisted MEC computation offloading game is a potential game with Φ(s) (equation (21)) as the potential function and hence always has a Nash equilibrium and the finite improvement property.
Proof. Suppose that MD i ∈ M updates its decision selection from s_i to s_i′ and that this leads to a decrease in the overhead function, i.e., K_i(s_i, s_{-i}) > K_i(s_i′, s_{-i}). According to Definition 2, we must show that this also leads to a decrease in the potential function, i.e., Φ(s_i, s_{-i}) > Φ(s_i′, s_{-i}). There are eight possible cases. For case (1), s_i ∈ [1, C] and s_i′ ∈ [1, C]; according to equations (7) to (9) together with (28) and (29), we have Φ(s_i, s_{-i}) > Φ(s_i′, s_{-i}). For case (3), since s_i ∈ [1, C] and s_i′ ∈ (C, C + P], from Lemma 1 (condition C_p), we have c_i(s) > T_{i′}^MP, which implies that Φ(s_i, s_{-i}) > Φ(s_i′, s_{-i}). Case (4) is the opposite of case (3), and its proof is omitted here.
For case (5), since s_i ∈ [1, C] and s_i′ = 0, it can be deduced that Φ(s_i, s_{-i}) > Φ(s_i′, s_{-i}). Case (6) is the opposite of case (5), and thus, its proof is omitted here.
For case (7), s_i ∈ (C, C + P] and s_i′ = 0 imply that Φ(s_i, s_{-i}) > Φ(s_i′, s_{-i}). Case (8) is the opposite of case (7), and thus, its proof is omitted here.
Combining the results from the above cases, we can conclude that the PV-assisted MEC computation offloading game is a potential game. □

Algorithm Design.
Algorithm 1 illustrates the game-based PV-assisted task offloading algorithm (GPTOA) for problem (P′). Similar to [18], the algorithm runs iteratively on each MD. The main idea of GPTOA is that, based on the current state, each MD makes its best decision by calculating the overhead according to (16) (Line 3). Meanwhile, constraints (C1)-(C4) are checked in each iteration; when constraints (C3) and (C4) cannot be satisfied, we set the overhead to infinity. During each iteration t, MD i updates its decision selection s_i′(t) based on the best response and sends it to the MEC server as an update request if s_i′(t) ≠ s_i(t − 1). The MEC server randomly selects one decision selection s_i′(t) from all update requests and sends s_i′(t) back to MD i for updating its decision for the next iteration (Lines 4-8). The iteration continues until the decision selection remains unchanged. At the end, the MEC server broadcasts an end message to all MDs, and each MD executes its computation task according to the last decision selection. According to the finite improvement property of the potential game (Theorem 1), the algorithm converges to a Nash equilibrium within a finite number of iterations.
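The iterative structure described above can be illustrated with a toy best-response loop over a deliberately simplified congestion cost (our simplification, not the paper's full overhead model): each round, every MD computes its best reply, the server grants exactly one requested update, and the loop stops when no MD wants to change:

```python
import random

def gptoa_toy(local_cost, server_unit, n, max_rounds=1000, seed=0):
    rng = random.Random(seed)
    s = [0] * n                          # 0: execute locally, 1: shared server
    for _ in range(max_rounds):
        load = sum(s)
        requests = []
        for i in range(n):
            # Server cost grows linearly with its load (congestion effect).
            cost_server = server_unit * ((load - s[i]) + 1)
            best = 1 if cost_server < local_cost[i] else 0
            if best != s[i]:
                requests.append((i, best))
        if not requests:                 # no update request: Nash equilibrium
            return s
        i, best = rng.choice(requests)   # server grants one update per round
        s[i] = best
    return s

s = gptoa_toy(local_cost=[5.0, 5.0, 5.0, 1.0], server_unit=2.0, n=4)
assert sum(s) == 2 and s[3] == 0   # two costly MDs offload; the cheap MD stays
```

Granting a single update per round is what makes the finite improvement property apply: each granted update strictly lowers the mover's cost, so the (potential-function) argument guarantees termination.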
In GPTOA, MDs execute operations in parallel in each time slot. The most time-consuming operation is the computation of the best-response update in Line 3, which mainly involves sorting the overheads of the available offloading strategies for each MD. Since the sorting operation typically has a time complexity of O(n log n), and the maximum number of available choices for any MD is not greater than C + P + 1, the computational complexity of each time slot will not exceed O(x log x), where x = C + P + 1. If the algorithm takes y time slots to terminate, the total computational complexity of Algorithm 1 is O(y · x log x).
For the upper bound of y, similar to [18], we have the following result.

Theorem 2.
When G_i and T_i are nonnegative integers for any i ∈ M, the game-based PV-assisted task offloading algorithm will terminate within at most a finite number of time slots.

Proof. According to equation (21) and the definitions of G_i, G_max, G_min, and T_max, the potential function Φ(s) is bounded. According to Theorem 1, during each time slot, some MD i ∈ M updates its decision s_i to decision s_i′, and this action leads to a decrease in its overhead function, i.e., K_i(s_i′, s_{-i}) < K_i(s_i, s_{-i}). The key idea of this proof is to show that this also leads to a decrease in the potential function by at least G_min, i.e., Φ(s_i, s_{-i}) − Φ(s_i′, s_{-i}) ≥ G_min. Similar to the proof of Theorem 1, there are eight cases to consider.
For case (1), s_i ∈ [1, C] and s_i′ ∈ [1, C]; according to (23), and since the G_i are nonnegative integers, the potential function decreases by at least G_min. Then, the bound follows from (37). For the other cases, the proofs are similar and are omitted here.

Parameter Settings.
The GPTOA was simulated and evaluated using Python with packages such as NumPy, random, and SciPy. We considered a scenario where the wireless BS had a coverage area of 50 × 50 m². Each BS had C = 10 channels with a channel bandwidth of W_M = 20 MHz. The transmission power was q_i = 400 mW, and the background noise was ϖ_0 = −100 dBm. Based on the radio interference model for the urban cellular radio environment, we set the channel gain to h_i^MS = d_{i,r}^{−α}, where d_{i,r} was the distance between MD i and the wireless BS, and α = 4 was the path loss factor [18]. Three schemes were compared: (1) local computing (scheme 1): all tasks are computed locally; (2) MEC offloading (scheme 2): the tasks are either computed locally or offloaded to the MEC server [18]; (3) PV-assisted MEC offloading (our scheme): the tasks are computed locally, offloaded to the MEC server, or offloaded to PVs. The work presented in [18] was treated as a special case of our scheme with the number of PVs set to 0. To eliminate the effect of randomness on the algorithm results, we conducted 1000 tests and performed a statistical analysis of the results as follows.
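The channel-gain setting h_i^MS = d_{i,r}^{−α} can be reproduced in a few lines; the MD positions and the central BS placement are our assumptions for illustration:

```python
import math
import random

random.seed(1)
ALPHA = 4                                # path loss factor
SIDE = 50.0                              # 50 x 50 m coverage area
bs = (SIDE / 2, SIDE / 2)                # BS position (assumed central)
mds = [(random.uniform(0, SIDE), random.uniform(0, SIDE)) for _ in range(30)]

def channel_gain(p, ref=bs, alpha=ALPHA):
    # h_i^MS = d_{i,r}^{-alpha}, with d_{i,r} the MD-to-BS distance.
    d = math.hypot(p[0] - ref[0], p[1] - ref[1])
    return d ** (-alpha)

gains = [channel_gain(p) for p in mds]
assert all(g > 0 for g in gains)
assert channel_gain((25.0, 26.0)) > channel_gain((25.0, 35.0))  # nearer is stronger
```

With α = 4, gains fall off steeply with distance, which is why MDs far from the BS are the first to prefer a nearby PV or local execution in the experiments that follow.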
First, we fixed the number of PVs (service vehicles) to 40 to observe the changes in the metrics (average delay, energy consumption, and total overhead of tasks, as well as task assignment results and load on the MEC server) as the number of MDs (service requesters) increased.
In Figure 4, the average delay, energy consumption, and total overhead of the three schemes are compared. We can see that all three metrics of scheme 1 are higher than those of the other schemes due to the limited local computation power. When the number of MDs is less than 10, the three metrics are the same for schemes 2 and 3. This can be explained by the lack of tasks offloaded to PVs. As the number of MDs increases, the metrics of scheme 3 grow less rapidly than those of the other two schemes. When the number of MDs is 30, scheme 3 results in a 26% reduction in delay and a 17% reduction in total overhead (on average) compared with scheme 2. Figure 5 shows the task assignment results of the proposed scheme. When the number of MDs is less than 10, all tasks will be offloaded to the MS, because the MS has a shorter task execution time. However, as the number of MDs increases, no more tasks can be offloaded to the MS. This is due to the fact that when multiple tasks are offloaded to the MS, strong channel interference and a heavier computational load result, which in turn cause intolerable delays. This causes some MDs to give up offloading their tasks to the MS. In addition, due to limited resources and short communication distances, only a small portion of the PVs can serve MDs. Therefore, as the number of MDs increases, eventually, the number of tasks executed locally will exceed the number of tasks offloaded to PVs. Figure 6 shows the total workload allocated to the MEC server under the three schemes. In scheme 1, no computation tasks are offloaded, so the burden on the MEC server is 0. The workload allocated to the MEC server in scheme 3 is lower than that in scheme 2, because PVs share some of the computation tasks. When the number of MDs is 30, scheme 3 reduces the workload of the MS by 5%.
Then, we fixed the number of MDs to 30 to observe the change in the metrics as the number of PVs increases.
In Figure 7, the average delay, energy consumption, and overhead of the three schemes are compared for different numbers of PVs. Scheme 3 outperforms both schemes 1 and 2. This is because, as the density of PVs increases, the idle computing resources of PVs are utilized more fully. Figure 8 shows the task allocation results for the proposed scheme. As the number of PVs increases, the number of tasks offloaded to PVs also increases, while the number of locally executed tasks continues to decrease. From another perspective, as the density of PVs increases, more MDs are likely to be connected to PVs, so MDs that have given up offloading to the MS have more opportunities to offload. In addition, the computational power of the MS is much greater than that of PVs. Tasks are assigned to PVs only when the MS cannot serve more tasks. Therefore, the number of tasks executed by the MS will not decrease as the number of PVs increases.

Conclusion
In this study, we proposed a parked vehicle-assisted multi-access edge computing architecture that enhances the task processing capability of MEC servers and improves the resource utilization of parked vehicles. We first discussed in detail the design principles behind the system model of the PV-assisted MEC architecture, which served as a premise for the formulation of the computation offloading scheme. Next, by formulating the computation offloading problem as a noncooperative game, we proposed a PV-assisted MEC computation offloading scheme that effectively reduces the burden on the MEC server. Simulation results confirmed the feasibility and high efficiency of the proposed computation offloading scheme. As mentioned in Section 3, incentives are not considered in this study; thus, in the future, we will further investigate how to incorporate incentives into the PV-assisted MEC task offloading scheme proposed here. Deep reinforcement learning-based techniques have obvious advantages when the problem size is large or when there are multiple conflicting offloading goals [6,32]. Therefore, another feasible research direction is to apply deep reinforcement learning to further improve the proposed task offloading scheme.

Data Availability
The data used to support the findings of this study are simulated by the algorithm proposed in this article, and the parameters used in the simulation are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.