A Smart Semipartitioned Real-Time Scheduling Strategy for Mixed-Criticality Systems in 6G-Based Edge Computing



Introduction
Nowadays, with 6G wireless communication networks and various smart sensors widely deployed, IoT applications are growing rapidly in a wide range of areas, including industrial robots, driverless cars, and edge computing [1][2][3][4][5]. In particular, edge computing, a new application paradigm, is growing popular with the development of 6G technology. In these 6G-based edge computing systems, various sensors and mobile devices with different importance or criticality levels are integrated into a single computation platform to save space and energy. Criticality captures the level of assurance needed against system failure [6]. Generally, criticality can be divided into several levels, such as low and high criticality. For example, in automotive systems, the tasks issued by steering and braking sensors are safety-related and high-criticality (HI-criticality), while the tasks of multimedia players, used for infotainment, are low-criticality (LO-criticality) tasks. Systems that contain components with more than one distinct criticality level are mixed-criticality systems (MCS) [7], a special kind of 6G-based application with multiple criticality levels. Task scheduling is a fundamental issue in MCS, reconciling the conflicting requirements for resource usage. A major challenge of MCS is to schedule tasks so that the execution of all HI-criticality tasks is guaranteed. Because reliability verification and mixed criticality coexist in MCS, traditional real-time scheduling algorithms cannot be directly adopted [8,9].
With the growth of computing requirements, MCS platforms are migrating from single processors to multiprocessor hardware. MCS scheduling on multiprocessors can be divided mainly into global scheduling [10][11][12] and partitioned scheduling [13][14][15][16][17]. Fully global scheduling, because tasks migrate globally, incurs the overhead of context switching and the associated cache effects, while purely partitioned scheduling, in which some processors are overloaded and others idle because migration is forbidden, wastes system resources [18]. Accordingly, researchers combined the two methods and proposed semipartitioned scheduling strategies [19][20][21].
Semipartitioned algorithms apply two-phase allocations for the different system criticality modes. During a criticality mode update, the executing low-criticality (LO-criticality) tasks (jobs) may be aborted and new ones executed on a different processor, so that these jobs' deadlines are met and the schedulability of the system improves. However, in most existing MCS semipartitioned scheduling algorithms, when the system criticality mode switches from LO-criticality to HI-criticality, the LO-criticality tasks are discarded outright to ensure the completion of HI-criticality tasks [22][23][24], which is overly pessimistic. LO-criticality does not mean noncritical, and dropping the executing LO-criticality tasks may damage the system's acceptance rate. During scheduling, processors can be idle enough to be assigned LO-criticality tasks, thereby improving system utilization and task acceptance [25,26].
Furthermore, the acceptance rate and the utilization rate are the main schedulability-related parameters in MCS for ensuring the completion of HI-criticality tasks. However, tasks with identical criticality can have different influences on an MCS in actual applications, where some tasks are more significant or demand a higher quality of service (QoS). To describe the QoS property of a task, a notion such as value [27,28] is usually used: the higher the value, the better the quality brought by the task. The cumulative value brought by finished tasks is recorded as TV to represent the QoS of all tasks under a scheduling algorithm [29,30].
1.1. Organization. The rest of the paper is structured as follows: related work is reviewed in Section 2. Section 3 describes the paper's overall framework. Section 4 defines the proposed MCS model and notation in detail. We analyze the schedulability of tasks in MCS in Section 5. Section 6 designs the task priority assignment. The details of the proposed scheduling algorithm SSPS are introduced in Section 7. The simulation experiment setup and results are presented in Section 8. Finally, Section 9 summarizes the conclusions and future work.
Related Work

A review of the literature [13][14][15][16][17] shows that the partitioned scheduling approach can achieve better schedulability than the global scheduling approach. In the partitioned scheduling method, the task sets are first allocated to each processor and then executed according to a single-processor scheduling algorithm. The optimal partitioning of task sets on multiprocessors is an NP-hard problem, so researchers mainly use heuristic partitioning algorithms to obtain suboptimal solutions. For MCS on identical multiprocessor platforms, a fixed partitioned scheduling algorithm was first proposed in [14], and the impact of different task set orderings as well as heuristic partitioning on system performance was investigated. It showed that decreasing criticality (DC) achieves better schedulability than decreasing utilization (DU). For implicit-deadline sporadic MCS, a partitioned scheduling algorithm, MC-PARTITION, based on DC was proposed, achieving a better speedup bound. Since the task criticality level may change, tasks are assigned via Best-Fit Decreasing (BFD) on combined criticality and utilization, improving resource utilization [15]. However, a pure partitioned scheduling algorithm may reduce the utilization of the entire system because migration between processors is forbidden [16,17].
These reasons led to the emergence of semipartitioned scheduling strategies, in which most tasks are assigned to a fixed processor and some tasks can be scheduled on different processors globally [18][19][20]. A series of semipartitioned scheduling algorithms has been proposed for MCS. Santy et al. designed a heuristic scheduling strategy combining reservation, semipartitioning, and periodic conversion, which reduces the migration overhead and obtains better performance [21].
In the original Vestal model, LO-criticality jobs are sometimes treated the same as noncritical jobs that are not guaranteed in the HI-criticality system mode, which ensures the completion requirements of HI-criticality tasks. Nevertheless, from an engineering perspective, a LO-criticality task is not a no-criticality task and cannot be dropped lightly [22][23][24][25][26]. Su and Zhu first focused on the LO-criticality tasks dropped in mixed-criticality scheduling and discussed the feasibility of restarting LO-criticality tasks from a multimodal perspective [22]. Burns and Baruah constructed an elastic mixed-criticality task model, which enables more frequent execution of the LO-criticality task set through elastic processing [23]. Baruah et al. [24] introduced an additional, less pessimistic WCET for LO-criticality jobs to guarantee service regardless of the executions of HI-criticality jobs. The work in [25] follows the MC-Fluid framework to design a scheduler that handles LO-criticality service with a good speedup factor. Some researchers argue that a real-time task has an importance or quality that should be treated as a factor to improve the QoS of the system or application [27,28]. In these papers, a notion, namely value, is used to represent the quality of a task as a basis for the scheduling algorithm. Moreover, the value density (value per time unit) and urgency of a task have been considered jointly in dynamic scheduling algorithms, improving real-time application performance [29,30].

Overall Framework
We consider scheduling for mixed-criticality systems (MCS) on a multiprocessor platform in a 6G-based edge computing environment. First, a response-time-based schedulability analysis is used to obtain schedulable tasks. Then, these tasks are sorted by a priority assigned from criticality, value, and deadline, and allocated to processors by first fit (FF) in descending priority order. The proposed smart semipartitioned scheduling strategy (SSPS) allows some tasks to be migrated to other processors as needed. During scheduling, slack time is collected to execute more jobs. The overall framework of SSPS is shown in Figure 1.

Our Contributions.
For mixed-criticality systems (MCS) in 6G-based edge computing on homogeneous multiprocessors, the timing and service quality of system tasks are taken into consideration, and a smart semipartitioned scheduling strategy (SSPS) is proposed in this paper. In addition, when the system mode switches from LO-criticality to HI-criticality, a mechanism that keeps serving LO-criticality tasks (jobs) is designed in SSPS, to improve both schedulability and QoS.

System Model and Notation.
Here, a mixed-criticality system (MCS) S = (T, P) is defined as follows: a task set T comprising n independent periodic tasks τ_1, τ_2, ⋯, τ_n and a processor set P with m identical processors p_1, p_2, ⋯, p_m. A dual-criticality MCS model is adopted in this paper, which runs in either a HI-criticality mode or a LO-criticality mode.
Definition 1. MCS tasks. A task in the MC model can be characterized by a 5-tuple of parameters, including:
(1) ζ_i ∈ {LO, HI} denotes the criticality of task τ_i, where LO < HI. A HI-criticality task is subject to certification, whereas a LO-criticality task does not need to be certified.
(2) C_i(l) denotes task τ_i's worst-case execution time (WCET) in criticality mode l, where l ∈ {LO, HI}. C_i(HI) and C_i(LO) denote the WCET of τ_i in HI-criticality and LO-criticality mode, respectively, and they satisfy C_i(LO) < C_i(HI).
(3) V_i(l) specifies the value of task τ_i in criticality mode l, where l ∈ {LO, HI}. V_i(LO) and V_i(HI) denote the value of τ_i in LO-criticality and HI-criticality mode, respectively, and they satisfy V_i(LO) < V_i(HI).
Each task τ_i in the MCS can give rise to a potentially infinite sequence of jobs.
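As an illustration, the task tuple of Definition 1 can be sketched as a small data structure; the field names are our own assumptions, and the tuple entries not reproduced above are omitted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MCTask:
    """Sketch of a dual-criticality task (Definition 1)."""
    name: str
    crit: str    # criticality level ζ_i: 'LO' or 'HI'
    wcet: dict   # C_i(l): WCET per mode, e.g. {'LO': 2, 'HI': 4}
    value: dict  # V_i(l): value per mode, e.g. {'LO': 10, 'HI': 30}

    def __post_init__(self):
        # The model requires C_i(LO) < C_i(HI) and V_i(LO) < V_i(HI)
        # for HI-criticality tasks.
        if self.crit == 'HI':
            assert self.wcet['LO'] < self.wcet['HI']
            assert self.value['LO'] < self.value['HI']
```

The per-mode dictionaries make the dual-mode lookups C_i(l) and V_i(l) direct, mirroring the notation of the model.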
Definition 2. MCS jobs. Each job J_i^j released by task τ_i is described by a 4-tuple of parameters. The system starts in the LO-criticality mode and remains in this mode as long as every job finishes within its LO-criticality execution time.
If any job does not complete its execution within its LO-criticality execution time C_i(LO), the system criticality mode is raised and HI-criticality tasks are executed with budget C_i(HI).
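The mode-raise rule above can be stated compactly; this is a minimal sketch with hypothetical names, not the paper's code:

```python
def update_mode(mode, executed, c_lo, finished):
    """Dual-mode rule: while in LO-criticality mode, a job that has
    consumed its C_i(LO) budget without signalling completion raises
    the system to HI-criticality mode; otherwise the mode is kept."""
    if mode == 'LO' and not finished and executed >= c_lo:
        return 'HI'
    return mode
```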

Assumptions of the Model.
In the MCS model, LO-criticality does not mean noncriticality, and these tasks should be executed to the extent possible.
Assumption 3. If the MCS switches from LO-criticality mode to HI-criticality mode, some of the LO-criticality tasks (and jobs) are not dropped directly and are allowed to be scheduled later.
Assumption 4. Tasks are independent of each other; they only share a processor, but not any other resource, such as bandwidth or memory.

Schedulability Analysis
In this section, we will investigate the schedulability by analyzing the response time of the job.
For a job J_i^j released by task τ_i, let [a_i^j, b_i^j] denote its scheduling window, and suppose t_raise is the time at which the system mode is raised from LO-criticality to HI-criticality.

(1) When t_raise ∈ [0, a_i^j), the system has been raised to HI-criticality mode before the job J_i^j starts, so the job executes in HI-criticality mode throughout its window.

(2) When t_raise ∈ [a_i^j, b_i^j), the mode is raised while the job J_i^j is active, and its window splits into two intervals:

(a) During the interval [a_i^j, t_raise), the system is still in LO-criticality mode and the job J_i^j executes in LO-criticality mode; the interference time can be calculated as in Equation (1), where hp(i) denotes the tasks with higher priority than task τ_i.

(b) During the interval [t_raise, b_i^j], the system has been raised to HI-criticality mode and the job J_i^j executes in HI-criticality mode; the interference time is defined as in Equation (2), where hc(i) denotes the tasks with higher criticality than task τ_i.

(3) When t_raise ∈ [b_i^j, ∞), the mode is raised only after J_i^j completes, so the job executes entirely in LO-criticality mode.

In summary, the response time of J_i^j can be expressed as in Equation (3). Based on Equation (3), the response time R_i of task τ_i is determined by the jobs released by τ_i as the maximum response time over its jobs:

R_i = max_j R_i^j.

According to the discussion above, the pseudocode of the schedulability analysis algorithm is given in Algorithm 1. The inputs are the unpartitioned task set T and the unallocated processor set P; the output is the partitioned task queue of the processors. At first, no processor is allocated any task (lines 1-3). Then the tasks in T are sorted by priority (line 4). After this, the algorithm allocates the tasks to processors (lines 5-13) and finally returns the partitioned result (line 14). For each task τ_i in the queue (line 5, the outer for), starting from the first processor, the algorithm tries to allocate τ_i to a processor p_j (line 7, the inner for). If τ_i can finish in time, it is inserted into processor p_j's ready queue and the algorithm moves on to the next task (lines 8-10).
Algorithm 1 contains a two-layer loop: the outer loop (lines 1, 5) iterates O(n) times and the inner loop (line 7) iterates O(m) times; line 6, between the outer loop (line 5) and the inner loop (line 7), calculates R_i for the response-time test. Consequently, Algorithm 1's run-time complexity is O(n·m).
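The allocation loop of Algorithm 1 can be sketched as follows; `response_time` is a caller-supplied stand-in for the Equation (3) analysis, and the dictionary layout of a task is our own assumption:

```python
def partition(tasks, m, response_time):
    """First-fit partitioning guarded by a response-time test
    (sketch of Algorithm 1). Tasks that fit on no processor are
    simply left unallocated, as in the pseudocode."""
    ready = [[] for _ in range(m)]  # lines 1-3: empty ready queues
    # line 4: sort tasks by priority in descending order
    for task in sorted(tasks, key=lambda t: t['prio'], reverse=True):
        for queue in ready:  # line 7: try processors in order
            if response_time(task, queue) <= task['deadline']:
                queue.append(task)  # lines 8-10: allocate and stop
                break
    return ready
```

A toy `response_time` that sums the WCETs already on the processor plus the candidate's own WCET is enough to exercise the first-fit behavior.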

Priority Assignment of Task
In general, the priority of a task is the basis of schedule. This section mainly considers the criticality level and the value of task and proposes the priority assignment strategy.
6.1. Criticality and Value. According to Definition 1, a HI-criticality task τ_i's value V_i is related to its criticality ζ_i, satisfying V_i(HI) > V_i(LO).
In the existing strategies, LO-criticality tasks are dropped when the system criticality mode upgrades. The total value (TV) of the system can then be expressed per mode, where ∑_{τ_i∈T} V_i(LO) is the system total value in the LO-criticality system mode and ∑_{τ_i∈T_HI} V_i(HI) is the system total value in the HI-criticality mode.
To compare the values in HI-criticality and LO-criticality mode, V_i(HI) and V_i(LO), let V_i(HI) = CF × V_i(LO), where CF is the criticality factor of a task, satisfying CF > 1. Let ΔTV denote the total value difference between the HI-criticality mode and the LO-criticality mode:

ΔTV = ∑_{τ_i∈T_HI} (CF − 1) × V_i(LO) − ∑_{τ_i∈T_LO} V_i(LO).    (4)

Wireless Communications and Mobile Computing
In Equation (4), ∑_{τ_i∈T_HI} (CF − 1) × V_i(LO) indicates the value difference of HI-criticality tasks between the HI-criticality and LO-criticality modes, and ∑_{τ_i∈T_LO} V_i(LO) represents the values obtained by the LO-criticality tasks.
If ΔTV > 0, it means that the TV increases as system criticality mode upgrades. In other words, the value difference of HI-criticality tasks in different criticality modes is larger than the LO-criticality tasks' values at this time.
If ΔTV ≤ 0, the TV does not rise when the system criticality mode switches from LO-criticality to HI-criticality.
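The sign of ΔTV can be checked numerically; this sketch assumes the form ΔTV = ∑_{T_HI}(CF − 1)·V_i(LO) − ∑_{T_LO} V_i(LO), with the two sums as interpreted above:

```python
def delta_tv(hi_values_lo, lo_values_lo, cf):
    """ΔTV: value gained by HI-criticality tasks after the mode switch
    minus the value lost by the dropped LO-criticality tasks.
    hi_values_lo / lo_values_lo hold V_i(LO) per task; cf is CF > 1."""
    gain = sum((cf - 1) * v for v in hi_values_lo)
    loss = sum(lo_values_lo)
    return gain - loss
```

A positive result reproduces the ΔTV > 0 case (TV rises on the mode upgrade); a nonpositive one the ΔTV ≤ 0 case.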
6.2. Assignment of Task's Priority. In MCS, a task's priority should reflect its attributes, including criticality level, value, and deadline. When constructing the priority assignment function Pr_i, we consider the importance of these factors.
(1) All HI-criticality tasks should be executed first.
(2) Among tasks of the same criticality level, tasks with higher value are prioritized.
In different system criticality modes, the priority of a task τ_i is recorded as Pr_i^l and is computed by Equation (5), where C_i(l) and V_i(l) vary as the system criticality mode l changes, with l ∈ {LO, HI}.
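Since Equation (5) is not reproduced here, the stated ordering rules can be illustrated with an assumed lexicographic key (criticality first, then value, then earlier deadline); this is our illustration, not the paper's exact function:

```python
def priority_key(task, mode):
    """Illustrative priority key for the stated rules: HI-criticality
    tasks first, then higher value within a criticality level, then
    earlier deadline. Larger tuples sort as higher priority."""
    crit_rank = 1 if task['crit'] == 'HI' else 0
    return (crit_rank, task['value'][mode], -task['deadline'])
```

Sorting with `reverse=True` on this key yields the descending priority order used by the partitioning step.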

Scheduling Algorithm
For the MCS under a homogeneous multiprocessor platform, we propose a smart semipartitioned scheduling strategy (SSPS).
7.1. Smart Semipartitioned Scheduling Strategy (SSPS). SSPS includes both partitioned scheduling and global scheduling; the details are as follows:
(1) Task ordering. All tasks of T are sorted in descending order of the priorities calculated by Equation (5).
(2) Processor allocation. Each sorted task is allocated to a processor by the First-Fit Decreasing (FFD) method.
(3) Schedulability test. The schedulability of each task subset is tested by Algorithm 1.
(4) Task execution. This step includes executing the jobs released by the tasks and collecting the processors' idle time for the execution of LO-criticality tasks.
(a) The tasks allocated to each processor execute by priority, and these tasks do not migrate. During this process, the slack times of each processor are collected and stored in the queue Que_Slack.
(b) When the system criticality mode upgrades, all unfinished LO-criticality jobs are sorted and managed globally, and their execution times are assigned from Que_Slack. In the HI-criticality mode, we allow the execution of LO-criticality tasks but do not allow them to preempt HI-criticality tasks; i.e., the HI-criticality tasks incur no interference from the ones with a LO-criticality level.
Here, the queue Que_Slack stores feasible slack fragments sf. Each sf is represented as a pair (q, d), where q is the length of the fragment and d is its end time. The slack time collection procedure is shown as Algorithm 2.
In Algorithm 2, the inputs are the executing job J_exe^j and the slack fragment sf_i of processor p_i; the output is the idle time queue Que_Slack. The precondition for collecting processor slack time is that the job J_exe^j finishes before its deadline d_exe^j (line 1). If the input sf_i is null, a new fragment is inserted into Que_Slack (lines 2-6). Otherwise, the job's execution time e_exe^j is compared with the fragment length q; the former must not be larger than the latter to guarantee J_exe^j's execution. If the two are equal, sf_i is removed from Que_Slack (lines 9-10); if e_exe^j is less than q, the time remaining after completing J_exe^j is written back to Que_Slack (lines 11-14). Finally, Que_Slack is returned (line 17).
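The fragment update at the heart of Algorithm 2 can be sketched as below; fragments are (q, d) pairs as defined above, and the function name is ours:

```python
def consume_slack(que_slack, e_exe, sf):
    """Run a job of remaining execution time e_exe inside the slack
    fragment sf = (q, d): the fragment is removed when exactly
    consumed (lines 9-10) or shrunk to its remainder (lines 11-14)."""
    q, d = sf
    assert e_exe <= q, "a fragment must be long enough to hold the job"
    que_slack.remove(sf)
    if e_exe < q:
        que_slack.append((q - e_exe, d))  # leftover idle time
    return que_slack
```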

Inputs:
Task set to be partitioned T = {τ_1, ..., τ_n}; processor set to be allocated P = {p_1, ..., p_m}.
Outputs:
PT = {Que_Ready_p_1, ..., Que_Ready_p_m}. /* where Que_Ready_p_i is the ready queue of tasks on p_i */
1: for each p_j in P do
2:   Set Que_Ready_p_j = {};
3: end for
4: Sort T by priority in descending order;
5: for each τ_i in T do
6:   Calculate R_i by Eq. (3);
7:   for each p_j in P in descending order do
8:     if R_i ≤ D_i then
9:       Add τ_i into Que_Ready_p_j;
10:      break;
11:    end if
12:  end for
13: end for
14: return PT;
Algorithm 1: Schedulability analysis.

Example 1. A task set including 5 tasks is shown in Table 1 and is divided between two homogeneous processors.
The system is in LO-criticality mode at the initial time. If the task set is presorted using the DU method, the task sequence is τ_1, τ_3, τ_4, τ_2, τ_5. In accordance with the FFD strategy, tasks τ_1 and τ_3 are assigned to processor p_1, while tasks τ_2, τ_4, and τ_5 are assigned to processor p_2. In this case, both processors meet the schedulability conditions. In the LO-criticality system mode, the resource utilization of p_1 is 83.3% and that of p_2 is 78.3%. When the system mode upgrades to HI-criticality, all LO-criticality tasks τ_3, τ_4, and τ_5 are discarded; processor p_1 then obtains a resource utilization of 62.5% and processor p_2 of 50%.
Existing scheduling algorithms, such as MC-PARTITION, drop all LO-criticality tasks directly in the HI-criticality system mode, even if there is idle time on the processors. To improve the task acceptance rate and processor utilization, the deadline and value attributes of a task can be considered jointly, and LO-criticality tasks can be dispatched globally using the processors' idle time when the system mode switches from LO-criticality to HI-criticality. A hyperperiod execution (here, 24 time units) for the task set of Example 1 is shown in Figure 3, in which the task set is presorted according to the priority function of Equation (5), with the same result as the DU method.
After the system criticality mode upgrades, we perform global scheduling for the LO-criticality tasks τ_3, τ_4, and τ_5 on the premise of guaranteeing the HI-criticality tasks τ_1 and τ_2, allocating the idle time on processors p_1 and p_2 dynamically. This method achieves a high acceptance ratio and improves the utilization of both processors to 91.7% (11/12).

Analysis of SSPS.
In the smart semipartitioned scheduling strategy (SSPS), each task's schedulability is analyzed by Algorithm 1, and tasks allocated by first fit (FF) are assigned priorities by Equation (5). The queue Que_Slack collects slack time via Algorithm 2, and the LO-criticality jobs are collected into the queue Que_Low. The system selects fragments in Que_Slack to execute the jobs in Que_Low. Meanwhile, the queue Que_Ready stores the prepared tasks. The pseudocode of SSPS is shown in Algorithm 3.
In Algorithm 3, the inputs include the task set T, the processor set P, the queue Que_Low for LO-criticality jobs, the queue Que_Slack storing processor idle time, the queue Que_Ready for ready tasks, the initial system mode (Sys_Mode = LO), the total number of successful jobs N_ST, and the total number of jobs N_T. The output of Algorithm 3 is the acceptance ratio N_ST/N_T. First, all tasks of T are analyzed and allocated to a ready queue Que_Ready_p_m on each processor p_m (line 1). Then, for each queue Que_Ready_p_m, the job J_exe^m is fetched and its response time R_exe^m is compared with C_exe(LO) (lines 2-4). If J_exe^m cannot finish (R_exe^m > C_exe(LO)), there are two subcases: if J_exe^m is a HI-criticality job and Sys_Mode = LO, the system mode switches from LO-criticality to HI-criticality according to the MCS definition; the LO-criticality tasks in the ready queue of each processor are aborted, the LO-criticality jobs are inserted into the queue Que_Low, and J_exe^m is executed (lines 5-10). If J_exe^m is a LO-criticality job, it is aborted and the loop continues (lines 11-14). Otherwise, J_exe^m can finish; it is executed and the slack time remaining after its completion is collected (lines 15-19).
During scheduling in the HI-criticality system mode, LO-criticality jobs are executed in the idle time of each processor (lines 20-30). For each slack fragment sf_s in the queue Que_Slack, the top job J_exe^m of Que_Low is chosen for execution. Its execution time e_exe^m is compared with q_s, the length of sf_s: the former must not be larger than the latter, in which case J_exe^m is completed and the slack time remaining after its completion is collected (lines 22-25).
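Step (b) of the HI-criticality phase, serving Que_Low from Que_Slack, can be sketched as follows; jobs and fragments are simplified to tuples, and the fitting policy (earliest fragment end time first) is our assumption:

```python
def run_lo_in_slack(que_low, que_slack):
    """Serve pending LO-criticality jobs from collected slack
    fragments without touching HI-criticality execution time.
    que_low: list of (name, exec_time); que_slack: list of (q, d)."""
    finished = []
    for name, e in list(que_low):
        # pick the first fragment (earliest end time) that can hold the job
        for sf in sorted(que_slack, key=lambda f: f[1]):
            q, d = sf
            if e <= q:
                que_slack.remove(sf)
                if e < q:
                    que_slack.append((q - e, d))  # keep the remainder
                finished.append(name)
                break
    return finished, que_slack
```

Because jobs only ever consume listed slack fragments, HI-criticality execution time is never touched, matching the no-preemption rule of SSPS.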

Inputs:
Executing job J_exe^j; slack fragment sf_i.
Output:
The queue collecting idle time, Que_Slack.
1: if J_exe^j finishes at t_0 (t_0 < d_exe^j) then
2:   if sf_i = null then
3:
Algorithm 2: Slack time collection.

For the SSPS algorithm, it is necessary to discuss the system value V_i.
(1) In the LO-criticality system mode, the total value of the task set T is ∑_{τ_i∈T} V_i(LO).
(2) In the HI-criticality system mode, the total value contains two parts: the value obtained by the HI-criticality tasks, ∑_{τ_i∈T_HI} V_i(HI), where T_HI ⊂ T, and the value obtained by the finished LO-criticality task set, denoted T_LO′. The total value TV_SSPS can be described as

TV_SSPS = ∑_{τ_i∈T_HI} V_i(HI) + ∑_{τ_i∈T_LO′} V_i(LO).    (6)

Comparing Equation (6) with the TV of the classic strategies, TV_SSPS ≥ TV obviously holds.
To discuss how TV_SSPS changes with the system criticality mode, let ΔTV_SSPS denote the total value difference between the HI-criticality mode and the LO-criticality mode. If ΔTV_SSPS > 0, the TV_SSPS in HI-criticality mode is larger; otherwise, it is smaller:

ΔTV_SSPS = ∑_{τ_i∈T_HI} (CF − 1) × V_i(LO) − ∑_{τ_i∈(T−T_HI−T_LO′)} V_i(LO),    (7)

where CF = V_i(HI)/V_i(LO) is the criticality factor of a task, and CF > 1.

In Equation (7), ∑_{τ_i∈T_HI} (CF − 1) × V_i(LO) is the value difference of the HI-criticality tasks between the HI-criticality and LO-criticality modes, and ∑_{τ_i∈(T−T_HI−T_LO′)} V_i(LO) is the value lost by the dropped LO-criticality tasks.

Simulations and Analysis
The experiments were run on a PC with a 3.40 GHz processor with 4 identical cores and 8 GB memory. In the simulation, we compared SSPS with the existing partition scheduling algorithms DC-RM [13] and MC-PARTITION [14], which are classic and representative algorithms in the MCS partition scheduling research community; many derived algorithms for other real-time application scenarios build on them [16][17][18][19]. The task set parameters of the experiments were randomly generated. Three metrics are used:
(1) AR = N_ST/N_T, where N_ST is the number of successful tasks and N_T is the size of the system task set. AR shows the proportion of successful tasks in the total task set.
(2) WS, the weighted schedulability, indicates the total utilization of the tasks.
(3) TV represents the QoS of all successful tasks.
To measure the average number of job migrations, 100 trials of simulations with different tasks were conducted.

The results are shown in Figure 4. It can be seen that all algorithms' AR decreases significantly as U_i^LO grows from 0.3 to 1.0; the DC-RM algorithm has the lowest AR and the SSPS algorithm obtains the best AR. When U_i^LO is below 0.5, SSPS is close to MC-PARTITION and DC-RM in AR. But as U_i^LO becomes larger than 0.6, SSPS begins to outperform the other two algorithms, because in HI-criticality mode SSPS executes the LO-criticality tasks selectively, improving the AR of the whole system, while the other two algorithms discard LO-criticality tasks directly, which leads to a sharp drop in AR.
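The reported metrics can be computed from per-job simulation records; the record layout here is our assumption for illustration:

```python
def metrics(jobs):
    """AR, TV, and a simple utilization sum over job records of the
    form {'ok': bool, 'value': float, 'util': float} (sketch)."""
    n_total = len(jobs)
    n_ok = sum(1 for j in jobs if j['ok'])
    ar = n_ok / n_total                            # acceptance ratio AR
    tv = sum(j['value'] for j in jobs if j['ok'])  # total value of successes
    ws = sum(j['util'] for j in jobs)              # total utilization (WS proxy)
    return ar, tv, ws
```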
Figure 5 illustrates how the AR varies with the high-criticality task proportion HTP, where U_i^LO is set to 0.6 and HTP grows from 0.3 to 1.0. As the results show, as HTP increases, the additional HI-criticality tasks require more execution time; thus, all algorithms' AR continues to decline. When HTP is less than 0.5, the AR of all algorithms is close because the competition for execution is not intense in the LO-criticality system mode. As HTP grows beyond 0.6, the intense competition among tasks reduces the AR: some tasks cannot finish their execution and the system mode is raised to HI-criticality.
Once the system mode upgrades to HI-criticality, the tasks' C_i become larger. Compared with MC-PARTITION and DC-RM, the SSPS algorithm achieves a more stable and higher AR, because the former two algorithms drop the LO-criticality tasks directly. When the system mode switches from LO-criticality to HI-criticality, the SSPS algorithm executes some LO-criticality jobs in the processors' idle time, whose amount gradually decreases as HTP increases.

Schedulability Analysis.
Figure 6 shows that the weighted schedulability WS declines as U_i^LO increases, where HTP is set to 0.5. Compared with the MC-PARTITION and DC-RM algorithms, SSPS obtains a higher and more stable WS by scheduling LO-criticality tasks in the HI-criticality system mode. In the beginning, the WS obtained by all algorithms falls steadily. When U_i^LO becomes larger than 0.6, MC-PARTITION and DC-RM degrade faster in WS due to the extra execution time required by the increased HI-criticality load and the direct discarding of LO-criticality tasks in the HI-criticality system mode.
Figure 7 plots how the weighted schedulability WS changes as HTP grows, where U_i^LO = 0.5. HTP is represented on the horizontal axis and WS on the vertical axis. We can see that WS gradually declines as HTP increases, because with more HI-criticality tasks, the system resources they need increase. When HTP is below 0.3, SSPS is almost identical to the MC-PARTITION and DC-RM algorithms, all declining steadily in WS. As HTP becomes larger and the system criticality level rises, the execution time of the HI-criticality tasks becomes longer, which intensifies competition among tasks and reduces the system schedulability. SSPS obtains better WS than the other two methods because, in HI-criticality mode, it can exploit the slack time produced by HI-criticality tasks to execute selected LO-criticality tasks globally.
8.4. Total Value Analysis. The simulation results for the total value TV as the LO-criticality system utilization U_i^LO grows are shown in Figure 8, where the horizontal axis is U_i^LO and the vertical axis is TV. As shown in Figure 8, all algorithms' TV decreases as U_i^LO increases from 0.3 to 0.9. SSPS has a significant advantage over the other two algorithms in TV, which gradually decreases as U_i^LO grows, because only SSPS chooses the tasks with high urgency and high value, thereby obtaining better TV and improving system performance. Figure 9 plots the TV against HTP. The total value TV first rises and then falls as HTP grows from 0.1 to 0.9. In the beginning, the growing number of HI-criticality tasks brings larger value; but as HTP keeps increasing, the HI-criticality tasks' longer execution times reduce the WS of the system, which in turn decreases the TV. SSPS obtains the best TV due to its choice of high-urgency and high-value tasks.

Conclusions and Future Works
In recent years, with the increasing popularity of 6G wireless communication technology, mixed-criticality systems (MCS) in 6G-based edge computing have grown quickly in application scenarios. Meanwhile, with multiprocessor platforms, including homogeneous ones, widely applied, the corresponding MCS scheduling techniques need to be researched. In this paper, a smart semipartitioned scheduling strategy (SSPS) was designed for MCS on homogeneous multiprocessors. First, we analyze each task's schedulability based on its response time and allocate the processors. Then, a task priority assignment function with multiple attributes, including criticality, urgency, and value, is constructed. On top of the schedulability analysis algorithm and the priority assignment, the scheduling algorithm SSPS is proposed. SSPS allocates the tasks in LO-criticality mode, while in HI-criticality mode it not only finishes the HI-criticality tasks but also chooses LO-criticality tasks to execute globally in the processors' slack time. The experimental results illustrate that SSPS achieves the best performance among the compared algorithms.
However, the SSPS algorithm still has some limitations. In practical 6G-based edge computing applications, real-time task scheduling is often related to the sharing of limited resources. With the development of heterogeneous multiprocessors, heterogeneity will increasingly be the norm for 6G-based real-time applications. We will explore the scheduling and resource sharing issues of edge computing on heterogeneous multiprocessors based on the SSPS algorithm. Besides, other complex real-time applications, such as parallel industry systems and smart industrial networks [31][32][33], need to consider several factors in data transmission and task scheduling; we also plan to investigate these issues. Moreover, we notice that modern IoT devices are increasingly equipped with multiple network interfaces; our future work will consider applying the proposed SSPS algorithm to optimize promising multipath parallel data transmission methods [34,35] for the multihomed IoT environment.

Data Availability
The data, including task's properties and performance indicators in the experiments, used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.