An Intelligent Real-Time Traffic Control Based on Mobile Edge Computing for Individual Private Environment

Department of Software Convergence, Soonchunhyang University, Asan-si, Chungcheongnam-do 31538, Republic of Korea
Department of Information Engineering, Yangzhou University, Yangzhou 225127, China
Department of Computer Software Engineering, Soonchunhyang University, Asan-si, Chungcheongnam-do 31538, Republic of Korea
Department of Computer Engineering, Kyung Hee University, Gwangju-si, 17104 Gyeonggi-do, Republic of Korea


Introduction
The 5th generation (5G) communication network aims to deliver extremely fast communication speeds, high reliability, and low end-to-end (E2E) delay in milliseconds (ms). Each base-station cell provides local coverage within 100 meters, which offers strong connectivity, real-time (RT) communication, and Device-to-Device (D2D) communication and supports massive User Equipment (UE) connections as well [1,2]. 5G provides huge bandwidth and ultra-low latency (ULL), improving on 4G-LTE by a factor of ten. Moreover, the 5G paradigm aims to support future user applications such as IoT traffic, Wireless Sensor Network (WSN) traffic, automotive transportation, and gaming traffic. With this support, huge traffic is generated by the Radio Access Network (RAN) devices and goes through the Evolved Packet Core (EPC) gateways, namely the Serving Gateway (S-GW) and Packet Data Network Gateway (P-GW). Nevertheless, the capacity of the EPC area is still limited even though it is a common optical network. Therefore, the EPC architecture remains similar to the previous communication system, which can give rise to traffic congestion problems and insufficient resources in the EPC area. To reduce the outgoing traffic to the remote network, local clouds have been proposed to establish local communication [3]. Currently, Mobile Edge Computing (MEC), Network Slicing (NS), Software-Defined Networking (SDN), and Network Function Virtualization (NFV) are key technologies that have been proposed to overcome the aforementioned challenges in 5G communication, to improve network performance, and to benefit from cost reduction [4,5]. Figure 1 illustrates the typical 5G end-to-end (E2E) communication system architecture. The bottleneck is located in the fronthaul and backhaul: whenever the incoming traffic from the various Remote Radio Heads (RRHs) surpasses the serving capacity, network congestion occurs.
The ULL requirement of RT communication obliges us to cope with the existing problems in the EPC gateways. Because the fronthaul gateway (S-GW) and the backhaul gateway (P-GW) share the same network architecture, in this paper "gateway" refers to either the S-GW or the P-GW. MEC servers are integrated into the gateway to cache the sliced traffic before forwarding it to the remote network.

Mobile Edge Computing (MEC).
Local clouds (i.e., MEC, fog computing, and cloudlets) have been introduced to enhance QoS for various network applications such as the Internet of Things (IoT), Heterogeneous Internet of Things (HetIoT), gaming, and other applications, especially RT applications [6][7][8][9]. The presence of MEC establishes an intelligent network in the edge area [10][11][12].
This technology promises higher communication bandwidth and ULL for real-time communication. Meanwhile, MEC faces challenging capacity limitations, as it is required to offer heterogeneous services to massive numbers of users. Applications with higher computational resource requirements still need to access the remote MCC server. Moreover, privacy protection for the local cloud must be considered for safe communication and data integrity [13][14][15][16]. As shown in Figure 2, a caching method is enabled by MEC servers synchronized with MCC servers, and the frequently requested contents or popularly used applications are cached on the MEC servers. The caching methods are beneficial for latency reduction, gain higher bandwidth, and save resources at the EPC for both the user plane and the data plane. However, several challenges arise in MEC deployment, such as expanded RRH infrastructure, power consumption, resource management, and security problems [17]. Because a variety of user information is stored in MEC in an edge network, sophisticated security methods are required to enable trusted communications for edge networking [3,8].
There exists a huge convergence of heterogeneous applications, services, and infrastructures in 5G edge networks, both physical and virtual. These network environments are difficult to handle in terms of both security and network QoS [18,19]. Network Slicing therefore presents a novel opportunity to handle these issues by slicing the user applications into different groups. Combined with machine learning algorithms, Network Slicing can facilitate the classification of complicated user information, applications, and devices [20,21]. User applications can be sliced by grouping applications that share the same or similar resource requirements into the same group. The sliced applications are more convenient to control and allow flexible control and security configuration by the controller. Undoubtedly, Network Slicing is a key candidate for enhancing future network QoS and network safety to meet the perspective of 5G technology [22,23].

Software-Defined Network (SDN).
SDN is a key candidate technology for enabling future networking driven toward softwarization and intelligent networks. SDN provides a global view of network status and a completely programmable system at the control plane [24]. SDN is the concept of decoupling the forwarding plane from the control plane. This separation gains more convenience in terms of flexibility and scalability, since the user plane requires higher bandwidth while the control plane requires lower latency [25,26]. Computing, routing, monitoring, scheduling, policy control, security, and load balancing are performed by the SDN controller [27]. Not only can SDN be used to enhance the QoS for RT traffic, but it can also be used to enhance trusted communication based on blockchain [28]. The controller gathers information from the user plane via the southbound interface and communicates with the upper layer via the northbound interface. The communication interface between them is provided by the OpenFlow protocol [29]. Even though SDN can stand independently without other technologies, the integration of SDN and NFV presents a great opportunity to enhance virtualized computing in future network environments [30,31].
This idea aims to provide virtualized resources to SDN entities and enables the controller to manage both physical and virtual resources. In cloud systems, converged SDN and NFV benefits computing resources and dynamic resource configuration with fault-tolerant techniques [32,33]. On this basis, the controller can generate a virtual controller and offload computing from the physical controller to the virtualized one.

2.4. The Proposed Intelligent Real-Time Traffic Control.
The proposed method enhances the QoS for RT communications, which can be degraded by the limited resources at the backhaul gateway. The proposed intelligent real-time traffic control handles the incoming traffic based on traffic classification and the integration of MEC servers. Figure 3 shows the proposed network architecture, which integrates the MEC servers with the backhaul gateway. The MEC servers act as caching servers that buffer the incoming traffic, such as conversation, streaming, interactive, and background communication.
As mentioned above, the proposed scheme comprises three stages, namely, traffic classification, caching, and controlling the classified traffic. The following subsections detail these three stages.

Traffic Classification and MEC Caching.
In this paper, Network Slicing refers to the splitting of user traffic into four different slices: slice 1 for conversation, slice 2 for streaming, slice 3 for interactive, and slice 4 for background communication. The classification process is based on the characteristics of each traffic class, such as the packet error ratio (PER), protocol data unit (PDU) size, and other QoS parameters.
Subsequently, each slice of the traffic is cached to a different MEC pool, and each MEC pool provides the buffer resources for queueing the incoming traffic while it waits to be served. The traffic slicing can be performed with the K-means machine learning method, as shown in Figure 4. The process starts by determining the number of groups, K = 4, and then calculating the centroids. Initially, four different subsets are selected randomly as the four class centroids. In the next step, the distance from each sample to each centroid is calculated using the Euclidean distance (ED) equation given as follows:

ED = sqrt((x − a)² + (y − b)²),

where the subset variables x, y, a, and b can represent PDU size, PER, TCP, and UDP, respectively. As depicted in Figure 3, four MEC servers are integrated into the backhaul gateway to serve as buffers for the four traffic classes, so each traffic class has its own MEC server. In this paper, the traffic classification was generated by computer software simulation. The conversation, streaming, interactive, and background traffic was generated based on its QoS parameters.
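The slicing step above can be sketched as a small K-means routine. The following is a minimal, self-contained Python illustration (not the authors' simulation code), assuming each traffic sample is a 2-D feature point such as (PDU size, PER):

```python
import math
import random

def kmeans(points, k=4, iters=50, seed=0):
    """Group traffic samples into k slices with plain K-means.

    points: list of (feature1, feature2) tuples, e.g. (PDU size, PER).
    Returns (centroids, clusters).
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k random samples as initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each sample to its nearest centroid (Euclidean distance)
            nearest = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[nearest].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[j]
            for j, cl in enumerate(clusters)
        ]
    return centroids, clusters
```

With K = 4, the four resulting clusters would correspond to the conversation, streaming, interactive, and background slices.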

Management of the Classified Traffic.
When the backhaul network condition degrades, the real-time communication classes must be served with the first priority. In the proposed scheme, the gateway is configured to serve the cached traffic based on the condition of the backhaul. The scheme is not crucial while the backhaul gateway is in a normal status; it becomes critical when the backhaul gateway is in a congestion state. The backhaul network can be defined with the M/M/1 queue model as follows:

z = (λc + λs + λi + λb)/μ,

where z denotes the ratio of the incoming traffic rate to the serving rate and also represents the status of the backhaul. λc, λs, λi, and λb denote the incoming rates of conversation, streaming, interactive, and background traffic, respectively. μ represents the serving rate of the backhaul gateway. The gateway condition refers to the user-plane status as the traffic is forwarded based on the controller. The controller handles each slice of traffic from the MEC pools. The backhaul status can be analyzed based on z: if z < 1, the backhaul gateway resources are sufficient to handle the incoming traffic, and the status can be assumed to be a normal condition. While the backhaul condition is assumed to be normal, the serving rule is the default configuration. The default rule handles the incoming traffic on a first-come-first-serve (FCFS) basis, so the serving resources and rules are equal for all incoming traffic classes. Under congestion, however, FCFS would drop real-time traffic, and the increased waiting period in the MEC server would lower QoS. In the other scenario, if z ≥ 1 (meaning that the serving resource of the backhaul gateway μ is less than the incoming rate of user traffic λ), network congestion occurs in the system, so priority control of each traffic class has to be considered, as shown in Figure 5. RT traffic classes have to be given primary control rather than NRT.
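As a concrete illustration of the status check above, the following Python sketch (an illustration, not the paper's implementation) computes z and selects the serving mode:

```python
def backhaul_status(lam_c, lam_s, lam_i, lam_b, mu):
    """Classify the backhaul state from the M/M/1 load ratio.

    z = (lam_c + lam_s + lam_i + lam_b) / mu
    z <  1 -> normal state: default FCFS serving rule
    z >= 1 -> congestion state: priority control of traffic classes
    """
    z = (lam_c + lam_s + lam_i + lam_b) / mu
    mode = "priority" if z >= 1 else "fcfs"
    return z, mode
```

For example, with μ = 5 and class arrival rates summing to 3, z = 0.6 and the default FCFS rule applies; if the sum rises to 6, z = 1.2 and priority control is activated.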
The scheme increases the communication rate of the RT traffic classes and reduces the communication rate of the NRT traffic classes based on the increasing rate ratio of RT. The proposed scheme classifies the cached traffic into four different classes, as shown in Figure 6:
(i) Conversation (RT) is configured as the first priority class, and its serving resources are increased more than those of the other communication classes
(ii) Streaming (RT) is configured as the second priority class, and its backhaul resources are increased to greater than those of interactive and background communication but lower than conversation
(iii) Interactive (NRT) is configured as the third priority class, and its serving resources are decreased to lower than conversation and streaming but greater than background communication
(iv) Background (NRT) is configured as the fourth priority class, with the lowest serving resources
The detailed configuration is summarized in Table 1.
Moreover, the RT traffic in the MEC servers will keep being served as in the normal network status because, during the limitation of backhaul resources, the algorithm restricts the serving resources for NRT. In this scenario, NRT user traffic will be queued for longer periods, since some of the NRT resources will be used for RT traffic. The scheme limits the NRT resources until the backhaul gateway returns to a normal status (z < 1) and then configures the serving scheme to handle traffic without restriction, as depicted in Figure 7.
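The reallocation logic above can be sketched as follows. The concrete weight values are illustrative assumptions, since the text specifies only the ordering conversation > streaming > interactive > background under congestion:

```python
def serving_rates(mu, congested):
    """Split the backhaul serving rate mu across the four classes.

    Normal status: equal shares (FCFS-equivalent).
    Congestion: RT classes (conversation, streaming) gain resources
    taken from the NRT classes (interactive, background).
    The congestion weights below are illustrative, not from the paper.
    """
    if congested:
        weights = {"conversation": 0.40, "streaming": 0.30,
                   "interactive": 0.20, "background": 0.10}
    else:
        weights = {c: 0.25 for c in
                   ("conversation", "streaming", "interactive", "background")}
    return {c: w * mu for c, w in weights.items()}
```

Whatever weights are used, they must sum to 1 so that the total serving rate μ is preserved; only its division among the classes changes.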

Analysis.
The E2E latency occurs during packet transmission in the 5G communication system and can be written as T as follows:

T = T_RAN + T_Backhaul + T_Core + T_Transport,

where (i) T_RAN is the latency of packet transmission from UEs to the eNB. This latency mainly comes from the physical and data-link layers, such as the time of negotiation, channel coding, modulation, cyclic redundancy check, and other duties of the physical and data-link layers. (ii) T_Backhaul is the latency of the packet transmitted from the eNB to the backhaul. The common connection between the eNB and the core can be a fiber-optic or microwave link. Switching latency can occur at the S-GW. (iii) T_Core is the time to build the connection to the core gateway. This latency can be contributed by both the control plane and the user plane. The control plane contributes the latency of various EPC entities such as the Mobility Management Entity (MME), Home Subscriber Server (HSS), Policy and Charging Rules Function (PCRF), and the SDN controller.
(iv) T_Transport is the time taken by data transmission to the remote network; this latency depends on the distance, link bandwidth, and routing and switching protocols. The RAN latency can be decomposed as

T_RAN = t_Q + t_FA + t_TX + t_bsp + t_mpt,

where (i) t_Q is the waiting time of the incoming traffic, which depends on λ; if λ ≥ μ, then t_Q increases. (ii) t_FA is the latency caused by frame alignment. (iii) t_TX is the transmission latency, which depends on the radio channel condition, payload size, and transport protocol. (iv) t_bsp is the processing latency at the eNB.
(v) t_mpt is the processing delay at the UE and eNB terminals; it depends on the capacity of both terminals. The backhaul and core latency can be decomposed as

T_Backhaul + T_Core = t_E + t_S + t_epc + t_SR,

where (i) t_E is the time delay of the circuit through network devices. (ii) t_S represents the switching delay. (iii) t_epc represents the delay between the communication interfaces of EPC entities such as the MME, HSS, and PCRF; the communication delay between EPC entities takes a few microseconds. (iv) t_SR represents the latency of the switching and routing periods.
The MEC pool can be modeled as an M/M/1 queue, so the average waiting time of user traffic is denoted as t_Q, where t_Q = λ/(μ(μ − λ)). Then, the Round-Trip Time (RTT) of the E2E delay is defined as approximately 2 × T. The E2E delay and latency occurring in the communication system are well discussed in [34].
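The latency terms above can be combined in a short sketch; the M/M/1 waiting-time formula used here is the standard queueing-theory result, assumed to be the form intended by the text:

```python
def mm1_wait(lam, mu):
    """Mean queueing delay t_Q of an M/M/1 queue: lam / (mu * (mu - lam)).

    Requires lam < mu; otherwise the queue is unstable and t_Q diverges.
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below serving rate")
    return lam / (mu * (mu - lam))

def e2e_rtt(t_ran, t_backhaul, t_core, t_transport):
    """Approximate round-trip time: RTT ~= 2 * T, T being the one-way E2E latency."""
    return 2 * (t_ran + t_backhaul + t_core + t_transport)
```

For instance, with λ = 1 and μ = 2 packets per unit time, t_Q = 1/(2 × 1) = 0.5 time units; the sketch also shows why the priority scheme matters: as λ approaches μ, t_Q grows without bound.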
In communication environments, the delay D_t (at any time t) occurs constantly and varies based on the network status. The variation of delays during communication is called jitter, ΔJ; the jitter at time t is denoted as ΔJ_t and can be calculated with the following equation:

ΔJ_t = |D_t − D_(t−1)|. (7)

According to equation (7), the communication jitter of the system at time t is denoted as ΔJ(syst)_t and can be formed as

ΔJ(syst)_t = ΔJ(c)_t + ΔJ(s)_t + ΔJ(i)_t + ΔJ(b)_t. (8)

Based on equation (8), the average jitter of the four traffic classes in the system can be modeled as

ΔJ(c, s, i, b)_t = (ΔJ(c)_t + ΔJ(s)_t + ΔJ(i)_t + ΔJ(b)_t)/4, (9)

where ΔJ(c)_t ≥ 0; ΔJ(s)_t ≥ 0; ΔJ(i)_t ≥ 0; and ΔJ(b)_t ≥ 0, t = 1, 2, 3, 4, ..., n, and ΔJ(c)_t, ΔJ(s)_t, ΔJ(i)_t, and ΔJ(b)_t are the average jitters of conversation, streaming, interactive, and background, respectively. The communication delay of the system at any time t is conveyed by D(syst)_t and can be formed as

D(syst)_t = D(c)_t + D(s)_t + D(i)_t + D(b)_t. (10)

Corresponding to equation (10), the average delay of the system with the four traffic classes can be determined as

D(c, s, i, b)_t = (D(c)_t + D(s)_t + D(i)_t + D(b)_t)/4, (11)

where D(c)_t, D(s)_t, D(i)_t, and D(b)_t are the average delays of conversation, streaming, interactive, and background, respectively.
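The delay and jitter definitions above can be exercised in a short sketch, assuming jitter at time t is the absolute difference of consecutive delays and the system values aggregate the four class traces (an illustration, not the authors' code):

```python
def jitter_series(delays):
    """Per-step jitter: dJ_t = |D_t - D_(t-1)| over a delay trace."""
    return [abs(b - a) for a, b in zip(delays, delays[1:])]

def system_total(per_class_traces):
    """System value at each time t: sum over the four class traces."""
    return [sum(vals) for vals in zip(*per_class_traces)]

def class_average(per_class_traces):
    """Average over the four classes at each time t."""
    return [sum(vals) / len(vals) for vals in zip(*per_class_traces)]
```

Feeding the four per-class delay traces (conversation, streaming, interactive, background) through `jitter_series` and then `class_average` yields the kind of averaged jitter curve compared in the evaluation figures.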

Simulation Environments.
The experiment was conducted with the computer simulation program network simulator version 3 (NS3), implemented with its C++ library. The simulation topology was composed of the RAN area, the fronthaul, and the backhaul gateway. A RED queue disc was used to buffer the incoming traffic, representing the MEC server. The simulation generated 1,025,148 packets in total: 458,226 conversation packets, 532,598 streaming packets, 22,605 interactive packets, and 11,719 background packets. The communication link interval was configured to 10 milliseconds, and the simulation time for each traffic class was 600 seconds. Eight user devices were used for the simulation, with distances of 15 meters between each other. Figure 8 illustrates the simulation stages conducted for the experiment: initialization, which initializes the simulation state, with conversation, streaming, interactive, and background traffic generated for periodic communication; NCON, which refers to the configuration of the network condition; TCON, which handles the incoming traffic based on the proposed scheme; and, finally, the collection of the simulated results.

Experiment Results and Discussion
In this paper, the system evaluation is based on a comparison between the proposed approach and the conventional approach. The evaluations concern the average E2E delays, average E2E jitters, and average throughputs, both for each individual RT communication class (conversation and streaming) and as total average values integrating RT and NRT in the communication system. Figure 9 shows the comparative average delays of the proposed approach and the conventional approach. The evaluation results are related to the analysis in equation (11), and the average delays are compared by integrating the average delay values of the four communication classes D(c)_t, D(s)_t, D(i)_t, and D(b)_t, respectively. The graph shows that the average delays of the proposed approach are lower than those of the conventional approach. Referring to the graph, the average delay of the proposed approach is approximately 0.01361326 seconds, while the average delay of the conventional approach is approximately 0.013676041 seconds. For an RT communication system, E2E delays have to be ultra-low to deliver great QoS to each user. Typically, the backhaul traffic is reduced rapidly as the forwarding rate of the RT traffic increases. The proposed approach can reduce the number of queued packets in the MEC server and possibly reserve or reduce the MEC resources. With the possibility of a higher forwarding rate at the backhaul, fewer buffer resources are required, lessening the computing resources of the network devices. Figure 10 shows the comparison of the average throughputs of the system between the proposed and conventional approaches. As the graphs show, the proposed approach has higher communication throughputs than the conventional one. The average throughputs depend on the average E2E delays and PDU sizes. The throughput can be calculated by dividing the PDU size by the communication delay.
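The throughput relation just described (PDU size divided by delay) can be checked with a few lines; the PDU size used here is an arbitrary example value, not one reported in the paper:

```python
def throughput(pdu_bits, delay_s):
    """Throughput in bits per second for a given PDU size and E2E delay."""
    if delay_s <= 0:
        raise ValueError("delay must be positive")
    return pdu_bits / delay_s

# with a constant PDU, the lower average delay of the proposed scheme
# directly yields a higher throughput than the conventional scheme;
# the delays below are the averages reported in the text
proposed = throughput(12000, 0.013613260)
conventional = throughput(12000, 0.013676041)
```

This makes the inverse relationship explicit: with the PDU size held constant, any reduction in average delay appears proportionally in the throughput comparison.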
In this paper, the PDU size was configured as constant; thus, the throughput varies based on the communication delays. The evaluation was conducted by calculating the average throughputs of conversation, streaming, interactive, and background communication and summing them as a total average. The average E2E delays of the proposed approach are lower than those of the conventional approach, as shown in Figure 9. The proposed scheme provides higher communication capacity for forwarding the incoming traffic, so the heavy user traffic in the backhaul gateway and bad statuses can be reduced. Moreover, this proposed approach is suitable for handling massive 5G user traffic, as well as enhancing QoS for RT communication classes. Figure 11 shows that the communication jitters of the proposed approach are lower than the jitters of the conventional approach. The jitter evaluation is based on equation (9). The average jitters are compared for each communication class, including the average jitters of conversation, streaming, interactive, and background, denoted as ΔJ(c)_t, ΔJ(s)_t, ΔJ(i)_t, and ΔJ(b)_t, respectively. Lower jitters indicate the communication stability of the network. Communication jitter occurs when the backhaul network becomes congested. Consequently, the serving interval at the gateway varies with the situation; when the fluctuation of the serving times is higher, the communication QoS decreases concurrently. In RT communication, especially, ultra-low communication jitters are required. The proposed approach provides ultra-low jitters in the communication system, so the E2E communication jitter is consistent.
The comparison of the E2E communication jitters of conversation and streaming is presented in Figures 12(a) and 12(b), respectively. Figures 12(a) and 12(b) illustrate that the E2E jitters of the proposed approach outperform those of the conventional approach for both the conversation and streaming traffic classes. The jitter evaluation was analyzed based on equation (8). Based on the graphs in Figure 12(a), the jitters of the conversation traffic class have been improved by the proposed scheme, because the proposed scheme restricts the serving rate of time-insensitive user traffic and increases the serving rate of the conversation traffic class.
Thus, the communication stability can increase. The streaming jitters are shown in Figure 12(b). The graphs show that the E2E jitters of the streaming traffic class have been improved by the proposed scheme, while higher jitters occur in the conventional approach. Based on the graphs, the proposed scheme significantly controls the serving resources to enhance the quality of service for time-sensitive communications.
The E2E delay comparison between the proposed and conventional schemes for conversation and streaming communication is exhibited in Figures 13(a) and 13(b), respectively. The evaluation graphs in both Figures 13(a) and 13(b) were analyzed based on equation (10) in the above section. As shown in the evaluation graphs, the proposed approach has lower communication delays for both conversation and streaming, while the conventional approach has higher communication delays for both. Because the scheme targets the RT classes, the serving rate of the NRT classes is restricted and the serving rate of the RT classes is increased. Thus, the waiting time of the NRT classes increases while the waiting time of the RT classes is reduced. However, the network performance of time-insensitive traffic does not rely on communication times.

Conclusions
The 5G backhaul gateway handles massive incoming traffic from heterogeneous devices with a variety of communication traffic.
Thus, it is necessary to handle the communication traffic based on each traffic class, especially for RT communication, which requires ultra-low latency and higher communication rates than the NRT traffic classes. The proposed approach handles the incoming traffic by classifying the user traffic into four different classes: conversation, streaming, interactive, and background communication. MEC servers are integrated with the backhaul gateway to buffer each of the traffic classes individually; each communication class has its own MEC server. When the backhaul is considered to be in a bad status, the proposed approach handles the traffic by giving more communication rate to RT (conversation and streaming) and reducing the communication rates of NRT based on the increasing ratio of the RT communication. Based on the simulation results, the proposed approach enhances the QoS over the conventional approach for RT communication by reducing jitters and delays and achieving higher communication throughputs.
This approach is suitable for enhancing QoS for RT traffic in bottleneck 5G backhaul network environments and for privacy protection of each communication class based on Network Slicing. Finally, for further research, we aim to integrate more effective methods to handle the massive user traffic in the bottleneck area.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.