QoS-Aware Optimal Radio Resource Allocation Method for Machine-Type Communications in 5G LTE and beyond Cellular Networks

In this paper, we consider the saturation problem in the 3GPP LTE cellular system caused by the expected huge number of machine-type communication (MTC) devices, which significantly affects both machine-to-machine (M2M) and human-to-human (H2H) traffic. M2M communications are expected to dominate traffic in LTE and beyond cellular networks. To address this problem, we propose an advanced architecture for 5G LTE networks that enables the coexistence of H2H and M2M traffic, supported by different priority strategies to meet the QoS requirements of each traffic type. The queuing strategy is implemented with an M2M gateway that manages four queues allocated to different types of MTC traffic. We develop an optimal radio resource allocation method for LTE and beyond cellular networks based on adaptive selection of channel bandwidth depending on QoS requirements and priority traffic aggregation in the M2M gateway. Additionally, we propose a new simulation model for studying and analyzing the mutual impact of coexisting M2M and H2H traffic in 5G networks while considering high- and low-priority traffic for both M2M and H2H devices. This simulator automates the proposed method of optimal radio resource allocation between M2M and H2H traffic to ensure the required QoS. Our simulation results show that the proposed method improves the efficiency of radio resource utilization by 13% by optimizing the LTE frame formation process.


Introduction
1.1. Background and Problem Statement. Wireless communication has become an integral part of everyday life for anyone who owns a mobile device or a smart sensor. Connecting these devices and sensors to the data network changes the usual idea of the Internet in general. After all, they can exchange data with each other automatically without human involvement, thus generating machine-to-machine (M2M) traffic, also known as machine-type communications (MTC) [1].
Global trends in the telecommunication market show that wireless communication networks are becoming one of the key elements in the implementation of the Internet of Things (IoT) paradigm [2]. Due to the rapid growth of mobile data traffic and the popularity of IoT and M2M, mobile operators are constantly focused on improving quality of service (QoS) provision, developing 4G networks into future software-defined heterogeneous 5G/6G networks based on Long-Term Evolution (LTE) technology [3][4][5][6]. LTE is now seen as the best technology to support MTC due to its Internet compatibility, high capacity, flexibility in radio resource management, and scalability. Compared to previous cellular generations, 5G LTE uses a much wider portion of the radio-frequency (RF) spectrum, which gives the 5G network the ability to deliver far faster data speeds and lower latency. The current capabilities of 4G/5G mobile networks provide high-speed data transmission (HDT), but they are not yet fully ready to adequately serve the growing volume of information from mass connections of mobile and IoT devices. The inability of 4G/5G networks to implement end-to-end differentiated management of individual flows from mobile and MTC devices, taking into account their QoS requirements, leads to inefficient radio resource allocation and degradation of the QoS of real-time services [7]. In this regard, the main challenge in the development of future wireless networks is to optimize the process of allocating a limited number of time-frequency radio resources between users and MTC devices according to the quality of service criterion [8].
In the 4G/5G networks, radio resource management modules, called schedulers, are responsible for solving planning tasks, allocating radio resources, and assigning access priorities depending on the type of traffic with a given quality of service requirements. The existing radio resource allocation methods in 4G/5G LTE networks, which are historically optimized to serve mobile users of traditional communications services, are not flexible enough for common resource allocation in the face of a growing number of incoming MTC requests with different QoS requirements [9].

1.2. Motivation.
Due to the rapid development of IoT technologies and the constant growth of the number of mobile users, the actual scientific and practical task is to increase the efficiency of radio resource utilization and quality of service in the new-generation mobile communication systems by improving flexible information flow management models and methods for optimal allocation of network resources.

1.3. Our Contributions.
Our main novel contributions can be summarized as follows: (1) We developed an optimal radio resource allocation method for 5G LTE networks which, unlike known methods, is based on adaptive selection of channel bandwidth depending on QoS requirements and priority traffic aggregation in M2M gateways. This method improves the efficiency of licensed radio resource utilization by optimizing the LTE frame formation process and reducing the share of signaling traffic in these frames. (2) We developed a new simulation model for studying the functioning of state-of-the-art mobile communication networks. This simulator takes into account a significant number of basic technical parameters of the 5G LTE 3GPP standard to create realistic research conditions and automates the proposed method of optimal radio resource allocation between the traffic of mobile users and MTC devices to ensure the required QoS. The remainder of this paper is organized as follows: Section 2 presents the related works, Section 3 presents the proposed LTE network architecture for 5G/6G mobile communication systems, Section 4 describes the radio resource allocation algorithm between HTC and MTC to ensure QoS, Section 5 presents the development of an optimal radio resource allocation method in 5G LTE networks based on adaptive channel bandwidth selection, Section 6 presents simulation results, and Section 7 presents the conclusions of the study.

Related Work
The relevant literature that addresses the abovementioned challenges includes [10][11][12][13][14][15][16][17][18][19][20][21][22]. In paper [23], a detailed review of LTE uplink schedulers for M2M devices is presented. In that study, the authors point out that existing schedulers can be divided into three main types: power-saving schedulers, which aim to reduce power consumption in M2M devices [10]; QoS-based schedulers, which aim to provide QoS handling for each type of M2M application [11,12]; and multihop schedulers. To reduce the number of base stations and improve system performance by extending the coverage area, a QoS hybrid uplink scheduler is proposed in [13]. The solution proposed in that paper is a hybrid of two schedulers, one best suited to real-time service scheduling and the other to non-real-time service scheduling, in order to meet QoS criteria by maximizing throughput and minimizing packet loss.
Rekhissa et al. [14] presented two uplink scheduling algorithms for M2M devices over LTE. The algorithms allocate resource blocks to devices based on channel quality, which reduces the number of resource blocks (RBs) required for data transmission and hence the energy consumed by the devices. CBC-M can achieve the desired goal of reducing energy consumption compared to the round-robin (RR) scheduler. However, the algorithm developed by the authors cannot guarantee the latency requirements of M2M devices. It has also been shown that RME-M, which considers delay constraints, does not allow all classes to meet QoS requirements in terms of delay when the number of MTC devices is large.
In [15], the authors propose a hybrid scheduling algorithm for a heterogeneous network operating with both H2H and M2M communications. The network traffic is classified into two queues, each of which is scheduled separately. The first queue includes all delay-sensitive H2H (UE) and machine-type communication device (MTCD) users. Scheduling is based on a combination of metrics including buffer wait times, proportional fairness, and delay thresholds. The second queue includes all remaining (delay-tolerant) MTCDs, which are scheduled using a combination of channel-state-based and round-robin-based schedulers.
Zhang and his colleagues [16] propose a new method for distributed estimation of optimal backoff parameters. In particular, each machine-type device (MTD) only needs to observe its own successful and total transmissions of access requests during the evaluation interval, after which each MTD can obtain an estimate of the optimal backoff parameters from an explicit expression of the observed statistics. It is found that the evaluation interval determines the performance of the proposed scheme. When the number of MTDs in the network does not change or changes slowly, the throughput of the network can be increased with the right choice of the evaluation interval. The proposed method can achieve similar performance with much lower signaling overhead compared to existing centralized methods. The proposed scheme can be applied to both homogeneous and heterogeneous scenarios and provides a practical access design method for M2M communication.
In paper [17], the authors proposed a cluster-based group paging (CBGP) scheme for managing congestion and overload in mMTC scenarios. Based on the existing group paging (GP) mechanism, paging groups are further clustered to collect data. Due to its advantages of low cost, high bandwidth, and ease of deployment, IEEE 802.11ah is used to collect data from devices in clusters, with cluster heads uploading the collected data to the 5G LTE/LTE-A cellular network. In addition, the authors derive mathematical models that characterize the performance of the proposed CBGP scheme. Furthermore, the effect of different numbers of clusters on CBGP performance is investigated, and the optimal number of clusters can be derived to obtain the best performance under different access scales. Finally, the authors' numerical results show that the analytical model agrees well with the simulation results, confirming the effectiveness of the proposed CBGP scheme. The optimal number of clusters for the proposed CBGP scheme, adaptable according to the number of access attempts, is also verified.
The authors in paper [18] compared the benefits of using different filtering techniques to configure the access control scheme included in the 5G standards, access class barring (ACB), depending on the intensity of access requests. These filtering methods are a key component of the access class configuration scheme proposed by the authors, which can lead to more than a threefold increase in the probability of successfully completing the random access procedure under the most typical network configuration and mMTC scenario.
In [19], Alsenwi et al. proposed a proportional fair resource allocation formula that allocates resources to incoming ultrareliable low-latency communication (URLLC) traffic while ensuring eMBB and URLLC reliability. Ma et al. [22] studied network slice dimensioning with resource pricing policies, investigating the relationship between resource efficiency and profit maximization, with the goal of maximizing operator revenue in terms of slice price. To solve the multislice resource allocation problem, Wang et al. [21] developed a scheduling algorithm according to which the scheduler schedules one type of service, with its own features, in each slice. Although the authors of the paper considered a multislice virtual network with network slicing, the spectral efficiency of a multislice design and the reliability of the URLLC network were not considered.
Ma and his colleagues [22] used network slicing technology to address the spectral efficiency of the network and the reliability of URLLC in the 5G network. The authors formulated a mixed programming problem that maximizes the spectral efficiency of the system subject to user requirement constraints for two slices, i.e., eMBB slice requirements and URLLC slice requirements, satisfied with high probability for each user.
From the analysis of the related work, it follows that one of the main challenges in the deployment of future 5G/6G networks for M2M communication is the problem of optimal radio resource management and scheduling. Existing H2H LTE scheduling algorithms, focused mainly on maximizing bandwidth and maintaining the continuity of radio resources assigned to a particular device, are not effective for MTC. This is because M2M connectivity has different characteristics compared to H2H connectivity. MTC traffic consists mostly of small burst loads that exist primarily in the uplink direction (i.e., from the device to the serving base station). MTCDs are also used in a wide variety of applications. Each application has its own requirements, which may include some level of quality of service (QoS), minimized power consumption, or timing of data transmission. Transmission timing in this context is the time by which data must be transmitted in order to avoid undesirable consequences, e.g., in the event of an emergency alert.
Since radio resources are limited for massive M2M communications, the scheduling algorithm should mainly consider the importance of the information carried by the data traffic of the different MTCDs. In this paper, we propose a radio resource allocation method which takes into consideration the coexistence of M2M and H2H communications while ensuring the satisfaction of QoS requirements.
So, the purpose of our work is to improve the efficiency of radio resource utilization and QoS provision in 5G LTE mobile networks by developing traffic management models that prioritize M2M traffic and methods for optimal resource allocation.

Enhanced LTE Network Architecture for 5G Mobile Communication Systems
To date, the existing LTE architecture and 4G standard are not yet ready to handle the traffic of the growing number of mobile devices and smart sensors. In this regard, this paper proposes an improved architecture of the fourth-generation mobile network (Figure 1), which can also be the basis for the construction of 5G/6G networks in the context of mass deployment of MTC [24]. The proposed hybrid 5G/6G architecture is suitable for all kinds of data traffic, including M2M and human-to-machine (H2M) traffic streams. In detail, we should clarify how a cellular LTE system and an MTC data collection network can be dynamically integrated into a 5G LTE system with MTC and Human-Type Communications (HTC). First of all, we propose to form clusters in areas where there is a significant accumulation of M2M sensors, with the main sensors collecting data from subordinate sensors using available narrowband or Wi-Fi technology and sending it to the new M2M gateway. The M2M gateway then sends the aggregated data to the eNodeB base station via the LTE wireless channel. In fact, M2M gateways are very often used in practice, but the logic of the data transmission process in our work is new; in particular, the new M2M gateway will prioritize and aggregate traffic according to the approach proposed below, after which the aggregated data will be sent to the base station, where rational scheduling of frequency and time resources between the generated M2M traffic and mobile devices will occur. The new gateway will be able to transmit aggregated traffic through both the macrocell base station and the femtocell base station, i.e., to perform load balancing.
Meeting the special requirements of M2M traffic, while maintaining a high degree of user-perceived QoE, especially in the wireless domain, necessitates advanced network control methods. As shown in Figure 1, the developed techniques will allow real-time management of the spectrum, the available resources, and the applied MAC at the M2M gateway as well as the 5G eNodeB, according to the received network feedback.
The proposed architecture of the 5G/6G network will significantly reduce the service load on the base station and make it possible to serve a growing number of devices and sensors by improving the processes of device clustering, traffic aggregation, and prioritization, with the possibility of balancing traffic between different base stations [25].
The main problem emphasized in this paper is the rapid growth of demand for data transmission by wireless networks. Given this problem, we can assume a significant load on the key structural elements of a mobile network operating on the basis of LTE technology. These structural elements are primarily the base station controller and interfaces for exchange of service information. Reduction of the load on these elements of the LTE network architecture can be achieved through the integration and joint use of different technologies. The combination of these technologies will make it possible to properly distribute radio resources among all devices wishing to transmit data, as well as to increase the number of served devices, which are provided with at least a minimum acceptable value of QoS.
Due to the fact that the number of devices is constantly growing, and the radio resources are distributed between all wireless technologies in different sizes, there is a need to support a significant number of these technologies with one structural element of a heterogeneous LTE network. This new element in the 5G LTE network may be a multiservice gateway [26].
From Figure 2, it can be seen that the gateway has on one side an LTE channel for information exchange with the base station and on the other side a set of technologies which, together with LTE, form a common heterogeneous network. The principle of operation of such a gateway is to "unpack" the data transmitted via LTE technology and "pack" it into any other technology it supports, and vice versa. In this case, the key factor in preferring one technology over another is whether transmission is wired or wireless, i.e., the possibility of using the Ethernet protocol for data transmission.
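The "unpack/repack" principle of the gateway can be sketched as follows; all class and function names here are illustrative assumptions, not part of the proposed architecture's specification:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    technology: str   # e.g., "LTE", "WiFi", "Ethernet"
    payload: bytes

def unpack(frame: Frame) -> bytes:
    """Strip technology-specific framing, returning the raw payload."""
    return frame.payload

def repack(payload: bytes, target_technology: str) -> Frame:
    """Re-frame the payload for the outgoing link technology."""
    return Frame(technology=target_technology, payload=payload)

def forward(frame: Frame, target_technology: str) -> Frame:
    """Gateway step: unpack from one technology, pack into another."""
    return repack(unpack(frame), target_technology)

incoming = Frame("WiFi", b"sensor-reading")
outgoing = forward(incoming, "LTE")
print(outgoing.technology)  # LTE
```

The payload itself is untouched; only the framing changes, which is what allows the gateway to bridge heterogeneous access technologies to the LTE side.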
In contrast to the well-known principles of M2M gateways, in which data is collected and automatically transmitted, this paper proposes to improve the process of data transmission and processing at the gateway itself. In particular, the multistandard gateway accepts data for transmission to the base station from the main sensors of each cluster and sorts the data received from them into four queues of so-called memory buffers. This approach reflects the fact that today M2M traffic is classified into critical real-time data and noncritical non-real-time data. Queues of different priorities with different QoS requirements are shown in Figure 2. The queue dwell time of the data is recorded by the gateway and affects the allowable service time of the M2M sensor. The release sequence of each queue depends on the type (priority) of the traffic it contains, as well as on the maximum allowable transmission time set in accordance with the user's QoS requirements for these data, taking into account the gateway queue dwell time. In general, for guaranteed M2M data service by the gateway and the LTE network core by the service time criterion, the following inequality must be met:

t_queue,i^GW + t_i^BS ≤ t_E2E(QoS)^LTE,

where t_E2E(QoS)^LTE is the acceptable traffic service time within the LTE architecture, t_queue,i^GW is the dwell time of the data of the ith device in the queue at the multistandard gateway, and t_i^BS is the time to transmit the first "portion" of data within an LTE frame. The transmission time of the first "portion" of data within a frame depends on the frame number within which data transmission will start for the ith device and the number of the reserved subframe.
Device data is forwarded from one queue to another when its allowable service time approaches a critical value.
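The four-queue strategy with deadline-driven promotion described above can be sketched as follows; the queue count matches the paper, but the promotion threshold and all names are illustrative assumptions:

```python
NUM_QUEUES = 4          # queue 0 = highest priority (critical real-time data)
CRITICAL_MARGIN = 0.2   # promote when < 20% of the deadline budget remains (assumption)

class M2MGateway:
    def __init__(self):
        # One FIFO buffer per traffic class.
        self.queues = [[] for _ in range(NUM_QUEUES)]

    def enqueue(self, device_id, priority, deadline_s, now):
        """Place a data unit in the queue matching its traffic class."""
        self.queues[priority].append((device_id, now, deadline_s))

    def promote_expiring(self, now):
        """Move items whose remaining service-time budget is nearly
        exhausted up one priority level, so the deadline is still met."""
        for p in range(1, NUM_QUEUES):
            keep = []
            for device_id, t_in, deadline in self.queues[p]:
                remaining = deadline - (now - t_in)
                if remaining < CRITICAL_MARGIN * deadline:
                    self.queues[p - 1].append((device_id, t_in, deadline))
                else:
                    keep.append((device_id, t_in, deadline))
            self.queues[p] = keep

    def dequeue(self):
        """Release data from the highest-priority nonempty queue first."""
        for q in self.queues:
            if q:
                return q.pop(0)
        return None
```

For example, a unit enqueued in the lowest-priority queue with a 1 s budget is promoted once 80% of that budget has elapsed, which realizes the forwarding rule stated above.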
The master sensors, which send data to the multiservice gateway, collect data from child sensors within their cluster. Typically, cluster membership as well as cluster size can change due to the need to select a new master sensor [15,16]. The new master sensor can be reselected [10]. For this purpose, the current master sensor reports to the multiservice gateway. The gateway, in turn, sends appropriate instructions to all sensors within the cluster of the master sensor to be replaced, as well as to sensors in neighboring clusters. The selection of the new master sensor can be done by analyzing parameters such as the battery level and the status of the radio link with the multiservice gateway (the selection of the master node can be based on other conditions and can also be automated by intelligent control logic [27]).
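The battery-and-link-quality selection just described can be sketched as a weighted score; the equal weighting and all field names are illustrative assumptions, not values prescribed by the paper:

```python
def select_master(candidates, w_battery=0.5, w_link=0.5):
    """Pick the best master sensor from a cluster.

    candidates: list of (sensor_id, battery_pct, link_quality_pct)
    Returns the id of the candidate with the highest weighted score.
    """
    def score(c):
        _, battery, link = c
        return w_battery * battery + w_link * link
    return max(candidates, key=score)[0]

# s1 scores 60, s2 scores 75, s3 scores 57.5 with equal weights.
cluster = [("s1", 80, 40), ("s2", 60, 90), ("s3", 95, 20)]
print(select_master(cluster))  # s2
```

Shifting the weights toward battery level (e.g., `w_battery=0.9`) models a deployment where sensor lifetime matters more than link quality.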
When the M2M gateway receives a message to select a new master sensor, it collects information about the status of the radio link between itself and each sensor within the cluster where the master sensor whose function is to be changed is still operating. It also requests the same information from the sensors in the clusters neighboring the cluster to be reformed, along with the current battery level of those same sensors. Next, the collected information is analyzed, on the basis of which the multiservice gateway generates a message granting a new functional status to the sensor that will later act as a master. After this message is sent to the new main sensor, the clusters within which the radio channel status and current battery level information was collected are reformed. When the clusters are reformed, the sensors that before the reforming operated in the territory of the "old" clusters are assigned to the newly formed clusters. After the clusters are fully formed, data transmission from M2M sensors through the newly selected main sensors, which have an appropriate communication channel (according to the supported technology) with the multiservice gateway, takes place as usual. It should be noted that the detection of a new master sensor is also affected by the wireless data technology it supports. When deploying and installing new (additional) M2M sensors, the capabilities of the wireless technologies should be taken into account to provide the maximum possible (required) bandwidth.

Figure 2: Clustering and prioritization of M2M traffic on the newly introduced multistandard gateway.

Radio Resource Allocation Algorithm between HTC and MTC to Ensure QoS

One of the most important issues addressed in this work is ensuring quality of service and flexibility in providing services to users in a heterogeneous LTE mobile network. However, with the increasing variety of M2M and H2H traffic, the QoS requirements grow. When providing the necessary amount of time-frequency radio resources for HTC and MTC, the problem of optimal allocation under conditions of limited resources is acute. For this purpose, it is first necessary to formalize a model of the radio network resource allocation process. In the proposed 5G LTE networks, the radio network consists of three main components: the wireless data channel, the base station, and the user equipment (UE)/M2M gateway. We first describe the structure of the wireless channel of 5G LTE networks. As mentioned in the Introduction, existing LTE networks allocate time-frequency resources at the physical layer. In the time domain, all operating time is divided into periods of equal duration, so-called subframes. The typical duration of a single subframe is 1 millisecond (ms). With LTE, timely delivery of data can be achieved by flexible allocation of the time-frequency resources contained within a frame of 10 ms duration. In the frequency domain, the entire band is divided into regions of equal width of 180 kHz. The minimum resource unit is a resource block. The set of resource blocks forms a common data channel, whose resources can be distributed in the radio network between UE and M2M devices. In addition to the common channel, service channels are implemented, which handle synchronization support, control of subscriber device transmissions, channel quality assessment, reliable data transmission over an unreliable channel, etc. These channels occupy resources at the beginning of and inside resource blocks.
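The time-frequency grid just described can be expressed numerically. The bandwidth-to-RB mapping below is the standard 3GPP LTE table; the function itself is an illustrative sketch:

```python
# LTE time-frequency grid parameters from the text above.
FRAME_MS = 10       # one frame = 10 ms
SUBFRAME_MS = 1     # one subframe = 1 ms
RB_WIDTH_KHZ = 180  # one resource block spans 180 kHz

# Standard LTE channel bandwidths (MHz) and their resource-block counts.
RBS_PER_BANDWIDTH = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def grid_size(bandwidth_mhz):
    """Resource blocks schedulable per 10 ms frame for a given bandwidth."""
    subframes_per_frame = FRAME_MS // SUBFRAME_MS
    return RBS_PER_BANDWIDTH[bandwidth_mhz] * subframes_per_frame

print(grid_size(1.4))  # 60 RBs per frame
print(grid_size(20))   # 1000 RBs per frame
```

This grid is the resource pool that the scheduler divides between UE and M2M traffic in the sections that follow.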
An important distinguishing feature of wireless communication channels is the instability of their characteristics in both the time and frequency domains. This instability is caused by interference from other communication systems, the mobility of the user and their environment, etc. As a result, a single HTC/MTC device can transmit different amounts of data in different resource blocks. For each resource block, the base station calculates the propagation attenuation based on information from the service channels. To describe the channel quality at the upper layers of the base station, the resulting signal attenuation value for a particular resource block and HTC/MTC device is mapped to a modulation and coding scheme (MCS) [28]. An MCS is a combination of modulation (QPSK, QAM-16, QAM-64, or QAM-256) and channel coding settings.
The model under consideration assumes that UEs and M2M gateways may be in different wireless channel conditions caused by distance from the base station, device movement, and terrain. Consider the uplink and downlink wireless channels in turn. In today's mobile communication systems, the uplink channel load is lower than that of the downlink. As a consequence, the uplink is considered reliable, and the request transmission latency for streaming video segments can be neglected. However, the data of M2M services are mostly transmitted over the uplink, and the high popularity and intensity of such services require new approaches to the processing of signaling data by the 5G/6G LTE base station and to resource allocation.
The analytical model pays particular attention to the downlink and uplink channels, as their performance determines user satisfaction in the network. In this paper, a radio channel model is formalized with the following property: it is assumed that the state of the wireless channel changes in such a way that, during the loading or unloading of packet k of segment j from the gateway, the maximum achievable channel rate of the user and M2M gateway i is constant:

C_i(t) = C_{i,j,k} for t ∈ [t_{i,j,k}, t_{i,j,k} + Δt_{i,j,k}],

where t_{i,j,k} is the moment when user and M2M gateway i starts downloading packet k of segment j, Δt_{i,j,k} is the download time of packet k of segment j by user and M2M gateway i, and C_{i,j,k} is the maximum possible channel rate of the UE and M2M gateway i during the download of packet k of segment j.
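Under this piecewise-constant assumption, the packet download time follows directly from the packet size and the rate; a minimal sketch (names are illustrative):

```python
def download_time(packet_bits, channel_rate_bps):
    """Delta t for one packet under the constant-rate channel assumption:
    download time = packet size / maximum achievable channel rate."""
    return packet_bits / channel_rate_bps

# Example: a 1 Mbit video packet over a 10 Mbit/s channel takes 0.1 s.
print(download_time(1_000_000, 10_000_000))  # 0.1
```

Because the rate is constant only within a single packet transfer, a simulator would re-sample the channel rate between packets rather than within one.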
After defining the structure of the wireless channel, it is necessary to describe how data exchange between the base station and the UE or M2M gateway takes place using this structure. To do this, consider the functional structure of the base station. The main task of the base station is to organize reliable data transmission over an unreliable radio channel. To solve this problem, a structure consisting of four layers is used: the packet processing layer, the queue layer, the Medium Access Control (MAC) layer, and the physical layer. Let us trace the delivery of a data packet from the moment it is received at the base station to the moment it appears at the network layer of the user device or M2M gateway. From the operator's backbone network, the packet reaches the packet processing layer, which performs two functions. The first function is to compress the transport and network layer packet headers, to reduce the amount of data transmitted over the wireless channel. Packet header compression offsets the overhead of transmitting data over the wireless channel and allows us to consider that a packet carries no redundancy introduced by the network protocols. The second function is to identify the subscriber and place the information part of the packet in its lower-layer queue.
Below it is the queue layer for transmitting data over the wireless link. At this layer, each user connected to the base station has a data queue (buffer). The queue layer is intermediate between the packet processing layer and the link layer, acting as temporary storage for data being transmitted over the wireless channel. Queues are handled by a lower layer, the Medium Access Control (MAC) layer.
The Medium Access Control layer solves a key task for the system as a whole: to distribute the resources of the wireless channel on the basis of information from the physical layer and the upper layers of the base station, and to ensure the reliability of data transmission. This task is handled by the wireless channel resource scheduler installed on the base station (the MAC scheduler). Hereafter, for brevity, the term "scheduler" will be used to refer to the wireless channel resource scheduler. In each subframe, the scheduler allocates radio channel resources (resource blocks) according to some algorithm. It is important to note that the scheduler does not allocate resources to a UE or M2M gateway that has no data in a given subframe. Each frame generates a resource block allocation map, which is transmitted to the physical layer. Thus, the operation of the scheduling algorithm can be represented as the distribution of channel shares between H2H and M2M traffic.
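As a minimal illustration of the per-subframe allocation map, a round-robin pass over the active devices only (inactive devices receive nothing) might look as follows; the map representation and all names are illustrative assumptions:

```python
def round_robin_map(active_devices, num_rbs, start=0):
    """Build one subframe's allocation map: assign each resource block
    to the next active device in turn; devices with empty queues are
    simply absent from active_devices and so receive nothing."""
    if not active_devices:
        return {}
    allocation = {}
    for rb in range(num_rbs):
        dev = active_devices[(start + rb) % len(active_devices)]
        allocation.setdefault(dev, []).append(rb)
    return allocation

print(round_robin_map(["ue1", "gw1", "ue2"], 6))
# {'ue1': [0, 3], 'gw1': [1, 4], 'ue2': [2, 5]}
```

The resulting map is exactly the kind of structure handed to the physical layer each subframe; the `start` offset lets successive subframes rotate the starting device for fairness.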
The resulting resource block allocation map is transferred to the physical layer, which carries out data transfer from the queue layer in the allocated frequency-time resources. Subsequently, if all the described base station layers work correctly, the packet will be delivered over the wireless channel to the user device and, having traversed the stack in reverse order, will become available at the network layer of the HTC/MTC device.
Despite the important role of the scheduler, scheduling algorithms are not standardized for existing communication systems, so each base station manufacturer uses its own implementations. The creation of such algorithms remains an open task, because the requirements placed on them are very demanding and there is no rigorous theoretical characterization of their maximum performance. Two heuristic resource scheduling algorithms are widely used: proportional fair [29] and round-robin [30]. The proportional fair scheduling algorithm aims to provide equal data rates for all active users. The round-robin scheduling algorithm provides equal access to radio channel resources for all active users. Therefore, the main task of our work is to investigate scheduling methods in order to improve the efficiency of time-frequency resource use and, as a consequence, of the entire wireless heterogeneous network for streaming video and massive M2M connections.
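The proportional fair rule mentioned above is commonly implemented as a ratio metric: each resource goes to the device maximizing its instantaneous rate over its smoothed average rate. The sketch below uses this standard formulation; the smoothing factor and names are illustrative assumptions:

```python
def pf_schedule(inst_rates, avg_rates):
    """Proportional fair metric: pick the device index maximizing the
    ratio of instantaneous rate r_i to smoothed average rate R_i."""
    return max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_rates[i])

def update_avg(avg, rate, scheduled, alpha=0.1):
    """Exponentially smooth the average rate after each subframe;
    an unscheduled device contributes 0 this subframe."""
    return (1 - alpha) * avg + alpha * (rate if scheduled else 0.0)

# Device 1 has a worse channel right now (4 vs. 10) but a much lower
# average rate, so PF serves it: 4/1 = 4.0 beats 10/8 = 1.25.
print(pf_schedule([10.0, 4.0], [8.0, 1.0]))  # 1
```

This is how PF trades peak throughput for long-run rate equality, in contrast to round-robin, which equalizes resource access regardless of channel state.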
Central to the analytical model is the wireless channel resource allocation scheduling algorithm installed at the base station. Let us introduce the definition of the scheduling algorithm. Assertion 3. The scheduling algorithm is the rule according to which the base station at time t allocates shares of the wireless channel resources α_i(t) ≥ 0 to UE/M2M gateway i.
Thus, the work of the scheduling algorithm at time t can be described by a vector of function values: A(t) = {α_i(t), i = 1, …, N}. Obviously, the possible values of the functions α_i(t) are restricted by the following inequality:

∑_{i=1}^{N} α_i(t) ≤ 1.   (4)

Inequality (4) can be interpreted as follows: at any time, the total amount of resources allocated by the scheduling algorithm does not exceed the resources available for the data link. So, for any UE/M2M gateway i, the instantaneous data rate in the downlink/uplink S_i(t) can be calculated as follows:

S_i(t) = α_i(t) · C_i(t).   (5)

The scheduling algorithm solves the wireless channel resource allocation problem at each point in time. To solve this problem, it has access to information about the prehistory, namely, the shares of allocated channel resources, the values of the maximum possible channel rates, and the volumes of transmitted data for each user:

A(t) = F({C_i(τ), S_i(τ), τ < t, i = 1, …, N}),   (6)

where A(t) is the scheduling algorithm. In formula (6), the information about the prehistory of the allocated shares of the wireless channel and the volumes of transmitted data for UE/M2M gateway i is aggregated in S_i(τ), as these parameters are related by (5). It is important to note that, for the BS scheduler, a UE/M2M gateway can be in one of two states at any given time: active or inactive. A UE/M2M gateway is considered active if it has data for transmission in the downlink/uplink at the base station; otherwise, it is considered inactive. In a real system, UE/M2M gateway activity is also determined based on queue occupancy. It is important to note that data loading consists of successive transmissions of packets with video data, and the user is considered inactive during the pauses between their loadings.
In this paper, we consider a new resource scheduling algorithm that satisfies the following set of properties: (i) at each moment of time, an active user is guaranteed a minimum share of channel resources α_min_i, while an inactive device i receives no resources at moment t; the minimum share of channel resources is nonzero, α_min_i > 0, and the sum of the minimum shares over all users does not exceed the total amount of resources available for scheduling; (ii) wireless channel resources cannot be allocated to an inactive subscriber; (iii) at any given time, the scheduler allocates all available resources to active subscribers. Before moving on to the optimal resource allocation algorithm, let us highlight the peculiarities of the functioning of a heterogeneous LTE network. Most often, M2M data blocks are extremely small and are generated by a large number of different M2M devices. In this regard, according to the generally accepted standard, it is not reasonable for the eNodeB base station scheduler to allocate six resource blocks (6 RB) within a 1.4 MHz channel for data transmission from a single M2M device, as this leads to inefficient use of radio resources. To solve this problem, we propose to conduct clustering by deploying the M2M gateways proposed above to localize M2M traffic within a cluster, followed by aggregation and classification during transmission to the LTE core network, which allows the scheduler to use the radio resource optimally and reduces the signaling load on the eNodeB. Such a solution also allows aggregating completely or partially identical messages in order to reduce the amount of data transmitted by M2M devices.
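The three properties can be sketched as a simple allocation routine (a hedged illustration, not the paper's algorithm: the equal split of the remaining resources among active devices is our simplification):

```python
# Properties: (i) active devices get at least alpha_min_i > 0,
# (ii) inactive devices get nothing, (iii) all resources go to active devices.

def allocate(active, alpha_min):
    """active[i]: True if device i has queued data; alpha_min[i]: guaranteed share."""
    n_active = sum(active)
    shares = [0.0] * len(active)          # property (ii): inactive -> 0
    if n_active == 0:
        return shares
    used = sum(m for a, m in zip(active, alpha_min) if a)
    extra = (1.0 - used) / n_active       # property (iii): distribute everything
    for i, (a, m) in enumerate(zip(active, alpha_min)):
        if a:
            shares[i] = m + extra         # property (i): at least alpha_min_i
    return shares

shares = allocate([True, False, True], [0.1, 0.1, 0.2])
print(shares)  # each active device gets its minimum plus a share of the remainder
```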
It should be noted that the scheduler's decision on the allocation of network resources is primarily based on QoS requirements. Therefore, the task of allocating time-frequency resources in LTE 5G/6G technology should be formulated as the task of allocating the network RBs between UEs and M2M devices depending on the stated bandwidth requirements and QoS parameters. We propose to determine the class to which particular UE traffic belongs on the basis of the QCI (QoS class identifier) parameter [25].
Accordingly, we propose an algorithm for the optimal use of base station resources under the growing traffic generated by HTC and MTC devices. With this algorithm, user devices are guaranteed the minimum bandwidth for data transmission. When resources are available at the base station, users are given the option of increasing the rate. A larger share of bandwidth resources goes to those devices which, firstly, transmit real-time traffic (the algorithm regards such users as priority ones); secondly, require a wide bandwidth; and thirdly, have a higher channel quality indicator (CQI).
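The three ranking criteria can be expressed as a lexicographic sort (an illustrative sketch; the device fields `real_time`, `req_mbps`, and `cqi` are our assumed names, and the exact weighting used by the algorithm may differ):

```python
# Rank devices: real-time traffic first, then wider required bandwidth,
# then higher channel quality indicator (CQI).

devices = [
    {"id": 1, "real_time": False, "req_mbps": 2.0, "cqi": 12},
    {"id": 2, "real_time": True,  "req_mbps": 1.0, "cqi": 7},
    {"id": 3, "real_time": True,  "req_mbps": 5.0, "cqi": 10},
]

order = sorted(devices,
               key=lambda d: (d["real_time"], d["req_mbps"], d["cqi"]),
               reverse=True)
print([d["id"] for d in order])  # -> [3, 2, 1]
```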
The proposed algorithm (Figure 3) is best applied when the traffic generated by HTC and MTC devices is not strictly separated, i.e., under mixed-service conditions, without a rigid allocation of separate radio resources to H2H and M2M traffic.

Wireless Communications and Mobile Computing
Description of the sequence of the algorithm: (1) information on the requirements for the resources needed for data transmission is collected and analyzed at the eNodeB (block 1); the minimum required number of frequency-time blocks is fixed, as well as the value of the channel quality indicator, which is further used to select the adaptive modulation and code rate. If it is not possible to change these parameters, an alternative option for increasing the rate is proposed (block 10), which is to apply the principles of M2M gateway interaction, spectrum aggregation, or switching to service in a less busy station (load balancing). Information on the LTE network is transmitted using time-frequency resources, onto which control signals and payload are mapped. For mobile users and M2M devices, the allocation of frequency bands is proposed as shown in Figure 2. This spectrum arrangement enables the use of spectrum aggregation for M2M and H2H (UE) devices.
In the uplink and downlink channels, subcarriers are positioned differently, since different technologies are used to transmit information. Resource blocks are allocated in both channels, for both mobile and M2M users.

Optimal Radio Resource Allocation Method in LTE 5G/6G Networks Based on Adaptive Channel Bandwidth Selection
This paper proposes a method for the optimal allocation of frequency-time resources in LTE networks based on the adaptive selection of the radio frequency bandwidth. Using this method, the base station scheduler can select the channel width at which the resources of one subframe will be minimally "idle." For a more detailed understanding, let us consider an example. Let an HTC device require a bandwidth (throughput) of R_req = 5 Mbps and be in radio conditions under which it can get such a rate, because its location provides a 64-QAM MCS with code rate 666/1024. However, the base station can provide the service at a lower modulation order if it still provides the necessary bandwidth. In this example, the number of transmit and receive antennas equals 2, and seven symbols are transmitted in one resource block. The number of resource blocks that must be transmitted in one frame to provide the necessary bandwidth at a given modulation and code rate is determined by the formula N_RBframe = (R_req / R_pot) · N_potRBframe (8), where R_req is the required bandwidth (Mbps), N_potRBframe is the total number of resource blocks in the frame, and R_pot is the possible bandwidth under the given location conditions (Table 1) (Mbps). The possible throughput for the UE/M2M gateway is calculated according to the formula R_pot = S · N_RB · 12 · 7 · MIMO · Kod_RATE · log2(Modul) (9), where S is the percentage of useful data in the frame (S = 0.75); N_RB is the number of resource blocks allocated within a second; 12 · 7 is the number of resource elements, with 12 subcarriers and 7 symbols; MIMO is the number of antennas; Kod_RATE is the code rate; and log2(Modul) is the modulation order (bits per symbol).
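Formulas (8) and (9) can be checked numerically with a short sketch. We assume, for illustration, a 1.4 MHz channel with 6 RB per 0.5 ms slot, i.e., 12,000 RB per second and 120 RB per 10 ms frame (standard LTE values); the function names are ours:

```python
from math import log2, ceil

def r_pot(n_rb_per_s, mimo, code_rate, mod_order, s=0.75):
    """Formula (9): possible throughput in bits/s,
    S * N_RB * (12*7) * MIMO * Kod_RATE * log2(Modul)."""
    return s * n_rb_per_s * 12 * 7 * mimo * code_rate * log2(mod_order)

def rb_needed_per_frame(r_req_bps, n_pot_rb_frame, r_pot_bps):
    """Formula (8): resource blocks per frame to provide R_req at the current MCS."""
    return ceil(r_req_bps / r_pot_bps * n_pot_rb_frame)

# 64-QAM, code rate 666/1024, 2 antennas, 1.4 MHz (12000 RB/s, 120 RB/frame).
r = r_pot(12000, 2, 666 / 1024, 64)
print(r / 1e6)   # possible throughput, Mbps (about 5.9)
n = rb_needed_per_frame(5e6, 120, r)
print(n)         # RBs per frame needed for R_req = 5 Mbps
```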
To find the number of subframes needed to provide the necessary bandwidth, consider the number of resource blocks in the frame at the corresponding channel width, as well as the total number of subframes in the frame, which equals 10. Therefore, the required number of subframes is found as N_subfr = N_RBframe / (N_potRBframe / 10), rounded up to an integer (10). The required number of subframes to be reserved in one frame to provide the necessary throughput is summarized in Table 2. The numbers of subframes shown in bold cannot be allocated within one frame, because according to the standard a frame contains only 10 subframes. Therefore, in what follows we focus only on the remaining values of the number of subframes with the corresponding combinations of CQI and channel bandwidth. The "idle" value of a subframe is calculated as the share of the reserved resource blocks left unused: Idle = ((N_subfr · N_potRBframe / 10 − N_RBframe) / (N_subfr · N_potRBframe / 10)) · 100% (11). The calculated "idle" values are entered in Table 3. The "−" sign is set in those cells of the table where it is impossible to allocate the subframes within one frame (according to the selected values in Table 2). Taking into account the values shown in Table 3, we can conclude that base station resources are used rationally when the controller selects the channel bandwidth and modulation-coding scheme at which the "idle" share of resources is minimal. Given that the base station provides resources to both real-time and non-real-time services, we can say that when serving non-real-time traffic, the "idle" part of a subframe in each frame can be eliminated by filling the subframe to its full volume. Thus, the non-real-time service data will reach the delivery point faster and, accordingly, will release sooner all the base station resources reserved for this service. The situation is different for real-time services: for them, it is impossible to predict the time of the end of the session or the amount of data that will be transferred during the session.

In order to use the base station resources optimally for real-time traffic, we can allocate resources (if such a possibility exists at all) in the frequency band where the "idle" share of the subframe is the smallest. The values from Table 3 must be calculated by the base station controller for each next frame. Then, the rationality of the use of the base station's radio resources will increase.
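The subframe count and the subframe "idle" share described above can be sketched as follows. This is a hedged reconstruction: the exact form of the "idle" formula is inferred from the surrounding text, and the function names are ours:

```python
from math import ceil

def subframes_needed(n_rb_needed, n_rb_frame, subframes_per_frame=10):
    """Formula (10): subframes to reserve in one 10-subframe frame."""
    rb_per_subframe = n_rb_frame / subframes_per_frame
    return ceil(n_rb_needed / rb_per_subframe)

def idle_share(n_rb_needed, n_rb_frame, subframes_per_frame=10):
    """Formula (11) as reconstructed: percent of reserved RBs left unused."""
    rb_per_subframe = n_rb_frame / subframes_per_frame
    reserved = subframes_needed(n_rb_needed, n_rb_frame) * rb_per_subframe
    return (reserved - n_rb_needed) / reserved * 100.0

# Example: 102 RBs needed in a 120-RB frame (1.4 MHz) -> 9 subframes reserved,
# with 6 of the 108 reserved RBs idle.
print(subframes_needed(102, 120))  # -> 9
print(round(idle_share(102, 120), 2))
```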
A second approach that ensures efficient use of time-frequency radio resources is the deployment of M2M gateways, which allow M2M traffic to be aggregated and transmitted in a broader channel bandwidth. This achieves a smaller frame idle share by rationally classifying traffic and reserving the necessary throughput at the gateway; a detailed example is given in the next section of the paper when simulating and investigating the resource optimization process in LTE frames.
For the analytical study of the effectiveness of the developed method of optimal resource allocation, we use mathematical expressions (8)-(11). The general scheme of servicing HTC and MTC devices in the proposed heterogeneous 5G/6G mobile network is depicted in Figure 4.
The diagram shows the sequence of steps the 5G/6G LTE base station controller must take when reviewing incoming service requests. To optimally reserve the necessary time-frequency resources, the controller performs the following steps: (1) analysis of the requirements; (2) determination of the required number of subframes within a frame; (3) checking the availability of the necessary subframes; (4) if they are available, reserving them according to one of the principles (the FIFO principle; priority service; or priority service taking into account the number of time-frequency resources). Figure 4 shows that when the required number of free subframes is not available and the controller reserves subframes according to the FIFO principle, the MTC device is lost (arrow labeled "1"). When the controller operates according to principle 2 (priority service), it delays the data transmission of a device (sensor) that is already transmitting and whose service priority is lower than the priority specified in the incoming service request, and serves the MTC device whose service has the highest priority (arrow labeled "2"). Servicing MTC devices according to the third principle is characteristic of cases when, according to statistical data (or due to an important event expected in the base station area at any minute), a significant number of incoming service requests is predicted and reservation according to principle 2 leads to significant losses. These losses can be reduced by deferring data transmission for the MTC device that has a lower priority and occupies the largest number of subframes among the other MTC devices.
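The three reservation principles can be sketched as a single admission decision (an illustrative simplification of the scheme in Figure 4; the request and device record shapes are our assumptions, with a lower `priority` number meaning higher priority):

```python
# Decide what to do with an incoming request when free subframes may not suffice.

def admit(request, free_subframes, ongoing):
    """ongoing: list of dicts {"id", "priority", "subframes"} for active devices."""
    if request["subframes"] <= free_subframes:
        return "reserve"                    # enough room: plain FIFO suffices
    # Principle 2: defer a lower-priority transmission already in progress.
    lower = [d for d in ongoing if d["priority"] > request["priority"]]
    if not lower:
        return "drop"                       # principle 1 outcome: request is lost
    # Principle 3: among lower-priority devices, defer the one holding the most
    # subframes, freeing the largest amount of resources at once.
    victim = max(lower, key=lambda d: d["subframes"])
    return f"defer device {victim['id']}"

busy = [{"id": 7, "priority": 3, "subframes": 5},
        {"id": 8, "priority": 2, "subframes": 2}]
print(admit({"priority": 1, "subframes": 4}, 1, busy))  # -> defer device 7
```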

Developing a Simulation Model of Radio Resource Allocation Process within the Operation of One 5G LTE Base Station
To develop a simulation model, we analyzed the principle of operation of the LTE Downlink Resource Element Visualisation v1.1 software, which allows selecting downlink parameters and using them to determine the useful bandwidth for the subscriber. Unfortunately, the capabilities of this software are not sufficient for the scenarios considered here. For this purpose, a unique and adequate simulation model of the radio resource allocation process within the operation of one LTE base station was developed. Accordingly, to investigate the ability to serve UE and M2M devices, the simulation model takes into account the features of throughput calculation for the downlink similarly to the abovementioned software, and additionally determines the number of resource elements for the uplink and calculates the corresponding throughput. The simulation model accounts for the attenuation of the signal as it propagates from the user device to the base station (or in the opposite direction); these attenuations are calculated according to radio propagation models. The GUI also allows studying the variation of the QoS parameters for the case where a subscriber is mobile.
In general, the graphical interface of the simulation model contains several tabs. For example, the QoS parameter simulation tab for UE (H2H) and M2M traffic is depicted in Figure 5.
On this tab, area 1 collects the minimum and maximum possible throughputs for the services used by mobile (9 priorities) and M2M (4 classes) devices. The need for two bandwidth values (minimum and maximum) is explained by the fact that a device with a given priority (mobile user) or class (M2M) can run different services. For example, M2M devices of class 1 can work with services whose bandwidth requirements range from the minimum (2 Mbps) to the maximum (3.5 Mbps) value. Area 2 reflects the maximum allowable latency for each of the 4 classes of M2M devices and the 9 priorities of mobile users. Areas 3 and 4 allow setting the data transmission intensity for the 4 classes (M2M devices) and the 9 mobile user priorities, respectively.
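These per-class settings can be encoded as a small configuration table. A minimal sketch: only the class-1 bandwidth range (2 to 3.5 Mbps) is quoted in the text, so the other fields and classes are placeholders of our own:

```python
# Per-class configuration mirroring areas 1-4 of the QoS tab.
# Only the class-1 min/max throughput values come from the text;
# latency and intensity values are not given there (placeholders).

m2m_classes = {
    1: {"min_mbps": 2.0, "max_mbps": 3.5, "max_latency_ms": None, "intensity": None},
    # classes 2-4 would be configured analogously ...
}

def service_fits(cls_cfg, required_mbps):
    """True if a service's required bandwidth falls within the class's range."""
    return cls_cfg["min_mbps"] <= required_mbps <= cls_cfg["max_mbps"]

print(service_fits(m2m_classes[1], 3.0))  # a 3 Mbps service fits class 1 -> True
```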
The investigation of subscriber mobility in a heterogeneous network is based on the movement of a single subscriber in the service area of the base station. For a group of subscribers, the results of the study would be similar, scaled by their number.
Thus, the study is conducted for one transmit antenna, a normal cyclic prefix, and a PHICH group scaling value equal to 1. In the upper left corner of the map with the base station, mobile devices and M2M devices with their respective bandwidth requirements are randomly generated (Figure 6). The first device generated is the mobile one. We move it over the coverage area of the base station and see that the distance between the mobile device and the base station changes. At first, the subscriber walked towards the base station; after that, he moved slightly away from it and stopped at a distance of 636.37 m. As the subscriber moved, the radio channel conditions changed, which affected the signal-to-noise ratio; the change in its values is shown in the graph called "SNR." For each point on this graph, the simulation model builds a curve of the channel quality indicator, and for each of its values a modulation-coding scheme is fixed. The modulation-coding scheme determines the maximum bandwidth value. The maximum throughput value for the 3 MHz channel is reflected in the graph in the lower left corner of the network map (manual control) tab. In Figure 6, the throughput (bandwidth in the simulation model) for the UE is 2.7 Mbps. In the upper right corner of the map of the base station service area, the numerical values of the distance to the base station, the signal-to-noise ratio, and the total number of lost devices at the current time are displayed. When servicing mobile devices (UE) and M2M devices without traffic prioritization, all incoming service requests are handled by the base station controller according to a FIFO queue (first come, first served). Figure 6 shows a location of mobile and M2M devices in which all of them are provided with the necessary resources for data transmission. As we can see, there are 5 users with mobile devices and 4 M2M sensors in the service area of the base station.
The mobile device with ID 9 is lost because the number of resources it needs for data transmission cannot be provided: all other devices in the base station area are already provided with the bandwidth they need, and the remaining amount of the subframe (if any) is insufficient to meet the needs of this mobile device.
As we can see, the mobile device with ID 9 required a bandwidth of 4 Mbps. We also observe inefficient use of the channel bandwidth in this case. The studies in Figures 7 and 8 were conducted for serving UE devices and M2M sensors according to the classical architecture. Now, let us conduct a similar study with this architecture supplemented by the proposed optimal radio resource allocation method and the new multiservice M2M gateway (GW). Figure 9 shows the location of all UE devices and M2M sensors that currently need to transmit data. The requirements for the necessary throughputs (required bandwidth in the simulation model) shown in Figure 8 are similar to the requirements shown in Figure 10. As can be seen from Figure 10, the inefficiency of using the 1.4 MHz band has decreased to 0.29%, and the 3 MHz band is completely free. Taking into account that the multiservice gateway sorts the traffic into 4 queues (real-time high-bandwidth traffic, real-time low-bandwidth traffic, non-real-time high-bandwidth traffic, and non-real-time low-bandwidth traffic), each of which is served within the maximum allowable service time, we can claim that the losses when supplementing the classical architecture with the multiservice gateway tend to zero.
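The gateway's four-queue classification can be sketched as follows. This is an illustration only: the 1 Mbps high/low-bandwidth threshold and the packet field names are our assumptions, not values from the paper:

```python
# Four gateway queues: (non-)real-time x high/low bandwidth.
from collections import deque

HIGH_BW_MBPS = 1.0  # assumed threshold separating high- and low-bandwidth traffic

queues = {
    ("rt", "high"): deque(), ("rt", "low"): deque(),
    ("nrt", "high"): deque(), ("nrt", "low"): deque(),
}

def enqueue(packet):
    """Classify a packet {"real_time": bool, "mbps": float} into one of 4 queues."""
    kind = "rt" if packet["real_time"] else "nrt"
    band = "high" if packet["mbps"] >= HIGH_BW_MBPS else "low"
    queues[(kind, band)].append(packet)
    return (kind, band)

print(enqueue({"real_time": True, "mbps": 2.0}))  # -> ('rt', 'high')
```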
In this section, an analytical study of the proposed method of LTE 5G/6G network radio resource allocation was carried out, and the peculiarities of subframe formation for different channel bandwidths were noted. The effectiveness of the proposed method of subframe planning for small-volume data transmission by M2M sensors is shown, and a generalized scheme of UE and M2M sensor service is derived. The simulation model is described; implementing the proposed solutions with its help makes it possible to study the mobility of subscribers in a heterogeneous 5G/6G mobile network and the availability of base station resources depending on the location of UE and M2M devices. The main points of time-frequency radio resource allocation for serving mobile users and M2M sensors according to the classical LTE architecture and the proposed architecture augmented with a multiservice M2M gateway are shown. A way to reduce the percentage of subframe idle time when reserving resources for data from the M2M gateway is also shown.

Conclusions
Based on our review of related work, we found that the known methods of radio resource allocation lack the flexibility needed to serve mobile users and M2M sensors. We proposed a 5G LTE architecture with a new multiservice M2M gateway and a method for flexible management of information flows and network resources to meet the requirements of a growing number of M2M sensors with a guaranteed level of service quality. With M2M sensor clustering and traffic aggregation at the gateway, the base station controller does not consider incoming service requests from each sensor separately but reserves subframes within a frame for a complex service request coming from the gateway. To form a complex request, the M2M gateway must take into account the condition of guaranteed service, which is to limit the time the data spends in the queue at the gateway while taking into account the service time at the base station. We have developed a method for optimal resource allocation in LTE networks based on the adaptive choice of channel bandwidth depending on the quality-of-service requirements, as well as on priority traffic aggregation in M2M gateways. This approach improved the efficiency of using licensed radio resources by optimizing the LTE frame formation process and reducing the share of signaling traffic.
We present our developed simulation model, which can generate both H2H and M2M traffic, with full flexibility to add queues or priorities for any traffic, in order to study the mutual impact of H2H and M2M traffic. Depending on the modelling situation (without and with consideration of data priorities), the developed method of optimal resource allocation for 5G LTE networks increased the efficiency of using frequency-time resources in the LTE frame optimization process by 4% to 13%, under conditions of simultaneous use of 1.4 MHz and 3 MHz channel bandwidths.

Data Availability
The data used in this research are available upon request to the corresponding author Mykola Beshley (mykola.i.beshlei@lpnu.ua).