We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The calculation is based on the link speed, the assigned weights, and the arrival rate and average packet length (or input rate) of the traffic flows. We validate the model with examples and with simulation results obtained using the NS2 simulator.
The current trend in telecommunication infrastructure towards packet-oriented networks raises the question of supporting Quality of Service (QoS). Methods that assign priorities to flows or packets and then service them differently in network nodes according to their needs were proposed to meet QoS demands. Queue Scheduling Discipline (QSD) algorithms are responsible for choosing which packets to send from the queues to the output. They are designed to divide the output capacity fairly and optimally. Algorithms that can make this decision according to priorities are a basic component of modern QoS-supporting networks [
For an optimal configuration of these algorithms we need to calculate or simulate the result of our settings to predict the impact on QoS. The network nodes can be modeled using Markovian models [
Most existing WFQ bandwidth allocation models consider neither variable utilization of queues nor redistribution of unassigned link capacity. For this reason we propose an iterative mathematical model for the bandwidth allocation of WFQ. The model can be used to analyze the impact of weight settings, to analyze the stability of the system, and to model the delay and queue length of traffic classes.
The remaining sections of the paper are structured as follows. First the WFQ algorithm is presented, followed by a short overview of commonly used bandwidth constraint models. The third section describes the proposed model for the average bandwidth allocation of WFQ, followed by examples of WFQ bandwidth allocation and simulation results validating the proposed model.
Many scheduling algorithms and several bandwidth allocation models have been proposed for estimating bandwidth allocation. We focus on WFQ and on the bandwidth allocation models proposed for MPLS traffic engineering.
WFQ was introduced in 1989 by Demers et al. and Zhang [
One of the goals of DiffServ and MPLS traffic engineering is to guarantee bandwidth reservations for different service classes. For this purpose two terms are defined [
class type (CT) is a group of traffic flows, based on QoS settings, sharing the same bandwidth reservation;
bandwidth constraint (BC) is a part of the output bandwidth that a CT can use.
For the mapping between BCs and CTs, the maximum allocation model (MAM), the max allocation with reservation model (MAR), and the Russian dolls model (RDM) are defined.
The MAM model [
MAR [
The RDM model is more effective in bandwidth sharing. It assigns BCs to groups of CTs. For example, CT7, with the highest QoS requirements, gets its own BC7; CT6, with lower QoS requirements, shares its BC6 with CT7; and so forth. In extreme cases the lower priorities get less bandwidth than they need or even starve [
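The difference between MAM and RDM can be made concrete with a small admission-check sketch. This is our own illustration, not code from the paper; the function names, the list ordering (lowest-priority CT first, highest-priority CT last), and the example numbers are assumptions:

```python
def mam_admits(reservations, bcs):
    # MAM: each class type (CT) is capped individually by its own
    # bandwidth constraint (BC).
    return all(r <= b for r, b in zip(reservations, bcs))

def rdm_admits(reservations, bcs):
    # RDM: BC[k] bounds the aggregate reservation of CT[k] and all
    # higher-priority CTs (higher index = higher priority here).
    return all(sum(reservations[k:]) <= bcs[k] for k in range(len(bcs)))

# Three CTs, listed from lowest to highest priority (hypothetical numbers):
print(mam_admits([2, 2, 1], [5, 2, 1]))  # True: every CT fits its own BC
print(rdm_admits([2, 2, 1], [5, 2, 1]))  # False: CT1 + CT2 = 3 exceeds BC1 = 2
```

The same reservations can thus be admissible under MAM but rejected under RDM, which is exactly the nesting behavior described above.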
In general, WFQ and some other scheduling algorithms such as WRR, WF^{2}Q+, and so forth allocate bandwidth differently from the models described in Section
The proposed model is part of our research on modeling traffic parameters of NGN networks; it is a modification of a previously presented model for the bandwidth allocation of the WRR algorithm and will further be used for modeling the delay and queue length of these algorithms.
We assume a network node with
For the bandwidth calculation an iterative method will be used. The
To describe the bandwidth allocation of WFQ, we have to analyze all possible situations that can occur. We will use an iterative method for the analysis.
Let us take a look at the possible situations that can appear in the first step of bandwidth allocation. The WFQ algorithm works on the principle that a number of bits given by the weight value is sent at once to a virtual output. The bits are then reassembled into the original packets, and the packet that is completely transmitted first in this way is dequeued first. This ensures an exact bandwidth allocation between queues according to the assigned weights. The distribution of the available bandwidth can be written as follows:
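The weighted division of the output capacity can be sketched in a few lines; this is a minimal illustration with our own names, assuming the share of queue i is the link capacity multiplied by its weight over the sum of all weights:

```python
def initial_share(capacity_mbps, weights):
    # Each queue gets a share of the output capacity proportional
    # to its assigned WFQ weight.
    total = sum(weights)
    return [capacity_mbps * w / total for w in weights]

# e.g. a 10 Mbps link divided between 4 queues with weights 4 : 3 : 2 : 1
print(initial_share(10.0, [4, 3, 2, 1]))  # [4.0, 3.0, 2.0, 1.0]
```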
After the bandwidth is divided between the queues according to (
The first possibility is that each queue gets and uses the bandwidth calculated in (
The second option is that each queue is satisfied with the assigned bandwidth. In this case:
In these two cases, the bandwidth assignment is finished in the first iteration step. No unused bandwidth needs to be divided among the other queues. A queue gets the bandwidth it needs (
This (
If the conditions (
If the queues' bandwidth requirements are met, the result of (
The reallocation of the unused capacity is performed only among the queues whose bandwidth requirements are not yet satisfied, until either all capacity is divided or all queue requirements are met, and can take
Equation (
The whole output bandwidth is already distributed between the queues:
or all the requirements of the queues are satisfied:
These conditions are also met if in the next iteration no redistribution of bandwidth occurs:
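Putting the steps together, the whole iteration (weighted division, then repeated redistribution of the unused capacity among the still-unsatisfied queues until one of the stopping conditions above holds) can be sketched as follows. This is our own minimal sketch of the procedure, with assumed function and variable names:

```python
def wfq_allocation(capacity, weights, demands, tol=1e-9):
    """Iterative average-bandwidth allocation for a WFQ scheduler.

    capacity and demands are in the same units (e.g. Mbps); demands
    are the input bandwidths of the queues.  Unused capacity of
    satisfied queues is redistributed among the unsatisfied ones in
    proportion to their weights, until either the whole capacity is
    allocated or every demand is met.
    """
    alloc = [0.0] * len(weights)
    unsatisfied = set(range(len(weights)))
    spare = capacity
    while unsatisfied and spare > tol:
        wsum = sum(weights[i] for i in unsatisfied)
        shares = {i: spare * weights[i] / wsum for i in unsatisfied}
        spare = 0.0
        for i in list(unsatisfied):
            alloc[i] += shares[i]
            if alloc[i] >= demands[i] - tol:
                spare += alloc[i] - demands[i]  # capacity this queue leaves unused
                alloc[i] = demands[i]
                unsatisfied.discard(i)
    return alloc

# 10 Mbps link, weights 4 : 3 : 2 : 1, demands 3, 3.125, 3, and 1.25 Mbps
print(wfq_allocation(10.0, [4, 3, 2, 1], [3.0, 3.125, 3.0, 1.25]))
```

For the worst-case settings used later in the examples (10 Mbps link, weights 4 : 3 : 2 : 1, bandwidth requirements 3, 3.125, 3, and 1.25 Mbps), the sketch reproduces the allocation 3, 3.125, 2.625, and 1.25 Mbps after four redistribution rounds.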
Let us demonstrate the performance of our model in comparison with WFQ on some examples. In these examples we assume 4 priority classes and show 4 different behaviors. The first example presents the situation where all traffic classes get the required bandwidth. The second one shows the case in which the bottleneck link has less capacity than needed and the distribution is done according to packet size and weights. The third example shows the worst case, in which redistribution of bandwidth occurs and the calculation of bandwidth takes
In this example we assume a 100 Mbps output link. The first class represents a VoIP flow with high traffic. The mean packet size is set to 100 B, which equals
The input bandwidths calculated using (
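As a quick check of the input-bandwidth arithmetic: the input rate of a flow follows from its mean packet size and mean arrival rate. A trivial sketch with our own names; the 100 B / 100 pps and 1500 B / 1 pps figures match the traffic classes of this example as listed in the simulation table:

```python
def input_bandwidth_mbps(mean_packet_bytes, arrival_rate_pps):
    # input bandwidth = mean packet size (in bits) x mean arrival rate
    return mean_packet_bytes * 8 * arrival_rate_pps / 1e6

print(input_bandwidth_mbps(100, 100))  # 0.08 Mbps (the VoIP class)
print(input_bandwidth_mbps(1500, 1))   # 0.012 Mbps
```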
The weights are set in the following way:
This example uses the same traffic settings as in Example
The bandwidth allocation calculated using (
In this example we show the worst case, in which the bandwidth allocation stops only after the maximal
These settings result in the following bandwidth requirements calculated using (
In the first iteration the bandwidth allocated using (
In the second iteration the result of (
In the 3rd iteration the remaining capacity of 0.375 Mbps is divided between classes 3 and 4 in the ratio 2 : 1 given by the assigned weights. This capacity is added to the previously assigned bandwidth and results in 3 Mbps, 3.125 Mbps, 2.583 Mbps, and 1.292 Mbps.
In the 4th and last reallocation, the unused capacity of 0.042 Mbps from class 4 is reassigned to the last unsatisfied class 3 and fully used. The resulting allocation of bandwidth is as follows: 3 Mbps, 3.125 Mbps, 2.625 Mbps, and 1.25 Mbps.
All these results correspond with the proposed models (
Allocation of bandwidth in Example
Traffic class      Required   1st iteration       2nd iteration       3rd iteration       4th iteration
                   (Mbps)     Alloc.   Unalloc.   Alloc.   Unalloc.   Alloc.   Unalloc.   Alloc.
                              (Mbps)   (Mbps)     (Mbps)   (Mbps)     (Mbps)   (Mbps)     (Mbps)
Traffic class 1    3          4        1          3        -          3        -          3
Traffic class 2    3.125      3        -          3.5      0.375      3.125    -          3.125
Traffic class 3    3          2        -          2.333    -          2.583    -          2.625
Traffic class 4    1.25       1        -          1.167    -          1.292    0.042      1.25
This example describes the bandwidth allocation, where the calculation has to be stopped after the conditions in (
The weight and packet size settings are the same as in the previous Example
In the first iteration using (
The second iteration redistributes the unused 1 Mbps between queues 3 and 4 in the ratio 2 : 1. They are assigned 2.667 Mbps and 1.333 Mbps, respectively, but queue 3 needs only 2.25 Mbps of output capacity, so the remaining capacity can be reassigned to the last unsatisfied queue 4.
In the third iteration we assign 3 Mbps to the first queue, 3 Mbps to the second queue, 2.25 Mbps to the third queue, and 1.75 Mbps to the last queue. The fourth queue needs only 1.5 Mbps, which means that the bandwidth requirements of all queues are met. The iterations have to stop at this moment according to (
We can change the arrival rate of the fourth queue to 750 pps and raise its bandwidth requirement to 2.25 Mbps. In this case, in the 3rd iteration the bandwidth allocations are 3, 3, 2.25, and 1.75 Mbps. This means that the whole output capacity is divided among the queues (
To prove the results of our mathematical model, we used simulations in the NS2 simulation software [
For the simulations a simple network model with four transmitting nodes (1–4) and four receiving nodes (6–9) was used. The transmitting and receiving nodes are interconnected by a single link between nodes 0 and 5. Node 0 uses WFQ to schedule packets on this bottleneck link, where the mentioned bandwidths are set. All other links have a capacity of 100 Mbps. The model is shown in Figure
Simulation model.
We used two types of traffic sources. The first one generates packets with a single packet size and a constant packet interval. These settings are easier to simulate and represent a D/D/1/∞ queueing model.
The second traffic source type represents an M/M/1/∞ model. The NS2 simulator does not offer a traffic source that generates packets of varying sizes. For this reason the M/M/1 source is modeled using an ON/OFF source, where each node generates one packet with a random size (exponential distribution) and waits a random time until the next packet transmission (again exponentially distributed).
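The ON/OFF construction above amounts to drawing both the packet size and the inter-packet gap from exponential distributions. A small self-contained sketch of this idea (in Python rather than NS2/Tcl; the names and the fixed seed are our own):

```python
import random

random.seed(1)  # fixed seed for a reproducible illustration

def mm1_like_source(mean_packet_bytes, mean_rate_pps, n_packets):
    # Each packet gets an exponentially distributed size, and the wait
    # until the next transmission is an exponentially distributed time,
    # emulating the M/M/1-type input described above.
    sizes = [random.expovariate(1.0 / mean_packet_bytes) for _ in range(n_packets)]
    gaps = [random.expovariate(mean_rate_pps) for _ in range(n_packets)]
    return sizes, gaps

sizes, gaps = mm1_like_source(375, 1000, 100_000)
print(sum(sizes) / len(sizes))  # close to the 375 B mean packet size
print(len(gaps) / sum(gaps))    # close to the 1000 pps mean arrival rate
```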
An example of the input data generated at one node with a mean packet size of 375 B and an arrival rate of 1000 pps is shown in Figures
Exponential probability distribution of arrival rate with mean value 1000 pps.
Exponential probability distribution of packet sizes with mean value 375 B.
We ran many simulations under different parameter settings. The presented results correspond to the described examples or represent other extreme settings. The results of the simulations of the M/M/1 and D/D/1 models together with the results of our proposed model are shown in Table
Simulation results compared with mathematical model results.
Simulation variant (#)  1  2  3  4  5  6  7  8  9
Mean packet size (B)  100, 1000, 1000, 1500  100, 1000, 1000, 1500  375, 375, 375, 375  375, 375, 375, 375  375, 375, 375, 375  1000, 100, 10, 1  1000, 1000, 1000, 1000  1000, 100, 1000, 100  1000, 1000, 1000, 1000 
Mean arrival rate (pps)  100, 10, 10, 1  100, 10, 10, 1  1000, 1041.67, 1000, 416.67  1000, 1000, 750, 500  1000, 1000, 750, 750  100, 100, 100, 100  1, 10, 100, 1000  1000, 100, 1000, 100  100, 100, 100, 100 
Input bandwidth (Mbps)  0.08, 0.08, 0.08, 0.012  0.08, 0.08, 0.08, 0.012  3, 3.125, 3, 1.25  3, 3, 2.25, 1.5  3, 3, 2.25, 2.25  0.8, 0.08, 0.008, 0.0008  0.008, 0.08, 0.8, 8  8, 0.08, 8, 0.08  0.8, 0.8, 0.8, 0.8 
Weight settings  4, 3, 2, 1  4, 3, 2, 1  4, 3, 2, 1  4, 3, 2, 1  4, 3, 2, 1  1, 1, 1, 1  4, 3, 2, 1  4, 3, 2, 1  40, 3, 2, 1 
Link capacity (Mbps)  100  0.05  10  10  10  0.5  4  8  1.6 
D/D/1 simulation results (Mbps)  0.079, 0.079, 0.079, 0.012  0.019, 0.015, 0.01, 0.005  3.00, 3.15, 2.59, 1.248  2.99, 2.99, 2.249, 1.50  2.99, 3.00, 2.25, 1.749  0.41, 0.08, 0.0079, 0.0008  0.0078, 0.08, 0.8, 3.111  5.226, 0.079, 2.6133, 0.079  0.79, 0.4, 0.267, 0.133 
M/M/1 simulation results (Mbps) 
Model results (Mbps)  0.08, 0.08, 0.08, 0.012  0.02, 0.015, 0.01, 0.005  3, 3.125, 2.625, 1.25  3, 3, 2.25, 1.5  3, 3, 2.25, 1.75  0.411, 0.08, 0.008, 0.0008  0.008, 0.08, 0.8, 3.112  5.2266, 0.08, 2.6133, 0.08  0.8, 0.4, 0.266, 0.133 
We measured the bandwidth after reaching a “steady state.” The measurement started after 20 s of simulation, when the bandwidth was stable and the queues were filled with waiting packets [
The results of the mathematical model correspond well with the simulation results. The results of the D/D/1 simulation model are more accurate due to the exact setting of the packet size. The small inaccuracy can be caused by measurement errors, where the bandwidth calculation stops close to the arrival of a packet when small arrival rates are set. Due to the deterministic parameter settings there is no difference between multiple simulation runs, and no variance in the results occurs.
The presented results for the M/M/1 simulations are average values calculated from 10 simulation runs, and the standard deviation of the runs is also provided. The simulation runs for most parameter settings lasted 200 s. In cases where an extremely low arrival rate was set, we extended the simulation duration up to 1000 s. We also performed simulations with WF^{2}Q+ [
We presented a new iterative bandwidth allocation model for WFQ in IP-based NGN networks. The proposed model uses the weight settings of the WFQ scheduler and the average input bandwidth of the different flows for the bandwidth calculation. The variable utilization of different queues and the redistribution of unused bandwidth are considered in the calculations. The proposed model makes it easy to predict the impact of the scheduler, traffic shapers, and input traffic on the QoS of the transported data.
The functionality of the model was demonstrated on five different examples and confirmed by simulations in the NS2 simulator for both D/D/1 and M/M/1 input traffic.
The proposed iterative bandwidth allocation model was also tested with the WF^{2}Q+ scheduler, with the same simulation results. We can therefore conclude that the proposed model is applicable to other WFQ-based schedulers as well.
The results of this bandwidth allocation model will be used in further research on delay and packet loss modeling using Markovian queue models.
This work is a part of research activities conducted at Slovak University of Technology Bratislava, Faculty of Electrical Engineering and Information Technology, Institute of Telecommunications, within the scope of the project “Support of Center of Excellence for SMART Technologies, Systems and Services II., ITMS 26240120029, cofunded by the ERDF.”