Cross-Layer Control with Worst Case Delay Guarantees in Multihop Wireless Networks

Meeting the diverse real-time requirements of applications is a challenge in backpressure-based wireless multihop networks, and researchers are therefore interested in the possibility of providing bounded end-to-end delay. In this paper, a new cross-layer control algorithm with worst case delay guarantees is proposed. The utility maximization algorithm is developed using a Lyapunov optimization framework, and virtual queues that ensure a worst case delay for nondropped packets are designed. It is proved through rigorous theoretical analysis and verified by simulations that the time average overall utility achieved by the new algorithm can be arbitrarily close to the optimal solution while keeping queue backlogs finite. Simulation results obtained with Matlab show that the proposed algorithm achieves higher throughput utility with fewer dropped packets than the existing work.


Introduction
With the exponential growth of wireless multihop networks over the last two decades, increasingly sophisticated approaches targeting resource allocation, congestion control, routing, and scheduling have been developed. Among the various policies, the backpressure scheduling/routing policy, first proposed in the seminal work by Tassiulas and Ephremides [1], is a promising scheme because of its throughput-optimal characteristic. Cross-layer algorithms that guarantee throughput-utility-optimal operation for different network structures can be designed by applying the Lyapunov optimization technique and combining the backpressure scheme with flow control [2]. The flow controller at the transport layer ensures that the admitted rate injected into the network layer lies within the network capacity region. In recent works, spectrum sharing and pricing mechanisms [3], energy management [4], and the social selfishness of users [5] have been considered in backpressure-based cross-layer algorithms. Cross-layer algorithms have also been combined with the MAC (Media Access Control) layer [6], the TCP (Transmission Control Protocol) layer [7], and application layers [8].
Besides throughput utility, end-to-end delay is another important long-term performance metric of backpressure-style algorithms, and it is crucial to many essential applications. As applications with real-time requirements are developed, it is necessary to design backpressure-based algorithms that provide bounded worst case delay guarantees. Backpressure algorithms usually exhibit poor delay performance, mainly for the following three reasons. First, the slow startup process needed to form a stable queue backlog gradient from the source to the destination causes large initial end-to-end delay. Second, unnecessarily long or looped paths form owing to fluctuations of the queue backlogs. Finally, the absence of consistent backpressure towards the destination can cause large latency in networks with short-lived or low-rate flows. In [9], average delay bounds are derived for one-hop wireless networks using maximal scheduling. In [10], delay bounds in wireless ad hoc networks are studied using backpressure scheduling with either one-hop or multihop traffic flows. In [11], the authors propose a cross-layer algorithm providing average end-to-end delay guarantees. These prior works can only provide bounds on the overall average delay via Little's Theorem, not for individual sessions. Several works aim to reduce the end-to-end delay of individual sessions. In [12], a virtual queue-based gradient is established for nodes. In [13], the authors develop a delay-aware cross-layer algorithm using a novel link-rate allocation strategy and a regulated scheduling policy. A hop-count based queuing structure is used in [14] to provide a worst case hop count to the destination. However, these works fail to provide explicit end-to-end delay guarantees. Deterministic worst case delay guarantees are derived from the algorithm in [15], which uses explicit delay information from the head-of-line packet at each queue in one-hop networks. Considering both one-hop and multihop wireless networks, [16] designs an opportunistic scheduling scheme that guarantees a bounded worst case delay for each session. Our paper is most closely related to the study in [16]. However, different from [16], our algorithm consists of two phases, and the persistent service virtual queue of [16] is redesigned.
The key contributions of this paper can be summarized as follows.
(i) The paper proposes a two-phase algorithm which can provide a bound on the worst case end-to-end delay of individual sessions by designing a novel virtual delay queue structure.
(ii) By transforming the stochastic control problem into a deterministic optimization problem using the Lyapunov drift-plus-penalty technique, we design a joint congestion control, routing, and scheduling algorithm.
(iii) The performance of the algorithm, in terms of utility optimality and network stability, is demonstrated with rigorous theoretical analyses. It is shown that the proposed algorithm can achieve a time average throughput utility that is arbitrarily close to the optimal value, with queue backlogs bounded by constants.
The remainder of this paper is organized as follows. Section 2 introduces the system model and problem formulation. In Section 3, the algorithm is designed using Lyapunov optimization. The performance analyses of the proposed algorithm are presented in Section 4. The simulation results are given in Section 5. Conclusions are provided in Section 6.

Network Model and Problem Formulation
2.1. Network Model. Consider a multihop wireless network consisting of several nodes. Let the network be modeled by a directed connectivity graph $G = (N, L)$, where $N$ is the set of nodes and $(a, b) \in L$ represents a unidirectional wireless link from node $a$ to node $b$, where $b$ is in the transmission range of $a$. Let $C$ be the set of unicast sessions $c$ between source-destination pairs in the network; $s_c$ denotes the source node and $d_c$ the destination node of session $c$. Packets from the source node traverse multiple wireless hops before arriving at the destination node.
The system is assumed to run in a time-slotted fashion. Nodes in the network communicate using only one channel. $\mu_{ab}(t) \in \{0, 1\}$ indicates whether link $(a, b)$ is used to transmit packets in time slot $t$; $\mu_{ab}(t) = 1$ implies that the link is scheduled. In this model, scheduling is subject to the following constraints:

$$\sum_{b \in O(n)} \mu_{nb}(t) + \sum_{a \in I(n)} \mu_{an}(t) \le 1, \quad \forall n \in N, \qquad (1)$$

$$\mu_{ab}(t) + \mu_{cd}(t) \le 1, \quad \forall (a, b), (c, d) \in L \text{ such that } d \text{ is in the transmission range of } a, \qquad (2)$$

where node $b$ is in the transmission range of $a$, $O(n)$ denotes the set of nodes $b$ with $(n, b) \in L$, and $I(n)$ denotes the set of nodes $a$ with $(a, n) \in L$. Constraint (1) implies that each node is equipped with only one radio, and thus it can either transmit or receive data at any given time. Constraint (2) states that a node transmitting packets will interfere with the data receptions of the nodes in its transmission range.
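The node-exclusive part of these constraints can be checked programmatically. The following sketch is our own illustration (the function and variable names are not from the paper); it tests whether a candidate set of activated links respects constraint (1):

```python
def satisfies_node_exclusive(active_links):
    """Constraint (1): each node appears in at most one active link per slot,
    since a single radio can either transmit or receive, never both."""
    busy = set()
    for a, b in active_links:
        if a in busy or b in busy:
            return False
        busy.add(a)
        busy.add(b)
    return True
```

A full feasibility check would additionally enforce the interference constraint (2) using the transmission ranges.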

2.2. Virtual Queue at the Transport Layer. $A_c(t) \in [0, A_c^{\max}]$ denotes the arrival rate of session $c$ injected into the transport layer from the application layer at the source node, where $A_c^{\max}$ is the maximum arrival rate of session $c$. $R_c(t) \in [0, A_c(t)]$ is the admitted rate of session $c$ injected into the network layer. $\gamma_c(t) \in [0, A_c^{\max}]$ is an auxiliary variable known as the virtual input rate. The virtual queue at the transport layer of source node $s_c$ of session $c$ is denoted by $Y_c$ and is updated as follows:

$$Y_c(t+1) = \max[Y_c(t) - R_c(t), 0] + \gamma_c(t).$$

If each virtual queue $Y_c$ is guaranteed to be stable, then according to the necessary and sufficient condition for queue stability [17], $\bar{\gamma}_c \le \bar{R}_c$, where the time average value of a time-varying variable $x(t)$ is denoted by $\bar{x} = \lim_{T \to \infty} (1/T) \sum_{t=0}^{T-1} \mathbb{E}\{x(t)\}$. Therefore, a lower bound on $\bar{R}_c$ can be derived from $\bar{\gamma}_c$, which can be calculated.
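As an illustration, the transport-layer virtual queue dynamics can be simulated in a few lines. This is a sketch under our reconstructed update rule Y(t+1) = max[Y(t) - R(t), 0] + gamma(t); the names are illustrative:

```python
def update_virtual_queue(Y, R, gamma):
    """One-slot update of the transport-layer virtual queue:
    serve the admitted rate R, then add the virtual input rate gamma."""
    return max(Y - R, 0.0) + gamma
```

Iterating this update over many slots and checking that Y stays bounded is exactly the stability condition that yields the time-average relation between gamma and R.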

2.3. Data Queue at the Network Layer. The data backlog queue for session $c$ at the network layer of node $n$ is denoted by $Q_n^{(c)}(t)$. In each slot $t$, the queue is updated as

$$Q_n^{(c)}(t+1) = \max\Big[Q_n^{(c)}(t) - \sum_{b \in O(n)} \mu_{nb}^{(c)}(t) - D_n^{(c)}(t), 0\Big] + \sum_{a \in I(n)} \mu_{an}^{(c)}(t) + R_c(t)\,\mathbf{1}_{\{n = s_c\}},$$

where $\mu_{nb}^{(c)}(t)$ is the amount of data of session $c$ to be forwarded from node $n$ to $b$ in time slot $t$, and $\mathbf{1}_{\{n = s_c\}}$ is an indicator function that equals 1 if $n = s_c$ and 0 otherwise. In addition, $\sum_{b \in O(n)} \mu_{nb}^{(c)}(t)$ must not be greater than $\mu_n^{\max,\text{out}}$. $D_n^{(c)}(t) \in [0, D^{\max}]$ represents the number of packets of session $c$ that are dropped by node $n$ in slot $t$. The optimization of $\mu_{nb}^{(c)}(t)$ is the routing decision. As assumed in [18], in this paper the transmission capacity of any link is set to be 1.
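The data queue recursion above can be sketched as follows; this is a minimal illustration under the reconstructed update, with illustrative names:

```python
def update_data_queue(Q, mu_out, dropped, mu_in, admitted):
    """One-slot update of a network-layer data queue: subtract the packets
    served on outgoing links and the packets dropped, then add the packets
    arriving from other nodes plus the locally admitted traffic (nonzero
    only at the session's source node)."""
    return max(Q - mu_out - dropped, 0) + mu_in + admitted
```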
Therefore, $\mu_{ab}^{(c)}(t)$ is either 0 or 1, and it cannot be greater than $\mu_{ab}(t)$; that is, $\mu_{ab}^{(c)}(t) \le \mu_{ab}(t)$, and $\sum_{c \in C} \mu_{ab}^{(c)}(t) = \mu_{ab}(t)$, $\forall (a, b) \in L$, can also be derived. The actual amount of packets of session $c$ dropped in slot $t$ can be defined as

$$\tilde{D}_n^{(c)}(t) = \min\Big[D_n^{(c)}(t), \max\Big[Q_n^{(c)}(t) - \sum_{b \in O(n)} \mu_{nb}^{(c)}(t), 0\Big]\Big].$$

2.4. Persistent Service Virtual Queue. The $\epsilon$-persistent service queue designed in [16] can ensure bounded worst case delay for general types of utility functions. We denote this queue by $Z_n^{(c)}$, and in each slot the queue is updated as

$$Z_n^{(c)}(t+1) = \max\Big[Z_n^{(c)}(t) - \sum_{b \in O(n)} \mu_{nb}^{(c)}(t) - D_n^{(c)}(t) + \epsilon\,\mathbf{1}_{\{Q_n^{(c)}(t) > 0\}}, 0\Big].$$

From the algorithm in [16], we find that $Z_n^{(c)}$ is used in the resource allocation and packet dropping decisions. Since $Q_n^{(c)}(t) > 0$ in most slots, $Z_n^{(c)}$ may increase quickly. According to the packet drop decision algorithm, a large $\epsilon$-persistent service queue backlog leads to heavy packet dropping. Therefore, the fast increase of $Z_n^{(c)}$ causes packets to be dropped, which results in a significant loss of throughput utility.
In this paper, we redesign the $\epsilon$-persistent service queue, denoting the new queue by $\hat{Z}_n^{(c)}$. In each slot $t$, the queue is updated as

$$\hat{Z}_n^{(c)}(t+1) = \max\Big[\hat{Z}_n^{(c)}(t) - \sum_{b \in O(n)} \mu_{nb}^{(c)}(t) - D_n^{(c)}(t) + \epsilon_1\,\mathbf{1}_{\{Q_n^{(c)}(t) > \bar{Q}^{(c)}_{n,\text{standard}}\}} + \epsilon_2\,\mathbf{1}_{\{0 < Q_n^{(c)}(t) \le \bar{Q}^{(c)}_{n,\text{standard}}\}}, 0\Big], \qquad (8)$$

where $\epsilon_1 > \epsilon_2 > 0$ are constants and $\bar{Q}^{(c)}_{n,\text{standard}}$ is a constant value calculated in phase I of the algorithm, which will be given in Section 3. The initial backlog $\hat{Z}_n^{(c)}(0)$ is set to 0.
$\bar{Q}^{(c)}_{n,\text{standard}}$ is the time average length of the queue of session $c$ at node $n$. According to (8), $\hat{Z}_n^{(c)}$ increases quickly only when $Q_n^{(c)}(t) > \bar{Q}^{(c)}_{n,\text{standard}}$, and thus $\hat{Z}_n^{(c)}$ should grow more slowly than $Z_n^{(c)}$. According to the packet drop decision algorithm, the number of packets dropped by our new algorithm should therefore decrease and the throughput should increase, compared with the algorithm in [16].
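The redesigned update can be illustrated with a short sketch, assuming a two-threshold form (our reconstruction) in which the queue grows by the larger constant eps1 only when the data backlog exceeds its phase-I average Q_standard, and by the smaller eps2 when the backlog is merely nonempty:

```python
def update_persistent_queue(Z, mu, dropped, Q, Q_standard, eps1, eps2):
    """One-slot update of the redesigned epsilon-persistent service queue.
    eps1 > eps2 > 0; the service pressure grows fast only when the data
    backlog Q exceeds its phase-I time average Q_standard."""
    if Q > Q_standard:
        drift = eps1
    elif Q > 0:
        drift = eps2
    else:
        drift = 0.0
    return max(Z - mu - dropped + drift, 0.0)
```

Because the eps1 term fires only on above-average backlogs, this queue accumulates more slowly than the original design of [16], which adds the full increment whenever the backlog is nonzero.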
Any algorithm that maintains bounded $Q_n^{(c)}(t)$ and $\hat{Z}_n^{(c)}(t)$ ensures persistent service with bounded worst case delay, as shown in Theorem 1.
Theorem 1 (worst case delay). For all time slots $t \in \{0, 1, 2, \ldots\}$ and all sessions $c \in C$, suppose that the algorithm can ensure $Q_n^{(c)}(t) \le Q_n^{(c),\max}$ and $\hat{Z}_n^{(c)}(t) \le Z_n^{(c),\max}$, where $Q_n^{(c),\max}$ and $Z_n^{(c),\max}$ are finite upper bounds for $Q_n^{(c)}(t)$ and $\hat{Z}_n^{(c)}(t)$, respectively. Assuming First-In First-Out (FIFO) service, the worst case delay of the nondropped data at node $n$ is bounded by the constant $W_n^{(c),\max}$, which is given as

$$W_n^{(c),\max} = \Big\lceil \frac{Q_n^{(c),\max} + Z_n^{(c),\max}}{\epsilon_2} \Big\rceil,$$

where $\lceil x \rceil$ denotes the smallest integer that is greater than or equal to $x$.
Proof. Fix any slot $t \ge 0$, and let $A_n^{(c)}(t)$ represent the data that arrives at queue $Q_n^{(c)}$ in slot $t$. As the service is FIFO, the data $A_n^{(c)}(t)$ is placed at the end of queue $Q_n^{(c)}$ in slot $t + 1$. We want to prove that all of the data $A_n^{(c)}(t)$ departs queue $Q_n^{(c)}$ on or before slot $t + W_n^{(c),\max}$. We prove this in three cases.
Then, the throughput utility maximization problem P1 can be defined as follows:

$$\text{P1:} \quad \max \sum_{c \in C} g_c(\bar{\gamma}_c) \quad \text{s.t. constraints (1), (2), and (5)},$$

where $\bar{\gamma}_c$ is the time average value of $\gamma_c(t)$ and $\beta$ is the maximum slope of the utility function $g_c(\cdot)$. Constraint (24) means that network stability is guaranteed.

Dynamic Algorithm via Lyapunov Optimization
The Lyapunov optimization technique is applied to solve P1. The conditional Lyapunov drift in time slot $t$ is

$$\Delta(\Theta(t)) = \mathbb{E}\{L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t)\},$$

where $\Theta(t)$ denotes the vector of all queue backlogs and $L(\cdot)$ is the quadratic Lyapunov function of $\Theta(t)$.
The algorithm CCWD is based on the drift-plus-penalty framework [17], and the main design principle of the algorithm is to minimize the right-hand side of (29). The algorithm includes two phases.
Phase II. This phase includes five components.
Source Rate Control. For each session $c \in C$ at source node $s_c$, the admitted rate $R_c(t)$ is chosen to solve the rate control subproblem, where $g_c'^{-1}(\cdot)$ is the inverse function of $g_c'(\cdot)$, the first-order derivative of $g_c(\cdot)$. Since the utility function $g_c(\cdot)$ is strictly concave and twice differentiable, $g_c'(\cdot)$ must be monotonic, and therefore $g_c'^{-1}(\cdot)$ must exist.
Packet Drop Decision. For each session $c \in C$ and each node $n \in N$, choose $D_n^{(c)}(t)$ to solve the drop subproblem (41). Problem (41) is a linear optimization problem: if $Q_n^{(c)}(t) + \hat{Z}_n^{(c)}(t) > V \cdot \nu$, then $D_n^{(c)}(t)$ is set to $D^{\max}$; otherwise it is set to zero.
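The bang-bang structure of this linear drop subproblem can be sketched as follows; here nu stands in for the per-packet drop penalty, a symbol assumed on our part:

```python
def drop_decision(Q, Z, V, nu, D_max):
    """Bang-bang solution of the linear drop subproblem: drop the maximum
    amount when the combined queue pressure Q + Z exceeds the scaled
    penalty V * nu, and drop nothing otherwise."""
    return D_max if Q + Z > V * nu else 0
```

Larger V makes dropping more expensive relative to queue pressure, which is the mechanism trading utility optimality against queue growth.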

Performance Analysis
Theorem 2 (bounded queues). Assume that $D^{\max} \ge \max\{\epsilon_1, A_c^{\max} + \mu_n^{\max,\text{in}}\}$ holds, where $\mu_n^{\max,\text{in}}$ denotes the maximal amount of packets that node $n$ can receive from other nodes in one slot. Then, under the algorithm CCWD, all queues are bounded for all $t \ge 0$, provided that the corresponding inequalities hold at $t = 0$. The queue bounds are given in (44).

Proof. The theorem is proved by induction.
where $B$ is a constant value. According to Theorem 4.5 in [17] and Lemmas 5.6 and 5.7 in [19], inequality (51), with constants $\delta_1, \delta_2, \delta_3 > 0$, can be derived from (50). Inequality (51) can be transformed to the exact form specified by Theorem 5.4 in [19]. According to Theorem 5.4 in [19] and the condition $\bar{\gamma}_c \le \bar{R}_c$, inequality (52) can be derived. Inequality (52) implies that the overall throughput utility achieved by the algorithm in this paper is within a constant gap of the optimal value.

Simulation
In the simulations, the commonly used greedy maximal scheduling (GMS) method is used to generate the schedulable link set for each algorithm under comparison. This method is widely used for implementing backpressure-based centralized algorithms in sophisticated networks [20].
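A minimal sketch of GMS is given below. This is our own illustration: it enforces only the node-exclusive constraint (1), not the interference constraint (2), and the names are not from the paper:

```python
def greedy_maximal_schedule(link_weights):
    """Greedy maximal scheduling: repeatedly activate the link with the
    largest positive weight whose endpoints are still idle.
    link_weights: dict mapping a link (a, b) to its backpressure weight."""
    schedule = []
    busy = set()
    for (a, b), w in sorted(link_weights.items(), key=lambda kv: -kv[1]):
        if w > 0 and a not in busy and b not in busy:
            schedule.append((a, b))
            busy.update((a, b))
    return schedule
```

A full implementation would additionally skip any link whose receiver lies within the transmission range of an already-scheduled sender.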

Performance Comparison.
In this section, the performance of CCWD is compared with that of an existing method called NeelyOpportunistic, proposed in [16], which can also provide a bounded worst case delay. The throughput utilities and the time average numbers of dropped packets achieved by CCWD and NeelyOpportunistic are compared in Figures 1 and 2, respectively. $V$ is set to 1000. The data arrival rate ranges from 0.2 packets to 1 packet per time slot. In Figure 1, the utility achieved by CCWD is higher than that of NeelyOpportunistic. Figure 2 shows that fewer packets are dropped by CCWD than by NeelyOpportunistic. According to the packet drop decision algorithm, a large $\epsilon$-persistent service queue backlog leads to heavy packet dropping. As mentioned in Section 2.4, the redesigned virtual queue of CCWD grows more slowly than the virtual queue of NeelyOpportunistic. Therefore, the virtual queue structure in NeelyOpportunistic leads to more packet drops and lower throughput utility.

Impact of V.
According to the analyses in Section 4, as $V$ increases, the utility achieved by CCWD can be made arbitrarily close to the optimal value, at the cost of queue lengths that grow linearly in $V$. The data arrival rate $A_c(t)$ is set to 0.4 packets per time slot. Since the utility function is concave and nondecreasing, the optimal value of the throughput utility is $N_{\text{session}} \cdot \log(1 + A_c(t))$, where $N_{\text{session}}$ is the number of sessions; $N_{\text{session}}$ is 4 in this section. In this simulation, the optimal throughput utility is therefore 1.34. Figure 3 shows that the utility value increases with increasing $V$. According to (44), it is easy to calculate $Y^{(c),\max}$, $Q^{(c),\max}$, and $Z^{(c),\max}$. In this section, since the maximum arrival rate $A_c^{\max}$ of each session is set to 2 and the throughput utility function of each session is the same, $Y^{\max}$, $Q^{\max}$, and $Z^{\max}$ can also be calculated using (44). In Figures 4, 5, and 6, $V$ is increased from 500 to 2000 and the queue sizes are plotted on a log base 10 scale. From Figures 4, 5, and 6, we can see that the time average sizes of $Y$, $Q$, and $Z$ all increase approximately proportionally with $V$ and are not larger than the bounds given in Theorem 2. The simulation results thus match the theoretical analyses.
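The optimal-utility arithmetic quoted above can be reproduced directly; the quoted value 1.34 appears to be a truncation of 4 · ln(1.4) ≈ 1.346 (the utility function uses the natural logarithm):

```python
import math

# Optimal throughput utility for four sessions, each with arrival rate
# 0.4 packets/slot and utility g(x) = log(1 + x).
n_sessions = 4
rate = 0.4
optimal_utility = n_sessions * math.log(1 + rate)
print(f"{optimal_utility:.3f}")  # 1.346
```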

Figure 1: Throughput utility versus average data arrival rate.

Figure 2: Time average number of dropped packets versus average data arrival rate.
..., $\mu_{ab}^{(c)}(t)$ is set to be $\mu_{ab}(t)$; otherwise it is set to be zero.

Virtual Input Rate Control. For each session $c \in C$ at source node $s_c$, the virtual input rate $\gamma_c(t)$ is chosen to solve

$$\max \; V \cdot g_c(\gamma_c(t)) - Y_c(t) \cdot \gamma_c(t), \qquad (34)$$
$$\text{s.t.} \; 0 \le \gamma_c(t) \le A_c^{\max}. \qquad (35)$$

Since the utility function $g_c(\cdot)$ is strictly concave and twice differentiable, (34) is a concave maximization problem with a linear constraint. $\gamma_c(t)$ can be chosen by $\gamma_c(t) = \max\{\min\{g_c'^{-1}(Y_c(t)/V), A_c^{\max}\}, 0\}$.

5.1. Simulation Setup. For the simulations, a network with 20 nodes randomly distributed in a square of 1600 m² is considered. A transmission is successful if the receiver is within the transmission range of its sender and outside the range of all other concurrent senders. The transmission or interference range of a node is 15 m. There are four unicast sessions with randomly chosen sources and destinations. Data of each session is injected into the transport layer at the same rate in each slot at the source nodes. Parameter $V$ is set as $V = [500, 1000, 1500, 2000]$. The throughput utility function is $g(x) = \log(x + 1)$. Simulations are run in Matlab R2014a. The simulation of phase I lasts 30000 time slots; the simulation of phase II lasts 50000 time slots. All initial queue sizes are set to 0 and the default values are set as follows:
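For the log utility used in the simulations, the clipped first-order condition has a closed form: g'(x) = 1/(1+x), so g'^{-1}(y) = 1/y - 1. A sketch (symbols follow our reconstruction of the paper's notation):

```python
def virtual_input_rate(Y, V, A_max):
    """Solve max V*log(1+gamma) - Y*gamma over 0 <= gamma <= A_max.
    Setting V*g'(gamma) = Y gives gamma = V/Y - 1, clipped to the box."""
    if Y <= 0:
        return A_max  # no virtual queue pressure: admit the maximum rate
    return max(min(V / Y - 1.0, A_max), 0.0)
```

For example, with V = 1000 and a virtual queue backlog Y = 500, the unclipped solution is 1000/500 - 1 = 1 packet per slot.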