The network utility maximization (NUM) framework, widely used in wireless networks to achieve optimal resource allocation, has led to both centralized and distributed algorithms. We compare the convergence performance of a centralized realization of the NUM framework with that of a distributed realization by implementing both algorithms on a hardware test-bed. Experimental results show superior convergence performance for the centralized implementation, which we attribute to the dominance of communication delay over processing delay. The convergence results for the distributed case also show a tradeoff between processing time and the associated communication overhead, yielding an optimal termination criterion for the convergence of the different subproblems.

Since the seminal work by Kelly et al. [1]

To compare the convergence performance of distributed and centralized implementations, we have selected the following NUM problem of [

In (

To decompose the NUM problem in (
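The decomposition step can be illustrated on a generic NUM problem (a standard Kelly-style form with generic symbols $U_s$, $x_s$, routing matrix $R$, and link capacities $c$; the paper's exact formulation is abridged above):

```latex
\begin{aligned}
&\max_{x \succeq 0}\; \sum_{s} U_s(x_s) \quad \text{s.t.}\quad Rx \preceq c,\\
&L(x,\lambda) \;=\; \sum_{s} U_s(x_s) \;-\; \lambda^{\top}(Rx - c)
 \;=\; \sum_{s}\Big[\,U_s(x_s) - x_s \textstyle\sum_{l} R_{ls}\lambda_l\Big] \;+\; \lambda^{\top} c.
\end{aligned}
```

For fixed link prices $\lambda$, the maximization thus separates into per-source subproblems, and the dual is handled by a projected subgradient update $\lambda_l \leftarrow [\lambda_l + \alpha((Rx)_l - c_l)]^{+}$.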

The test-bed used for realization is equipped with wireless transceiver [

The dual decomposition of the problem in (

The rate-allocation subproblem in (

Using the gradient projection method for convex objective with linear inequality constraints [
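As an illustration of the gradient-projection iteration, the following is a minimal sketch assuming log utilities and a simple box constraint (so the projection reduces to clipping); the paper's actual utilities and constraint set may differ, and all names here are our own:

```python
import numpy as np

def rate_subproblem(lam, R, x0, x_max, step=0.05, tol=1e-6, max_iter=5000):
    """Projected-gradient ascent for an illustrative rate subproblem:
    maximize sum_s log(x_s) - sum_l lam_l * (R x)_l  over 0 < x <= x_max.
    With a box constraint, the projection is a simple clip."""
    x = x0.astype(float).copy()
    price = R.T @ lam                # aggregate path price seen by each source
    for _ in range(max_iter):
        grad = 1.0 / x - price       # gradient of log(x_s) - x_s * price_s
        x_new = np.clip(x + step * grad, 1e-6, x_max)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

For example, with a single link priced at λ = 0.5 shared by two sources, the interior optimizer is x_s = 1/0.5 = 2, which the iteration approaches.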

The power allocation subproblem in (

For the dual problem of minimizing
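The dual update can be sketched as a projected subgradient iteration (illustrative only: we assume log utilities with a closed-form primal response; function and variable names are our own):

```python
import numpy as np

def dual_prices(R, c, solve_primal, lam0, step=0.1, iters=500):
    """Projected subgradient descent on the dual: the link price lam_l rises
    when link l is overloaded ((R x)_l > c_l) and falls otherwise,
    clipped at zero."""
    lam = lam0.astype(float).copy()
    for _ in range(iters):
        x = solve_primal(lam)                      # Lagrangian maximizer
        lam = np.maximum(lam + step * (R @ x - c), 0.0)
    return lam, x

# Single unit-capacity link shared by two log-utility sources: x_s = 1/price_s.
R = np.array([[1.0, 1.0]])
c = np.array([1.0])
solve = lambda lam: 1.0 / np.maximum(R.T @ lam, 1e-6)
lam, x = dual_prices(R, c, solve, np.array([1.0]))
# At the optimum, both sources get half the capacity and the price settles near 2.
```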

To compare the convergence performance of the distributed and centralized implementations, we develop a performance evaluation model for processing and communication overheads. The per-iteration processing overhead is the time required to update the primal and dual variables, and the per-iteration communication overhead is the time spent in obtaining the required updated primal and dual variables.

The processing overhead comprises gradient and projection evaluations, denoted by

The communication overhead is a function of the number of node pairs involved in information exchange, the associated number of hops, and link packet success rate
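The overhead model can be sketched as follows. This is an assumed functional form consistent with the description, not the paper's exact expressions: under a per-link packet success rate p, each hop needs on average 1/p transmissions.

```python
def per_iter_comm_time(node_pair_hops, t_pkt, p_succ):
    """Expected per-iteration communication time: each exchanging node pair
    contributes (hops) * (expected transmissions per hop, 1/p) * (packet time)."""
    return sum(h * t_pkt / p_succ for h in node_pair_hops)

def convergence_time(n_iter, t_proc, node_pair_hops, t_pkt, p_succ):
    """Overall time = iterations x (processing + communication per iteration)."""
    return n_iter * (t_proc + per_iter_comm_time(node_pair_hops, t_pkt, p_succ))
```

For example, 10 iterations with 1 ms processing, two exchanging pairs at 1 and 2 hops, 0.5 ms per packet, and success rate 0.5 give 10 × (1 + 3) = 40 ms.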

For convergence performance comparison, we use an example network shown in Figure

(a) Example network with each node equipped with TI’s DSP (TMS320C6713) running at 225 MHz and MicroLinear’s ML2722 RF transceiver; (b) Convergence performance comparison of overhead model with experimental realization for centralized and distributed implementations.

To validate the processing and communication overhead models, we obtain

The higher communication overhead for the two cases of distributed implementation, compared with the centralized counterpart, is attributed to the fact that in the distributed realization the updated variables are exchanged at each iteration among the nodes executing the different subproblems. Table

Overall convergence time components.

| Algorithm component | Communication time (ms) | Processing time (ms) | Convergence time (ms) |
|---|---|---|---|
| Distributed (case 1) | 46.8 | 13.1 | 98.7 |
| | | 11.8 | |
| | | 6.3 | |
| | | 10.9 | |
| | | 9.8 | |
| Distributed (case 2) | 41.7 | 6.5 | 72.5 |
| | | 10.2 | |
| | | 14.1 | |
| Centralized | 6.2 | 27.4 | 33.6 |

Next, for distributed implementation, we analyze how the termination criterion for the convergence of the subproblems affects the overall network resource optimization convergence. Figure

Convergence time as a function of percentage tolerance for the termination of iterative subproblem algorithm.
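The tradeoff can be reproduced qualitatively with a toy model (all constants below are illustrative, not measured): a tighter inner tolerance means more inner (processing) iterations per outer step, while a looser one inflates the number of outer steps and hence the communication overhead.

```python
import math

def total_time_toy(tol, t_proc=0.1, t_comm=2.0, q=0.8, n0=20.0, k=100.0):
    """Toy convergence-time model: inner iterations grow like log(1/tol)
    (linear convergence with rate q), while the outer iteration count is
    inflated by a looser inner tolerance (assumed factor 1 + k*tol)."""
    n_inner = math.ceil(math.log(1.0 / tol) / math.log(1.0 / q))
    n_outer = n0 * (1.0 + k * tol)
    return n_outer * (t_comm + n_inner * t_proc)
```

Sweeping tol over {1e-4, 1e-3, 1e-2} with these constants yields a U-shaped curve whose minimum lies at an intermediate tolerance, mirroring the experimentally observed tradeoff.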

Finally, we compare the convergence performance of centralized implementation for its realization at nodes

Convergence performance comparison for centralized implementation at the network nodes

An iterative algorithm is implemented for both distributed and centralized realizations to solve the network utility maximization problem for wireless control networks. The convergence performance of the centralized case is observed to be better than that of the distributed implementation, owing to the dominance of communication delay over processing delay. For the distributed implementation, we observe a tradeoff between processing delay and the associated communication overhead for the different subproblems, which yields an optimal overall convergence performance. A further performance improvement for the centralized case can be achieved by using faster algorithms and higher processing power at the central node. However, where processing delay dominates the communication overhead (e.g., a fiber-optic link combined with relatively slow centralized processing), we expect the distributed implementation to outperform the centralized one.

We define a wireless control network (WCN) as a wireless network that supports the exchange of feedback and control-command information for a distributed control system.