A Planning and Optimization Framework for Ultra Dense Cellular Deployments

González González, David; Mutafungwa, Edward; Haile, Beneyam; Hämäläinen, Jyri; Poveda, Héctor


Introduction
Mobile network operators face the continuous challenge of upgrading their networks to cope with rapidly expanding traffic volumes. This trend is mostly attributed to the increased adoption of smart devices (e.g., smartphones). Recent projections for global mobile traffic growth anticipate a tenfold increase in average monthly data consumption from the current 2-5 GB/month to 20-50 GB/month by 2020 [1]. Moreover, average year-on-year subscriber growth of 5%-15% is expected to continue well into the next decade, with most of this demand coming from emerging markets [1,2]. At the same time, user expectations on service quality also continue to increase, with high-speed connectivity becoming the baseline requirement for most users, regardless of their location or network load conditions. To accommodate the projected traffic growth and meet user needs, mobile network operators are densifying their networks through heterogeneous deployment of low-power base stations (BSs) or small cells (typically less than 10 W transmit power) to complement the existing high-power (20 W or higher) macrocells providing umbrella coverage [3][4][5][6]. Nowadays, small cell is a term that refers to compact low-power BSs (e.g., microcells, picocells, or femtocells) and other macrocellular network extensions (e.g., relays, remote radio heads) that are deployed to enhance coverage and capacity in homes, enterprise environments, underserved areas, and other indoor and outdoor traffic hotspots [6]. This exponential data traffic growth is already setting an imperative requirement for Ultra Dense Networks (UDNs), identified as a key enabler for the 5th generation (5G) [1,5,[7][8][9]]. Indeed, 5G targets operation in higher frequencies together with smaller cell sizes to achieve the envisioned extreme Mobile Broadband (xMBB) [1,7].
To that end, UDNs are characterized by small cell deployments with intersite distances (ISD) of a few tens of meters for outdoor deployments (even shorter distances for indoor) and site densities exceeding 100 sites/km² in dense urban scenarios. This is in contrast to legacy 4th generation (4G) heterogeneous network deployments with typical site densities of less than 10 sites/km² and ISDs of a few hundred meters [1,5,7]. There is currently no commonly accepted definition of what network deployment constitutes a UDN. The definitions provided in the scientific literature have typically characterized UDNs in terms of cell density or cell density relative to active user density (see [10] and references quoted therein). Other UDN definitions promulgated by industry include that of a UDN being a network with a small cell deployed outdoors on every lamp post, or indoors with a spacing of less than 10 m [1]. In this study, we adopt the pragmatic viewpoint from [1], whereby UDNs are considered to be an evolution of legacy dense networks and small cell network deployments, with ISD of less than 100 m, and are projected to become more prominent after year 2020.
However, UDNs are creating new and significant challenges for mobile network operators, with network planning and optimization notably becoming increasingly complex with denser network deployments [1,7,8]. In this context, planning refers to the process of determining the number, location, and configuration of base stations (e.g., small cells) to provide wireless access to users (and things) while guaranteeing a certain targeted Quality of Service (QoS). In this process, dimensioning is the initial step used to solve the problem of estimating the number of base stations needed to meet the capacity needs of a given service demand volume [11]. Thereafter, more precise network planning is carried out to evaluate cell site locations and initial cell parameters, and eventually optimization procedures in live networks are used to continuously adjust cell parameters to further improve both coverage and capacity. However, in practical scenarios, dimensioning and site positioning are nontrivial problems because the services are heterogeneous, that is, have different QoS requirements, and the spatiotemporal distribution of the service demand is both nonuniform and dynamic. Furthermore, the challenges of site acquisition naturally scale with increased densification, obliging operators to consider leveraging base station sites (mostly small cells) available in unplanned (suboptimal) locations [1,4,12]. Additionally, the ongoing evolution of radio access technologies, together with the new radio access concepts and paradigms expected for 5G, is blurring the traditional boundary between planning and optimization tasks. Indeed, as per the discussion presented in [11], planning and optimization are iterative tasks that should be increasingly intertwined.
These thoughts are echoed by the authors of [13], who also highlight the need for a rethink of planning and optimization in the context of dense heterogeneous networks, emphasizing that effective planning attains an even distribution of the load among cells, a goal that in the opinion of the authors of [11] (and corroborated by the authors of this paper) is a valid way to enhance system performance.
This paper addresses the aforementioned challenges by proposing an optimization framework for planning of UDN, which is suitable for real-world deployments. The corresponding research problem can be stated as follows.
Research Problem. Determine the set of network topologies with a certain number of access points (within an interval of interest, i.e., minimum and maximum node density) that best matches a given spatial distribution of the service demand (in statistical terms) under a certain performance metric.
Thus, the contribution of this paper, associated with the previous research problem, can be summarized as follows.
Main Contribution. A single- and multiobjective optimization framework for planning of UDN deployments: the framework yields network topologies that can be optimized for any arbitrary spatial traffic distribution (STD) (hereafter, the terms "spatial traffic distribution" and "spatial service demand distribution" are used interchangeably) and for performance metrics such as spectral efficiency or cell-edge performance.
Additional Contributions. In addition, several other minor contributions include: (1) The comparative analysis of several bandwidth allocation policies in the context of network planning.
(2) A simple heuristic for planning of UDNs.
The numerical results from a real-world planning case (evaluated under a variety of conditions) reveal a number of interesting insights: (i) Bandwidth allocation strategies can facilitate the identification of optimized UDN topologies that may enable an operator to flexibly prioritize either system capacity or cell-edge performance.
(ii) The results from the benchmarking clearly indicate that, in the case of nonuniform STD, optimization is mandatory, as the performance of regular and user-deployed (random) topologies is poor, while quasioptimal performance with significant gains can be attained through the use of heuristic planning and optimization.
The rest of the paper is organized as follows: the next section presents the system model. The performance metrics and proposed optimization formulations are introduced in Section 3. In Section 4, a background and description of the planning case study are presented together with the description of the spatial service demand distributions, benchmarks, and parameters and assumptions used in numerical evaluations. Section 5 provides a concise analysis of the numerical results. Finally, the concluding discussions and overview of potential research directions are provided in Section 6.

System Model
As indicated previously, the goal is to plan an ultra dense cellular network composed of low-power BSs for a target service area. The service area is divided into A small area elements or pixels (in this paper, the terms "area elements" and "pixels" will be used interchangeably) in which the average received power can be assumed to be constant.
In this study, the downlink of an Orthogonal Frequency Division Multiple Access- (OFDMA-) based cellular network with system bandwidth B_sys is considered. To carry out the planning, it is assumed that a set of L candidate locations has been previously defined in the target service area. In each of these locations, a BS could be placed, and a maximum transmit power P_max is assumed.
The radio propagation, that is, the network geometry, is captured by the matrix G ∈ R^(A×L) (A area elements, L candidate locations) that indicates the average channel gain between each BS and area element. The vectors p_RS and p_D, both ∈ R^L, correspond to the transmit power of each BS in Reference Signals (RS) and data channels, respectively. The average RS received power can be calculated by means of the following expression:

R_RS = G ⊙ (1 · (p_RS ⊙ x)^T),    (1)

where 1 denotes the all-ones column vector in R^A. The operator ⊙ denotes Hadamard (pointwise) multiplication. The binary vector x ∈ {0, 1}^L indicates the allocation of a BS in the candidate locations, and hence, x is referred to as the "network topology" as it determines the number and location of BSs. Therefore, x is the planning (optimization) variable.
Hereafter, all the dependencies on x are omitted for the sake of clarity; for instance, R_RS(x) → R_RS in (1). R_RS(a, l) gives the average RS received power in the a-th pixel from the l-th BS. Cell selection, the association of each pixel to a serving BS, is based on the average RS received power. Therefore, the a-th pixel (the a-th row in R_RS) is served by cell l* if

l* = argmax_l R_RS(a, l).    (2)

The coverage pattern associated with each network topology is represented by the binary coverage matrices S and S̄, both in {0, 1}^(A×L). If the a-th area element is served by l*, then S(a, l*) = 1. S̄ is the binary complement of S. It is assumed that each area element is either served by one cell or out-of-coverage. The a-th area element is considered out-of-coverage if at least one of the following three conditions is not fulfilled:

(i) The RS received power is larger than a minimum value: R_RS(a, l*) ≥ P_Rx^min.
(ii) The Signal to Interference plus Noise Ratio (SINR) is larger than a threshold: Γ(a) ≥ γ_min.
(iii) The average channel gain G(a, l*) between the area element and its serving BS is larger than a minimum value G_UL^min (guaranteeing uplink feasibility).

The outage associated with a network topology is captured by the vector o ∈ {0, 1}^A. If the a-th area element is out-of-coverage, then o(a) = 1, and 0 otherwise.
A certain knowledge of the spatial distribution of the service demand is assumed. In practice, this is known by operators in statistical terms [14]. This information is stored in the vector Φ ∈ R^A. Φ can be regarded as a Probability Density Function (PDF) in two dimensions, and hence, it indicates the probability, in the event of a new user, that the user appears in the a-th pixel. Thus, Φ · 1 = 1.
In this work, full load is assumed to model the intercell interference, which is a reasonable assumption for planning purposes. Other models, such as load coupling [15], can easily be incorporated in the model, if needed. Thus, the vector Γ ∈ R^A representing the average SINR at each area element is given by

Γ = ((R ⊙ S) · 1) ⊘ (((R ⊙ S̄) · 1) ⊕ σ²),    (3)

where R = G ⊙ (1 · (p_D ⊙ x)^T) is the average received power in the data channels and σ² is the noise power. The operators ⊘ and ⊕ denote Hadamard (pointwise) division and addition. It is customary to define link performance as a nondecreasing function of the SINR. In this work, Shannon's bound is considered, and hence, the resulting spectral efficiency is stored in the vector H ∈ R^A, whose elements are calculated as follows:

H(a) = (1 − o(a)) · log₂(1 + Γ(a)).    (4)

In (4), the idea is to discard the contribution of the pixels that are out-of-coverage (by means of the factor 1 − o(a)), thus penalizing network topologies with significant coverage holes in the optimization procedure. The list of symbols is provided in Basic Notation in Notation for convenience.
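The model in (1)-(4) can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation: all sizes, gains, powers, and the single SINR-based coverage test are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A, L = 400, 15                        # pixels and candidate locations (toy sizes)
G = rng.uniform(1e-9, 1e-6, (A, L))   # average channel gains (linear scale)
p_rs = np.full(L, 0.1)                # RS transmit power per BS [W] (placeholder)
p_d = np.full(L, 1.0)                 # data-channel power per BS [W] (placeholder)
x = rng.integers(0, 2, L)             # binary network topology
noise = 1e-13                         # noise power sigma^2 [W] (placeholder)

# (1) average RS received power for every (pixel, BS) pair
R_rs = G * (p_rs * x)[None, :]
# (2) cell selection: each pixel served by the strongest RS power
serving = R_rs.argmax(axis=1)
S = np.zeros((A, L), dtype=bool)
S[np.arange(A), serving] = True
# (3) full-load SINR: serving data power over interference plus noise
R_d = G * (p_d * x)[None, :]
signal = (R_d * S).sum(axis=1)
interference = (R_d * ~S).sum(axis=1)
gamma = signal / (interference + noise)
# outage vector o: here simplified to the SINR condition (ii) only
o = gamma < 10 ** (-6.5 / 10)
# (4) Shannon-bound spectral efficiency, zeroed for out-of-coverage pixels
H = (~o) * np.log2(1.0 + gamma)
```

In a full implementation, the outage test would also include the RS-power and channel-gain conditions (i) and (iii).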

Performance Metrics for Radio Access Network Planning.
Generally speaking, planning is about determining the number and location of BSs in the service area. Evidently, the network deployment should be done such that the maximum benefit is obtained; that is, network capacity is maximized (more users) with minimal cost (less infrastructure deployment), while guaranteeing a certain level of coverage, QoS, and fairness. In order to address this problem by means of optimization, several metrics (and constraints) are required. In this work, the following objectives are considered.
(i) Number of BSs (f_1). In principle, the deployment should be done with the minimum possible number of BSs to minimize both the Capital Expenditure (CAPEX) and the energy consumption that is part of the Operational Expenditure (OPEX).
(ii) Network Capacity (f_2). This metric captures the average aggregate rate the network is able to deliver. Thus, f_2 represents a system-oriented performance indicator.
(iii) Cell-Edge Performance (f_3). This metric captures the performance in the weakest zones of the service area. Thus, f_3 is a user-oriented performance indicator and promotes fairness.

The definition of the previous metrics is given next. The number of BSs (f_1) in a network topology is simply the number of '1's in the corresponding x, and hence,

f_1 = x · 1.    (5)

From a planning point of view, it is important to consider the spatial distribution of the service demand. In other words, the planning should favor network topologies that provide more capacity to the zones of the service area where the traffic is more likely to appear. Given that this information is contained in the vector Φ, it can be used to weight the different pixels according to their importance; that is, pixels with more traffic are more important. Thus, the weighted spectral efficiency vector H_w ∈ R^A is defined as H_w ≜ H ⊙ Φ. Note that, indeed, the scalar η = H_w · 1 represents the expected spectral efficiency at area-element level because Φ is a PDF.
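As a toy numeric illustration of the weighting (the values are illustrative, not from the paper):

```python
import numpy as np

H = np.array([4.0, 2.0, 1.0])    # spectral efficiency per pixel [bps/Hz]
Phi = np.array([0.7, 0.2, 0.1])  # traffic PDF: a hotspot on the first pixel
H_w = H * Phi                    # weighted spectral efficiency, H_w = H ⊙ Φ
eta = H_w.sum()                  # expected SE at area-element level
# → eta = 4*0.7 + 2*0.2 + 1*0.1 = 3.3 bps/Hz
```

A high-SE pixel that rarely carries traffic contributes little to η, which is exactly what steers the optimization toward the demand hotspots.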
In cellular networks, a very important aspect is frequency reuse; that is, the system bandwidth can be reused in each cell. This is the most distinctive aspect of cellular networks and what allows these systems to provide radio access to a large number of users (and things). The way in which the bandwidth is allocated to the users largely determines the resulting system capacity and/or user satisfaction. In this sense, cell-edge performance [16] is a well-known, yet important, problem in OFDMA-based cellular networks that can negatively affect the user experience. Broadly speaking, users at cell edges are relatively more expensive in terms of radio resources, as their SINR is typically very low. In this work, this aspect is considered from the planning point of view, and consequently, two different bandwidth allocation strategies are considered and integrated in the performance metrics.
(1) Uniform Bandwidth Allocation (UBA). The objective is to evaluate the resulting aggregate capacity assuming that the bandwidth of each cell is equally distributed over its coverage area (pixels).
(2) Proportional Bandwidth Allocation (PBA). The objective is to evaluate the resulting aggregate capacity assuming that the bandwidth of each cell is distributed over its coverage area (pixels) in inverse proportion to the SINR or, equivalently, the spectral efficiency of the pixels, so that weaker pixels receive more bandwidth. The vectors B_u and B_p, both in R^A, indicate the bandwidth that would be allocated to each area element under the uniform and proportional bandwidth allocation, respectively. The uniform allocation is defined as

B_u = B_sys · (S · n),    (6)

where the vector n ∈ R^L contains the inverse of the number of pixels associated with each BS. The proportional allocation is as follows:

B_p(a) = B_sys · (1/H(a)) / Σ_{a' ∈ cell(a)} (1/H(a')).    (7)

Equation (7) divides the bandwidth of each cell among its area elements in inverse proportion to their spectral efficiency, which equalizes the resulting rates within the cell. Thus, the network capacity metric is defined as follows:

f_2 = (B ⊙ H_w) · 1.    (8)

Hereafter, superscripts "u" and "p" are used to indicate UBA and PBA, respectively, as in f_2^u and f_2^p. Cell-edge performance is defined, for planning purposes herein, as the aggregate rate of the worst 5% of the service area. Given the vector of pixel rates r = B ⊙ H and its sorted version r̂, where the sorting is in ascending order, the metric f_3 representing the cell-edge performance is given by

f_3 = Σ_{a=1}^{A_5%} r̂(a),    (9)

where A_5% ≜ ceil{0.05 · A}. Since f_2 and f_3 can be evaluated for both UBA and PBA, this study is able to utilize four possible objective functions for comparison purposes, namely, f_2^u, f_2^p, f_3^u, and f_3^p.
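The two policies and the metrics f_2 and f_3 can be sketched as follows. This is an illustrative reconstruction, not the authors' code; in particular, (7) is implemented as the rate-equalizing inverse-proportional split, which is the form consistent with the capacity/fairness trade-off reported in the results (the harmonic mean of the per-pixel spectral efficiencies never exceeds their arithmetic mean, so PBA capacity never exceeds UBA capacity).

```python
import numpy as np

def allocations(S, H, B_sys=20e6):
    """UBA (6) and PBA (7) bandwidth per pixel; S: A×L serving matrix, H: SE."""
    counts = S.sum(axis=0).astype(float)           # pixels served per cell
    n = np.divide(1.0, counts, out=np.zeros_like(counts), where=counts > 0)
    B_u = B_sys * (S @ n)                          # (6) equal split within a cell
    inv = np.divide(1.0, H, out=np.zeros_like(H), where=H > 0)
    cell_inv = S * inv[:, None]                    # 1/H placed in serving column
    tot = cell_inv.sum(axis=0)                     # per-cell sum of 1/H
    share = np.divide(cell_inv, tot, out=np.zeros_like(cell_inv), where=tot > 0)
    B_p = B_sys * share.sum(axis=1)                # (7) rate-equalizing split
    return B_u, B_p

def f2(B, H, Phi):
    return float((B * H * Phi).sum())              # (8) expected network capacity

def f3(B, H):
    r = np.sort(B * H)                             # pixel rates, ascending
    k = int(np.ceil(0.05 * r.size))
    return float(r[:k].sum())                      # (9) worst-5% aggregate rate
```

Under UBA every pixel of a cell gets the same bandwidth (rates track SE); under the inverse-proportional split all pixels of a cell obtain the same rate, which raises f_3 and fairness at the cost of f_2.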

Optimization Problem Formulation.
In this work, two different optimization formulations are considered. They can be used depending on the network planning strategy of the operator. On the one hand, if the network operator's target is to maximize the network aggregate capacity (f_2), a multiobjective problem is proposed, as this metric is in conflict with f_1; that is, generally speaking, the denser the network, the higher the capacity due to the more aggressive frequency reuse. On the other hand, if the operator's target is to provide a more homogeneous coverage, that is, less variability at pixel level, a single-objective problem is proposed with f_3 as objective function and the required number of cells N_target as an input.

Multiobjective Optimization.
Multiobjective optimization [17] can be used when multiple conflicting objectives need to be simultaneously optimized (a brief introduction to multiobjective and evolutionary optimization is presented in Appendix A). This is the case of f_1 and f_2 in the planning framework presented herein. Thus, in order to obtain network topologies featuring the best trade-off between the number of BSs (f_1) and network capacity (f_2), the following multiobjective optimization problem is proposed:

minimize [f_1(x), −f_2(x)],    (10a)

subject to:

(1/A) · (1 − o) · 1 ≥ α_COV,    (10b)
x ∈ {0, 1}^L,    (10c)
N_min ≤ x · 1 ≤ N_max.    (10d)

In problem (10a)-(10d), constraint (10b) guarantees that a minimum fraction (α_COV) of the area elements has coverage. Constraint (10c) defines the search space, that is, the domain of the variable x. In practice, and due to the nature of the environments in which UDNs are envisioned to be deployed, network operators usually have an estimate of the number of BSs that is required/feasible, and hence, the optimization can be further localized. This is accomplished by means of constraint (10d), where these limits are set.
Problem (10a)-(10d) is a combinatorial problem belonging to the class NP-complete. The search space defined by the optimization variable x (the total number of network topologies) is a set of size 2^L − 1, where L, as indicated, is the number of candidate locations. Even for a small set of candidate locations, say L = 15, the number of network topologies would be larger than 3.2 × 10^4, which makes it infeasible to compare all possible topologies by means of time-consuming and computationally heavy system-level simulations. For this reason, the proposed planning approach is a convenient strategy. The objective space (or image) is defined by the possible values of the objective functions. Due to the mathematical structure of f_2 and f_3, the objective space is highly nonlinear, nonconvex, and full of discontinuities and local optima [18]. Thus, a multiobjective evolutionary algorithm (MOEA) [19], the Nondominated Sorting Genetic Algorithm II (NSGA-II) [20], is used to address (10a)-(10d). A brief description is provided in Appendix A.
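The search-space size can be checked directly (a quick sanity check, not from the paper):

```python
# Every nonempty subset of the L candidate locations is a distinct
# network topology, so the search space has 2^L - 1 elements.
L = 15
n_topologies = 2 ** L - 1
# → 32767, already more than 3.2 × 10^4 topologies for just 15 locations
```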

Single-Objective Optimization.
Single-objective optimization is proposed if planning needs to be carried out following a max-min approach, such as the maximization of the aggregate rate in the area elements with weak coverage. Thus, the problem of maximizing cell-edge performance (f_3), for planning, can be written as follows:

maximize f_3(x),    (11a)

subject to:

(1/A) · (1 − o) · 1 ≥ α_COV,    (11b)
x ∈ {0, 1}^L,    (11c)
x · 1 = N_target.    (11d)

Problem (11a)-(11d) and its constraints are similar to (10a)-(10d), except that it contains only one objective function. Constraints (11b) and (11c) are equal to (10b) and (10c), respectively. Constraint (11d) indicates that only solutions with N_target BSs are accepted. This is so because, in general, the metric f_3 grows with the number of deployed BSs, and hence, fixing their number makes the comparison meaningful.

Deployment Scenario.
Network densification as planned by operators is both difficult and highly constrained in certain scenarios. This includes the fast-expanding high-density urban and periurban settlements in emerging market areas. Indeed, 90% of the urban population growth by 2050 is expected to be concentrated in Asia and Africa. These settlements already have population densities typically in the range of 40,000-200,000 people/km² [21]. Mobile broadband networks continue to be the primary means for wireless connectivity in these densely populated areas [2,22], which makes them a highly compelling target for the deployment of UDN. Unfortunately, some challenges related to UDN deployment are further exacerbated in these areas due to the limited availability of legacy infrastructure for small cell backhaul, energy scarcity, difficulties in site acquisition, the need for securing network assets at sites, and relatively low Average Revenue Per User (ARPU) compared to more developed economies [22].
One of the interesting approaches is to leverage third-party nonoperator entities, such as individual end users, households, microenterprises, and public venue owners, to deploy shared-access small cells that provide service as an integral part of the operator's network. An example is the neighborhood small cell concept by Qualcomm, promoting the use of privately deployed residential small cells as shared-access points [12]. A key distinction between third-party deployments and operator-led deployments is that, in the former case, the small cell deployments are unplanned; that is, the locations in which small cells are deployed are not originally defined by the operator's network planning procedures. However, although small cells are deployed autonomously by third parties, the operator retains remote management via the core network and the use of Self-Organizing Networks (SON) [23,24]. Therefore, to contextualize the proposed UDN planning and optimization framework in a realistic setting, we consider a case study for UDN in a high-density urban settlement. To that end, we use the Hanna Nassif ward in Dar es Salaam, Tanzania, as the planning case study. Hanna Nassif has an estimated population density of 40,000 people/km². The approximately 1 km² Hanna Nassif area includes around 3000 buildings (mostly 3-6 m tall) and is located on a terrain with a topographical difference of 19 m. A three-dimensional (3D) representation of this scenario is shown in Figure 1. We assume that all candidate locations (indicated as white-blue dots) are outdoor at rooftop level. Rooftop-deployed shared-access small cells provide improved outdoor coverage compared to indoor-deployed small cells and enable line-of-sight (LOS) or near-LOS (nLOS) conditions for the implementation of high-capacity wireless backhauling [25].
Moreover, the rooftop is also a convenient location for off-grid operation of the small cells through energy harvesting from ambient renewable energy sources (solar, wind, etc.) [26].

Parameters and Assumptions.
The radio coverage estimations are based on realistic 3D building vectors and topographical data (see Figure 1 for the simulation area) and are evaluated using the deterministic dominant path model implemented in the WinProp propagation modeling tool [27]. The simulation parameters and assumptions follow the IMT-Advanced guidelines [28] and are listed in Table 1.
The building penetration losses are modeled explicitly to account for outdoor-to-indoor propagation; it is assumed that all buildings' outer walls are made of one material (10 cm brick), and in-building losses (e.g., due to internal walls, doors) are approximated by an exponential decay model. Furthermore, as noted previously, the case study area includes a large number of densely packed small houses or buildings of relatively low height built on land with significant topographical differences over short distances. The deployment of rooftop small cells in this environment creates significant multipath propagation, waveguiding effects, and outdoor-to-indoor propagation that can be suitably captured by a ray-tracing tool like WinProp (see, e.g., [29]). Indeed, it is noted that ray tracing provides significant accuracy improvements compared to classical models, but with reduced generality in terms of scenario selection [30]. The precision (pixel resolution) and the inclusion of time-varying nonstationary objects (e.g., cars, pedestrians) may further influence the accuracy and computational effort of ray-tracing computations. To that end, the pixel resolution in our study is 1 × 1 m², which provides the right trade-off between modeling accuracy and feasible computational time. However, in the WinProp environment we do not include the scattering effects of nonstationary objects, for the sake of simplicity in capturing the scenario. Moreover, the use of the same ray-tracing path loss results for all topologies considered in the study ensures that the absolute accuracy of the ray tracing does not influence the performance comparisons between the topologies. The parameters for the calibration of the NSGA-II algorithm are also shown in Table 1. Calibration parameters and complexity aspects are provided in Section 5.4.

Spatial Traffic Distributions.
Two different spatial traffic distributions have been considered for the numerical evaluations presented in the next section: uniform and nonuniform. Uniform STD implies that the service demand is uniformly distributed in the coverage area. Nonuniform STD implies that the traffic is more likely to appear in certain areas (hotspots), which is representative of how traffic is commonly distributed in practice. The system model and optimization framework presented herein are able to consider any arbitrary STD by means of the vector Φ, as it is explained in Section 3.1. Figure 2 shows a representation of the nonuniform STD used in numerical evaluations. A 2-dimensional representation (map) is provided in Figure 2(a), where red areas correspond to hotspots. Figure 2(b) shows the CDF of the probability at pixel level in order to provide a perspective of the level of irregularity of the nonuniform STD. The uniform STD is also indicated by the vertical dashed line. This input, as it will be seen shortly, has a profound impact on the optimization process and leads to significantly different network topologies.
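A nonuniform Φ of this kind can be sketched as a normalized hotspot map; the grid size, hotspot centres, and intensities below are arbitrary placeholders, not the distribution of Figure 2:

```python
import numpy as np

nx = ny = 50                              # toy 50×50 pixel grid
xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
Phi = np.ones((ny, nx))                   # uniform background demand
for cx, cy in [(10, 40), (35, 15)]:       # illustrative hotspot centres
    Phi += 20.0 * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 30.0)
Phi = (Phi / Phi.sum()).ravel()           # normalize so that Φ · 1 = 1 (a PDF)
```

Any such normalized map can be plugged into the framework unchanged, since the metrics only require Φ to sum to one.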

Benchmarks.
In order to clarify the merit of the proposed optimization framework, several benchmarks have been considered.
(1) Random Deployments. As explained earlier, open-access outdoor small cells (randomly deployed by users) would provide a sustainable (cost efficient) approach to network densification. Thus, random topologies (located at candidate locations) are considered.
(2) Regular Deployments. Regularly deployed access points, that is, geometrically regular cells, are the natural choice for uniform STD and are often considered as a baseline. Two regular topologies (x_1 and x_2) are considered and are shown in Figure 3.

[Algorithm 1 (heuristic planning): input, a performance metric and the required number N ≤ L of access points; output, x_H, a suboptimal network topology for that metric with x_H · 1 = N.]

[Figure 4: Pareto fronts obtained by solving problem (10a)-(10d), together with the regular and heuristic topologies, for uniform and nonuniform service demand distributions.] Both uniform and proportional bandwidth allocation are considered in each case. The conflicting nature of f_1 and f_2 is evident; that is, the larger the number of BSs, the larger the network capacity, which is expected due to the higher frequency reuse. The result shows that uniform bandwidth allocation provides significantly better network capacity compared with proportional bandwidth allocation; see (6) and (7). In the figure, the difference (≈60%) in terms of f_2 between optimized topologies with 180 BSs (for the case of uniform STD) is shown as a reference. This result, although expected, is important because it is well aligned with system-level simulations [31], and more precisely with different scheduling policies (Round Robin or Proportional Fair), where it is well known that fairness is always traded off against spectral efficiency. Thus, the performance metrics proposed herein for planning capture the same behavior.
Analogously, problem (11a)-(11d) can be solved for both uniform and proportional bandwidth allocation; however, only one solution is obtained in each case (since single-objective optimization can be regarded as a particular case of multiobjective optimization, (11a)-(11d) was also solved using NSGA-II). In order to compare optimized topologies, solutions of problem (10a)-(10d) with f_1 = 180 have been selected, as this value was also used in (11a)-(11d), where N_target = 180. Hereafter, the notation used to describe the four optimized network topologies is indicated in Notation Used to Refer to the Optimized Topologies in Notation.
The comparison is shown in Figure 5. The figure shows the (normalized) performance of each network topology and objective function, that is, f_2^u, f_2^p, f_3^u, and f_3^p. As can be seen, each of these topologies maximizes one metric, and hence, they are all Pareto efficient. Both uniform and nonuniform spatial traffic distributions are considered. It is worth noting that, in the case of nonuniform spatial traffic distribution, the performance of each optimized topology with respect to the metrics for which it is not optimized becomes notably poor. Since uniform STD is not common in practice, this result confirms that optimization is highly needed.
A pictorial representation of these optimized topologies is shown in Figure 6. It is clear that they are different, as each of them is optimized for a certain metric. Therefore, it can be concluded that, in general, each performance indicator (representing different operator interests) requires an independent optimization. Note also the spatial correlation between the BS density of the optimized topologies (for the case of nonuniform STD) and the nonuniform traffic pattern considered for the numerical evaluations (see Figure 2(a)).
It is worth noting that, in the case study considered herein (L = 368 and f_1 = 180), there are more than 2 × 10^109 possible network topologies (the number of combinations of L = 368 candidate locations taken f_1 = 180 at a time).
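This count can be verified directly (a quick sanity check, not from the paper):

```python
import math

# Number of distinct topologies with exactly 180 BSs chosen
# from 368 candidate locations: C(368, 180).
n = math.comb(368, 180)
assert n > 2 * 10 ** 109   # more than 2 × 10^109 topologies
```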
In order to justify and explain the previous differences in performance, Figure 7 provides an alternative point of view. The results correspond to the case of uniform STD, but the same analysis applies for the nonuniform case. The figure shows the statistics at pixel level of the resulting spectral efficiency (Figure 7(a)) and average rate (Figure 7(b)), both normalized. Clearly, there is no major difference in terms of the statistics of the resulting spectral efficiency; only x*_2 presents a slightly better distribution. This is somewhat expected considering the dense nature of the topologies and the relatively homogeneous propagation characteristics of the case study; see Figure 1. However, the impact of the different bandwidth allocation policies is noticeable, as can be seen in Figure 7(b). The topologies x*_2 and x*_3 clearly enhance not only the fairness among area elements, but also cell-edge performance (despite the fact that, for instance, f_2 is a metric measuring network capacity). The results allow us to conclude that the bandwidth allocation policies, introduced herein in the context of planning, facilitate the identification of network topologies that enhance either system capacity or cell-edge performance (better fairness), thus demonstrating the effectiveness and flexibility of the proposed planning framework. So far, the optimized topologies have been compared among themselves; next, benchmarking is presented in order to provide a quantitative perspective of the merit of the proposed planning framework.

Benchmarking: Random Deployments.
Optimized topologies are compared with random deployments. In order to make the comparison fair, 1000 random deployments with 180 BSs (located at candidate locations) have been considered and evaluated in terms of f2 and f3 under both uniform and proportional bandwidth allocation. As an additional measure of fairness, Jain's index (usually used to determine whether users or applications are receiving a fair share of system resources) [32] has also been considered, taking the resulting rate at pixel level as input. Thus, if the resulting rate in the a-th pixel is r_a and A is the number of pixels, Jain's index is given by J = (∑_{a=1}^{A} r_a)² / (A ∑_{a=1}^{A} r_a²), where J ∈ [1/A, 1], with 1/A and 1 being the worst and ideal cases, respectively. It is worth mentioning that the same set of random topologies is used in each case. The comparisons for uniform and nonuniform STD are shown in Figures 8 and 9, respectively. Figures 8(a), 8(b), 9(a), and 9(b) show the empirical Cumulative Distribution Function (CDF) of f2 and f3 for the random deployments (thin solid lines). Figures 8(c), 8(d), 9(c), and 9(d) show the CDF of Jain's index for the random deployments (thin solid lines). In all cases, the performance of the corresponding optimized topology is also indicated (vertical thick dashed lines). Since the analysis of Figures 8 and 9 is similar, the focus is placed initially on Figure 8; next, relevant particularities of Figure 9 are discussed.
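The Jain's index computation described above can be sketched as follows (a minimal sketch; pixel rates are taken as a plain list of numbers):

```python
def jains_index(rates):
    """Jain's fairness index over per-pixel rates r_1..r_A.

    Returns a value in [1/A, 1]: 1/A when a single pixel gets all
    the rate, 1 when every pixel gets exactly the same rate.
    """
    a = len(rates)
    total = sum(rates)
    return total * total / (a * sum(r * r for r in rates))

# Perfectly fair allocation -> index 1.0
print(jains_index([2.0, 2.0, 2.0, 2.0]))  # 1.0
# All rate concentrated in one pixel -> 1/A = 0.25
print(jains_index([8.0, 0.0, 0.0, 0.0]))  # 0.25
```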
From Figure 8(a), it can be seen that the gain achieved by the optimized topologies with respect to the random deployments is, on average, 14% and 19% for uniform and proportional bandwidth allocation, respectively. In practice, these gains can be even larger, since in random topologies (e.g., deployed by customers) BSs can potentially be anywhere, without respecting a minimum distance as has been set for the candidate locations used herein. The difference (≈60%) between the optimized topologies themselves, x⋆_{f2}^u and x⋆_{f2}^p, is also shown. This result makes evident the significant benefit that can be achieved by means of proper UDN planning. The same behavior has been verified for cell-edge performance (f3), although with greater gains, as can be seen in Figure 8(b). Thus, it can be concluded that if the network operator targets maximizing cell-edge performance, planning is mandatory. Gains with respect to random deployments are, on average, 68% and 99%, while the cell-edge performance gain obtained by using the proportional bandwidth allocation is 96% (between optimized topologies). In this manner, and for the case of study, it would be possible to trade a gain of 96% in terms of f3 for a reduction of 60% in terms of f2, or vice versa. Figures 8(c) and 8(d) validate that, by means of the proportional bandwidth allocation policy, the overall system fairness (measured in terms of Jain's index) is significantly enhanced for both f2 and f3. It should be remarked that the gains in terms of Jain's index are high (25% and 32%) and that, in most cases, the optimized topologies provide better fairness than the random deployments, even though Jain's index is not the objective function under optimization.
As mentioned, the analysis of Figure 9 is similar, as are the obtained results, qualitatively speaking. However, the optimization gains are notably larger when the nonuniform service demand distribution is considered. This is expected because the uniform STD is indeed the worst case from the optimization point of view, which is well aligned with the information-theoretic intuition that the uniform distribution maximizes the entropy. Thus, both situations are presented herein, making clear the merit and potential benefit of the proposed optimization framework, even in the worst case.

Benchmarking: Regular Deployments and Heuristic Planning.
In order to provide another quantitative perspective of the merit of the proposed scheme, a comparative assessment with regular deployments and network topologies obtained through basic heuristic planning, in terms of network capacity (f2), has also been carried out. The results are shown in Figure 10. For each case, both bandwidth allocation policies (uniform and proportional) have been taken into account. Similarly to the benchmarking against random topologies, larger gains (between 5% and 14%) are consistently obtained when the nonuniform STD is considered (the case of practical interest). This is expected, as regular deployments are not likely to be effective for nonuniform STDs. However, the gains even in the worst case (uniform STD) range from 1% to 5%. It is worth mentioning the dense nature of the deployment considered for this comparison: 180 BSs in the target area (resulting in ≈514 access points per km²). Thus, the proposed optimization formulation succeeds in finding near-to-optimal and Pareto-efficient network topologies under different bandwidth allocation strategies and spatial traffic distribution conditions. The effectiveness of the optimization comes from the fact that the objective functions take into account (1) the spatial traffic distribution (prioritizing the topologies providing better capacity where the traffic is more likely to appear) and (2) the radio propagation characteristics obtained from every single candidate location (favoring better wireless links with serving access points and more isolation among interfering cells). As mentioned earlier, other performance metrics (e.g., Jain's index) can also be considered depending on the operator's needs and interests. Finally, it is worth mentioning the remarkable performance of the heuristic planning (Algorithm 1), which can also be used if optimization tools are not available; in that sense, it can also be considered as a part of the UDN planning framework presented herein. All in all, the previous results show (and confirm) that effective planning is strongly recommended when it comes to UDNs. The evaluation and benchmarking presented provide evidence of the merit, flexibility, and effectiveness of the proposed framework under several conditions.

Complexity and Calibration Aspects.
To close this section, complexity and calibration aspects are discussed. According to [20], the complexity of NSGA-II is O(M · N²), where N and M correspond to the population size and the number of objective functions, respectively. In our case, M = 2, and N can be set depending on the scale of the problem. However, there is a consensus about the size of the
(e) Legend. Figure 9: Comparison of the optimized and random topologies with 180 base stations in terms of network capacity (f2) and cell-edge performance (f3) for both uniform (UBA) and proportional (PBA) bandwidth allocation. Jain's index is also included as an additional reference. Nonuniform spatial traffic distribution is considered. Legend: x⋆_{f2}^u and x⋆_{f2}^p denote the optimal topologies with respect to f2 under UBA and PBA, respectively; x⋆_{f3}^u and x⋆_{f3}^p denote the corresponding topologies for f3.
population when using genetic algorithms, such as NSGA-II: during calibration, populations of 20 up to 100 individuals can be used. Values greater than 100 hardly achieve significant gains, as the same global convergence is obtained [33]. In evolutionary algorithms, a termination criterion is usually needed. One metric used to measure the level of convergence is the hypervolume indicator [19], which reflects the size of the volume dominated by the estimated Pareto Front. In this work, the search is terminated if the improvement in the hypervolume is smaller than a threshold (0.001%) after a certain number of generations (20 in this study). Finally, the crossover and mutation probabilities are set to 1 and to the reciprocal of the solution length (one mutation per solution, on average), respectively, as indicated in Table 1.

nowadays and in future 5G systems, where small cells are of utmost importance. The framework studied herein is rich and admits several future research directions. Considering additional objective functions is definitely a study item, as well as other optimization formulations that could be specific to certain use cases, such as Downlink-Uplink Decoupling. In addition, upgrading fixed/existing deployments is of great practical interest, as are the adaptation of coverage patterns and power optimization. Finally, planning studies for UDNs operating at higher frequency bands are also on our roadmap.

A. Multiobjective and Evolutionary Optimization
Multiobjective optimization is the discipline that focuses on the resolution of problems involving the simultaneous optimization of several conflicting objectives. The target is to find a subset of good solutions X⋆ from a set X (the domain of an optimization variable x) according to a set of criteria F = {f_1, f_2, ..., f_{|F|}}, whose cardinality |F| is greater than one.
Since the objectives are in conflict, improving one of them implies worsening another. Consequently, in the context of multiobjective optimization, there is no single optimal solution but rather an optimal set X⋆. A central concept in multiobjective optimization is Pareto efficiency. A solution x⋆ ∈ X is Pareto-efficient if and only if there does not exist a solution x ∈ X such that x dominates x⋆. A solution x_1 is preferred to (dominates) another solution x_2, denoted x_1 ≻ x_2, if x_1 is better than x_2 in at least one criterion and not worse in any of the remaining ones. The set X⋆, composed of all the Pareto-efficient solutions, is called the optimal nondominated set, and its image is known as the Optimal Pareto Front (OPF). Figure 11 illustrates this idea: the nondominated solutions (in blue) are the ones not dominated by any other, and hence they provide trade-offs that cannot be improved upon in all criteria simultaneously.
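The dominance relation defined above can be written directly as a predicate over objective vectors (a minimal sketch, stated here for maximization; for minimized criteria the inequalities flip):

```python
def dominates(fx1, fx2):
    """True if objective vector fx1 dominates fx2 (maximization):
    fx1 is no worse in every criterion and strictly better in at
    least one."""
    no_worse = all(a >= b for a, b in zip(fx1, fx2))
    strictly_better = any(a > b for a, b in zip(fx1, fx2))
    return no_worse and strictly_better

print(dominates((3, 5), (3, 4)))  # True: equal in f1, better in f2
print(dominates((3, 5), (4, 4)))  # False: a trade-off, neither dominates
```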
In multiobjective optimization, it is unusual to obtain the OPF due to problem complexity; instead, a near-optimal or estimated Pareto Front (PF) is found. Interested readers are referred to [17] for an in-depth discussion. In general, solving multiobjective (combinatorial) problems, such as (10a), (10b), (10c), and (10d), is very difficult [17]. Indeed, optimal solutions for problems belonging to the class NP-complete cannot be found in polynomial time. For this reason, heuristic-based algorithms are commonly used to address this type of problem, but unfortunately, heuristic solutions are typically problem-specific, and hence their use is limited. Thus, a more general class of solutions, the so-called metaheuristics, has become very popular and an active research field [18]. Metaheuristics can be used to solve very general classes of multiobjective optimization problems. These methods allow (1) finding good solutions by efficiently exploring the search space and (2) operating efficiently with multiple criteria and a large number of design variables. In addition, no assumptions on the mathematical structure of the objective functions (e.g., convexity or continuity) are required.
Multiobjective evolutionary algorithms (MOEAs) [19] are metaheuristics that fulfill the previous goals. MOEAs are population-based metaheuristics that simulate the process of natural evolution and they are convenient due to their general-purpose nature. One MOEA, the Nondominated Sorting Genetic Algorithm II (NSGA-II) [20], is well-recognized as a reference in the field of evolutionary optimization as it has desirable features, such as elitism (the ability to preserve good solutions), and mechanisms to flexibly improve convergence and distribution. Interested readers are referred to [17,19], and the references therein, for an in-depth treatment of multiobjective and evolutionary optimization.

B. Heuristic Network Planning for Small Cell Deployments
Heuristic solutions have been considered for solving complex optimization problems in many fields, including engineering. Since planning and topology optimization for UDN are NP-complete problems, heuristics are naturally an option. In this appendix, a simple heuristic for UDN planning is presented. The idea is adapted to planning from the Minimum Distance Algorithm used in Cell Switch-Off [34]. The pseudocode is shown in Algorithm 1. The basic idea is to start by identifying the best single access point (a network with only one cell) for a given metric f, Line (1). Then the loop (Lines (2) to (10)) iteratively finds the best new candidate location to add to the previously selected ones, growing the network by one cell per iteration; at each step, the loop only has to search over the remaining candidate locations. The loop is repeated until the target number of locations is selected. In Line (4), Ham corresponds to the Hamming distance between two network topologies, which guarantees that the previously selected cells remain fixed at each iteration. In Line (5), f(x) corresponds to the performance of the network topology x in terms of the metric f.
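A minimal sketch of this greedy procedure in Python (the metric f and the toy coverage function below are illustrative placeholders, not the paper's actual objective; Algorithm 1 itself operates on topologies encoded as binary vectors):

```python
def greedy_planning(candidates, n_cells, f):
    """Greedy heuristic in the spirit of Algorithm 1: starting from
    the single best site, repeatedly add the candidate location that
    maximizes the metric f over the topology built so far."""
    selected = []
    for _ in range(n_cells):
        # Keep the previously chosen sites fixed; try each remaining one.
        remaining = [c for c in candidates if c not in selected]
        best = max(remaining, key=lambda c: f(selected + [c]))
        selected.append(best)
    return selected

# Toy example: 1-D users, each covered if within unit range of a site.
users = [0.0, 1.0, 5.0, 6.0]

def coverage(sites):
    # Metric: number of users within unit range of any selected site.
    return sum(any(abs(u - s) <= 1.0 for s in sites) for u in users)

print(greedy_planning([0.5, 3.0, 5.5], 2, coverage))  # [0.5, 5.5]
```

Each iteration is a one-step lookahead: with the partial topology fixed, only the marginal gain of each remaining candidate is evaluated, which is what keeps the search linear in the number of candidates per added cell.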