UEP Concepts in Modulation and Coding

The first unequal error protection (UEP) proposals date back to the 1960s [1], but with the introduction of scalable video coding, UEP has developed into a key concept for the transport of multimedia data. This paper presents an overview of recent approaches that realize UEP properties in the physical transport, especially in multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning is presented as a counterpart to puncturing for flexible bit-rate adaptation, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and further necessary properties of the parity-check matrix for providing UEP are pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially for controlling the check-node profile.

Solutions with an irregular variable-node profile follow in Section V, and a short note on modifying the check-node profile by pruning completes the treatment of UEP options in Section VI, followed by conclusions in Section VII.
Note that, whenever error-ratio performances are shown, they are plotted over $E_s/N_0$ in order to really observe the UEP properties with varying channel signal-to-noise ratios. For LDPC codes, however, we may still use $E_b/N_0$ scales; since there is no notion of local rates, the overall rate is used, and these figures thus just use renormalized $E_s/N_0$ scales that preserve the SNR spacing of all performance curves.
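For reference, the two scales are related (for $m$ bits per modulation symbol and overall code rate $R$) by $E_s/N_0 = R\,m \cdot E_b/N_0$, i.e., a constant shift of $10\log_{10}(R\,m)$ dB, which is exactly the kind of renormalization that preserves the SNR spacing between curves.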

II. UEP AND RATE DISTORTION
Before we actually discuss the different UEP solutions, we briefly address how source-coding qualities, given by spatial and temporal resolution, should be related to signal-to-noise ratio margin separations or error rates. We start from a work by Huang and Liang [5], who relate a distortion measure to error probabilities. However, we will conclude that for the codes studied here, such a treatment is not suitable: which video quality steps (spatial and temporal) are provided at which SNR steps is at the discretion of the provider and essentially a free choice.
Huang and Liang [5] simplify the treatment by relating MPEG I, P, and B frames to protection classes with different error probabilities. This is, of course, only addressing temporal resolution.
As distortion measure, the mean squared error is used, formulated as
$$D = \sum_{i=1}^{L} \frac{S_i}{S}\left(E_i + A_i P_i\right), \qquad (1)$$
where $L$ is the number of protection classes (layers), $S$ is the total number of bits in the source data, and $S_i$ are the corresponding numbers of bits in the different classes. $E_i$ is the distortion introduced by the source-coding layer itself, without considering errors added by the channel, whereas $A_i P_i$ reflects the influence of channel errors: $P_i$ is the channel bit-error rate and $A_i$ describes the sensitivity of the $i$th source-coding layer to bit errors.

For a rate-distortion relation, Huang and Liang write the total rate as
$$R = \sum_{i=1}^{L}\left(R_i^{(S)} + R_i^{(C)}\right), \qquad (2)$$
where $R_i^{(S)}$ and $R_i^{(C)}$ denote the source-coding bit-rate and the added redundancy, respectively, of the $i$th layer.
This treatment is reasonable for rate-compatible punctured convolutional codes [2], which result in finite error rates. Capacity-achieving codes, however, lead to a strong on-off characteristic due to the waterfall region in their BER curves. For such coding schemes, the SNR thresholds define certain quality steps that are made available to an end device.
Equation (1) would then only represent the quality steps provided by the source coding, since $P_i$ would take on almost only the extreme values of zero and 0.5. In the case of capacity-achieving codes, it thus appears more suitable to simply relate source-coding quality steps to classes, and these in turn to SNR steps of the UEP channel coding. The SNR steps are then realized either by different code rates of a Turbo or LDPC coding scheme or, alternatively, by bit-allocation and/or hierarchical modulation together with channel coding at identical code rates. Combinations of modulation-based realizations of different protection levels and those based on codes with different protection levels are, of course, also possible. The quality steps provided by the source coding, as well as the SNR steps provided by the channel coding, are then a choice of service and network providers.
In the following, we will describe options that we investigated to realize UEP in multicarrier hierarchical modulation, Turbo-, and LDPC coding. These schemes will prove to be very flexible, allowing the realization of arbitrary SNR level increments between quality classes.

III. ACHIEVING UEP WITH MULTICARRIER BIT LOADING
We begin our treatment with modulation-based UEP realizations, starting from hierarchical modulation without bit-loading, followed by bit-loading, and finally combining both concepts in bit-loaded hierarchical multicarrier modulation. We use different bit-loading algorithms to give a flavor of the possible options, although space limitations do not allow us to study all UEP modifications of known bit-loading algorithms.

A. Hierarchical modulation
In hierarchical modulation, also known as embedded modulation [6], symbols with unequal priorities can be embedded in each other, thereby creating different Euclidean distances $d_j$ between the priority classes $j$. The margin separations between these classes can easily be adjusted through the ratios of the constellation distances $d_j$. There are different hierarchical constellation constructions in the literature, e.g., [6], [7]. For implementation convenience, however, we have selected the construction in [8] as shown in Fig. 1. In this figure, we assume 3 different classes $L_j$, $j \in \{0, 1, 2\}$, and the margin separations between the classes are fixed to 3 dB; in Fig. 1.a the hierarchy is embedded in a 64-QAM (L2), whereas in Fig. 1.b it is embedded in a non-square 8-QAM (L2).
Figure 2 depicts the bit-error ratios in the case of AWGN for a fixed hierarchical 2/4/8-QAM modulation (as defined in Fig. 1.b). Figure 2 also shows the comparison between the AWGN and a Rayleigh fading channel¹; the 3-dB margin is strictly preserved in the AWGN case.
However, in the case of a Rayleigh fading channel, this margin becomes wider, e.g., almost 6 dB at a SER (symbol-error ratio) of $2 \cdot 10^{-3}$. Nevertheless, the order of the classes and the relative margin separations are roughly preserved. The overall system performance deteriorates due to the fixed modulation size and the fixed power allocation. Hence, further adaptation to the channel conditions, using adaptive modulation and power allocation, is an important measure to keep the margin separation and an acceptable performance, as will be discussed in detail in the next section. (In the simulations of Fig. 2, the three classes with a 3-dB margin separation were placed with a total of 6144 bits on 2048 subcarriers.)
¹The channel is modeled as independent time-invariant Rayleigh fading composed of $\Lambda = 9$ different paths (echoes); each path has its own amplitude $\beta_l$, delay $\tau_l$, and random uniform phase shift $\theta_l \in [0, 2\pi)$, i.e., $h(\tau) = \sum_{l=0}^{\Lambda-1} \beta_l p_l e^{j\theta_l} \delta(\tau - \tau_l)$; $p_l$ follows an exponentially decaying power profile [10].
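To illustrate the distance-ratio principle, the following sketch maps high- and low-priority bit pairs onto a generic two-level hierarchical 16-QAM in which the ratio $d_1/d_2$ sets the margin between the two classes. This is an illustration only, not the specific construction of [8], and all names are illustrative.

```python
import numpy as np

def hierarchical_16qam(bits_hp, bits_lp, d1=2.0, d2=0.5):
    """Map high-priority (HP) and low-priority (LP) bit pairs onto a
    two-level hierarchical 16-QAM symbol.

    bits_hp, bits_lp : arrays of shape (N, 2) with entries in {0, 1}
    d1 : half-distance between the four HP quadrant centers
    d2 : half-distance of the LP refinement inside each quadrant
    The ratio d1/d2 controls the margin between the two classes.
    """
    # HP bits select the quadrant (a QPSK constellation scaled by d1)
    hp = (1 - 2 * bits_hp[:, 0]) * d1 + 1j * (1 - 2 * bits_hp[:, 1]) * d1
    # LP bits refine the point inside the quadrant (a QPSK scaled by d2)
    lp = (1 - 2 * bits_lp[:, 0]) * d2 + 1j * (1 - 2 * bits_lp[:, 1]) * d2
    return hp + lp

# Example: 4 symbols with distance ratio d1/d2 = 4
rng = np.random.default_rng(0)
bhp = rng.integers(0, 2, size=(4, 2))
blp = rng.integers(0, 2, size=(4, 2))
print(hierarchical_16qam(bhp, blp, d1=2.0, d2=0.5))
```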

B. UEP adaptive modulation
Traditionally, bit-loading algorithms have been designed to assure the highest possible link quality by achieving equal error probability, which leads to performance degradation under variable channel conditions (no graceful degradation). In contrast, UEP adaptation schemes [11] allow different parts of the same video stream/frame to acquire different link qualities by allocating them to different subcarriers (with different bit-rates and error probabilities) according to the required QoS. Therefore, recent research efforts [11]-[13] have been directed towards modifying the traditional bit-loading algorithms, e.g., the ones by Hughes-Hartogs [14], Campello [15], Chow-Cioffi-Bingham [16], and Fischer-Huber [17], in order to realize UEP. In [18], the algorithm by Fischer et al. has been modified to allow for different predefined error probabilities on different subcarriers. However, the allocation of subcarriers to the given classes is a computationally complex process there. A more practical approach has been described in [12], using a modified rate-adaptive Chow et al. bit-loading. It modifies the margin γ in Shannon's capacity formula for the Gaussian channel in [16] by dedicating a different margin $\gamma_j$ to each protection level $j$. The advantage is the flexibility to adapt the modulation in order to realize arbitrary margin separations between the priority classes.
The modified UEP capacity formula is given by [19]
$$b_{k,j} = \log_2\!\left(1 + \frac{\mathrm{SNR}_k}{\Gamma\,\gamma_j}\right), \qquad (3)$$
where $k$ is the carrier index, $\Gamma$ is the SNR gap, and $\gamma_j$ is the margin of protection class $j$. The result of (3) is rounded to
$$\hat{b}_{k,j} = \lfloor b_{k,j} \rfloor \qquad (4)$$
with quantization errors
$$\Delta b_{k,j} = b_{k,j} - \hat{b}_{k,j}. \qquad (5)$$
The iterative modification of the overall $\gamma_j$ (if the target bit-rate is not fulfilled) is performed in the same way as in the original Chow et al. algorithm, namely applying
$$\gamma_0^{\mathrm{new}} = \gamma_0 \cdot 2^{(B - B_T)/N_{\mathrm{used}}} \qquad (6)$$
to one of the margins, e.g., to $\gamma_0$. $N_{\mathrm{used}}$ is the number of actually used carriers amongst the total of $N$ carriers, $B = \sum_{k,j} \hat{b}_{k,j}$ is the total actual number of bits, and $B_T$ denotes the total target number. The margin spacing between the given $L$ classes is selected as $\Delta\gamma$ in dB, such that
$$\gamma_j\,[\mathrm{dB}] = \gamma_0\,[\mathrm{dB}] + j\,\Delta\gamma, \qquad (7)$$
where $j$ can take on values in $1, \ldots, L-1$ and $\gamma_0$ is computed in the iterative process [13].
As in the original algorithm, the quantization error $\Delta b_{k,j}$ is used in later fine-tuning steps to force the bit load to the desired values if the iterations were not completely successful.
How should the different protection classes now be mapped onto the given subcarriers? An iterative sorting and partitioning approach has been proposed in [11], [12]. The core steps of the algorithm have been simplified further in [20], using a straightforward linear-algebra approach to initialize $\gamma_0$ close to the final solution. The main steps of [12], [20] are the following (a code sketch of these steps is given after the list): 1) The $N$ subcarriers are sorted in descending order according to the channel state information; the sorted indices are stored in a vector $M$ of size $1 \times N$.
2) In [11], [12], $\gamma_0$ is initially set to an arbitrary value (as in [16]). In [20], however, the margin $\gamma_m$ of the middle priority class is initialized using the average SNR ($\overline{\mathrm{SNR}}$) as $\gamma_{m,\mathrm{init}} = \overline{\mathrm{SNR}} / 2^{B_T/N}$ and then further refined. Thereafter, the noise margins of the other classes are computed according to (7).
3) $\hat{b}_{k,j}$ is calculated as in (4); the numbers of subcarriers for each priority class are selected to fulfill the individual target bit-rates $T_j$, using a binary search, as in [12].

4) If the target bit-rate $B_T$ is not fulfilled, all $\gamma_j$ are adjusted using (6) together with (7), and the procedure is repeated from 3).

5) If the maximum number of iterations is reached without fulfilling $B_T$, further fine-tuning based on the quantization errors (5) is performed as in [16].
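The following sketch (Python; illustrative parameter and function names, not the authors' implementation) condenses steps 1) to 5) under the reconstructed formulas (3)-(7); the binary search of [12] for the class boundaries is replaced by a simple sequential fill of the sorted carriers.

```python
import numpy as np

def uep_chow_loading(snr, targets, delta_gamma_db, gamma0_db=0.0,
                     gap_db=6.0, max_iter=20):
    """Sketch of UEP bit-loading in the spirit of the modified Chow
    algorithm: one SNR margin gamma_j per protection class j.

    snr            : linear subcarrier SNRs, shape (N,)
    targets        : target number of bits per class, length L
    delta_gamma_db : margin spacing between adjacent classes in dB
    """
    N, L = len(snr), len(targets)
    gap = 10 ** (gap_db / 10)            # SNR "gap" of the modulation/coding
    order = np.argsort(snr)[::-1]        # 1) sort carriers, best SNR first
    B_T = sum(targets)

    for _ in range(max_iter):
        gammas_db = gamma0_db + delta_gamma_db * np.arange(L)    # (7)
        b = np.zeros(N, dtype=int)
        cls = -np.ones(N, dtype=int)
        start = 0
        for j in range(L):               # 3) fill classes on sorted carriers
            gamma = 10 ** (gammas_db[j] / 10)
            loaded, k = 0, start
            while loaded < targets[j] and k < N:
                i = order[k]
                bits = int(np.log2(1 + snr[i] / (gap * gamma)))  # (3), (4)
                if bits > 0:
                    b[i], cls[i] = bits, j
                    loaded += bits
                k += 1
            start = k
        B = int(b.sum())
        if B == B_T:                     # target rate met: done
            break
        n_used = max(int((b > 0).sum()), 1)
        gamma0_db += 10 * np.log10(2.0 ** ((B - B_T) / n_used))  # 4), (6)
    return b, cls

# Example call on a toy frequency-selective channel
rng = np.random.default_rng(1)
snr = 10 ** (rng.uniform(5, 25, size=64) / 10)
bits, classes = uep_chow_loading(snr, targets=[40, 40, 40], delta_gamma_db=3.0)
print(bits.sum(), np.bincount(classes[classes >= 0]))
```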
The main drawback of the previous two methods, [18] and [20], is their inefficient energy utilization: energy is wasted by allocating it to weak subcarriers. The algorithm by Hughes-Hartogs [14] is regarded as the energy-optimal bit-loading approach; however, it requires lengthy searching and sorting steps and non-linear operations. Campello's bit-loading [15], which is a linear representation of the Levin bit-loading algorithm [21], is a simple alternative in between Hughes-Hartogs and Chow et al. It achieves almost the same optimal power allocation at only a fraction of the complexity by quantizing the channel-gain-to-noise ratio $G_k$, based, again, on Shannon's formula: carriers with similar levels of $G_k$ are gathered into $G$ smaller groups, where $G \ll N$, and all carriers in each group can be adapted simultaneously. The algorithm can thus easily allocate bits according to these quantized groups and later fine-tunes following the Hughes-Hartogs criterion of minimum power increment. In addition to its simplicity, Campello's bit-loading can be thought of as a practical solution for systems with limited (quantized) channel feedback [13]. In this paper, however, we will discuss the UEP application of the Hughes-Hartogs algorithm only. [Figure caption fragment: a comparison with Fig. 2 shows the inefficiency of the hierarchical modulation.]

C. UEP adaptive hierarchical modulation
For optimal-power bit-loading algorithms (like Hughes-Hartogs), we opt for hierarchical modulation to realize the UEP classes [13] together with bit-allocation instead of carrier grouping, since this realizes the different classes more efficiently, without tedious binary searches for the carrier-group separation. In this approach, the highest-priority class first consumes the good-SNR subcarriers with the minimum incremental power (calculated based on the maximum allowed symbol-error rate (SER) $P_{e_0}$ and the channel coefficients). Thereafter, the bits of the following classes are allocated either to already used subcarriers in a hierarchical fashion, if their incremental powers are the minimum ones, or, if not, to free subcarriers, based on the same margin separation $\Delta\gamma_j$ as given by the hierarchical modulation. Therefore, the only information required to set up the algorithm is the SER $P_{e_0}$ of the first class. The SERs $P_{e_j}$ of the less important data are calculated from the given margins $\gamma_j$ and $P_{e_0}$, as in [12], [13].
Here, we describe the complete power-minimization hierarchical bit-loading algorithm (a code sketch follows the step-by-step description below). The algorithm can be considered as margin-adaptive bit-loading, defined as
$$\min \; E_\sigma = \sum_{k=0}^{N-1} E_k \quad \text{subject to} \quad \sum_{k} b_k = B_T, \qquad (8)$$
where $E_k$ is the power allocated to the $k$th subcarrier, $E_{\mathrm{tot}}$ is the given target power, $E_\sigma$ is the accumulated power, and $G_k$ is the channel-gain ($\lambda_k$) to noise ($\sigma_n^2$) ratio; power and bit load are related through
$$E_k = \frac{\Gamma}{G_k}\left(2^{b_k} - 1\right), \qquad (9)$$
with the "gap" approximation
$$\Gamma = \frac{1}{3}\left[Q^{-1}\!\left(\frac{P_e}{4}\right)\right]^2 \qquad (10)$$
[22]. If the total target rate is tied to a certain value $B_T$ and $E_{\mathrm{tot}}$ is still greater than $E_\sigma$, the performance can be further enhanced by scaling up the effective power allocation $E_k$ by the ratio $E_{\mathrm{tot}}/E_\sigma$. This is called the "margin maximization" criterion, where the maximum system margin is defined as
$$\gamma_{\max} = \frac{E_{\mathrm{tot}}}{E_\sigma}. \qquad (11)$$
The complete algorithm is as follows: 1) Initially, allocate $L \times N$ zeros to the bit-loading matrix $B$, and $N$ zeros each to the power-loading vector $E$ and the incremental-power vector $\Delta E$.
2) Set $j = 0$ and the maximum allowed number of bits for each class to $b_{j,\max}$, such that the sum over all $j$ does not exceed the maximum number of bits per carrier $b_{\max}$.
3) Compute the incremental power step $\Delta E_k$ for every subcarrier, assuming a single bit addition, using the approximate relation (as in [22])
$$\Delta E_k = \frac{\Gamma_j}{G_k}\,2^{b_k},$$
where $\Gamma_j$ follows from (10) with the SER $P_{e_j}$ of the current class $j$. $P_{e_j}$ is calculated from the probability of error $P_{e_{j-1}}$ of the previous class and $\Delta\gamma$ as
$$P_{e_j} = 4\,Q\!\left(10^{-\Delta\gamma/20}\,Q^{-1}\!\left(\frac{P_{e_{j-1}}}{4}\right)\right), \qquad (12)$$
which is valid for high constellation orders since $P_e \approx 4\,Q(\sqrt{3\Gamma})$, as in [11], and $\gamma_{j-1} = 10^{-\Delta\gamma/10}\,\gamma_j$; i.e., if $P_{e_0}$ is given, the SERs $P_{e_j}$ of the other classes can be computed according to (12).
4) Find the minimum $\Delta E_k$ among all subcarriers and increment $B_{j,k}$, such that $B_{j,k} \le b_{j,\max}$ allowed for each hierarchical level.
5) Increment the power of this subcarrier $k$ by the value $\Delta E_k$.
6) If the target bit-rate of the $j$th class is not fulfilled and
• the sum of the powers $\sum_{k=0}^{N-1} E_k$ is less than the target energy $E_{\mathrm{tot}}$, go to 3),
• else, stop and go to 8) to finalize the margin-maximization approach.

7) If the target bit-rate of the $j$th class is fulfilled and $j$ is less than the number of given classes $L$,
• if the sum of the energies is less than the target energy $E_{\mathrm{tot}}$, increment $j$ (such that $j < L$) and go to 3),
• else, stop the iterations for this class.
8) Scale up the allocated energies $E_k$ by the factor $\gamma_{\max}$ of Eq. (11).
The matrix $B$ has the $L$ hierarchy levels as its rows. Non-allocation of leading row(s) means that data of the first protection level have not been put on the corresponding carrier; nevertheless, lower-priority data may follow and still use a smaller hierarchical signal set. We also observe the same performance degradation as in the modified Chow algorithm when adding more bits to the first class. Finally, one can also see from Fig. 4 that the non-hierarchical modified Chow algorithm outperforms the hierarchical Hughes-Hartogs UEP, which is due to the power inefficiency of hierarchical constellations.
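A condensed sketch of this greedy loop is given below (Python; illustrative names, not the authors' implementation). It uses the gap-based incremental power $\Delta E_k = \Gamma_j 2^{b_k}/G_k$ from step 3) and fixed per-class gaps in place of the SER recursion (12); the margin-maximization scale-up of step 8) is included.

```python
import numpy as np

def uep_hughes_hartogs(G, gaps, targets, b_max=6, E_tot=np.inf):
    """Greedy minimum-incremental-power bit-loading with hierarchical
    classes (a sketch of the algorithm described above).

    G       : channel-gain-to-noise ratios of the N subcarriers
    gaps    : linear SNR gap Gamma_j per protection class j
              (most protected class first, i.e. largest gap)
    targets : number of bits to load per class
    """
    N, L = len(G), len(gaps)
    B = np.zeros((L, N), dtype=int)      # 1) bit-loading matrix
    E = np.zeros(N)                      #    power per subcarrier
    for j in range(L):                   # 2) classes in order of priority
        loaded = 0
        while loaded < targets[j]:
            b_tot = B.sum(axis=0)        # bits already on each carrier
            # 3) incremental power for one extra bit on each carrier
            dE = np.where(b_tot < b_max, gaps[j] * 2.0 ** b_tot / G, np.inf)
            k = int(np.argmin(dE))       # 4) carrier with minimum increment
            if not np.isfinite(dE[k]) or E.sum() + dE[k] > E_tot:
                break                    # 6) power budget exhausted
            B[j, k] += 1                 # 4) load the bit, hierarchically
            E[k] += dE[k]                # 5) increment this carrier's power
            loaded += 1
    if np.isfinite(E_tot) and E.sum() > 0:
        E *= E_tot / E.sum()             # 8) margin-maximization scale-up
    return B, E

# Example: 3 classes with 3-dB spaced gaps on a toy channel
rng = np.random.default_rng(2)
G = 10 ** (rng.uniform(0, 20, size=32) / 10)
gaps = 10 ** (np.array([9.0, 6.0, 3.0]) / 10)
B, E = uep_hughes_hartogs(G, gaps, targets=[32, 32, 32], E_tot=50.0)
print(B.sum(axis=1), E.sum())
```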
An example of combining hierarchical modulation schemes with Turbo coding of different rates is given in [23]. How such different rates are obtained in a flexible way, is shown in the following section.
In this work, we focus only on (almost) capacity-achieving codes. Turbo codes are known for their error-floor behavior; nevertheless, they are well suited for smaller codeword lengths, i.e., interleaver sizes. If the error floor is an issue, outer Reed-Solomon codes may be applied.
There are, of course, manifold options with smaller codeword lengths or delays, such as rate-compatible convolutional codes based on puncturing, which are to some extent addressed in the following Turbo-code section. Just to mention another example, one may also think of multilevel coded modulation with corresponding rate choices according to the desired SNR steps [9]; there, too, Turbo and LDPC codes can be chosen for the different layers.

IV. ACHIEVING UEP WITH CONVOLUTIONAL CODES FOR APPLICATIONS IN TURBO CODING
In this section, we describe methods of achieving unequal error protection with convolutional codes, which can later be applied in Turbo codes. A straightforward approach to varying the performance of a convolutional code is puncturing, i.e., excluding a certain number of code bits from transmission and thus increasing the code rate $R = k/n$, where $k$ and $n$ are the numbers of information bits and code bits, respectively. Another approach is called pruning, which modifies the number of encoder input bits $k$, i.e., the numerator of the code rate instead of the denominator.
In contrast to [3], [4], we present a more flexible way of pruning in the following. In order to modify the number of encoder input bits, certain positions in the input sequence are reserved for fixed values, i.e., 0 or 1 for binary codes. The code rate of a pruned convolutional code is then
$$R_p = \frac{k\,L_p - n_0}{n\,L_p},$$
where $n_0$ denotes the number of input digits fixed to a certain value within one pruning period $L_p$.
At the receiver, the pruning pattern is known such that the reliability of the fixed zeros can be set to infinity (or equivalently, the probability can be set to 1) and may help decoding the other bits reliably.
A possible pruned input sequence to a 2-input encoder with certain positions fixed to 0 could be, e.g.,
$$\mathbf{u} = \begin{pmatrix} u_{1,1} & 0 & u_{1,3} & u_{1,4} & 0 & \cdots \\ u_{2,1} & u_{2,2} & 0 & u_{2,4} & u_{2,5} & \cdots \end{pmatrix},$$
where the pruning period is $L_p = 5$. Thereby, code rates other than that of the mother code can easily be achieved. Using puncturing and pruning, a family of codes with different error-correction capabilities can be constructed. Figure 5 shows a set of bit-error rate curves of Turbo codes using pruned and punctured recursive systematic convolutional component codes.
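To make the mechanism concrete, the following sketch (Python; illustrative names and pruning positions, not tied to the codes of Table II) builds a periodic pruning mask, computes the resulting rate under the expression above, and initializes the decoder a-priori LLRs of the pruned positions to a very large value, mirroring the rule of setting the reliability of the fixed zeros to infinity.

```python
import numpy as np

def pruning_mask(num_info_bits, pruned_positions, period):
    """True at positions that carry data, False at pruned (fixed-to-0) ones."""
    mask = np.ones(num_info_bits, dtype=bool)
    for p in pruned_positions:                 # positions within one period
        mask[p::period] = False
    return mask

def pruned_rate(k, n, period, num_pruned_per_period):
    """Rate of the pruned code: (k*Lp - n0) / (n*Lp)."""
    return (k * period - num_pruned_per_period) / (n * period)

# Encoder side: insert zeros at the pruned positions of the input stream
mask = pruning_mask(num_info_bits=20, pruned_positions=[1, 4], period=5)
data = np.random.default_rng(3).integers(0, 2, size=int(mask.sum()))
u = np.zeros(20, dtype=int)
u[mask] = data                                  # pruned positions stay 0

# Decoder side: a-priori LLRs with "infinite" reliability for the known zeros
llr_apriori = np.zeros(20)
llr_apriori[~mask] = 1e9     # large positive LLR (log P(0)/P(1)) = certain 0

print(u, pruned_rate(k=1, n=2, period=5, num_pruned_per_period=2))
```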
When performing a computer search for a suitable pruning scheme, it is usually not sufficient to study pruning patterns alone. Additionally, it has to be ensured that, at interval boundaries between blocks of different protection levels, the states at the joint trellis segments coincide, as already required for rate-compatible punctured convolutional codes [2]. With the improved approach shown above, this problem does not arise at all, since the decoder operates on one and the same trellis, namely the mother trellis, only varying certain a-priori probabilities. Thus, the trellis structure does not change at transitions between different protection intervals.
Concerning the minimum distance of the subcode, it is in either case greater than or equal to the minimum distance of the mother code since, as stated above, both codes can be represented by the same trellis. Fixing the probability of a zero to one (infinite reliability) means pruning the paths corresponding to a one at that position. If a minimum-weight path is pruned, the minimum distance of the code increases; if it is not pruned, the minimum distance stays the same.
The proposed technique is in a way dual to puncturing, at comparable complexity. Puncturing increases the rate by erasing output bits, whereas pruning reduces it by omitting input bits (fixing their values). With puncturing, there is no knowledge about the erased bits at the decoder; with pruning, we add perfect knowledge about certain bits and may enhance the decoding performance in iterative decoding through increased extrinsic information. Occasional pruning has also been used to improve the NASA serial concatenation of convolutional and Reed-Solomon codes in [24].
We ran an exhaustive computer search in order to find mother codes together with different pruning patterns that behave well in iterative decoding. We used EXIT charts [25] to evaluate the convergence behavior. One assessment criterion, amongst others, was the convergence threshold, which is the lowest SNR at which error-free decoding is theoretically possible, i.e., where the tunnel between the EXIT curves opens and the mutual information between the decoded and the transmitted sequence reaches one (or comes very close to one). Furthermore, we report the area between the EXIT curves, since it is a measure of how close the waterfall region is to the Shannon limit and how steep it is [26]. Although this has formally only been proved for the binary erasure channel, it has been observed for the additive white Gaussian noise (AWGN) channel as well. We also give the approximate distance δ of the convergence point from the Shannon limit in dB. The minimum distance of the mother convolutional codes and their pruned subcodes was determined by evaluating low-weight input sequences. Table II in the appendix shows three convolutional mother codes with constraint lengths $L_c = 3$, $L_c = 4$, and $L_c = 5$ with reasonably fast convergence. The convolutional mother-code rate is $R_{CC} = 1/2$ in all cases, such that the Turbo-code rate is $R_{TC} = 1/3$. The pruning-pattern search was performed for pruning periods up to $L_p = 6$.
The code table shows that the higher the degree of pruning (and the lower the code rate), the larger the minimum distance. This is natural since, with a larger number of pruning constraints, it becomes more likely that a minimum-distance path is pruned.

V. NECESSARY DEGREE DISTRIBUTION PROPERTIES OF UEP-LDPC CODES
Irregular low-density parity-check (LDPC) codes are also very suitable for UEP and can be designed appropriately according to the requirements. Irregular LDPC codes provide UEP simply through the structure of the parity-check matrix, and a single encoder and decoder may still be used for all bits in the codeword. The sparse parity-check matrix $H$ of an LDPC code may be represented by a Tanner graph, introduced in [27], which facilitates the description of a decoding algorithm known as the message-passing algorithm [28]. Such a code may be described by variable-node and check-node degree distributions defined by the polynomials [29]-[34]
$$\lambda(x) = \sum_{i} \lambda_i x^{i-1} \quad \text{and} \quad \rho(x) = \sum_{i} \rho_i x^{i-1},$$
where $\lambda_i$ and $\rho_i$ denote the fractions of edges connected to variable and check nodes of degree $i$, respectively. Information bits may be grouped into protection classes according to their error-protection requirements or importance, and the parity bits are grouped into a separate protection class with the least protection.
Generally, the average variable-node degree decreases from the most important to the least important class. Good degree distributions are commonly computed by means of density evolution using a Gaussian approximation [35].
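As a quick illustration of how the polynomials $\lambda(x)$ and $\rho(x)$ determine the code parameters, the design rate follows from the edge-perspective distributions as $R = 1 - \left(\sum_i \rho_i/i\right)/\left(\sum_i \lambda_i/i\right)$; a minimal sketch with an illustrative (not optimized) profile:

```python
# Design rate of an irregular LDPC code from its edge-perspective
# degree distributions lambda(x) and rho(x) (illustrative profile).
lam = {2: 0.3, 3: 0.4, 8: 0.3}   # lambda_i: edge fraction on degree-i bit nodes
rho = {6: 1.0}                   # rho_i: edge fraction on degree-i check nodes

def design_rate(lam, rho):
    int_lam = sum(frac / i for i, frac in lam.items())   # integral of lambda
    int_rho = sum(frac / i for i, frac in rho.items())   # integral of rho
    return 1.0 - int_rho / int_lam

print(design_rate(lam, rho))     # about 0.48 for this profile
```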
Based on an optimized degree-distribution pair $\lambda(x)$ and $\rho(x)$, a corresponding parity-check matrix may be constructed. Several construction algorithms can be found in the literature.
The most important ones are the random construction (avoiding only length-4 cycles between degree-2 variable nodes), the ACE (approximate cycle extrinsic message degree) algorithm [36], the PEG (progressive edge-growth) algorithm [37], and the PEG-ACE algorithm [38]. It is widely believed that an irregular variable node degree distribution is the only requirement to provide UEP, see for example [32], [33]. Surprisingly, we found that constructing parity-check matrices using these different algorithms, based on the same degree distribution pair, results in codes with very different UEP capabilities: The random and the ACE algorithms result in codes which are UEP-capable, whereas the PEG and the PEG-ACE algorithms result in codes that do not provide any UEP [39].
Since the degree-distribution pairs are equal for all algorithms, a more detailed description of the degree distribution is necessary. The multi-edge-type generalization [30] may be used, but is unnecessarily detailed for our purpose. Instead, a subclass of the multi-edge-type LDPC codes, described by a detailed (per-class) check-node degree distribution, is sufficient. To confirm that the detailed check-node degree distribution is the key to the UEP capability of a code, a modification of the non-UEP PEG-ACE algorithm, which makes it UEP-capable, is presented. By constraining the edge-selection procedure to allow only certain check nodes to be connected, the resulting detailed check-node degree distribution is made similar to that of the ACE code. The bit-error rates of the codes constructed by the modified PEG-ACE, the original PEG-ACE, and the ACE algorithm are shown in Fig. 9. The figure shows that the original PEG-ACE code does not provide any UEP to its code bits, whereas the ACE code is UEP-capable.

Surprisingly, the code constructed by the modified PEG-ACE algorithm offers even more UEP than the ACE code. The UEP capability provided by the modified PEG-ACE algorithm confirms that the detailed check-node degree distribution is crucial to the UEP capability of a code; see also [31], [40].

VI. ACHIEVING UEP WITH LDPC CODES WITH AN IRREGULAR CHECK-NODE PROFILE
In Fig. 6, we observed that a non-concentrated detailed check-node distribution was an essential ingredient for obtaining UEP properties, which are even preserved after many iterations, even if an overall concentrated distribution was chosen to optimize the overall average performance (according to the results in [35]). In the following, we even refrain from the overall concentrated form and design UEP properties by controlling the check-node degree distribution, possibly keeping a regular variable-node degree distribution. It is well known that the quality of a variable node increases with the number of edges connected to it; regarding the check-node side, a connected variable node profits from a lower degree of that check node.
We consider a check node to belong to a certain bit-node (priority) class $L_k$ if there is at least one edge of the Tanner graph connecting the check node with one bit node of that class. By comparing the mutual information at the output of the check nodes of a priority class with the average mutual information, we obtain a measure of the unequal protection of that class: the higher the difference, the more the class is protected compared to the other bits in the codeword.
It is also possible to link this difference in mutual information to the average check connection degree of class $L_k$,
$$\bar{d}^{(L_k)} = \sum_{d=d_{\min}}^{d_{\max}} d\;\tilde{\rho}_d^{(L_k)},$$
where $d_{\min}$ and $d_{\max}$ are the minimum and maximum check connection degrees, respectively, and $\tilde{\rho}_d^{(L_k)}$ is the relative portion of check nodes with connection degree $d$ that belong to class $L_k$. To maximize the performance of class $L_k$, $\bar{d}^{(L_k)}$ has to be minimized. In other words, the most protected classes have the lowest average check-node degrees.
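A small numeric sketch of this quantity, assuming the per-class portions $\tilde{\rho}_d^{(L_k)}$ are given (illustrative values):

```python
# Average check connection degree of a priority class L_k, computed from the
# relative portions rho_tilde[d] of its check nodes having degree d.
def average_check_degree(rho_tilde):
    assert abs(sum(rho_tilde.values()) - 1.0) < 1e-9
    return sum(d * frac for d, frac in rho_tilde.items())

# A strongly protected class sees mostly low-degree checks ...
print(average_check_degree({3: 0.6, 4: 0.3, 5: 0.1}))   # 3.5
# ... while a weakly protected class sees mostly high-degree checks.
print(average_check_degree({5: 0.2, 6: 0.8}))           # 5.8
```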
Using the detailed representation of the LDPC code [41], we optimized the irregular check-node profiles for each priority class with density evolution. Once the irregularity profile has been optimized, specific parity-check matrix constructions are needed to follow the fixed profile. In the following, we describe a method based on pruning, which has the advantage of being efficient and flexible, just as in the case of the UEP Turbo codes in Section IV. With a single fixed (mother) encoder and decoder, the protection properties for the different priority classes can be modified by suitable pruning; with pruning, we control the check-node distribution of the classes. Let $(N_0, K_0)$ be the length and the number of information bits, respectively, of the mother code. Pruning in Section IV simply meant omitting information bits according to some pruning pattern, i.e., fixing them to known values. Although this can be further generalized by adding a precoder to the mother code, which also offers suitable LDPC UEP solutions, we stick to this simple pruning concept here as well. Presetting certain information bits to zero creates a subcode of dimension $K_1$ by eliminating $K_0 - K_1$ columns from the parity-check matrix $H_m$. The subcode has length $N_1 = N_0 - (K_0 - K_1)$, which is comparable to the length change when pruning a systematic convolutional code. We use systematic LDPC codes, i.e., LDPC codes whose parity-check matrix has an upper-triangular structure in its parity part. Pruning is then performed by simply omitting an information bit of the mother code or, equivalently, by removing the corresponding column in the information part of the parity-check matrix (the part which is not upper triangular). The dimensions of the subcode matrices $H_S$ and $G_S$ then become $M_0 \times (N_0 - (K_0 - K_1))$ and $K_1 \times (N_0 - (K_0 - K_1))$, respectively, and the code rate is obtained as
$$R_1 = \frac{K_1}{N_1} = \frac{K_1}{N_0 - (K_0 - K_1)}.$$
Only the indices of the pruned columns of the mother code need to be known at the transmitter and the receiver in order to encode and decode the pruned code. Thus, there is almost no complexity increase for realizing different UEP configurations with the same mother LDPC code. This shows that the specific matrix construction that we advocate, based on a mother code and pruning, is very flexible and can be implemented in practice with low complexity. A sketch of the column-removal step is given below; thereafter, we describe the iterative pruning procedure in some more detail.
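The column-removal step mentioned above can be sketched as follows for a systematic parity-check matrix $H_m = [A\,|\,T]$ with an upper-triangular parity part $T$ (a minimal sketch; matrix contents and indices are illustrative):

```python
import numpy as np

def prune_ldpc(H_m, K0, pruned_info_columns):
    """Remove columns of the information part of a systematic parity-check
    matrix H_m = [A | T] (T upper triangular, size M0 x M0).

    Returns the parity-check matrix of the pruned subcode and its rate.
    """
    M0, N0 = H_m.shape
    assert K0 == N0 - M0
    keep = [c for c in range(N0) if c not in set(pruned_info_columns)]
    H_s = H_m[:, keep]                       # subcode parity-check matrix
    K1 = K0 - len(pruned_info_columns)       # remaining information bits
    N1 = N0 - len(pruned_info_columns)       # new code length
    return H_s, K1 / N1

# Toy example: (N0, K0) = (8, 4) mother code, prune information columns 1 and 3
rng = np.random.default_rng(4)
A = rng.integers(0, 2, size=(4, 4))
T = np.triu(np.ones((4, 4), dtype=int))      # upper-triangular parity part
H_m = np.hstack([A, T])
H_s, rate = prune_ldpc(H_m, K0=4, pruned_info_columns=[1, 3])
print(H_s.shape, rate)                       # (4, 6) and rate 2/6 = 1/3
```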
Let the relative portion of bits devoted to a class $L_k$ be denoted by $\alpha(k)$, with $\sum_{k=1}^{L} \alpha(k) = 1$.
An iterative pruning is performed. The procedure is controlled by the two key parameters of the $k$th class, the average and minimum check connection degrees $\bar{d}^{(L_k)}$ and $d_{\min}$, under the following constraints: 1) A pruned bit must not be linked to a check node whose degree is already at the lower limit of the a-priori chosen degree distribution.
2) Involuntary pruning shall be avoided, i.e., the situation where a column of the parity-check matrix $H$ becomes independent of all the others, since the result would then no longer define a code.
3) The chosen code rate $K_1/N_1$ must still be achievable, given the total number of checks $N - K$ and the number of bit nodes $N$.

4)
Convergence at a desired signal-to-noise ratio (near the Shannon-capacity limit) must be ensured, typically by investigating EXIT charts [25].
5) The stability constraint [35] has to be ensured, which is formulated as a rule for $\lambda_2$, the proportion of edges of the graph connected to bit nodes of degree 2:
$$\lambda_2 \le \frac{e^{1/(2\sigma^2)}}{\sum_j \rho_j (j-1)},$$
where $\rho_j$ denotes the proportion of edges of the graph connected to check nodes of degree $j$ and $\sigma^2$ is the noise variance of the AWGN channel (a small numeric check is sketched below).
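As a small numeric illustration of this constraint, under the reconstructed bound above and with $\sigma$ denoting the noise standard deviation of the binary-input AWGN channel:

```python
import numpy as np

def stability_bound(rho, sigma):
    """Upper bound on lambda_2 from the stability condition:
    lambda_2 <= exp(1/(2*sigma^2)) / sum_j rho_j * (j - 1).

    rho : dict mapping check-node degree j to edge fraction rho_j
    """
    rho_prime_1 = sum(frac * (j - 1) for j, frac in rho.items())
    return np.exp(1.0 / (2.0 * sigma ** 2)) / rho_prime_1

# Example: check degrees 4..6, noise level sigma = 0.9
rho = {4: 0.2, 5: 0.5, 6: 0.3}
print(stability_bound(rho, sigma=0.9))   # lambda_2 must stay below this value
```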
In an iterative procedure, $d_{\min}$ may be further reduced after ensuring that the listed constraints are fulfilled (if the lower limit of allowed degrees has not yet been reached). A further pruning process is then used to reduce $\bar{d}^{(L_k)}$. Optimizations for codes with unconcentrated check-node degrees (between 2 and 6) and with almost concentrated degrees (between 4 and 6) were performed in order to compare the performances.
The decoder operates on the pruned parity-check matrix of the mother code. The check-node profiles are given in Table I; the variable-node degree was three.

VII. UEP IN PHYSICAL TRANSPORT OR IN CODING?
This paper has pointed out manifold options for realizing unequal error protection, especially concepts developed recently. UEP in multicarrier physical transport is very easy to realize, and the design is very flexible, allowing for arbitrary SNR margins. In UEP Turbo or LDPC coding, the coding scheme has to be optimized in advance, i.e., a code search is necessary and the performance has to be investigated beforehand (EXIT charts, simulations). Pruning and puncturing also offer considerable flexibility in choosing the code rate, but the actual performance is only known after the code-design and evaluation steps. In digital transport without access to the physical channel, however, UEP coding is the only option.
When the channel changes its frequency characteristic (correlation properties for the equivalent binary channel), the margins between the priority classes will be modified in UEP bit allocation, even if a more robust SNR sorting is used. In UEP Turbo or LDPC coding, the margins will more or less be preserved due to the large interleaver.
ACKNOWLEDGMENT
Some of this work was part of the FP6 / IST project M-Pipe and was co-funded by the European Commission. Furthermore, we appreciate funding by the German Research Foundation (DFG).
Some results of this paper have been prepublished at conferences [11], [19], [42] or will appear in [39]. UEP LDPC codes for higher-order modulation, which were not presented in here, have recently been published in [43]; for results on UEP multilevel codes, the reader is referred to [9].

Parameters of Fig. 5
Generator matrix of the mother code: The code rates given in the figure are the ones of the Turbo code, i.e., the rate-2/3 convolutional code results in a rate-1/2 Turbo code. The interleaver size was 2160.