Simplified LLRs for the Decoding of Single Parity Check Turbo Product Codes Transmitted Using 16 QAM

Iterative soft-decision decoding algorithms require channel log-likelihood ratios (LLRs) which, when 16QAM modulation is used, are computationally intensive to obtain. We therefore derive four simple approximate LLR expressions. When the maximum a posteriori probability algorithm is used for decoding single parity check turbo product codes (SPC/TPCs), these LLRs can be simplified even further. We show through computer simulations that the bit-error-rate performance of two (8, 7) SPC/TPCs, transmitted using 16QAM and decoded using the maximum a posteriori algorithm with our simplified LLRs, is nearly identical to that achieved with the exact LLRs.


INTRODUCTION
By using soft-input/soft-output (SISO) maximum a posteriori (MAP) decoders with iterative (i.e., turbo) decoding, it was shown in [1] that the Shannon limit can be approached within 0.7 dB for a code rate r of 0.5, a block length of 65536 bits, and 18 turbo iterations. The turbo principle with SISO decoding was later applied to other existing codes such as single parity check (SPC) product codes [2][3][4]. SISO decoders, including those used in decoding SPC/TPCs, are normally derived for the binary-input AWGN channel. In applications where high bandwidth efficiency is required, it becomes crucial to use higher-order modulation schemes such as 16QAM, which has a raw bandwidth efficiency of 4 bits/s/Hz compared to 1 bit/s/Hz for BPSK. Since SISO decoders require soft channel information for each bit, we need to extract this information from the received 16QAM symbols. A brute-force approach is to compute the LLRs using the exact expression, which requires intensive computations. Instead, several authors [5, 6] proposed approximations of the LLR expressions for these bits.
In this letter, we take a similar approach and approximate the LLRs, but we do so based on different assumptions, which result in simpler LLR expressions. When the MAP algorithm is used for the soft decoding of SPC/TPCs, computing these approximated LLRs requires four simple arithmetic operations per 16QAM symbol. Simulation results show that, while being very simple, the proposed method achieves a BER performance for SPC/TPCs that is virtually identical to the one obtained with the exact LLRs, which are far more cumbersome to compute.

BACKGROUND
In this paper, we will use the following mapping of bit u to channel symbol x: u = 0 → x = −1 and u = 1 → x = +1. We assume that the received channel observation y has been corrupted by AWGN n (with variance σ_n²), that is, y = x + n.

Log-likelihood ratio and intrinsic information
In uncoded binary communication systems, we can extract soft information from the received channel observation y using conditional probabilities. The a posteriori LLR of symbol x is

L(x | y) = ln [ P(x = +1 | y) / P(x = −1 | y) ] = L_c(y) + L_a(x),   (1)

where L_a(x) is the a priori LLR of symbol x, which is equal to zero for equiprobable symbols, and L_c(y) is the LLR related to the channel measurement, which can be found as [4]

L_c(y) = (2 / σ_n²) y.   (2)

Unfortunately, we may not be able to obtain σ_n² at the receiver, and consequently we approximate L_c(y) simply as L_c(y) = y (i.e., by setting σ_n² to 2). In other words, the LLR of the channel is the same as the channel observation obtained at the output of the matched filter. This soft information is normally referred to as intrinsic information in the literature, as it is obtained from the actual channel measurement itself.
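As a minimal numeric sketch of the relation above (function names are ours), the channel LLR of (2) can be computed directly or recovered from the two Gaussian conditional densities:

```python
import math

def channel_llr(y, sigma2=2.0):
    """Channel LLR L_c(y) = (2 / sigma_n^2) * y for x in {-1, +1} over AWGN.
    With the default sigma_n^2 = 2, this reduces to L_c(y) = y."""
    return 2.0 / sigma2 * y

def llr_from_densities(y, sigma2):
    """Same LLR obtained from the log-ratio of the conditional densities
    p(y | x = +1) and p(y | x = -1); normalization constants cancel."""
    log_p1 = -(y - 1.0) ** 2 / (2.0 * sigma2)   # ln p(y | x = +1), up to a constant
    log_p0 = -(y + 1.0) ** 2 / (2.0 * sigma2)   # ln p(y | x = -1), up to a constant
    return log_p1 - log_p0
```

Expanding the squares shows the two routes agree term by term: the quadratic terms cancel and the cross terms leave 2y/σ_n².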

Extrinsic information
In a coded communication system, extra information can be extracted from the coded bits. In its simplest form, given the three bits u, u_1, and u_2 that are related by the parity equation u = u_1 ⊕ u_2, where ⊕ represents the modulo-2 addition operator, we have from [3]

L(u) = L(u_1 ⊕ u_2) ≈ sign(L(u_1)) · sign(L(u_2)) · min(|L(u_1)|, |L(u_2)|).   (3)

In the general case, where the parity equation contains J bits, we have [3]

L_ext(u_j) ≈ [ ∏_{i=1, i≠j}^{J} sign(L(u_i)) ] · min_{i≠j} |L(u_i)|.   (4)

The above equation tells us that in coded systems we can extract additional information, called extrinsic information, that we can use to improve the reliability of the soft decision, that is,

L(û) = L_c(y) + L_a(u) + L_ext(u).   (5)

Equations (2), (4), and (5) form the core of the MAP algorithm [3] used to decode SPC/TPCs. It can be easily shown that L_c(αy) = αL_c(y) and L_ext(αy) = αL_ext(y) for any positive constant α. In this case, and for equiprobable symbols, L(û) is scaled by the positive constant α and hence its sign is not affected. Therefore, L(û) can be normalized by α. This fact will be exploited to simplify the expressions for the 16QAM LLRs.
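The extrinsic and soft-decision updates above can be sketched as follows (a minimal illustration using the sign-product/minimum-magnitude rule; function names are ours):

```python
import math

def sign(x):
    """Sign of an LLR as +1.0 or -1.0 (copysign avoids a zero case)."""
    return math.copysign(1.0, x)

def extrinsic(llrs, j):
    """Extrinsic LLR for bit j of a parity equation: the product of the signs
    of the other bits' LLRs times the smallest of their magnitudes."""
    others = [l for i, l in enumerate(llrs) if i != j]
    s = 1.0
    for l in others:
        s *= sign(l)
    return s * min(abs(l) for l in others)

def soft_decision(lc, la, lext):
    """Soft decision: intrinsic (channel) + a priori + extrinsic LLRs."""
    return lc + la + lext
```

Note that `extrinsic` is homogeneous in a positive scale factor, which is the invariance exploited later to drop constant factors from the 16QAM LLRs.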

16QAM modulation
Square 16QAM modulation is a bandwidth-efficient modulation technique. Each 16QAM symbol represents 4 bits, giving a raw bandwidth efficiency of 4 bits/s/Hz.
One way of minimizing the overall bit error rate in digital communication systems is to use Gray mapping, which ensures that each symbol differs from its closest neighbors by one bit only. One possible Gray-coded 16QAM constellation is shown in Figure 1.
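Since Figure 1 is not reproduced here, the sketch below builds one plausible Gray labeling consistent with the later derivation (levels ±1, ±3; one bit per axis selects the half-plane, the other the inner/outer level — an assumption on our part) and checks the Gray property mechanically:

```python
from itertools import product

def qam16_symbol(a, b, c, d):
    """Hypothetical Gray labeling consistent with the text: b and d select the
    sign of the Q and I coordinates, a and c select the outer (3) or inner (1) level."""
    q = (1 if b else -1) * (3 if a else 1)
    i = (1 if d else -1) * (3 if c else 1)
    return i, q

# Verify the Gray property: constellation points that are nearest neighbours
# (Manhattan distance 2, i.e., adjacent along one axis) differ in exactly one bit.
points = {qam16_symbol(*bits): bits for bits in product((0, 1), repeat=4)}
for (i1, q1), bits1 in points.items():
    for (i2, q2), bits2 in points.items():
        if abs(i1 - i2) + abs(q1 - q2) == 2:
            assert sum(x != y for x, y in zip(bits1, bits2)) == 1
```

Any relabeling of the axes or bit order gives an equally valid Gray constellation; only the structure matters for the LLR derivation.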

LLR calculation for 16QAM
Computing the exact LLR for each bit in a 16QAM-modulated signal is rather computationally intensive, as it involves computing the ratio of the sum of the 8 probabilities where that bit is 1 to the sum of the 8 probabilities where that bit is 0. Mathematically, this involves the computation of the following expression for each bit u ∈ {a, b, c, d} of the received channel observation y = y_I + j y_Q:

L(u) = ln [ Σ_{s_1,u} exp( −[(y_I − s_1I,u)² + (y_Q − s_1Q,u)²] / (2σ_n²) ) / Σ_{s_0,u} exp( −[(y_I − s_0I,u)² + (y_Q − s_0Q,u)²] / (2σ_n²) ) ],   (6)

where s_1,u = s_1I,u + j s_1Q,u and s_0,u = s_0I,u + j s_0Q,u are the 16QAM symbols where bit u ∈ {a, b, c, d} is equal to 1 and 0, respectively.
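The exact expression (6) can be sketched numerically as below (the Gray labeling with levels ±1, ±3 is our assumption, since Figure 1 is not reproduced here):

```python
import math
from itertools import product

def qam16_symbol(a, b, c, d):
    # hypothetical Gray labeling with levels +/-1, +/-3 (see the Figure 1 discussion)
    return (1 if d else -1) * (3 if c else 1), (1 if b else -1) * (3 if a else 1)

SYMBOLS = [(qam16_symbol(*bits), bits) for bits in product((0, 1), repeat=4)]

def exact_llrs(y_i, y_q, sigma2):
    """Exact LLR for each bit: the log-ratio of two sums of 8 Gaussian
    likelihoods each (symbols where the bit is 1 versus where it is 0)."""
    out = []
    for k in range(4):                    # k indexes bits a, b, c, d
        num = den = 0.0
        for (s_i, s_q), bits in SYMBOLS:
            p = math.exp(-((y_i - s_i) ** 2 + (y_q - s_q) ** 2) / (2.0 * sigma2))
            if bits[k]:
                num += p
            else:
                den += p
        out.append(math.log(num / den))
    return out
```

Each of the four LLRs costs 16 squared distances and 16 exponentials per symbol, which is the computational burden the following section removes.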

SIMPLE APPROXIMATED EXPRESSIONS FOR CHANNEL LLRs
One way of simplifying (6) is to retain only the largest term in the numerator and in the denominator, in other words, to retain only the symbol that has the smallest Euclidean distance to y. That is, given the channel observation y, we find the closest neighbor where bit u = 1 and the closest neighbor where bit u = 0, and compute the following approximated LLR:

L(u) ≈ (1 / (2σ_n²)) ( |y − s_0,u|² − |y − s_1,u|² )   (7a)
     = (1 / (2σ_n²)) [ (y_I − s_0I,u)² + (y_Q − s_0Q,u)² − (y_I − s_1I,u)² − (y_Q − s_1Q,u)² ]   (7b)
     = (1 / (2σ_n²)) [ 2y_I (s_1I,u − s_0I,u) + 2y_Q (s_1Q,u − s_0Q,u) + s_0I,u² − s_1I,u² + s_0Q,u² − s_1Q,u² ],   (7c)

where s_1,u and s_0,u are the closest (in Euclidean distance) 16QAM symbols to y where bit u is equal to 1 and 0, respectively. Equation (7c) can be further simplified in different ways depending on the bit in question (i.e., bit a, b, c, or d shown in Figure 1). We clearly see from Figure 1 that for bits a and b we have s_1I,u = s_0I,u, and consequently the expression in (7c) simplifies to

L(a) ≈ (1 / (2σ_n²)) [ 2y_Q (s_1Q,a − s_0Q,a) + s_0Q,a² − s_1Q,a² ]   (8)

for bit a, and

L(b) ≈ (1 / (2σ_n²)) [ 2y_Q (s_1Q,b − s_0Q,b) + s_0Q,b² − s_1Q,b² ]   (9)

for bit b.
In (8), s_1Q,a = 3 and s_0Q,a = 1, or s_1Q,a = −3 and s_0Q,a = −1, depending on whether y_Q is positive or negative, respectively. Therefore, (8) can be written differently as follows:

L(a) ≈ (2 / σ_n²) ( |y_Q| − 2 ).   (10)

For bit b, the closest symbols are s_1Q,b = 1 and s_0Q,b = −1 for |y_Q| ≤ 2, s_1Q,b = 3 and s_0Q,b = −1 for y_Q > 2, and s_1Q,b = 1 and s_0Q,b = −3 for y_Q < −2. Therefore, (9) can be rewritten as follows:

L(b) ≈ (2 / σ_n²) y_Q for |y_Q| ≤ 2;  (4 / σ_n²)(y_Q − 1) for y_Q > 2;  (4 / σ_n²)(y_Q + 1) for y_Q < −2.   (11)

Using a similar analysis, we can find an approximate expression for L(c) as

L(c) ≈ (2 / σ_n²) ( |y_I| − 2 )   (12)

and for L(d) as

L(d) ≈ (2 / σ_n²) y_I for |y_I| ≤ 2;  (4 / σ_n²)(y_I − 1) for y_I > 2;  (4 / σ_n²)(y_I + 1) for y_I < −2.   (13)

The approximate LLRs in (11) and (13) can be further approximated in the region of interest (say y_I = −4 to +4 and y_Q = −4 to +4) by either of two methods: linear interpolation across the entire region of interest, or linear interpolation across a specific segment of that region (i.e., −2 ≤ y_I ≤ +2 and −2 ≤ y_Q ≤ +2). We choose the latter method here to obtain

L(b) ≈ (2 / σ_n²) y_Q,  L(d) ≈ (2 / σ_n²) y_I,   (14)

respectively.
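The nearest-symbol simplification can be checked numerically. The sketch below (assuming the Gray labeling with levels ±1, ±3 inferred from the text) implements the closest-neighbor approximation and verifies two consequences stated above: bit a follows (2/σ_n²)(|y_Q| − 2), and bit b is linear in y_Q on the segment |y_Q| ≤ 2:

```python
from itertools import product

def qam16_symbol(a, b, c, d):
    # hypothetical Gray labeling with levels +/-1, +/-3
    return (1 if d else -1) * (3 if c else 1), (1 if b else -1) * (3 if a else 1)

SYMBOLS = [(qam16_symbol(*bits), bits) for bits in product((0, 1), repeat=4)]

def maxlog_llrs(y_i, y_q, sigma2):
    """Closest-neighbor approximation: keep only the nearest symbol with
    bit = 1 and the nearest symbol with bit = 0 in the exact LLR sums."""
    out = []
    for k in range(4):                    # k indexes bits a, b, c, d
        d1 = min((y_i - s_i) ** 2 + (y_q - s_q) ** 2
                 for (s_i, s_q), bits in SYMBOLS if bits[k])
        d0 = min((y_i - s_i) ** 2 + (y_q - s_q) ** 2
                 for (s_i, s_q), bits in SYMBOLS if not bits[k])
        out.append((d0 - d1) / (2.0 * sigma2))
    return out
```

This replaces 16 exponentials and a logarithm per bit with two nearest-symbol searches, and the searches themselves reduce to the closed forms derived in the text.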
As mentioned in Section 2.2, the MAP algorithm for decoding SPC/TPCs is insensitive to a common positive scaling factor. Therefore, (10), (12), and (14) can be normalized by 2/σ_n² to yield very simple expressions for the channel LLRs:

L(a) = |y_Q| − 2,  L(b) = y_Q,  L(c) = |y_I| − 2,  L(d) = y_I.   (15)

Computing L(b) and L(d) requires no arithmetic operations, whereas computing L(a) and L(c) requires two absolute values and two simple additions. Compared with (15), the exact LLR expression in (6) clearly requires substantial computation.
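A sketch of the simplified per-symbol computation is given below. The closed forms are our reading of (15), consistent with the operation counts stated above (two absolute values and two additions in total for L(a) and L(c), none for L(b) and L(d)); the function name is ours:

```python
def simplified_llrs(y_i, y_q):
    """Simplified channel LLRs, with the common positive factor 2/sigma_n^2
    dropped; no noise-variance estimate is required."""
    l_a = abs(y_q) - 2.0   # one absolute value, one addition
    l_b = y_q              # no arithmetic at all
    l_c = abs(y_i) - 2.0   # one absolute value, one addition
    l_d = y_i              # no arithmetic at all
    return [l_a, l_b, l_c, l_d]
```

All four LLRs for a received 16QAM symbol thus cost four simple operations at most, versus the 16 exponentials per bit of the exact expression.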
It should be emphasized that since the expressions for the LLRs in (10), (12), and (14) are not derived with any particular decoding algorithm in mind, they can be used with any iterative decoder, such as LDPC and convolutional turbo decoders (as long as Gray-coded 16QAM modulation is used). These approximated expressions still require only a fraction of the complexity of (6). If we set the noise variance σ_n² in these expressions to an arbitrary scalar (in this case to 2), as is practically and commonly done when using these decoders, we obtain the same expressions as in (15).

SIMULATION RESULTS
Figure 2 compares the BER performance of two SPC/TPCs decoded using the exact LLR expression in (6) for the channel soft information with that obtained using the approximate LLR expressions in (15). In both cases, we clearly see that the performance is nearly identical. It should be emphasized that the performance achieved using the proposed method for calculating the channel LLRs costs only a tiny fraction of the computational effort required to achieve the same performance using the exact LLR expression. Furthermore, the proposed simplified LLRs do not require knowledge of the noise variance.