The Scientific World Journal, Hindawi Publishing Corporation. doi:10.1155/2014/895782. Research Article: A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting. Yingxian Zhang, Xiaofei Pan, Kegang Pan, Zhan Ye, Chao Gong, and Lei Cao. Laboratory of Satellite Communications, College of Communications Engineering, PLA University of Science and Technology, Nanjing 210007, China. Received 4 May 2014; Accepted 8 July 2014; Published 23 July 2014. Copyright © 2014 Yingxian Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length.

1. Introduction

Due to their ability to achieve the Shannon capacity and their low encoding and decoding complexity, polar codes have received much attention in recent years. However, compared with mature coding schemes such as LDPC and Turbo codes, polar codes have a notable drawback: their performance in the finite-length regime is limited [2, 3]. Hence, researchers have proposed many decoding algorithms to improve the performance of the codes.

In [4, 5], a list successive-cancellation (SCL) decoding algorithm was proposed that considers L successive-cancellation (SC) decoding paths, and the results showed that the performance of SCL is very close to that of maximum-likelihood (ML) decoding. Another SC-derived decoding algorithm, stack successive-cancellation (SCS), was then introduced to decrease the time complexity of SCL. In particular, with CRC aiding, SCL yields better performance than some Turbo codes. However, due to the serial processing nature of SC, these algorithms suffer from low decoding throughput and high latency. Based on this observation, improved versions of SC were proposed with the explicit aim of increasing throughput and reducing latency without sacrificing error-rate performance, such as simplified successive-cancellation (SSC), maximum-likelihood SSC (ML-SSC), and repetition single parity check ML-SSC (RSM-SSC) [10, 11]. Besides these SC-based algorithms, researchers have also investigated other approaches. In [12, 13], ML and maximum a posteriori (MAP) decoding were proposed for short polar codes, and a linear programming decoder was introduced for the binary erasure channel (BEC). With the factor graph representation of polar codes, the authors of [16, 17] showed that belief propagation (BP) polar decoding has particular advantages with respect to decoding throughput, while its performance is better than that of SC and some improved SC decoders. Moreover, with the minimal stopping sets optimized, the results of [18, 19] showed that the error floor performance of polar codes is superior to that of LDPC codes.

Indeed, all the decoding algorithms above can improve the performance of polar codes to a certain degree. However, for a capacity-achieving coding scheme, the results of those algorithms remain disappointing. Hence, we cannot help wondering why the finite-length performance of polar codes is inferior to that of existing coding schemes and how it can be improved. To answer these questions, we need to analyze those decoding algorithms further.

For the decoding algorithms with serial processing, there is the problem of error propagation in addition to low decoding throughput and high latency [20, 21]. That is to say, an error occurring at an earlier node leads to erroneous decoding of later nodes. However, none of the existing serial processing algorithms has taken this into account. Furthermore, it is noticed from the factor graph of polar codes that the degree of the check or variable nodes in the decoder is 2 or 3, which weakens the error-correcting capacity of the decoding, compared with LDPC codes whose average degree is usually greater than 3 [22, 23]. Hence, the performance of polar codes is inferior to that of LDPC codes of the same length [18, 19]. What is more, BP polar decoding needs more iterations than LDPC decoding, as shown in [16, 17, 22, 23]. Therefore, in order to improve the performance of a decoding algorithm for polar codes, it is important to enhance the error-correcting capacity of the algorithm.

Motivated by the aforementioned observations, in this paper we propose a parallel decoding algorithm for short polar codes based on error checking and correcting. We first classify the nodes of the proposed decoder into two categories: information nodes and frozen nodes, whose values are determined and independent of the decoding algorithm. Then, we introduce a method to check the errors in the input nodes of the decoder using the solutions of the error-checking equations generated from the frozen nodes. To correct those checked errors, we modify the probability messages of the error nodes with constant values according to the maximization principle. By examining the solution of the error-checking equations, we find that these equations admit multiple solutions. Hence, in order to check the errors as accurately as possible, we further formulate a CRC-aided optimization problem of finding the optimal solution of the error-checking equations with three different target functions. Besides, we also use a parallel method based on the decoding tree representations of the nodes to calculate the probability messages, so as to increase the decoding throughput. The main contributions of this paper can be summarized as follows.

An error-checking algorithm for polar decoding based on solving the error-checking equations is introduced; furthermore, to enhance the accuracy of the error checking, a CRC-aided optimization problem of finding the optimal solution is formulated.

To correct the checked errors, we propose a method of modifying the probability messages of the error nodes according to the maximization principle.

In order to improve the throughput of the decoding, we propose a parallel probability messages calculating method based on the decoding tree representation of the nodes.

The whole procedure of the proposed decoding algorithm is described with the form of pseudocode, and the complexity of the algorithm is also analyzed.

The findings of this paper suggest that, with error checking and correcting, the error-correcting capacity of the decoding algorithm can be enhanced, which yields better performance at the cost of some additional complexity. Moreover, with the parallel probability message calculation, the decoding throughput is higher than that of serial-processing-based decoding algorithms. All of these results are confirmed by our simulations.

The remainder of this paper is organized as follows. In Section 2, we explain some notations and introduce certain preliminary concepts used in the subsequent sections. In Section 3, the method of error checking for decoding based on the error-checking equations is described in detail. In Section 4, we introduce the methods of probability message calculation and error correction, and, after formulating the CRC-aided optimization problem of finding the optimal solution, we present the proposed decoding algorithm in pseudocode form. Then, the complexity of our algorithm is analyzed. Section 5 provides the simulation results for the complexity and bit error performance. Finally, we draw some conclusions in Section 6.

2. Preliminary 2.1. Notations

In this work, blackboard bold letters, such as $\mathbb{X}$, denote sets, and $|\mathbb{X}|$ denotes the number of elements in $\mathbb{X}$. The notation $u_0^{N-1}$ denotes an $N$-dimensional vector $(u_0, u_1, \ldots, u_{N-1})$, and $u_i^j$ indicates a subvector $(u_i, u_{i+1}, \ldots, u_{j-1}, u_j)$ of $u_0^{N-1}$, $0 \le i, j \le N-1$. When $i > j$, $u_i^j$ is an empty vector. Further, given a vector set $\mathbb{U}$, the vector $u_i$ is the $i$th element of $\mathbb{U}$.

The matrices in this work are denoted by bold letters. The subscript of a matrix indicates its size; for example, $\mathbf{A}_{N \times M}$ represents an $N \times M$ matrix $\mathbf{A}$. Specifically, square matrices are written as $\mathbf{A}_N$, whose size is $N \times N$, and $\mathbf{A}_N^{-1}$ is the inverse of $\mathbf{A}_N$. Furthermore, the Kronecker product of two matrices $\mathbf{A}$ and $\mathbf{B}$ is written as $\mathbf{A} \otimes \mathbf{B}$, and the $n$th Kronecker power of $\mathbf{A}$ is $\mathbf{A}^{\otimes n}$.

During the encoding and decoding procedure, we denote an intermediate node as $v(i,j)$, $0 \le i \le n$, $0 \le j \le N-1$, where $N = 2^n$ is the code length. Besides, we denote the probability that the intermediate node $v(i,j)$ equals 0 or 1 as $p_{v(i,j)}(0)$ or $p_{v(i,j)}(1)$, respectively.

Throughout this paper, "$\oplus$" denotes the modulo-two sum, and "$\bigoplus_{i=0}^{M} x_i$" means "$x_0 \oplus x_1 \oplus \cdots \oplus x_M$".

2.2. Polar Encoding and Decoding

A polar coding scheme can be uniquely defined by three parameters: block length $N = 2^n$, code rate $R = K/N$, and an information set $\mathcal{I} \subseteq \{0, 1, \ldots, N-1\}$ with $K = |\mathcal{I}|$. With these three parameters, a source binary vector $u_0^{N-1}$ consisting of $K$ information bits and $N-K$ frozen bits can be mapped to a codeword $x_0^{N-1}$ by a linear transformation matrix $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$, where $\mathbf{F}_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, $\mathbf{B}_N$ is a bit-reversal permutation matrix, and $x_0^{N-1} = u_0^{N-1} \mathbf{G}_N$.
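As a concrete illustration, the mapping $x_0^{N-1} = u_0^{N-1}\mathbf{G}_N$ can be sketched as follows. This is a minimal sketch in NumPy; the function names are ours, and the bit-reversal construction of $\mathbf{B}_N$ follows the standard definition.

```python
import numpy as np

def polar_generator_matrix(n):
    """G_N = B_N * F^{(kron) n} over GF(2), with N = 2^n and F = [[1,0],[1,1]]."""
    N = 2 ** n
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    Fn = np.array([[1]], dtype=np.uint8)
    for _ in range(n):                      # n-th Kronecker power of F
        Fn = np.kron(Fn, F)
    # B_N: row i of G_N is row bit-reverse(i) of F^{(kron) n}
    bitrev = [int(format(i, "0{}b".format(n))[::-1], 2) for i in range(N)]
    return Fn[bitrev, :]

def polar_encode(u, G):
    """Codeword x = u * G_N over GF(2)."""
    return (np.asarray(u, dtype=np.uint8) @ G) % 2
```

For $n = 3$ this reproduces the matrix $\mathbf{G}_8$ of (3), and one can check numerically that $\mathbf{G}_8 \mathbf{G}_8 = \mathbf{I}_8$ over GF(2), in line with Theorem 1 below.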

In practice, the polar encoding can be completed with the construction shown in Figure 1, where the gray circle nodes are the intermediate nodes. The nodes in the leftmost column are the input nodes of the encoder, whose values equal the binary source vector, that is, $v(0,i) = u_i$, while the nodes in the rightmost column are the output nodes of the encoder, $v(n,i) = x_i$. Based on this construction, a codeword $x_0^7$ is generated by the recursive linear transformation of the nodes between adjacent columns.

Construction of the polar encoding with length N = 8 .

After the polar encoding procedure, all the bits in the codeword $x_0^{N-1}$ are passed through $N$ independent copies of the channel $W$, with transition probability $W(y_i \mid x_i)$, where $y_i$ is the $i$th element of the received vector $y_0^{N-1}$.

At the receiver, the decoder can output the estimated codeword $\hat{x}_0^{N-1}$ and the estimated source binary vector $\hat{u}_0^{N-1}$ with different decoding algorithms. It is noticed that the construction of all these decoders is the same as that of the encoder; here, we give a rigorous proof of this fact in the following theorem.

Theorem 1.

For the generator matrix $\mathbf{G}_N$ of a polar code, there exists
(1) $\mathbf{G}_N^{-1} = \mathbf{G}_N$.

That is to say, for the decoding of polar codes, one has
(2) $\hat{u}_0^{N-1} = \hat{x}_0^{N-1} \mathbf{G}_N^{-1} = \hat{x}_0^{N-1} \mathbf{G}_N$,

where $\mathbf{G}_N^{-1}$ is the construction matrix of the decoder.

Proof.

The proof of Theorem 1 is based on matrix transformations and is given in detail in Appendix A.

Hence, for the polar encoder shown in Figure 1, there is
(3) $\mathbf{G}_8 = \mathbf{G}_8^{-1} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}.$

Furthermore, we have the construction of the decoder as shown in Figure 2(a), where the nodes in the rightmost column are the input nodes of the decoder, and the output nodes are the nodes in the leftmost column. During the decoding procedure, the probability messages of the received vector are recursively propagated from the rightmost column of nodes to the leftmost column. Then, the estimated source binary vector $\hat{u}_0^7$ can be decided by
(4) $\hat{u}_i = \begin{cases} 0, & p_{v(0,i)}(0) > p_{v(0,i)}(1) \\ 1, & \text{otherwise}. \end{cases}$

(a) Construction of polar decoding with code length N = 8 . (b) Basic process units of the polar decoder.

In fact, the input probability messages of the decoder depend on the transition probability $W(y_i \mid x_i)$ and the received vector $y_0^7$; hence, there is
(5) $p_{v(n,i)}(0) = W(y_i \mid x_i = 0)$, $p_{v(n,i)}(1) = W(y_i \mid x_i = 1)$.

For convenience of expression, we will write the input probability messages $W(y_i \mid x_i = 0)$ and $W(y_i \mid x_i = 1)$ as $q_i(0)$ and $q_i(1)$, respectively, in the rest of this paper. Therefore, we further have
(6) $p_{v(n,i)}(0) = q_i(0)$, $p_{v(n,i)}(1) = q_i(1)$.

2.3. Frozen and Information Nodes

In practice, due to the input of frozen bits, the values of some nodes in the decoder are determined and independent of the decoding algorithm, as shown by the red circle nodes in Figure 2(a) (the same code construction method is used). Based on this observation, we classify the nodes in the decoder into two categories: nodes with determined values are called frozen nodes, and the other nodes are called information nodes, shown as the gray circle nodes in Figure 2(a). In addition, with the basic processing units of the polar decoder shown in Figure 2(b), we have the following lemma.

Lemma 2.

For the decoder of a polar code with rate $R < 1$, there must exist some frozen nodes, whose number depends on the information set $\mathcal{I}$.

Proof.

The proof of Lemma 2 follows directly from the processing units of the polar decoder shown in Figure 2(b), where $v(i,j_1)$, $v(i,j_2)$, $v(i+1,j_3)$, and $v(i+1,j_4)$ are four nodes of the decoder.

Lemma 2 shows that, for a polar code with rate $R < 1$, frozen nodes always exist; for example, the frozen nodes in Figure 2(a) are $v(0,0)$, $v(1,0)$, $v(0,1)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$. For convenience, we denote the frozen node set of a polar code as $V_F$, and we assume that the default value of each frozen node is 0 in the subsequent sections.

2.4. Decoding Tree Representation

It can be seen from the construction of the decoder in Figure 2(a) that the decoding of a node $v(i,j)$ can be equivalently represented as a binary decoding tree with some input nodes, where $v(i,j)$ is the root node of the tree and the input nodes are the leaf nodes. The height of a decoding tree is at most $\log_2 N$, and each intermediate node has one or two child nodes. Figure 3 illustrates the decoding trees for the frozen nodes $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ of Figure 2(a).

The decoding trees for the nodes v ( 0,0 ) , v ( 0,1 ) , v ( 1,0 ) , v ( 1,1 ) , v ( 0,2 ) , and v ( 0,4 ) .

During the decoding procedure, the probability messages of $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ strictly depend on the probability messages of the leaf nodes, shown as the bottom nodes in Figure 3. In addition, based on (2), we further have
(7)
$v(0,0) = \bigoplus_{i=0}^{7} v(3,i)$,
$v(1,0) = v(3,0) \oplus v(3,1) \oplus v(3,2) \oplus v(3,3)$,
$v(0,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7)$,
$v(1,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7)$,
$v(0,2) = v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7)$,
$v(0,4) = v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7)$.

To generalize the decoding tree representation for the decoding, we introduce the following Lemma.

Lemma 3.

In the decoder of a polar code with length $N = 2^n$, there is a unique decoding tree for each node $v(i,j)$, whose leaf node set is denoted $V_{v(i,j)}^L$. If $j \neq N-1$, the number of leaf nodes is even; that is,
(8) $v(i,j) = \bigoplus_{k=0}^{M-1} v(n, j_k)$, $0 \le j_k \le N-1$, $v(n,j_k) \in V_{v(i,j)}^L$,
where $M = |V_{v(i,j)}^L|$ and $M \bmod 2 = 0$. While if $j = N-1$, $M$ is equal to 1, and it holds that
(9) $v(i, N-1) = v(n, N-1)$.
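The column-parity property behind Lemma 3 is easy to check numerically. The following sketch (our own code, not from the paper) counts the column weights of $\mathbf{F}_2^{\otimes n}$; the row permutation $\mathbf{B}_N$ leaves column weights unchanged:

```python
import numpy as np

def kron_power(F, n):
    """n-th Kronecker power of a small GF(2) matrix."""
    M = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        M = np.kron(M, F)
    return M

F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
Fn = kron_power(F, 3)                 # F^{(kron) 3}, N = 8
weights = Fn.sum(axis=0).tolist()     # number of ones per column
# Every column except the last has even weight, so every decoding tree
# rooted at v(i, j), j != N-1, has an even number of leaf nodes.
```

For $n = 3$ the column weights are $(8, 4, 4, 2, 4, 2, 2, 1)$: all even except the last, matching (8) and (9).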

Proof.

The proof of Lemma 3 is based on (2) and the construction of the generator matrix. It is easily shown that, except for the last column (which has only one "1" element), every column of $\mathbf{F}_2^{\otimes n}$ contains an even number of "1" elements. As $\mathbf{B}_N$ is a bit-reversal permutation matrix, generated by a permutation of the rows of the identity matrix $\mathbf{I}_N$, the generator matrix $\mathbf{G}_N$ has the same column property as $\mathbf{F}_2^{\otimes n}$ (see the proof of Theorem 1). Therefore, (8) and (9) follow directly from (2).

Lemma 3 has clearly shown the relationship between the input nodes and other intermediate nodes of the decoder, which is useful for error checking and probability messages calculation introduced in the subsequent sections.

3. Error Checking for Decoding

As analyzed in Section 1, the key problem to improve the performance of polar codes is to enhance the error-correcting capacity of the decoding. In this section, we will show how to achieve the goal.

3.1. Error Checking by the Frozen Nodes

It is noticed from Section 2.3 that the values of the frozen nodes are determined. Hence, if the decoding is correct, the probability messages of any frozen node $v(i,j)$ must satisfy the condition $p_{v(i,j)}(0) > p_{v(i,j)}(1)$ (recall that the default value of frozen nodes is 0), which we call the reliability condition throughout this paper. In practice, however, due to channel noise, some frozen nodes may not satisfy the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. It is exactly this observation that allows us to check for errors during decoding. To describe this in detail, we introduce a theorem showing the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.

Theorem 4.

For any frozen node $v(i,j)$ with leaf node set $V_{v(i,j)}^L$, if the probability messages of $v(i,j)$ do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in $V_{v(i,j)}^L$; otherwise, the number of errors is even (including 0).

Proof.

For the proof of Theorem 4, see Appendix B.

Theorem 4 provides an effective method to detect errors in the leaf node set of a frozen node. For example, if the probability messages of the frozen node $v(0,0)$ in Figure 2 do not satisfy the reliability condition, that is, $p_{v(0,0)}(0) \le p_{v(0,0)}(1)$, it can be confirmed that there must exist errors in the set $\{v(3,0), v(3,1), \ldots, v(3,7)\}$, and the number of these errors may be 1, 3, 5, or 7. That is to say, by checking the reliability condition of the frozen nodes, we can confirm the existence of errors in the input nodes of the decoder, which is further stated as a corollary.

Corollary 5.

For a polar code with frozen node set $V_F$, if there exists $v(i,j) \in V_F$ that does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.

Proof.

The proof of Corollary 5 is easily completed based on Theorem 4.

Corollary 5 has clearly shown that, through checking the probability messages of each frozen node, errors in the input nodes of decoder can be detected.

3.2. Error-Checking Equations

As aforementioned, errors in the input nodes can be detected from the probability messages of the frozen nodes, but one problem remains: how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node $v(n,i)$, the error indicator is denoted $c_i$, whose value is given by
(10) $c_i = \begin{cases} 1, & v(n,i) \text{ is in error} \\ 0, & \text{otherwise}. \end{cases}$

That is to say, the error indicator tells us whether an input node is in error or not. Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.

Corollary 6.

For any frozen node $v(i,j)$ with leaf node set $V_{v(i,j)}^L$, there is
(11) $\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = 1$ if $p_{v(i,j)}(0) \le p_{v(i,j)}(1)$, and $\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = 0$ otherwise,
where $M = |V_{v(i,j)}^L|$, $v(n,i_k) \in V_{v(i,j)}^L$, and $N = 2^n$ is the code length. Furthermore, over the field $GF(2)$, (11) can be written as
(12) $\bigoplus_{k=0}^{M-1} c_{i_k} = 1$ if $p_{v(i,j)}(0) \le p_{v(i,j)}(1)$, and $\bigoplus_{k=0}^{M-1} c_{i_k} = 0$ otherwise.

Proof.

The proof of Corollary 6 follows from Lemma 3 and Theorem 4; we omit the detailed derivation.

Corollary 6 shows that the problem of obtaining the error indicators can be transformed into finding the solutions of (12) under the condition of (11). To make this more concrete, we give an example based on the decoder in Figure 2(a).

Example 7.

We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, we obtain the equations
(13)
$\bigoplus_{i=0}^{7} c_i = 1$,
$c_0 \oplus c_1 \oplus c_2 \oplus c_3 = 1$,
$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0$,
$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0$,
$c_2 \oplus c_3 \oplus c_6 \oplus c_7 = 1$,
$c_1 \oplus c_3 \oplus c_5 \oplus c_7 = 1$.

Furthermore, (13) can be written in matrix form as
(14) $\mathbf{E}_{6 \times 8} (c_0^7)^T =
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix}
= (\gamma_0^5)^T,$
where $\gamma_0^5 = (1, 1, 0, 0, 1, 1)$ and $\mathbf{E}_{6 \times 8}$ is the coefficient matrix of size $6 \times 8$. Therefore, by solving (14), we obtain the error indicator vector of the input nodes in Figure 2. To generalize the above example, we provide a lemma.
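For small block lengths, (14) can be solved by direct enumeration. The sketch below is our own code, with the rows ordered as the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,1)$, $v(1,1)$, $v(0,2)$, $v(0,4)$:

```python
import numpy as np
from itertools import product

E = np.array([[1, 1, 1, 1, 1, 1, 1, 1],   # v(0,0)
              [1, 1, 1, 1, 0, 0, 0, 0],   # v(1,0)
              [0, 0, 0, 0, 1, 1, 1, 1],   # v(0,1)
              [0, 0, 0, 0, 1, 1, 1, 1],   # v(1,1)
              [0, 0, 1, 1, 0, 0, 1, 1],   # v(0,2)
              [0, 1, 0, 1, 0, 1, 0, 1]],  # v(0,4)
             dtype=np.uint8)
gamma = np.array([1, 1, 0, 0, 1, 1], dtype=np.uint8)

# All c with E c^T = gamma^T over GF(2); 2^8 candidates is trivial for N = 8.
solutions = [c for c in product([0, 1], repeat=8)
             if np.array_equal((E @ np.array(c, dtype=np.uint8)) % 2, gamma)]
```

Since the rank of $\mathbf{E}_{6 \times 8}$ here is 4, the enumeration finds $2^{8-4} = 16$ candidates, among them the vector $(0, 0, 0, 1, 0, 0, 0, 0)$; the checks discussed in Section 3.3 then narrow these down.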

Lemma 8.

For a polar code with code length $N$, code rate $R = K/N$, and frozen node set $V_F$, the error-checking equations are
(15) $\mathbf{E}_{M \times N} (c_0^{N-1})^T = (\gamma_0^{M-1})^T$,
where $c_0^{N-1}$ is the error indicator vector and $M = |V_F|$, $M \ge N-K$. $\mathbf{E}_{M \times N}$ is called the error-checking matrix, whose elements are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, whose elements depend on the probability messages of the frozen nodes in $V_F$; that is, for each $v_i \in V_F$, $0 \le i \le M-1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that
(16) $\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1) \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases}$

Proof.

The proof of Lemma 8 follows from (10)–(14), Lemma 3, and Theorem 4, and is omitted here.

3.3. Solutions of Error-Checking Equations

Lemma 8 provides a general method to determine the positions of the errors in the input nodes via the error-checking equations. It remains to investigate the existence of solutions of these equations.

Theorem 9.

For a polar code with code length $N$ and code rate $R = K/N$, there is
(17) $\operatorname{rank}(\mathbf{E}_{M \times N}) = \operatorname{rank}([\mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T]) = N - K$,
where $[\mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.

Proof.

For the proof of Theorem 9, see Appendix C.

It is noticed from Theorem 9 that, since the rank $N-K$ is smaller than the number of unknowns $N$, the error-checking equations must have multiple solutions; therefore, we further investigate the general expression of the solutions, as shown in the following corollary.

Corollary 10.

For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $\mathbf{P}$ over the field $GF(2)$ such that
(18) $[\mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T] \rightarrow \begin{bmatrix} \mathbf{I}_H & \mathbf{A}_{H \times K} & (\bar{\gamma}_0^{H-1})^T \\ \mathbf{0}_{(M-H) \times H} & \mathbf{0}_{(M-H) \times K} & \mathbf{0}_{(M-H) \times 1} \end{bmatrix},$
where $H = N - K$, $\mathbf{A}_{H \times K}$ is the submatrix of the transformed $\mathbf{E}_{M \times N}$, and $\bar{\gamma}_0^{H-1}$ is the subvector of the transformed $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by
(19) $(\hat{c}_0^{N-1})^T = \begin{bmatrix} (\hat{c}_K^{N-1})^T \\ (\hat{c}_0^{K-1})^T \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{H \times K} (\hat{c}_0^{K-1})^T \oplus (\bar{\gamma}_0^{H-1})^T \\ (\hat{c}_0^{K-1})^T \end{bmatrix},$
(20) $(c_0^{N-1})^T = \hat{\mathbf{B}}_N (\hat{c}_0^{N-1})^T,$
where $\hat{c}_i \in \{0, 1\}$ and $\hat{\mathbf{B}}_N$ is an element-permutation matrix, determined by the matrix transformation of (18).
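The transformation of (18) is ordinary Gaussian elimination over GF(2). A minimal sketch (our own helper, not the paper's $\mathbf{P}$):

```python
import numpy as np

def gf2_row_reduce(aug):
    """Row-reduce an augmented GF(2) matrix [E | gamma^T].
    Returns the reduced matrix and the pivot columns; the number of
    pivots is rank(E), i.e. N - K by Theorem 9."""
    A = aug.copy() % 2
    pivots, r = [], 0
    for col in range(A.shape[1] - 1):          # last column holds gamma
        hit = [i for i in range(r, A.shape[0]) if A[i, col]]
        if not hit:
            continue
        A[[r, hit[0]]] = A[[hit[0], r]]        # move a pivot row up
        for i in range(A.shape[0]):
            if i != r and A[i, col]:
                A[i] ^= A[r]                   # eliminate over GF(2)
        pivots.append(col)
        r += 1
    return A, pivots
```

Applied to the augmented matrix of (14), this yields 4 pivots, so the free part $\hat{c}_0^{K-1}$ has $K = 4$ bits and there are at most $2^4$ candidate solutions before the checks of (11) and (21).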

Proof.

The proof of Corollary 10 follows from Theorem 9 and the theory of linear equations, and is omitted here.

It is noticed from (18) and (19) that the solutions of the error-checking equations depend tightly on two vectors: $\bar{\gamma}_0^{H-1}$ and $\hat{c}_0^{K-1}$, where $\bar{\gamma}_0^{H-1}$ is determined by the transformation matrix $\mathbf{P}$ and the error-checking vector $\gamma_0^{M-1}$, and $\hat{c}_0^{K-1}$ is a free vector. In general, owing to the free choice of $\hat{c}_0^{K-1}$, the number of solutions of the error-checking equations may be as large as $2^K$, which is prohibitive for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we introduce another theorem.

Theorem 11.

For a polar code with code length $N = 2^n$ and frozen node set $V_F$, there exists a positive real number $\delta$ such that, for any $v(i,j) \in V_F$, if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, then
(21) $c_{j_k} = 0$ for all $v(n,j_k) \in V_{v(i,j)}^L$,
where $V_{v(i,j)}^L$ is the leaf node set of $v(i,j)$, $0 \le j_k \le N-1$, $0 \le k \le |V_{v(i,j)}^L|-1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.

Proof.

For the proof of Theorem 11, see Appendix D.

Theorem 11 shows that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements of $\hat{c}_0^{K-1}$, which further reduces the degrees of freedom of $\hat{c}_0^{K-1}$. Correspondingly, the number of solutions of the error-checking equations is also reduced.

Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\bar{\gamma}_0^3 = (1, 1, 1, 0)$,
(22) $\mathbf{A}_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix},$
(23) $\hat{\mathbf{B}}_8 = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}.$

By the element permutation of $\hat{\mathbf{B}}_8$, we further have $\hat{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\hat{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $p_{v(0,1)}(0)/p_{v(0,1)}(1) \ge \delta$, then by the checking of (21) we get $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions is reduced to 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It can be clearly seen from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions depends on the probability messages of the frozen nodes and on $\delta$.

As a summary of this section, we give the complete framework of error checking by solving the error-checking equations, which is shown in Algorithm 1.

Algorithm 1: Error checking for decoding.

Input:
  The frozen node set $V_F$;
  The probability messages of the nodes in $V_F$;
  The matrices $\mathbf{P}$, $\mathbf{A}_{H \times K}$, and $\hat{\mathbf{B}}_N$;
Output:
  The set of error indicator vectors, $C$;
(1) Get $\gamma_0^{M-1}$ from the probability messages of $V_F$;
(2) Get $\bar{\gamma}_0^{H-1}$ from $\gamma_0^{M-1}$ and $\mathbf{P}$;
(3) for each $v(i,j) \in V_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$ then
(5)     Set the error indicator of each leaf node of $v(i,j)$ to 0;
(6)   end if
(7) end for
(8) for each valid value of $\hat{c}_0^{K-1}$ do
(9)   Get $\hat{c}_K^{N-1}$ from $\mathbf{A}_{H \times K}$ and $\bar{\gamma}_0^{H-1}$;
(10)  if (11) is satisfied then
(11)    Add $c_0^{N-1}$, obtained with $\hat{\mathbf{B}}_N$, to $C$;
(12)  else
(13)    Drop the solution and continue;
(14)  end if
(15) end for
(16) return $C$;
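A compact executable sketch of Algorithm 1 for short codes is given below. This is our own code, and it replaces the transformation-matrix solution of (18)–(20) by brute-force enumeration, which gives the same candidate set for $N = 8$:

```python
import numpy as np
from itertools import product

def error_check(E, frozen_probs, leaf_sets, delta):
    """E: error-checking matrix (one row per frozen node);
    frozen_probs: (p0, p1) per frozen node, in row order of E;
    leaf_sets: leaf-node indices of each frozen node (Theorem 11 check);
    returns the candidate error indicator vectors C."""
    # Step 1: the error-checking vector gamma of (16).
    gamma = np.array([1 if p0 <= p1 else 0 for p0, p1 in frozen_probs],
                     dtype=np.uint8)
    # Steps 3-7: highly reliable frozen nodes pin their leaves to 0.
    forced_zero = set()
    for (p0, p1), leaves in zip(frozen_probs, leaf_sets):
        if p0 >= delta * p1:
            forced_zero.update(leaves)
    # Steps 8-15: keep every consistent indicator vector.
    N = E.shape[1]
    C = []
    for c in product([0, 1], repeat=N):
        if any(c[i] for i in forced_zero):
            continue
        if np.array_equal((E @ np.array(c, dtype=np.uint8)) % 2, gamma):
            C.append(c)
    return C
```

With the matrix of (14), probability pairs that make $v(0,1)$ highly reliable, and an illustrative $\delta = 20$, this returns the two candidates of the worked example in Section 3.3.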

4. Proposed Decoding Algorithm

In this section, we will introduce the proposed decoding algorithm in detail.

4.1. Probability Messages Calculating

The calculation of probability messages is an important aspect of a decoding algorithm. Our proposed algorithm differs from the SC and BP algorithms in that the probability messages are calculated based on the decoding tree representation of the nodes in the decoder. For an intermediate node $v(i,j)$ with only one child node $v(i+1,j_o)$, $0 \le j_o \le N-1$, there is
(24) $p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0)$, $p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1)$.
While if $v(i,j)$ has two child nodes $v(i+1,j_l)$ and $v(i+1,j_r)$, $0 \le j_l, j_r \le N-1$, we have
(25) $p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(1)$,
$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(0)$.

Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
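Equation (25) is the modulo-two (check-node) convolution of the two children's probability pairs, and nodes in the same column are independent of one another, which is what enables the parallel evaluation. A small sketch (our own naming):

```python
import numpy as np

def combine(pl, pr):
    """Parent messages of (25): pl, pr are the children's [p(0), p(1)]."""
    p0 = pl[0] * pr[0] + pl[1] * pr[1]    # children agree  -> parent is 0
    p1 = pl[0] * pr[1] + pl[1] * pr[0]    # children differ -> parent is 1
    return np.array([p0, p1])

def column_messages(children):
    """One level of the decoding tree: each parent depends only on its own
    pair of children, so the whole column can be evaluated in parallel."""
    return [combine(pl, pr) for pl, pr in children]
```

For example, two identical children $[0.9, 0.1]$ combine to $[0.82, 0.18]$, since $0.9 \cdot 0.9 + 0.1 \cdot 0.1 = 0.82$.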

4.2. Error Correcting

Algorithm 1 in Section 3.3 provides an effective method to detect errors in the input nodes of the decoder; now we consider how to correct these errors. To achieve this goal, we propose a method that modifies the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node are given by
(26) $q_i'(0) = \lambda_0$ if $q_i(0) > q_i(1)$, and $q_i'(0) = 1 - \lambda_0$ otherwise, with $q_i'(1) = 1 - q_i'(0)$,
where $q_i'(0)$ and $q_i'(1)$ are the new probability messages of the error node $v(n,i)$, and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we get the new probability vectors of the input nodes as
(27) $q_0'^{N-1}(0) = (q_0'(0), q_1'(0), \ldots, q_{N-1}'(0))$, $q_0'^{N-1}(1) = (q_0'(1), q_1'(1), \ldots, q_{N-1}'(1))$,
where $q_i'(0)$ and $q_i'(1)$ take the modified values of (26) if the input node $v(n,i)$ is in error; otherwise, $q_i'(0) = q_i(0)$ and $q_i'(1) = q_i(1)$. Then, the probability messages of all the nodes in the decoder are recalculated.
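The modification of (26)–(27) simply flips each checked error node's decision by assigning its old hard decision the small constant $\lambda_0$. A sketch with our own function names:

```python
def correct_messages(q, c, lam=1e-3):
    """q: list of (q_i(0), q_i(1)) input messages; c: error indicator vector;
    lam: the small constant lambda_0 of (26). Returns the new messages."""
    out = []
    for (q0, q1), ci in zip(q, c):
        if ci == 1:                            # checked error node: flip it
            q0 = lam if q0 > q1 else 1.0 - lam
            q1 = 1.0 - q0
        out.append((q0, q1))                   # other nodes are unchanged
    return out
```

For instance, an error node with messages $(0.8, 0.2)$ and $\lambda_0 = 0.001$ becomes $(0.001, 0.999)$, so its hard decision changes from 0 to 1.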

In fact, when only one error indicator vector is output from Algorithm 1, that is, $|C| = 1$, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes after the error correction and the recalculation of the probability messages. While if $|C| > 1$, in order to minimize the decoding error probability, we must further determine how to choose the optimal error indicator vector.

4.3. Reliability Degree

To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by
(28) $\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1) \\ \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise}. \end{cases}$

The reliability degree indicates the reliability of the node's decision value: the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node v(0,0) in Figure 2 are p_{v(0,0)}(0) = 0.95 and p_{v(0,0)}(1) = 0.05, then ζ_{v(0,0)} = 0.95/0.05 = 19; that is, the reliability degree of the decision v(0,0) = 0 is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, as introduced in the following subsection.
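Definition (28) can be transcribed directly; the helper name is ours:

```python
# Direct transcription of (28): ratio of the larger probability message
# to the smaller one.

def reliability_degree(p0, p1):
    """Larger values mean a more trustworthy hard decision."""
    return p0 / p1 if p0 > p1 else p1 / p0
```

For the example above, `reliability_degree(0.95, 0.05)` evaluates to 19 (up to floating-point rounding).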

4.4. Optimal Error Indicator Vector

As aforementioned, when |C| > 1, one node in the decoder may have multiple reliability degrees, one for each candidate. We denote the kth reliability degree of the node v(i,j) as ζ_k^{v(i,j)}, whose value depends on the kth element of C, that is, c_k. Based on the definition of the reliability degree, we introduce three methods to obtain the optimal error indicator vector.

The first method is based on the fact that, when there is no noise in the channel, the reliability degree of each node approaches infinity; that is, ζ_{v(i,j)} → ∞. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as (29) k̂ = argmax_{c_k ∈ C} ∏_{i=0}^{log_2 N} ∏_{j=0}^{N−1} ζ_k^{v(i,j)}, where c_{k̂} is the optimal error indicator vector.

To reduce the complexity, we further introduce two simplified versions of the above method. On the one hand, we may maximize the reliability degrees of only the frozen nodes; the target function then becomes (30) k̂ = argmax_{c_k ∈ C} ∏_{v(i,j) ∈ V_F} ζ_k^{v(i,j)}.

On the other hand, we may take the maximization of the output nodes' reliability degrees as the optimization target, whose function is given by (31) k̂ = argmax_{c_k ∈ C} ∏_{j=0}^{N−1} ζ_k^{v(0,j)}.
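A hedged sketch of the three target functions follows. Here `zetas[k]` is assumed to package the reliability degrees recalculated under candidate c_k (a matrix for all nodes plus flat lists for the frozen and output nodes); the data layout, the names, and the product aggregation over reliability degrees are illustrative assumptions, not taken verbatim from the paper:

```python
import math

def target_all(z):     # (29): aggregate over every node in the decoder
    return math.prod(v for row in z["all"] for v in row)

def target_frozen(z):  # (30): aggregate over the frozen nodes only
    return math.prod(z["frozen"])

def target_output(z):  # (31): aggregate over the output nodes only
    return math.prod(z["output"])

def optimal_indicator(candidates, zetas, target):
    """Index of the candidate maximizing the chosen target function."""
    return max(range(len(candidates)), key=lambda k: target(zetas[k]))
```

Any of the three scoring functions can be passed to `optimal_indicator`, mirroring the choice among (29), (30), and (31).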

Hence, the problem of obtaining the optimal error indicator vector can be formulated as an optimization problem with any of the above three target functions. Moreover, with the aid of the CRC, the accuracy of the chosen error indicator vector can be enhanced. Based on these observations, finding the optimal error indicator vector is divided into the following steps.

Initialization: we first obtain L candidates for the optimal error indicator vector, c_{k̂_0}, c_{k̂_1}, …, c_{k̂_{L−1}}, by one of the formulas (29), (30), and (31).

CRC checking: in order to identify the optimal error indicator vector correctly, we further exclude some candidates from c_{k̂_0}, c_{k̂_1}, …, c_{k̂_{L−1}} by CRC checking. If only one valid candidate remains after the CRC checking, it is output directly as the optimal error indicator vector; otherwise, the remaining candidates are processed further in the determination step.

Determination: if multiple candidates pass the CRC checking, we choose the optimal error indicator vector among the remaining candidates of the CRC-checking step with the formulas (29), (30), or (31).
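The three steps above can be sketched as follows; `score` stands for any of the target functions (29)-(31) and `crc_ok` for the CRC check on a candidate's decoded word, both placeholders rather than APIs from the paper:

```python
def select_indicator(candidates, score, crc_ok, L=4):
    # Initialization: keep the L highest-scoring candidates.
    shortlist = sorted(candidates, key=score, reverse=True)[:L]
    # CRC checking: discard candidates whose decoded word fails the CRC.
    valid = [c for c in shortlist if crc_ok(c)]
    if len(valid) == 1:
        return valid[0]
    # Determination: if several candidates pass (or none do), fall back
    # to the target-function ordering among what remains.
    return max(valid or shortlist, key=score)
```

The fallback also covers the corner case where no candidate passes the CRC; the text does not spell that case out, so treating it as a plain score-based choice is our assumption.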

So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summary of these results, we now provide the whole decoding procedure in pseudocode form, as shown in Algorithm 2.

<bold>Algorithm 2: </bold>Decoding algorithm based on error checking and correcting.

Input:

The received vector y_0^{N−1};

Output:

The decoded codeword û_0^{N−1};

(1) Getting the probability messages q_0^{N−1}(0) and q_0^{N−1}(1) from the received vector y_0^{N−1};

(2) Getting the probability messages of each frozen node in V_F;

(3) Getting the error indicator vector set C with Algorithm 1;

(4) for each c_k ∈ C do

(5)  Correcting the errors indicated by c_k with (26);

(6)  Recalculating the probability messages of all the nodes of the decoder;

(7) end for

(8) Getting the optimal error indicator vector for the decoding;

(9) Getting the decoded codeword û_0^{N−1} by hard decision;

(10) return û_0^{N−1};

4.5. Complexity Analysis

In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 Space complexity Time complexity
( 1 ) O ( 1 ) O ( N )
( 2 ) O ( N log 2 N ) O ( N log 2 N )
( 3 ) O ( X 0 ) O ( X 1 )
( 4 )–( 7 ) O ( T 0 N log 2 N ) O ( T 0 N log 2 N )
( 8 ) O ( 1 ) O ( T 0 N log 2 N ) or O ( T 0 N )
( 9 ) O ( 1 ) O ( N )

In Table 1, O(X_0) and O(X_1) are the space and time complexity of Algorithm 1, respectively, and T_0 is the number of error indicator vectors output by Algorithm 1; that is, T_0 = |C|. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 Space complexity Time complexity
( 1 ) O ( 1 ) O ( M )
( 2 ) O ( 1 ) O ( M )
( 3 )–( 7 ) O ( 1 ) O ( M N )
( 8 )–( 15 ) O ( 1 ) O ( T 1 ( M - K ) K ) + O ( T 1 M )

In Table 2, M is the number of frozen nodes, and T_1 is the number of valid solutions of the error-checking equations after the checking of (21). Hence, the space and time complexity of Algorithm 1 are O(X_0) = O(1) and O(X_1) = 2O(M) + O(MN) + O(T_1(M − K)K) + O(T_1 M). Furthermore, the space and time complexity of the proposed decoding algorithm are O((T_0 + 1)N log_2 N) and O(2N) + O((2T_0 + 1)N log_2 N) + O((T_1 + N + 2)M) + O(T_1 K(N − K)). From these results, we find that the complexity of the proposed decoding algorithm mainly depends on T_0 and T_1, whose values depend on the channel condition, as illustrated in our simulation work.
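For a rough sense of these expressions, the leading-order terms can be evaluated numerically. The O(·) constants are dropped, so the figures below are order-of-magnitude operation counts for comparing parameter regimes, not exact costs:

```python
import math

def complexity_estimate(N, M, K, T0, T1):
    """Space/time term counts for code length N, M frozen nodes, K
    information bits, and T0, T1 as defined in Tables 1 and 2."""
    n = math.log2(N)
    space = (T0 + 1) * N * n
    time = (2 * N + (2 * T0 + 1) * N * n
            + (T1 + N + 2) * M + T1 * K * (N - K))
    return space, time
```

As T_0 and T_1 shrink toward 1 in the high SNR region, the N log_2 N and NM terms dominate, matching the discussion above.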

5. Simulation Results

In this section, Monte Carlo simulation is used to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is N = 2^3 = 8, the code rate R is 0.5, and the indices of the information bits are the same as in .

5.1. Performance

To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, the three optimization targets with a 1-bit CRC are used to obtain the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

Performance comparison of SC, SCL (L = 4), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function used to obtain the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). δ in Theorem 11 takes the value of 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly in the low signal-to-noise ratio (SNR, E_b/N_0) region, the proposed algorithm provides a notable SNR advantage: for example, when the bit error rate (BER) is 10^−3, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is 10^−4, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.

In addition, it is noted from Theorem 11 that the value of δ, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different δ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of δ corresponding to E_b/N_0 = 1 dB, 3 dB, 5 dB, and 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.

Performance of proposed decoding algorithm with different δ .

5.2. Complexity

To estimate the complexity of the proposed decoding algorithm, the average numbers of parameters T 0 and T 1 indicated in Section 4.5 are counted and shown in Figure 6.

Average number of parameters T 0 and T 1 with δ = 4 .

It is noticed from Figure 6 that, as the SNR increases, the average values of T_0 and T_1 decrease sharply. In particular, in the high SNR region, both T_0 and T_1 approach values less than 1. In this case, the space complexity of the algorithm will be O(N log_2 N), and the time complexity approaches O(NM). In addition, we further compare the space and time complexity of Algorithm 1 (δ = 4) with those of the SC, SCL (L = 4), and BP decoding algorithms, and the results are shown in Figure 7. It is noticed that, in the high SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and its time complexity will be close to O(NM). All of these results suggest the effectiveness of our proposed decoding algorithm.

Space and time complexity comparison of SC, SCL( L = 4 ), BP (iteration number is 60), and Algorithm 1 ( δ = 4 ).

6. Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes and, by analyzing the problem of solving these equations, introduced a method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms considered, with space and time complexity approaching O(N log_2 N) and O(NM) (M is the number of frozen nodes) in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated the error correcting for short polar codes, while for long codes, the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to longer code lengths in order to further improve the performance of polar codes.

Appendix A. Proof of Theorem <xref ref-type="statement" rid="thm1">1</xref>

We can get the inverse of F_2 through elementary transformations of the matrix; that is, F_2^{−1} = [1 0; 1 1]. Furthermore, we have (A.1) (F_2^{⊗2})^{−1} = [F_2 0_2; F_2 F_2]^{−1} = [F_2^{−1} 0_2; −F_2^{−1}F_2F_2^{−1} F_2^{−1}] = [F_2 0_2; F_2 F_2] = F_2^{⊗2}, where the blocks are listed row by row and the minus sign vanishes modulo 2.

By mathematical induction, we then have (A.2) (F_2^{⊗n})^{−1} = F_2^{⊗n}.

The inverse of G_N can be expressed as (A.3) G_N^{−1} = (B_N F_2^{⊗n})^{−1} = (F_2^{⊗n})^{−1} B_N^{−1} = F_2^{⊗n} B_N^{−1}.

Since the bit-reversal permutation is an involution, B_N^{−1} = B_N. Hence, we have (A.4) G_N^{−1} = F_2^{⊗n} B_N.

It is noticed from Proposition 16 of  that F_2^{⊗n} B_N = B_N F_2^{⊗n}; therefore, there is (A.5) G_N^{−1} = G_N.
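The self-inverse property G_N^{−1} = G_N can be checked numerically for small N. The closed-form entry rule used below for F_2^{⊗n} (entry (i,j) is 1 iff the binary digits of j are a subset of those of i) is a standard fact about Kronecker powers of [[1,0],[1,1]]; all names are ours:

```python
def polar_generator(n):
    """G_N = B_N F_2^{(kron) n} over GF(2), as a list of rows."""
    N = 1 << n
    rev = lambda i: int(format(i, f"0{n}b")[::-1], 2)  # bit reversal
    # row i of B_N F_2^{(kron) n} is row rev(i) of F_2^{(kron) n}
    return [[1 if (rev(i) & j) == j else 0 for j in range(N)]
            for i in range(N)]

def matmul_gf2(A, B):
    N = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(N)) % 2
             for j in range(N)] for i in range(N)]

for n in (2, 3, 4):
    G = polar_generator(n)
    N = 1 << n
    identity = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
    assert matmul_gf2(G, G) == identity  # G_N is its own inverse mod 2
```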

B. Proof of Theorem <xref ref-type="statement" rid="thm2">4</xref>

We assume that the number of leaf nodes of the frozen node v(i,j) is Q; that is, Q = |V_{v(i,j)}^L|. If Q = 2, based on (25), there is (B.1) p_{v(i,j)}(0) = p_{v_0}(0)p_{v_1}(0) + p_{v_0}(1)p_{v_1}(1), p_{v(i,j)}(1) = p_{v_0}(0)p_{v_1}(1) + p_{v_0}(1)p_{v_1}(0), where v_0, v_1 ∈ V_{v(i,j)}^L. Based on the above equations, we have (B.2) p_{v(i,j)}(0) − p_{v(i,j)}(1) = (p_{v_0}(0) − p_{v_0}(1))(p_{v_1}(0) − p_{v_1}(1)).

Therefore, by mathematical induction, when Q > 2, we have (B.3) p_{v(i,j)}(0) − p_{v(i,j)}(1) = ∏_{k=0}^{Q−1} (p_{v_k}(0) − p_{v_k}(1)), where v_k ∈ V_{v(i,j)}^L.

To prove Theorem 4, we assume without loss of generality that the values of all the nodes in V_{v(i,j)}^L are 0; that is, when a node v_k ∈ V_{v(i,j)}^L is correct, there is p_{v_k}(0) > p_{v_k}(1). Hence, based on the above equation, when the probability messages of v(i,j) do not satisfy the reliability condition, that is, p_{v(i,j)}(0) − p_{v(i,j)}(1) ≤ 0, there must exist a subset V_{v(i,j)}^{LO} ⊆ V_{v(i,j)}^L with |V_{v(i,j)}^{LO}| odd, such that (B.4) p_{v_k}(0) ≤ p_{v_k}(1), ∀ v_k ∈ V_{v(i,j)}^{LO}. While if p_{v(i,j)}(0) − p_{v(i,j)}(1) > 0, there must exist a subset V_{v(i,j)}^{LE} ⊆ V_{v(i,j)}^L with |V_{v(i,j)}^{LE}| even, such that (B.5) p_{v_k}(0) ≤ p_{v_k}(1), ∀ v_k ∈ V_{v(i,j)}^{LE}. So the proof of Theorem 4 is completed.
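The product identity (B.3) can be verified numerically by brute-force enumeration of the leaf patterns; the leaf probabilities below are arbitrary illustrative values:

```python
from itertools import product

# The frozen node's value is the XOR of its Q leaf nodes, so p(0) sums
# the even-parity leaf patterns and p(1) the odd ones; by (B.3), the
# difference p(0) - p(1) factors into a product over the leaves.

def frozen_node_messages(leaf_p0):
    """(p(0), p(1)) of an XOR node with P(leaf_k = 0) = leaf_p0[k]."""
    p = [0.0, 0.0]
    for bits in product((0, 1), repeat=len(leaf_p0)):
        prob = 1.0
        for b, q0 in zip(bits, leaf_p0):
            prob *= q0 if b == 0 else 1.0 - q0
        p[sum(bits) % 2] += prob
    return p

leaf_p0 = [0.9, 0.8, 0.6, 0.7]        # arbitrary leaf messages
p0, p1 = frozen_node_messages(leaf_p0)
rhs = 1.0
for q in leaf_p0:
    rhs *= q - (1.0 - q)              # p_k(0) - p_k(1) = 2q - 1
assert abs((p0 - p1) - rhs) < 1e-12   # matches (B.3)
```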

C. Proof of Theorem <xref ref-type="statement" rid="thm3">9</xref>

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of G_N^{−1}, denoted as g_i, 0 ≤ i ≤ N − 1. For example, the coefficient vector of the error-checking equation generated by v(0,0) is equal to g_1 = (1,1,1,1,1,1,1,1)^T. Hence, based on the proof of Theorem 1, we have (C.1) rank(E_{MN}) ≥ N − K, rank([E_{MN} (γ_0^{M−1})^T]) ≥ N − K.

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have (C.2) rank(E_{MN}) ≤ N − K, rank([E_{MN} (γ_0^{M−1})^T]) ≤ N − K.

Therefore, the proof of (17) is completed.

D. Proof of Theorem <xref ref-type="statement" rid="thm4">11</xref>

To prove Theorem 11, we assume without loss of generality that the real values of all the input nodes are 0. Given the transition probability of the channel and the constraint of the signal power, it can easily be proved that there exists a positive constant β_0 > 1 such that (D.1) 1/β_0 ≤ p_{v(n,k)}(0)/p_{v(n,k)}(1) ≤ β_0, ∀ v(n,k) ∈ V_I, where v(n,k) is an input node and V_I is the set of input nodes of the decoder. That is to say, for a frozen node v(i,j) with leaf node set V_{v(i,j)}^L, we have (D.2) 1/β_0 ≤ p_{v_k}(0)/p_{v_k}(1) ≤ β_0, ∀ v_k ∈ V_{v(i,j)}^L.

Based on (25) and the decoding tree of v(i,j), the probability messages of v(i,j) are (D.3) p_{v(i,j)}(0) = ∑_{m=0}^{Q/2−1} ∑_{{k_0,…,k_{2m−1}} ⊆ {0,…,Q−1}} ∏_{l=0}^{2m−1} p_{v_{k_l}}(1) ∏_{0 ≤ k_r ≤ Q−1, k_r ∉ {k_0,…,k_{2m−1}}} p_{v_{k_r}}(0), p_{v(i,j)}(1) = ∑_{m=0}^{Q/2−1} ∑_{{k_0,…,k_{2m}} ⊆ {0,…,Q−1}} ∏_{l=0}^{2m} p_{v_{k_l}}(1) ∏_{0 ≤ k_r ≤ Q−1, k_r ∉ {k_0,…,k_{2m}}} p_{v_{k_r}}(0), where v_{k_l}, v_{k_r} ∈ V_{v(i,j)}^L; that is, p_{v(i,j)}(0) collects the leaf patterns with an even number of 1s and p_{v(i,j)}(1) those with an odd number. Hence, we further have (D.4) p_{v(i,j)}(0)/p_{v(i,j)}(1) = (1 + ∑_{m=1}^{Q/2−1} ∑_{{k_0,…,k_{2m−1}} ⊆ {0,…,Q−1}} ∏_{l=0}^{2m−1} (p_{v_{k_l}}(0)/p_{v_{k_l}}(1))) / (∑_{m=0}^{Q/2−1} ∑_{{k_0,…,k_{2m}} ⊆ {0,…,Q−1}} ∏_{l=0}^{2m} (p_{v_{k_l}}(0)/p_{v_{k_l}}(1))).

With the definition of the variables φ_0 = p_{v_0}(0)/p_{v_0}(1), φ_1 = p_{v_1}(0)/p_{v_1}(1), …, φ_{Q−1} = p_{v_{Q−1}}(0)/p_{v_{Q−1}}(1), where 1/β_0 ≤ φ_0, φ_1, …, φ_{Q−1} ≤ β_0, the above equation can be written as (D.5) p_{v(i,j)}(0)/p_{v(i,j)}(1) = f(φ_0, φ_1, …, φ_{Q−1}) = (1 + φ_0φ_1 + ⋯ + φ_{Q−2}φ_{Q−1} + φ_0φ_1φ_2φ_3 + ⋯ + φ_{Q−4}φ_{Q−3}φ_{Q−2}φ_{Q−1} + ⋯) / (φ_0 + ⋯ + φ_{Q−1} + φ_0φ_1φ_2 + ⋯ + φ_{Q−3}φ_{Q−2}φ_{Q−1} + ⋯), where the numerator sums the products of an even number of the φ_k and the denominator sums the products of an odd number.

To take the derivative of f(φ_0, φ_1, …, φ_{Q−1}), we further define the functions (D.6) h(φ_0, φ_1, …, φ_{Q−1}) = 1 + φ_0φ_1 + ⋯ + φ_{Q−2}φ_{Q−1} + φ_0φ_1φ_2φ_3 + ⋯ + φ_{Q−4}φ_{Q−3}φ_{Q−2}φ_{Q−1} + ⋯, g(φ_0, φ_1, …, φ_{Q−1}) = φ_0 + ⋯ + φ_{Q−1} + φ_0φ_1φ_2 + ⋯ + φ_{Q−3}φ_{Q−2}φ_{Q−1} + ⋯. Then, the derivative of f(φ_0, φ_1, …, φ_{Q−1}) with respect to φ_k will be (D.7) ∂f/∂φ_k = ((∂h/∂φ_k)g − (∂g/∂φ_k)h)/g² = (g|_{φ_k=0} g − h|_{φ_k=0} h)/g² = ((g|_{φ_k=0})² − (h|_{φ_k=0})²)/g², where g|_{φ_k=0} = g(φ_0, …, φ_{k−1}, 0, φ_{k+1}, …, φ_{Q−1}) and h|_{φ_k=0} = h(φ_0, …, φ_{k−1}, 0, φ_{k+1}, …, φ_{Q−1}); here we use ∂h/∂φ_k = g|_{φ_k=0}, ∂g/∂φ_k = h|_{φ_k=0}, h = h|_{φ_k=0} + φ_k g|_{φ_k=0}, and g = g|_{φ_k=0} + φ_k h|_{φ_k=0}. Solving the equations ∂f/∂φ_0 = 0, ∂f/∂φ_1 = 0, …, ∂f/∂φ_{Q−1} = 0 yields the extreme point of f(φ_0, φ_1, …, φ_{Q−1}) at φ_0 = φ_1 = ⋯ = φ_{Q−1} = 1. Based on the analysis of the monotonicity of f(φ_0, φ_1, …, φ_{Q−1}), we get the maximum value δ = f(β_0, β_0, …, β_0) (with Q arguments). What is more, when f(φ_0, φ_1, …, φ_{Q−1}) ≥ δ, there is φ_0 > 1, φ_1 > 1, …, φ_{Q−1} > 1. That is to say, when p_{v(i,j)}(0)/p_{v(i,j)}(1) ≥ δ, we will have p_{v_0}(0) > p_{v_0}(1), p_{v_1}(0) > p_{v_1}(1), …, p_{v_{Q−1}}(0) > p_{v_{Q−1}}(1); that is, there is no error in V_{v(i,j)}^L. So the proof of Theorem 11 is completed.
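For Q = 2 the bound can be checked by a small grid search; β_0 = 4 is an arbitrary illustrative choice:

```python
from itertools import product

# With each leaf likelihood ratio confined to [1/beta0, beta0], the
# frozen node's ratio f(phi0, phi1) = (1 + phi0*phi1)/(phi0 + phi1)
# should never exceed delta = f(beta0, beta0).

def node_ratio(phis):
    """p(0)/p(1) of an XOR (frozen) node from the leaf ratios."""
    num = den = 0.0
    for bits in product((0, 1), repeat=len(phis)):
        term = 1.0
        for b, phi in zip(bits, phis):
            if b == 0:
                term *= phi          # unnormalized: p_k(0)=phi_k, p_k(1)=1
        if sum(bits) % 2 == 0:
            num += term              # even parity contributes to p(0)
        else:
            den += term
    return num / den

beta0 = 4.0
delta = node_ratio([beta0, beta0])   # claimed maximum, (1 + beta0^2)/(2 beta0)
grid = [1 / beta0 + k * (beta0 - 1 / beta0) / 40 for k in range(41)]
assert all(node_ratio([a, b]) <= delta + 1e-12 for a in grid for b in grid)
```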

Conflict of Interests

The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.

Acknowledgment

The authors would like to thank all the reviewers for their comments and suggestions.

References

1. E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
2. E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493-1495, June-July 2009.
3. S. B. Korada, E. Şaşoğlu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, 2010.
4. I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1-5, St. Petersburg, Russia, August 2011.
5. I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
6. K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100-3107, 2013.
7. K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668-1671, 2012.
8. A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
9. G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725-728, 2013.
10. G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, 2014.
11. P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
12. E. Arikan, H. Kim, G. Markarian, U. Özür, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
13. S. Kahraman and M. E. Çelebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967-1971, Cambridge, Mass, USA, July 2012.
14. N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1-5, Dublin, Ireland, September 2010.
15. E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
16. N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488-1492, July 2009.
17. E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11-14, July 2010.
18. A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188-194, October 2010.
19. A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919-929, 2013.
20. E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, 2011.
21. J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583-587, January 1968.
22. R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21-28, 1962.
23. D. Divsalar and C. Jones, "Protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1-5, San Francisco, Calif, USA, December 2006.