This paper describes an embedded FFT processor in which the higher-radix butterflies maintain only one complex multiplier in their critical path. Building on the concept of a radix-r fast Fourier factorization and on parallel FFT processing, we introduce a new radix-r fast Fourier transform concept in which the radix-r butterfly computation is formulated as a combination of radix-2^{α}/4^{β} butterflies implemented in parallel. As a result, a VLSI implementation of higher-radix butterflies becomes feasible, since it maintains approximately the same complexity as the radix-2/4 butterfly: the higher-radix butterfly is obtained by block building of radix-2/4 modules. The block-building process duplicates the block circuit diagram of the radix-2/4 module and reuses it by means of a feedback network.
1. Introduction
For the past decades, a main concern of researchers has been to develop a fast Fourier transform (FFT) algorithm that minimizes the number of operations required. Since Cooley and Tukey presented their approach showing that the number of multiplications required to compute the discrete Fourier transform (DFT) of a sequence may be considerably reduced by using one of the fast Fourier transform (FFT) algorithms [1], interest has arisen both in finding applications for this powerful transform and in considering various FFT software and hardware implementations.
The DFT computational complexity increases according to the square of the transform length and thus becomes expensive for large N. Some algorithms used for efficient DFT computation, known as fast DFT computation algorithms, are based on the divide-and-conquer approach. The principle of this method is that a large problem is divided into smaller subproblems that are easier to solve. In the FFT case, dividing the work into subproblems means that the input data x[n] can be divided into subsets from which the DFT is computed, and then the DFT of the initial data is reconstructed from these intermediate results. Some of these methods are known as the Cooley-Tukey algorithm [1], split-radix algorithm [2], Winograd Fourier transform algorithm (WFTA) [3], and others, such as the common factor algorithms [4].
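The divide-and-conquer principle described above can be made concrete with a short Python sketch (an illustration of the general idea, not the paper's processor): a direct O(N²) DFT alongside a recursive radix-2 decimation-in-time FFT that splits the input into even- and odd-indexed subsets, computes their DFTs, and recombines them.

```python
import cmath

def dft(x):
    """Direct DFT: O(N^2) complex multiplications."""
    N = len(x)
    w = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[n] * w**(n * k) for n in range(N)) for k in range(N)]

def fft_radix2(x):
    """Recursive radix-2 DIT FFT: the even/odd split reduces
    the work to O(N log N)."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])   # sub-DFT of even-indexed samples
    odd = fft_radix2(x[1::2])    # sub-DFT of odd-indexed samples
    w = cmath.exp(-2j * cmath.pi / N)
    X = [0j] * N
    for k in range(N // 2):
        t = w**k * odd[k]        # twiddle and recombine
        X[k] = even[k] + t
        X[k + N // 2] = even[k] - t
    return X
```

Both functions agree on any power-of-two input, which is exactly the reconstruction of the initial DFT from the sub-DFTs described above.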
The problem with the computation of an FFT for increasing N is associated with the straightforward computational structure, the coefficient multiplier memory accesses, and the number of multiplications that must be performed. The overall number of arithmetic operations deployed in the computation of an N-point FFT decreases with increasing r; as a result, the butterfly complexity increases in terms of complex arithmetic computation, parallel inputs, connectivity, and number of phases in the butterfly’s critical path delay. The higher-radix butterfly involves a nontrivial VLSI implementation problem (i.e., an increasing butterfly critical path delay), which explains why the majority of FFT VLSI implementations are based on radix 2 or 4, due to their low butterfly complexity. The advantage of using a higher radix is that the number of multiplications and the number of stages needed to execute an FFT decrease [4–6].
The most recent attempts to reduce the complexity of the higher-radix butterfly’s critical path were based on the concept of a radix-r fast Fourier transform (FFT) [8, 9], in which the radix-r butterfly computation is formulated as composed engines with identical structures and a systematic means of accessing the corresponding multiplier coefficients. This concept enables the design of a butterfly processing element (BPE) with the lowest count of complex multipliers and adders, which utilizes r or r-1 complex multipliers in parallel to implement each of the butterfly computations. Another strategy targets hardware-oriented radix-2^{α} or 4^{β} structures, an alternative way of representing higher radices by means of less complicated, simpler butterflies that exploit the symmetry and periodicity of the roots of unity to further reduce the coefficient multiplier memory accesses [10–20].
Based on the higher-radix butterfly and the parallel FFT concepts [21, 22], we introduce the structure of multiplexed higher-radix 2^{α} or 4^{β} butterflies that reduces the resources in terms of complex multipliers and adders while maintaining the same throughput and the same speed as the butterfly structures proposed in [13–20].
This paper is organized as follows. Section 2 describes the higher-radix butterfly computation, and Section 3 details FFT parallel processing. Section 4 elaborates the proposed higher-radix butterflies, Section 5 presents the performance evaluation of the proposed method, and Section 6 is devoted to the conclusion.
2. Higher Radices’ Butterfly Computation
The basic operation of a radix-r PE is the so-called butterfly computation in which r inputs are combined to give the r outputs via the following operation:
$$X = B_r\,x_{in},\qquad x_{in} = [x(0), x(1), \ldots, x(r-1)]^{T},\qquad X = [X(0), X(1), \ldots, X(r-1)]^{T}, \tag{1}$$
where xin and X are, respectively, the butterfly’s input and output vectors. Br is the butterfly matrix (dim(Br)=r×r) which can be expressed as
$$B_r = W_N T_r, \tag{2}$$
for the decimation-in-frequency (DIF) process, and
$$B_r = T_r W_N, \tag{3}$$
for the decimation-in-time (DIT) process. In both cases the twiddle factor matrix, $W_N$, is a diagonal matrix defined by $W_N = \operatorname{diag}(1, w_N^{p}, w_N^{2p}, \ldots, w_N^{(r-1)p})$ with $p = 0, 1, \ldots, N/r^{s}-1$ and $s = 0, 1, \ldots, \log_r N - 1$, and $T_r$ is the adder tree matrix within the butterfly structure, expressed as [4]
$$T_r = \begin{bmatrix}
w_N^{0} & w_N^{0} & w_N^{0} & \cdots & w_N^{0}\\
w_N^{0} & w_N^{N/r} & w_N^{2N/r} & \cdots & w_N^{(r-1)N/r}\\
w_N^{0} & w_N^{2N/r} & w_N^{4N/r} & \cdots & w_N^{2(r-1)N/r}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
w_N^{0} & w_N^{(r-1)N/r} & w_N^{2(r-1)N/r} & \cdots & w_N^{(r-1)^{2}N/r}
\end{bmatrix}. \tag{4}$$
As seen from (2) and (3), the adder tree, Tr, is almost identical for the two algorithms, with the only difference being the order in which the twiddle factor and the adder tree multiplication are computed. A straightforward implementation of the adder tree is not effective for higher radices butterflies due to the added complex multipliers in the higher radices butterflies’ critical path that will complicate its implementation in VLSI.
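To make the butterfly definition concrete, the following Python sketch (our own illustration; the function names are not from the paper) builds $T_r$ entry by entry from the modulo rule of (5) and applies one radix-r butterfly as a matrix-vector product. When the twiddle matrix $W_N$ is the identity (the $p = 0$ case), the butterfly reduces to an r-point DFT.

```python
import cmath

def w(N, e):
    # twiddle factor w_N^e = exp(-2j*pi*e/N)
    return cmath.exp(-2j * cmath.pi * e / N)

def adder_tree(r, N):
    """Adder-tree matrix: [Tr]_{l,m} = w_N^{(l*m*N/r) mod N}, per (4)-(5)."""
    return [[w(N, (l * m * (N // r)) % N) for m in range(r)]
            for l in range(r)]

def butterfly(Tr, x):
    """One radix-r butterfly: r inputs combined into r outputs."""
    r = len(Tr)
    return [sum(Tr[l][m] * x[m] for m in range(r)) for l in range(r)]
```

For r = 4 and N = 16 the resulting outputs match a direct 4-point DFT, confirming that the adder tree is the r-point DFT matrix in disguise.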
The element in the $l$th row and $m$th column of the matrix $T_r$ is
$$[T_r]_{l,m} = w_N^{(lmN/r) \bmod N}, \tag{5}$$
where $l = 0, 1, \ldots, r-1$ and $m = 0, 1, \ldots, r-1$. The set of twiddle factor matrices $W_N^{(v,s)}$ is defined as
$$W_N^{(v,s)} = \operatorname{diag}\bigl(w_N^{(0,v,s)}, w_N^{(1,v,s)}, \ldots, w_N^{(r-1,v,s)}\bigr), \tag{6}$$
where $r$ is the FFT radix, $v = 0, 1, \ldots, V-1$ indexes the words of size $r$ ($V = N/r$), and $s = 0, 1, \ldots, S$ indexes the stages (iterations), with $S = \log_r N - 1$. Finally, the twiddle factor matrix in (2) and (3) can be expressed for the different stages of an FFT process as [7, 8]
$$[W_N]_{l,m}^{(v,s)} = \begin{cases} w_N^{(\lfloor v/r^{s} \rfloor\, l r^{s}) \bmod N} & \text{for } l = m,\\ 0 & \text{elsewhere}, \end{cases} \tag{7}$$
for the DIF process, and
$$[W_N]_{l,m}^{(v,s)} = \begin{cases} w_N^{(\lfloor v/r^{(S-s)} \rfloor\, l r^{(S-s)}) \bmod N} & \text{for } l = m,\\ 0 & \text{elsewhere}, \end{cases} \tag{8}$$
for the DIT process, where $l = 0, 1, \ldots, r-1$ is the $l$th butterfly output, $m = 0, 1, \ldots, r-1$ is the $m$th butterfly input, and $\lfloor x \rfloor$ denotes the integer part of $x$.
As a result, the $l$th transform output during each stage can be written as
$$X^{(v,s)}[l] = \sum_{m=0}^{r-1} x^{(v,s)}[m]\, w_N^{(lmN/r + \lfloor v/r^{s}\rfloor\, l r^{s}) \bmod N}, \tag{9}$$
for the modified DIF process, and
$$X^{(v,s)}[l] = \sum_{m=0}^{r-1} x^{(v,s)}[m]\, w_N^{(lmN/r + \lfloor v/r^{(S-s)}\rfloor\, m r^{(S-s)}) \bmod N}, \tag{10}$$
for the modified DIT process.
The conceptual key to the modified radix-r FFT butterfly is the formulation of the radix-r computation as composed engines with identical structures and a systematic means of accessing the corresponding multiplier coefficients [8, 9]. This enables the design of an engine with the lowest count of complex multipliers and adders, which utilizes r or r-1 complex multipliers in parallel to implement each of the butterfly computations. There is a simple mapping from the three indices m, v, and s (element, butterfly, and FFT stage) to the addresses of the multiplier coefficients, obtained by using the FFT address generator proposed in [24]. In a single-processor environment, this type of butterfly with r parallel multipliers decreases the time delay of the complete FFT by a factor of O(r). A second aspect of the modified radix-r FFT butterfly is that it is also useful in parallel multiprocessing environments. In essence, the precedence relations between the engines in the radix-r FFT are such that the execution of r engines in parallel is feasible during each FFT stage. If each engine is executed on the modified processing element (PE), each of the r parallel processors always executes the same instruction simultaneously, which is very desirable for SIMD implementation on some of the latest DSP cards.
Based on this concept, Kim and Sunwoo proposed a proper multiplexing scheme that reduces the usage of complex multiplier for the radix-8 butterfly from 11 to 5 [25].
3. Parallel FFT Processing
Over the past decades, there have been several attempts to parallelize the FFT algorithm, mostly based on parallelizing each stage (iteration) of the FFT process [26–28]. The most successful FFT parallelization was accomplished by parallelizing the loops within each stage of the FFT process [29, 30] or by focusing on memory hierarchy utilization, achieved through the combination of production and consumption of butterfly results, data reuse, and FFT parallelism [31].
The definition of the DFT is represented by
$$X(k) = \sum_{n=0}^{N-1} x(n)\, w_N^{nk}, \qquad k \in [0, N-1], \tag{11}$$
where x(n) is the input sequence, X(k) is the output sequence, N is the transform length, and wN is the Nth root of unity: wN=e-j2π/N. Both x(n) and X(k) are complex valued sequences.
Let $x(n)$ be the input sequence of size $N$ and let $p_r$ denote the degree of parallelism, which is a factor of $N$. Setting $k_1 = 0, 1, \ldots, V-1$, $q = 0, 1, \ldots, p_r-1$, $V = N/p_r$, and $k = k_1 + qV$, we can rewrite (11) as [9]
$$X\Bigl(k_1 + q\frac{N}{p_r}\Bigr) = w_N^{0}\sum_{n=0}^{N/p_r-1} x(p_r n)\, w_{N/p_r}^{n(k_1+qN/p_r)} + w_N^{(k_1+qN/p_r)}\sum_{n=0}^{N/p_r-1} x(p_r n+1)\, w_{N/p_r}^{n(k_1+qN/p_r)} + \cdots + w_N^{(p_r-1)(k_1+qN/p_r)}\sum_{n=0}^{N/p_r-1} x\bigl(p_r n+(p_r-1)\bigr)\, w_{N/p_r}^{n(k_1+qN/p_r)}. \tag{12}$$
If $X(k)$ is the $N$th-order Fourier transform $\sum_{n=0}^{N-1} x(n) w_N^{nk}$, then $X^{(0)}(k_1), X^{(1)}(k_1), \ldots, X^{(p_r-1)}(k_1)$ are the $(N/p_r)$th-order Fourier transforms given, respectively, by $\sum_{n=0}^{V-1} x(p_r n)\, w_V^{nk_1}$, $\sum_{n=0}^{V-1} x(p_r n+1)\, w_V^{nk_1}, \ldots,$ and $\sum_{n=0}^{V-1} x\bigl(p_r n+(p_r-1)\bigr)\, w_V^{nk_1}$.
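The decomposition of (12) can be checked numerically. The following sketch (our own illustration) computes the $p_r$ sub-DFTs of the decimated sequences $x(p_r n + i)$ independently, as a parallel machine would, and recombines them into the full N-point DFT.

```python
import cmath

def dft(x):
    """Direct N-point DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N)) for k in range(N)]

def parallel_dft(x, pr):
    """Recombine pr independent sub-DFTs of the decimated
    sequences x(pr*n + i) into the full N-point DFT, per (12)."""
    N = len(x)
    V = N // pr
    # pr DFTs of length V; these are independent and could run in parallel
    sub = [dft(x[i::pr]) for i in range(pr)]
    X = [0j] * N
    for q in range(pr):
        for k1 in range(V):
            k = k1 + q * V
            # weight the i-th sub-DFT by w_N^{i*k} and sum
            X[k] = sum(cmath.exp(-2j * cmath.pi * i * k / N) * sub[i][k1]
                       for i in range(pr))
    return X
```

The recombined result matches the direct DFT for any $p_r$ dividing $N$, which is the property the parallel architectures of Section 4 exploit.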
4. The Proposed Higher Radices Butterflies
Most of an FFT's computation takes place within the butterfly loops, so any algorithm that reduces the number of additions and multiplications in these loops reduces the overall computation time. The reduction is achieved either by targeting trivial multiplications, which offers a limited speedup, or by parallelizing the FFT, which has a significant impact on the execution time. In this section we restrict the elaboration of the proposed radix-2^{α}/4^{β} butterflies (the radix-2/4 families) to the DIT FFT process. Rewriting (3) as
$$X = W_N \sum_{m=0}^{r-1} x(m)\, w_N^{lmN/r} = W_N \sum_{m=0}^{r-1} x(m)\, w_r^{lm} \tag{13}$$
and applying the parallel FFT concept (introduced in Section 3) to the kernel $B_r$, with the butterfly radix written as $\alpha r$, (13) can be expressed as
$$X = W_N \sum_{m=0}^{\alpha r-1} x(m)\, w_{\alpha r}^{lm} = W_N\Biggl[\sum_{m=0}^{r-1} x(\alpha m)\, w_r^{lm} + w_{\alpha r}^{l}\sum_{m=0}^{r-1} x(\alpha m+1)\, w_r^{lm} + \cdots + w_{\alpha r}^{l(\alpha-1)}\sum_{m=0}^{r-1} x\bigl(\alpha m+(\alpha-1)\bigr)\, w_r^{lm}\Biggr] = W_N\bigl[X^{(0)} + w_{\alpha r}^{l} X^{(1)} + \cdots + w_{\alpha r}^{l(\alpha-1)} X^{(\alpha-1)}\bigr], \quad \text{for } l = 0, \ldots, \alpha r - 1. \tag{14}$$
It is to be noted that the notation $w^{x}$ in all figures of this paper represents the set of twiddle factors associated with the butterfly inputs, defined by $[w^{0}, \ldots, w^{(r-2)}] = \operatorname{diag}(w_N^{p}, w_N^{2p}, \ldots, w_N^{(r-1)p})$.
For the radix-4 butterfly (r=2 and α=2), we can express (13) as
$$X = W_N\Bigl[\sum_{m=0}^{1} x(2m)\, w_2^{lm} + w_4^{l}\sum_{m=0}^{1} x(2m+1)\, w_2^{lm}\Bigr] = W_N\bigl[X^{(0)} + w_4^{l} X^{(1)}\bigr], \tag{15}$$
and the conventional radix-2^{2} (MDC-R2^{2}) BPE in terms of radix-2 butterfly is illustrated in Figure 1.
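Equation (15) says that a radix-4 butterfly (taking $W_N$ as the identity, i.e., the $p = 0$ case) is just two 2-point butterflies plus a twiddled recombination. A minimal sketch of this, written as our own illustration:

```python
import cmath

def radix4_from_radix2(x):
    """4-point DFT built from two 2-point butterflies, per (15):
    X(l) = X0(l mod 2) + w4^l * X1(l mod 2)."""
    assert len(x) == 4
    X0 = [x[0] + x[2], x[0] - x[2]]   # 2-point DFT of even samples
    X1 = [x[1] + x[3], x[1] - x[3]]   # 2-point DFT of odd samples
    w4 = lambda l: cmath.exp(-2j * cmath.pi * l / 4)
    return [X0[l % 2] + w4(l) * X1[l % 2] for l in range(4)]
```

Only the recombination stage needs twiddle factors; the two 2-point butterflies are multiplier-free, which is why the decomposition keeps the critical path short.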
The use of resources can be further reduced by a feedback network and a multiplexing network: the feedback network feeds the ith output of the jth radix-2 adder network back to the jth input of the ith butterfly, and the multiplexers alternately pass either the input data or the feedback to the corresponding radix-2 adder network, as illustrated in Figure 2(a) [23]. The block circuit diagram of the radix-2 adder network, which consists of two complex adders only, is illustrated in Figure 2(b).
(a) Proposed multiplexed radix-2^{2} (MuxMDC-R2^{2}) BPE and (b) block circuit diagram of the radix-2 adder network [7].
With the rising edge of the clock cycle, the input data are fed to the butterfly inputs of the system presented in Figure 1. In order to complete the butterfly operations within one clock cycle, the following conditions should be satisfied:
$$T_{CLK} > T_{CM} + 2T_{CA}, \qquad \text{Throughput} = \frac{4}{T_{CLK}}, \tag{16}$$
where $T_{CM}$ and $T_{CA}$ are the times required to perform one complex multiplication and one complex addition, respectively. The timing block diagram of Figure 1 is sketched in Figure 3.
Timing block diagram of Figure 1.
With the rising edge of the clock cycle, the input data are fed to the butterfly inputs of the system presented in Figure 2(a), and with the falling edge of the clock cycle the feedback data are fed to the butterfly inputs. In order to complete the butterfly operations within one clock cycle, the following conditions should be satisfied:
$$t_1 > T_{CM} + T_{CA}, \qquad t_2 > T_{CA}, \qquad T_{CLK} > (t_1 + t_2) > T_{CM} + 2T_{CA}, \qquad \text{Throughput} = \frac{4}{T_{CLK}}, \tag{17}$$
and the timing block diagram of Figure 2(a) is illustrated in Figure 4.
Timing block diagram of Figure 2(a).
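The point of (16) and (17) is that the multiplexed structure splits the clock cycle into phases without lengthening it: both lower bounds reduce to $T_{CM} + 2T_{CA}$, so the throughput is unchanged while the adder hardware is shared. A small sketch with hypothetical delay values (the function names and numbers are ours, for illustration only):

```python
def min_clock_conventional(t_cm, t_ca):
    """Lower bound on T_CLK from (16): one complex multiply
    followed by two cascaded complex additions per cycle."""
    return t_cm + 2 * t_ca

def min_clock_multiplexed(t_cm, t_ca):
    """Lower bound on T_CLK from (17): phase t1 covers a
    multiply plus an add, phase t2 a single add."""
    t1 = t_cm + t_ca   # first half-cycle: input path
    t2 = t_ca          # second half-cycle: feedback path
    return t1 + t2
```

With, say, a 4 ns complex multiplier and a 1 ns complex adder, both bounds give 6 ns, confirming that the resource sharing costs no speed.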
Further block building of these modules could be achieved by duplicating the block circuit diagram of Figure 2(a) and combining the copies in order to obtain the radix-8 MDC-R2^{3} BPE; for this case ($r=4$ and $\alpha=2$), (14) becomes
$$X(l) = W_N\Bigl[\sum_{m=0}^{3} x(2m)\, w_4^{lm} + w_8^{l}\sum_{m=0}^{3} x(2m+1)\, w_4^{lm}\Bigr] = W_N\bigl[X^{(0)} + w_8^{l} X^{(1)}\bigr], \tag{18}$$
and the signal flow graph (SFG) of the DIT conventional MDC-R2^{3} BPE butterfly is illustrated in Figure 5. The resources in the conventional MDC-R2^{3} BPE can also be reduced by means of the partial multiplexed radix-2^{2} and a feedback network, yielding the proposed MuxMDC-R2^{3} BPE structure in Figure 6.
Conventional MDC-R2^{3} BPE.
Proposed MuxMDC-R2^{3} BPE based on the partial MuxMDC-R2^{2}.
The clock timing of Figure 5 is computed as
$$T_{CLK} > T_{CM} + t_{pm} + 3T_{CA}, \qquad \text{Throughput} = \frac{8}{T_{CLK}}, \tag{19}$$
where $t_{pm}$ is the time required to execute one complex multiplication on a constant multiplier, and the clock timing of the proposed MuxMDC-R2^{3} is estimated as
$$t_1 > T_{CM} + T_{CA}, \qquad t_2 > T_{CA}, \qquad t_3 = t_1, \qquad T_{CLK} > (t_1 + t_2 + t_3) > 2T_{CM} + 3T_{CA}, \qquad \text{Throughput} = \frac{8}{T_{CLK}}. \tag{20}$$
The overall timing block diagram of the proposed MuxMDC-R2^{3} is sketched in Figure 7. In Figure 6, the inputs are multiplied by the twiddle factors $w^{i}$ when $S_2 = 1$ and by the constant factors $-j$, $c$, $c_1$, or $1$ when $S_2 = 0$.
Timing block diagram of Figure 6.
Further block building of these modules could be achieved by combining two radix-8 butterflies with eight radix-2 butterflies in order to obtain the conventional MDC-R2^{4} BPE; for this case ($r=8$ and $\alpha=2$), (14) becomes
$$X(l) = W_N\Bigl[\sum_{m=0}^{7} x(2m)\, w_8^{lm} + w_{16}^{l}\sum_{m=0}^{7} x(2m+1)\, w_8^{lm}\Bigr] = W_N\bigl[X^{(0)} + w_{16}^{l} X^{(1)}\bigr], \tag{21}$$
and the signal flow graph (SFG) of the proposed DIT radix-2^{4} MuxMDC-R2^{4} based on the partial MuxMDC-R2^{3} (Figure 8) is illustrated in Figure 9.
Proposed Partial MuxMDC-R2^{3}.
Proposed MuxMDC-R2^{4} BPE based on the Partial MuxMDC-R2^{3}.
The clock timing of the conventional MDC-R2^{4} BPE is computed as
$$T_{CLK} > T_{CM} + 2t_{pm} + 4T_{CA}, \qquad \text{Throughput} = \frac{16}{T_{CLK}}, \tag{22}$$
and the clock timing of the proposed MuxMDC-R2^{4} is estimated as
$$t_1 > T_{CM} + T_{CA}, \qquad t_2 > T_{CA}, \qquad t_3 = t_{pm} + T_{CA}, \qquad t_4 = t_1, \qquad T_{CLK} > (t_1 + t_2 + t_3 + t_4) > 2T_{CM} + t_{pm} + 4T_{CA}, \qquad \text{Throughput} = \frac{16}{T_{CLK}}. \tag{23}$$
The overall timing block diagram of the proposed MuxMDC-R2^{4} is sketched in Figure 10.
Timing block diagram of Figure 9.
With the same reasoning as above, we restrict the elaboration of the proposed radix-4^{β} butterfly family to the DIT FFT process.
For the radix-16 butterfly ($r=4$ and $\alpha=4$), we can express (14) as
$$X = W_N\Bigl[\sum_{m=0}^{3} x(4m)\, w_4^{lm} + w_{16}^{l}\sum_{m=0}^{3} x(4m+1)\, w_4^{lm} + w_{16}^{2l}\sum_{m=0}^{3} x(4m+2)\, w_4^{lm} + w_{16}^{3l}\sum_{m=0}^{3} x(4m+3)\, w_4^{lm}\Bigr] = W_N\bigl[X^{(0)} + w_{16}^{l} X^{(1)} + w_{16}^{2l} X^{(2)} + w_{16}^{3l} X^{(3)}\bigr], \tag{24}$$
and the proposed MuxMDC-R4^{2} in terms of radix-4 networks is illustrated in Figure 11, where the feedback network feeds the ith output of the jth radix-4 network back to the jth input of the ith butterfly, and the switches alternately pass either the input data or the feedback to the corresponding radix-4 butterfly. The block circuit diagram of the radix-4 network is illustrated in Figure 12.
The proposed DIT MuxMDC-R4^{2}.
Block circuit diagram of the radix-4 adder network.
5. Performance Evaluation
FFTs are among the most powerful algorithms used in communication systems such as OFDM. Their fixed-point implementation is very attractive due to the reduction in cost compared to a floating-point implementation. One of the most powerful FFT implementations is the pipelined FFT, which is widely used in communication systems; see Figure 13.
S stages radix-r pipelined FFT.
Since the objective of this paper is mainly the structure of higher-radix butterflies, our performance study is limited to the impact of the butterfly structure. Once the pipeline is filled, the butterflies produce r outputs each clock cycle (throughput T in samples per cycle (Spc)). Table 1 compares the different butterfly structures in terms of the resources needed to compute an FFT of size N.
Table 1: Resources needed to compute an FFT of size N.

| Butterfly structure | Complex multipliers | Complex adders | Latency (cycles) | T (Spc) |
|---|---|---|---|---|
| 4 parallel BPE architectures | | | | |
| R-2^{4} [13, 14] | 4(log4 N - 1) | 16 log4 N | N/4 | 4 |
| R-2^{4} [15] | 4(log4 N - 1) | 16 log4 N | N/4 | 4 |
| R-2^{4} [16] | 4(log4 N - 1) | 16 log4 N | N/4 | 4 |
| R-2^{2} [17] | 3(log4 N - 1) | 16 log4 N | N/4 | 4 |
| R-2^{3} [17] | 4(log4 N - 1) | 8 log4 N | N/4 | 4 |
| R-2^{4} [17] | 3.5(log4 N - 4) | 8 log4 N | N/4 | 4 |
| Proposed MuxMDC-R2^{2} | 3(log4 N - 1) | 4 log4 N | N/4 | 4 |
| 8 parallel BPE architectures | | | | |
| R-2 [18] | 8(log4 N - 1) | 16 log4 N | N/8 | 8 |
| R-2 [19] | 8(log4 N - 1) | 32 log4 N | N/8 | 8 |
| R-2^{4} [20] | 8(log4 N - 1) | 32 log4 N | N/8 | 8 |
| R-2^{2} [17] | 6(log4 N - 1) | 16 log4 N | N/8 | 8 |
| R-2^{3} [17] | 6 log4 N - 7 | 16 log4 N | N/8 | 8 |
| R-2^{4} [17] | 7 log4 N - 8 | 16 log4 N | N/8 | 8 |
| Proposed MuxMDC-R2^{3} | 7(log8 N - 1) | 8 log8 N | N/8 | 8 |
| 16 parallel BPE architectures | | | | |
| Proposed MuxMDC-R2^{4} | 17(log16 N - 1) | 16 log16 N | N/16 | 16 |
| Proposed MuxMDC-R4^{2} | 15(log16 N - 1) | 32 log16 N | N/16 | 16 |
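The resource formulas above are easy to evaluate for a concrete transform size. The sketch below (the helper names are ours) compares the 8-parallel R-2 structure of [19] with the proposed MuxMDC-R2^{3} for N = 4096.

```python
import math

def log_base(N, b):
    # integer logarithm, e.g. log_base(4096, 8) == 4
    return round(math.log(N, b))

# Complex-multiplier and complex-adder counts from Table 1
# (8-parallel BPE architectures).
def r2_mults(N):      return 8 * (log_base(N, 4) - 1)   # R-2 [19]
def r2_adds(N):       return 32 * log_base(N, 4)
def mux_r23_mults(N): return 7 * (log_base(N, 8) - 1)   # proposed MuxMDC-R2^3
def mux_r23_adds(N):  return 8 * log_base(N, 8)
```

For N = 4096 this gives 40 versus 21 complex multipliers and 192 versus 32 complex adders, i.e., a multiplier reduction factor of about 1.9, within the 1.3–2.1 range reported below for the 8-parallel case.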
As shown in Figure 14, the proposed MuxMDC-R2^{2} for the four parallel pipelined FFTs of size N requires the same number of complex multipliers as the radix-2^{4} cited in [30]. Furthermore, our proposed MuxMDC-R2^{2} reduces the number of complex multipliers by a factor ranging between 1.1 and 1.4 compared to the other cited butterflies.
Comparison between the different butterflies’ structures in terms of complex multiplier needed to compute the 4 parallel BPE pipelined FFTs of size N.
For the 4 parallel pipelined FFTs of size N, the reduction in the usage of complex adder for our proposed method MuxMDC-R2^{2} ranges between 1.9 and 3.9 compared to the cited butterflies as shown in Figure 15.
Comparison between the different butterflies’ structures in terms of complex adder needed to compute the 4 parallel BPE pipelined FFTs of size N.
For the 8 parallel pipelined FFTs of size N, the reduction factor in the usage of complex multiplier for our proposed MuxMDC-R2^{3} could range from 1.3 to 2.1 compared to the cited butterflies as illustrated in Figure 16.
Comparison between the different butterflies’ structures in terms of complex multiplier needed to compute the 8 parallel BPE pipelined FFTs of size N.
For the same structure, the reduction factor in the usage of complex adder for our proposed method MuxMDC-R2^{3} could range from 3.0 to 5.4 compared to the cited butterflies (Figure 17).
Comparison between the different butterflies’ structures in terms of complex adder needed to compute the 8 parallel BPE pipelined FFTs of size N.
As shown in Figure 18, the proposed MuxMDC-R2^{4} uses fewer complex adders than the proposed MuxMDC-R4^{2}, achieving a reduction in complex adders by a factor of 2, whereas the proposed MuxMDC-R4^{2} reduces the number of complex multipliers by a factor of 1.1, as shown in Figure 19.
Comparison between the different butterflies’ structures in terms of complex adder needed to compute the 16 parallel BPE pipelined FFTs of size N.
Comparison between the different butterflies’ structures in terms of complex multiplier needed to compute the 16 parallel BPE pipelined FFTs of size N.
Since one complex multiplication is counted as 3 real multiplications and 5 real additions, as shown in Figure 20, Table 2 lists the required resources in terms of full adders (FA), computed as (a) n² FAs for an n-digit real multiplier and (b) p FAs for a p-digit real adder.
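One well-known way to realize a complex multiplication with exactly 3 real multiplications and 5 real additions (the counting assumed above) is the following factorization; this sketch is our own illustration, not necessarily the circuit of Figure 20.

```python
def cmul3(a, b, c, d):
    """Compute (a + jb)(c + jd) with 3 real multiplications and
    5 real additions/subtractions, instead of the schoolbook
    4 multiplications and 2 additions."""
    k1 = c * (a + b)           # mult 1, add 1
    k2 = a * (d - c)           # mult 2, add 2
    k3 = b * (c + d)           # mult 3, add 3
    return (k1 - k3, k1 + k2)  # adds 4 and 5: (real, imaginary)
```

Trading one multiplier for three adders pays off in FA terms because a real multiplier costs n² FAs while a real adder costs only p.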
Table 2: Resources needed in terms of FA to compute an FFT of size N.

| Butterfly structure | FA |
|---|---|
| 4 parallel BPE architectures | |
| R-2^{4} [13, 14] | 12n²(log4 N - 1) + 20p(log4 N - 1) + 32p log4 N |
| R-2^{4} [15] | 12n²(log4 N - 1) + 20p(log4 N - 1) + 32p log4 N |
| R-2^{4} [16] | 12n²(log4 N - 1) + 20p(log4 N - 1) + 32p log4 N |
| R-2^{2} [17] | 9n²(log4 N - 1) + 15p(log4 N - 1) + 32p log4 N |
| R-2^{3} [17] | 12n²(log4 N - 1) + 20p(log4 N - 1) + 16p log4 N |
| R-2^{4} [17] | 10.5n²(log4 N - 1) + 17.5p(log4 N - 1) + 16p log4 N |
| Proposed MuxMDC-R2^{2} | 9n²(log4 N - 1) + 15p(log4 N - 1) + 8p log4 N |
| 8 parallel BPE architectures | |
| R-2 [18] | 24n²(log4 N - 1) + 40p(log4 N - 1) + 32p log4 N |
| R-2 [19] | 24n²(log4 N - 1) + 40p(log4 N - 1) + 64p log4 N |
| R-2^{4} [20] | 24n²(log4 N - 1) + 40p(log4 N - 1) + 64p log4 N |
| R-2^{2} [17] | 18n²(log4 N - 1) + 30p(log4 N - 1) + 32p log4 N |
| R-2^{3} [17] | 18n²(log4 N - 1) + 30p(log4 N - 1) + 32p log4 N - 21n² - 35p |
| R-2^{4} [17] | 21n²(log4 N - 1) + 35p(log4 N - 1) + 32p log4 N - 24n² - 40p |
| Proposed MuxMDC-R2^{3} | 21n²(log8 N - 1) + 35p(log8 N - 1) + 16p log8 N |
| 16 parallel BPE architectures | |
| Proposed MuxMDC-R2^{4} | 51n²(log16 N - 1) + 85p(log16 N - 1) + 32p log16 N |
| Proposed MuxMDC-R4^{2} | 45n²(log16 N - 1) + 75p(log16 N - 1) + 64p log16 N |
Complex multiplier using three real multipliers and five real adders.
For the four parallel pipelined FFTs of size N, the R-2^{2} butterfly cited in [30] has approximately the same number of FAs as the proposed MuxMDC-R2^{2}, according to Figure 21. Our proposed MuxMDC-R2^{2} achieves a reduction in the usage of FAs by a factor ranging between 1.17 and 1.34 (Figure 21).
Comparison between the different butterflies’ structures in terms of full adder needed to compute the 4 parallel pipelined FFTs of size N (multiplier on 16 bits and adder on 32 bits).
With regard to the eight parallel pipelined FFTs of size N, it seems that the proposed MuxMDC-R2^{3} will achieve a reduction in the usage of FA by a factor that ranges between 1.4 and 1.9 in comparison to the other cited butterflies as shown in Figure 22.
Comparison between the different butterflies’ structures in terms of full adder needed to compute the 8 parallel pipelined FFTs of size N (multiplier on 16 bits and adder on 32 bits).
Since the implementation of higher radices by means of the radix-2^{α}/4^{β} butterfly is feasible, the optimal pipelined FFT is achieved by the two-stage FFT shown in Figure 23, where the use of complex memories between the different stages is completely eliminated and the delay required to fill up the pipeline is totally absent.
Two stage pipelined FFT (or array structure) with a feedback network [23].
6. Conclusion
It has been shown that higher-radix FFT algorithms are advantageous for hardware implementation due to the reduced number of complex multiplications and the lower memory access rate requirements. This paper has presented an efficient way of implementing higher-radix butterflies by means of the radix-2^{α}/4^{β} kernel, for which serial-parallel models have been presented. The proposed optimized structures, together with a scheduling scheme for the complex multiplications, are suitable for embedded FFT processors. Furthermore, it has been shown that higher-radix butterflies can be obtained by reusing the block circuit diagram of the radix-2^{α}/4^{β} butterfly. Based on this concept, the required hardware resources can be reduced, which is highly desirable for low-power FFT processors. The proposed method is suitable for large pipelined FFT implementations, where the performance gain increases with the FFT radix size. This structure is also appropriate for SIMD implementation on some of the latest DSP cards.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors gratefully acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada and of JABERTECH shareholders Trevor Hill (Alberta) and Bassam Kabbara (Kuwait).
References

[1] J. W. Cooley and J. W. Tukey, "An algorithm for the machine calculation of complex Fourier series."
[2] P. Duhamel and H. Hollmann, "Split radix FFT algorithm."
[3] S. Winograd, "On computing the discrete Fourier transform."
[4] T. Widhe.
[5] T. Widhe, J. Melander, and L. Wanhammar, "Design of efficient radix-8 butterfly PEs for VLSI," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '97), June 1997, pp. 2084–2087.
[6] J. Melander, T. Widhe, K. Palmkvist, M. Vesterbacka, and L. Wanhammar, "An FFT processor based on the SIC architecture with asynchronous PE," in Proceedings of the IEEE 39th Midwest Symposium on Circuits and Systems, Ames, Iowa, USA, August 1996, pp. 1313–1316.
[7] M. Jaber and D. Massicotte, "The self-sorting JMFFT algorithm eliminating trivial multiplication and suitable for embedded DSP processor," in Proceedings of the 10th IEEE International NEWCAS Conference, Montreal, Canada, June 2012.
[8] M. Jaber, "Butterfly processing element for efficient fast Fourier transform method and apparatus," US Patent no. 6,751,643, 2004.
[9] M. A. Jaber and D. Massicotte, "A new FFT concept for efficient VLSI implementation: part I—butterfly processing element," in Proceedings of the 16th International Conference on Digital Signal Processing (DSP '09), Santorini, Greece, July 2009.
[10] Y. Wang, Y. Tang, Y. Jiang, J. Chung, S. Song, and M. Lim, "Novel memory reference reduction methods for FFT implementations on DSP processors."
[11] S. He and M. Torkelson, "Design and implementation of a 1024-point pipeline FFT processor," in Proceedings of the IEEE Custom Integrated Circuits Conference, May 1998, pp. 131–134.
[12] S. He and M. Torkelson, "New approach to pipeline FFT processor," in Proceedings of the 10th International Parallel Processing Symposium (IPPS '96), April 1996, pp. 766–770.
[13] E. E. Swartzlander, W. K. W. Young, and S. J. Joseph, "A radix 4 delay commutator for fast Fourier transform processor implementation."
[14] J. H. McClellan and R. J. Purdy.
[15] H. Liu and H. Lee, "A high performance four-parallel 128/64-point radix-2^{4} FFT/IFFT processor for MIMO-OFDM systems," in Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems (APCCAS '08), Macao, China, December 2008, pp. 834–837.
[16] S.-I. Cho, K.-M. Kong, and S.-S. Choi, "Implementation of 128-point fast Fourier transform processor for UWB systems," in Proceedings of the International Wireless Communications and Mobile Computing Conference (IWCMC '08), Crete Island, Greece, August 2008, pp. 210–213.
[17] M. Garrido, J. Grajal, M. A. Sanchez, and O. Gustafsson, "Pipelined radix-2^k feedforward FFT architectures."
[18] J. A. Johnston, "Parallel pipeline fast Fourier transformer."
[19] E. H. Wold and A. M. Despain, "Pipeline and parallel-pipeline FFT processors for VLSI implementations."
[20] S.-N. Tang, J.-W. Tsai, and T.-Y. Chang, "A 2.4-GS/s FFT processor for OFDM-based WPAN applications."
[21] M. Jaber, "Parallel multiprocessing for the fast Fourier transform with pipeline architecture," US Patent no. 6,792,441.
[22] M. A. Jaber and D. Massicotte, "A new FFT concept for efficient VLSI implementation: part II—parallel pipelined processing," in Proceedings of the 16th International Conference on Digital Signal Processing (DSP '09), Santorini, Greece, July 2009.
[23] M. Jaber, "Fourier transform processor," US Patent no. 7,761,495.
[24] M. Jaber, "Address generator for the fast Fourier transform processor," US Patent no. 6,993,547 and European Patent Application serial no. PCT/US01/07602.
[25] E. J. Kim and M. H. Sunwoo, "High speed eight-parallel mixed-radix FFT processor for OFDM systems," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '11), Rio de Janeiro, Brazil, May 2011, pp. 1684–1687.
[26] P. Li and W. Dong, "Computation oriented parallel FFT algorithms on distributed computer," in Proceedings of the 3rd International Symposium on Parallel Architectures, Algorithms and Programming (PAAP '10), Dalian, China, December 2010, pp. 369–373.
[27] D. Takahashi, A. Uno, and M. Yokokawa, "An implementation of parallel 1-D FFT on the K computer," in Proceedings of the IEEE International Conference on High Performance Computing and Communication, Liverpool, UK, June 2012, pp. 344–350.
[28] R. M. Piedra, "Parallel 1-D FFT implementation with TMS320C4x DSPs."
[29] FFTW, http://www.fftw.org/.
[30] V. Petrov, "MKL FFT performance—comparison of local and distributed-memory implementations."
[31] V. I. Kelefouras, G. S. Athanasiou, N. Alachiotis, H. E. Michail, A. S. Kritikakou, and C. E. Goutis, "A methodology for speeding up fast Fourier transform focusing on memory architecture utilization."