An Evolved Wavelet Library Based on Genetic Algorithm

As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of transmission channels and preserves image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel; quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA): one for the whole image except the edge areas and the other for the portions near the edges (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling at a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods, with a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR.


Introduction
Initially, GA was used to modify the coefficient sets of the standard inverse wavelet transform, which significantly improved the MSE for a given class of one-dimensional signals [1]. Investigations of evolutionary computation for image compression show that it can be used to optimize wavelet coefficients; transforms were independently trained and tested using three sets of images (digital photographs, fingerprints, and satellite images) [2][3][4], and it was concluded that better evolutionary progress towards an optimized reconstruction transform occurs when the wavelet and scaling numbers are evolved simultaneously. A coevolutionary genetic algorithm based wavelet design for compressing fingerprint images was developed [5,6], and the evolved wavelets outperform hand-designed wavelets, improving the quality of compressed images significantly. The suitability of the evolutionary strategy (ES) for implementation on a Field Programmable Gate Array (FPGA) was investigated, and the original algorithm was modified by cutting down several computing requirements [7][8][9]. The discrete wavelet transform (DWT) coefficients evolved using GA showed better compression and reconstruction of images with lower MSE compared to the 9/7 wavelet [10], and the detrimental effects of quantization on ultrasound images are compensated by the evolved transforms, whose superiority increases in proportion to the selected quantization level [11]. Moore et al. evolved matched filter pairs for deep space images that outperformed standard wavelets [12]. Even for three-level multiresolution analysis (MRA) transforms, the evolved filters give better compression performance for both photographic [3,13] and satellite images [14,15]. The adaptive embedded system developed by Salvador et al. performs adaptive image compression on FPGA devices and finds the optimized set of wavelet filters in less than 2 minutes when the input image changes [16,17].
Recently, an adaptive fingerprint image compression (FIC) technique was developed by evolving optimized lifting coefficients [18].

Figure 1: Single-level wavelet transform using the convolution scheme.

Evolving DWT filter coefficients separately for near-edge pixels and far-edge pixels has been shown to yield a significant improvement in error when the images are reconstructed. Isolation of edge pixels can be done with conventional edge detection algorithms such as the Sobel detector, and a corresponding binary mask then separates the image into near-edge and far-edge regions [9].

Contribution.
Primarily, the input images are classified based on their frequency content, calculated by performing the DWT; the corresponding method is detailed in Section 3.2. The training images are grouped according to the calculated average frequency metric, and for each group separate DWT filter coefficients are evolved. The fitness function is formulated using the PSNR value only, but in the future we would like to extend it to a combination of PSNR, energy compaction (EC), and the structural similarity (SSIM) index [19]. The authors believe that optimizing wavelet filter coefficients with a multiobjective fitness function formulated from PSNR, EC, and SSIM would yield a set of filter coefficients with better compression performance [20]. In this paper, the authors' work is limited to the evolution of a library of wavelet filter coefficients for various groups of images, considering PSNR as the fitness function. The rest of the paper is organized as follows. Section 2 gives sufficient background on wavelets and the genetic algorithm. Section 3 presents the image classification based on frequency content. The detailed experimental setup for evolving DWT filter coefficients and the analysis of the quality metrics of the reconstructed images are discussed in Section 4. The paper is concluded in Section 5 with possible enhancements.

Background
The main objective of this paper is to evolve wavelet filter coefficients suitable for image compression for various groups of images classified according to their spatial frequency content. A brief discussion of wavelets and the genetic algorithm is therefore provided first.

Wavelets and Image Compression.
The wavelet is a multiresolution analysis tool widely used in signal and image processing. A signal can be analyzed at different frequencies and with different time resolutions, although there is a trade-off between frequency resolution and time resolution. Hence a wavelet can be designed to provide good frequency resolution at the expense of time resolution, and vice versa.
Discrete wavelet transforms (DWTs) are widely used for image compression because of their good compression capability. In particular, biorthogonal wavelets show remarkable capabilities in still image compression. Moreover, the lifting scheme based DWT converts the high pass and low pass filtering operations into a sequence of elementary matrix multiplications and hence is efficient in terms of both computation and memory.
2.1.1. Discrete Wavelet Transform. The wavelet decomposition of a signal into different frequency bands is obtained by successive high pass and low pass filtering of the time domain signal. The original input signal x[n] is first passed through a half band high pass filter g[n] and a low pass filter h[n]. After the filtering process, half of the samples can be eliminated according to the Nyquist rule, since the signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be subsampled by 2 by discarding every other sample. This constitutes one level of wavelet decomposition, as shown in Figure 1, and can be expressed mathematically as

y_low[k] = Σ_n x[n] h[2k − n],
y_high[k] = Σ_n x[n] g[2k − n].

The above procedure is followed in reverse order for reconstruction: the signals are upsampled by 2 at every level and then filtered with the reconstruction filters. The fast wavelet transform (FWT), or Mallat's herringbone algorithm [21], a computationally efficient implementation of the DWT, is used here to compute the wavelet coefficients. Table 1 shows the CDF 9/7 filter coefficients for both the forward and inverse DWT.
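As an illustration, one level of decomposition can be sketched in a few lines of Python. This is a minimal sketch using the simple Haar filter pair rather than CDF 9/7, with periodic extension assumed; `dwt_single_level` is our own helper name, not from the paper.

```python
import numpy as np

def dwt_single_level(x, h, g):
    """One level of wavelet decomposition: filter, then downsample by 2.

    h: low pass analysis filter, g: high pass analysis filter.
    Periodic extension of the signal is assumed for simplicity."""
    n = len(x)
    approx = np.zeros(n // 2)
    detail = np.zeros(n // 2)
    for k in range(n // 2):
        for t in range(len(h)):
            approx[k] += h[t] * x[(2 * k + t) % n]  # low pass branch
        for t in range(len(g)):
            detail[k] += g[t] * x[(2 * k + t) % n]  # high pass branch
    return approx, detail

# Haar filters: the simplest orthogonal low/high pass pair.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
g = np.array([1.0, -1.0]) / np.sqrt(2.0)

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 7.0])
a, d = dwt_single_level(x, h, g)
```

Because the Haar pair is orthonormal, the energy of the two half-length subbands equals the energy of the input, which makes the downsampling-by-2 step lossless in the transform domain.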
Wavelets are described by four sets of coefficients: (1) LOW, the set of scaling (low pass) numbers for the forward DWT; (2) HIGH, the set of wavelet (high pass) numbers for the forward DWT; (3) LOWR, the set of scaling numbers for the IDWT; (4) HIGHR, the set of wavelet numbers for the IDWT.

Lifting Based DWT and IDWT.
The lifting scheme is a computationally efficient way of implementing the DWT [22,23]. The forward transform proceeds first with the lazy wavelet, then alternating dual lifting and primal lifting steps, and finally a scaling. The inverse transform proceeds first with a scaling, then alternating primal and dual lifting steps, and finally the inverse lazy transform. The inverse transform can be derived immediately from the forward transform by running the scheme backwards and flipping the signs.
The polyphase decompositions of the discrete low pass (LOW(z)) and high pass (HIGH(z)) filters are

LOW(z) = LOW_e(z^2) + z^(-1) LOW_o(z^2),
HIGH(z) = HIGH_e(z^2) + z^(-1) HIGH_o(z^2),

where the subscripts e and o denote the even- and odd-indexed coefficients. The synthesis filters can be expressed through the polyphase matrix

P(z) = [ LOW_e(z)  HIGH_e(z) ; LOW_o(z)  HIGH_o(z) ],

and P̂(z) can be defined analogously for the analysis filters. The Euclidean algorithm can then be used to factor P(z) into alternating elementary lifting matrices and a final scaling,

P(z) = { ∏_{i=1}^{m} [ 1  s_i(z) ; 0  1 ] [ 1  0 ; t_i(z)  1 ] } [ K  0 ; 0  1/K ],

where the s_i(z) correspond to primal lifting steps and the t_i(z) to dual lifting steps. The discrete wavelet transform using the lifting scheme thus consists of three steps, as in Figure 2.

(1) Split: the original signal x(n) is split into odd and even sequences (lazy wavelet transform).
(2) Lifting: this consists of one or more steps of the following form.
(a) Predict/dual lifting: if x(n) possesses local correlation, then its even sequence x_e(n) and odd sequence x_o(n) also have local correlation; therefore, one subset (generally the even sequence) is used to predict the other (the odd sequence). The prediction step applies a filter P to the even samples and subtracts the result from the odd ones: d(n) = x_o(n) − P[x_e(n)], where P[x_e(n)] expresses that the value of x_o(n) is predicted by some combination of the values of x_e(n). (b) Update/primal lifting: an update step does the opposite, applying a filter U to the odd (detail) samples and adding the result to the even samples: s(n) = x_e(n) + U[d(n)]. Eventually, after pairs of prediction and update steps, the even samples become the low pass coefficients while the odd samples become the high pass coefficients.
(3) Normalization/scaling: after the lifting steps, scaling coefficients 1/K and K are applied to the odd and even samples, respectively, in order to obtain the high pass subband d(n) and the low pass subband s(n).
The lifting scheme for the biorthogonal 9/7 wavelet is as follows.
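The split, predict/update, and scaling steps above can be sketched as follows. This is a minimal Python illustration using the standard CDF 9/7 lifting constants from the literature; even-length input and clamped boundary extension are simplifying assumptions, and `lift_97_forward`/`lift_97_inverse` are our own helper names.

```python
import numpy as np

# Standard CDF 9/7 lifting constants, as widely quoted in the literature.
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def _predict(d, s, c):
    # Dual lifting: d[i] += c * (s[i] + s[i+1]), clamped at the right edge.
    for i in range(len(d)):
        d[i] += c * (s[i] + s[min(i + 1, len(s) - 1)])

def _update(s, d, c):
    # Primal lifting: s[i] += c * (d[i-1] + d[i]), clamped at the left edge.
    for i in range(len(s)):
        s[i] += c * (d[max(i - 1, 0)] + d[i])

def lift_97_forward(x):
    """Split into even/odd, apply two predict/update pairs, then scale."""
    s = x[0::2].astype(float)   # even samples -> low pass branch
    d = x[1::2].astype(float)   # odd samples  -> high pass branch
    _predict(d, s, ALPHA); _update(s, d, BETA)
    _predict(d, s, GAMMA); _update(s, d, DELTA)
    return s * ZETA, d / ZETA

def lift_97_inverse(s, d):
    """Run the forward scheme backwards with flipped signs."""
    s, d = s / ZETA, d * ZETA
    _update(s, d, -DELTA); _predict(d, s, -GAMMA)
    _update(s, d, -BETA); _predict(d, s, -ALPHA)
    x = np.empty(len(s) + len(d))
    x[0::2], x[1::2] = s, d     # inverse lazy transform: interleave
    return x

x = np.arange(16, dtype=float)
lo, hi = lift_97_forward(x)
```

Note that each lifting step is exactly invertible by subtracting what was added, regardless of how the boundary is extended, so perfect reconstruction holds without any filter-design argument; this is the structural advantage of the lifting formulation.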
Thus, by adapting wavelets to better suit the image, the performance of image compression can be increased. This adaptation is done by an evolutionary algorithm (EA) such as the GA, which improves image reconstruction in the presence of quantization error by replacing the standard wavelet filter coefficients with a set of evolved coefficients. The evolutionary algorithm evolves the best filter coefficients for the given image, as shown in Figure 3.

Genetic Algorithm.
Genetic algorithms (GAs) (first proposed by Holland) have frequently been used to solve a number of difficult optimization problems. GAs work by first creating a population of randomly generated chromosomes. Over a number of generations, new chromosomes are created by mutating and recombining chromosomes from the previous generation. Among the total population, the best chromosomes (solutions) are then selected for survival to the next generation based on some fitness criteria. The flow diagram of the GA for evolving wavelet filter coefficients is shown in Figure 4.
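This generate-evaluate-select loop can be sketched in a few lines. The following is a toy real-coded GA maximizing a simple objective, not the paper's exact configuration; the population size, mutation scale, and blend crossover are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_algorithm(fitness, dim, pop_size=40, generations=60,
                      elite=2, mutation_scale=0.5):
    """Minimal real-coded GA: evaluate, keep elites, recombine, mutate."""
    pop = rng.normal(size=(pop_size, dim))
    for gen in range(generations):
        scores = np.array([fitness(c) for c in pop])
        pop = pop[np.argsort(scores)[::-1]]              # best first
        new_pop = [pop[i].copy() for i in range(elite)]  # elitism
        while len(new_pop) < pop_size:
            p1, p2 = pop[rng.integers(0, pop_size // 2, size=2)]
            r = rng.random()
            child = r * p1 + (1 - r) * p2                # blend crossover
            # Gaussian mutation whose scale shrinks over the generations.
            child += rng.normal(scale=mutation_scale * (1 - gen / generations),
                                size=dim)
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(c) for c in pop])
    return pop[np.argmax(scores)]

# Toy objective: maximize the negated sum of squares; optimum at the origin.
best = genetic_algorithm(lambda c: -np.sum(c**2), dim=4)
```

Because the elites are carried over unchanged, the best fitness never decreases from one generation to the next, which is the property the paper relies on when it reports monotone convergence curves.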
Types of Genetic Algorithm.

Real Coded Genetic Algorithm.
In the real coded genetic algorithm (RCGA), chromosomes are represented as vectors of real valued coefficients. The evolution of filters for image processing requires the simultaneous optimization of many real valued coefficients, which makes RCGA a natural choice.
Population Initialization. The initial population includes one chromosome consisting of the CDF 9/7 filter coefficients. The remaining individuals are copies of the original wavelet filter coefficients, each multiplied by a small random factor; additionally, 5% of the filter coefficients are negated. Each chromosome is composed of the low pass filter coefficients, high pass filter coefficients, low pass reconstruction filter coefficients, and high pass reconstruction filter coefficients.
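A sketch of such an initialization is given below. The seed vector, the 10% perturbation scale, and the helper name `init_population` are our own illustrative assumptions; the paper does not specify the exact magnitude of the "small random factor."

```python
import numpy as np

rng = np.random.default_rng(2)

# LOW (9) + HIGH (7) + LOWR (7) + HIGHR (9) coefficients per chromosome.
CHROMO_LEN = 9 + 7 + 7 + 9

def init_population(seed_chromosome, pop_size=50, negate_frac=0.05):
    """Seed the population with the CDF 9/7 chromosome, then fill the rest
    with randomly scaled copies, negating roughly 5% of coefficients."""
    seed = np.asarray(seed_chromosome, float)
    pop = [seed]
    for _ in range(pop_size - 1):
        child = seed * (1.0 + 0.1 * rng.standard_normal(CHROMO_LEN))
        flip = rng.random(CHROMO_LEN) < negate_frac   # ~5% sign flips
        child[flip] *= -1.0
        pop.append(child)
    return np.array(pop)

seed = np.ones(CHROMO_LEN)   # placeholder for the real CDF 9/7 coefficients
pop = init_population(seed)
```

Keeping the unperturbed CDF 9/7 chromosome in the population guarantees that, with elitism, the evolved result can never be worse than the hand-designed baseline.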
Evaluation. The fitness of the initial population is evaluated by first performing the two-dimensional (2D) DWT on the test images; the conventional decomposition and reconstruction (refer to Figure 5) is then performed on the transformed coefficients, and finally the 2D IDWT is carried out to obtain the reconstructed image. The population is sorted according to the average fitness value. Image quality (PSNR) and distortion (MSE) metrics are calculated between the original and reconstructed images, and the PSNR value is taken as the fitness measure. The PSNR and MSE between the original image f and the reconstructed image f̂ of size M × N can be calculated using (13) and (14), respectively, where b represents bits per pixel (bpp):

PSNR = 10 log10 ((2^b − 1)^2 / MSE),
MSE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (f(i, j) − f̂(i, j))^2.

An MSE of 0 indicates that f̂ is a perfect reconstruction of f; increasing values of MSE correspond to increasing error.
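The two metrics can be computed directly, as in the following sketch (assuming b bits per pixel, so the peak value is 2^b − 1):

```python
import numpy as np

def mse(f, f_hat):
    """Mean squared error between original and reconstructed images."""
    f, f_hat = np.asarray(f, float), np.asarray(f_hat, float)
    return np.mean((f - f_hat) ** 2)

def psnr(f, f_hat, bpp=8):
    """Peak signal-to-noise ratio in dB; the peak value is 2**bpp - 1."""
    e = mse(f, f_hat)
    if e == 0:
        return float("inf")        # perfect reconstruction
    peak = 2 ** bpp - 1
    return 10 * np.log10(peak ** 2 / e)

orig = np.array([[200.0, 50.0], [100.0, 150.0]])
recon = orig + 5.0                 # uniform error of 5 gray levels
```

A uniform error of 5 gives MSE = 25 and, for 8 bpp, PSNR = 10 log10(255²/25) ≈ 34.15 dB, which is the scale on which the paper's 0.3 dB improvements should be read.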
New Population Creation. Once the population is evaluated for its performance, the new population is created from the parent population by the following.
(i) Sorting the population according to the evaluated fitness measure. (ii) Selecting the parents for reproduction by random/stochastic uniform selection methods. (iii) Reproducing the population for the next generation.

Reproduction (Recombination and Mutation).
The new population for next generation is created by crossover and mutation.
(i) Elite. The best individuals are copied unchanged from the parent population to the new population; Ne is the elite count.
(ii) Heuristic Crossover. A child is created from two parents p1 and p2, biased in the direction of the parent with better fitness. Assuming p1 has better fitness than p2, a child gene is created as c = p2 + r (p1 − p2), where r is randomly chosen in the interval [0, 1].
(iii) Gaussian Mutation. Mutation is required to avoid premature population convergence in RCGA. Given a parent vector p, a new child vector is created as c = p + m, where m is drawn from a Gaussian distribution whose variance shrinks in successive generations. The mutation shrink rate controls how quickly the average amount of mutation decreases: if g is the current generation and G the total number of generations, the mutation scale is reduced in proportion to g/G. In early generations, the large variance permits quick exploration of the search space; towards the end of the run, the variance is quite small and the mutation operator makes only very small refinements.

Proposed Shuffling Mechanism. The probability of finding the optimum solution increases with the number of generations run. The shuffling mechanism is introduced primarily to prevent the search algorithm from getting stuck at a local optimum [24]. The search algorithm sometimes settles at a local optimum, a phenomenon we call the "positional effect," which can be avoided using the proposed shuffling mechanism. This shuffling operator changes the positions of the elite chromosomes as they are reinserted into the new population for the next iteration. To a certain extent, this makes the search algorithm visit additional regions of the search space. The proposed GA, with its genetic operators and shuffling mechanism, was tested for convergence using a few standard objective functions, namely the Rosenbrock, De Jong, and Rastrigin functions, and the results show that the proposed GA is suitable for this optimization problem. The optimum solutions obtained by the proposed GA for the standard test functions are listed in Table 2.
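The three operators can be sketched as follows. The shuffling operator's definition is terse in the text; here we read it as scattering the elite chromosomes to random positions in the next population rather than keeping them at the top, which is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def heuristic_crossover(better, worse):
    """Child biased toward the fitter parent: c = worse + r*(better - worse)."""
    r = rng.random()
    return worse + r * (better - worse)

def gaussian_mutation(parent, gen, max_gen, shrink=1.0, scale0=1.0):
    """Add zero-mean Gaussian noise whose scale shrinks over generations."""
    scale = scale0 * (1.0 - shrink * gen / max_gen)
    return parent + rng.normal(scale=max(scale, 1e-6), size=parent.shape)

def shuffle_into_population(elites, rest):
    """Assumed reading of the proposed shuffling operator: scatter the
    elite chromosomes to random positions in the next population."""
    pop = np.vstack([elites, rest])
    return pop[rng.permutation(len(pop))]

better, worse = np.ones(4), np.zeros(4)
child = heuristic_crossover(better, worse)
```

With r drawn from [0, 1], every gene of the child lies on the segment between the two parents, so heuristic crossover is a contraction toward the better parent rather than a free recombination.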

Genetic Algorithm Configuration for Evolving Global and Local Filters. The overriding goal of this research work is to develop a robust methodology for the evolutionary optimization of image transform filters capable of outperforming CDF 9/7 under conditions subject to quantization noise.
Evolving Global and Local Filters. Traditional image transformation algorithms are concerned with minimizing the global error between a reconstructed image and its original counterpart. Those transforms which are evolved to provide reconstruction over entire images tend to exhibit higher error rates near image object edges (Salvador et al., [8]).

Improved Reconstruction through Edge Detection and Targeted Evolution. To improve the reconstruction of edges within an image, the image is reconstructed using two evolved filter sets: globally evolved filters (evolved using the entire image for fitness calculation, to reduce errors in areas not adjacent to object edges) and locally evolved filters (evolved using the edge-enclosing masks for fitness calculation, to reduce error near object edges). The two reconstructed images are then combined using a binary mask generated by edge detection (Canny edge detector) followed by thresholding. Figure 5 describes the process involved.
Initially the algorithm separates the image into near-edge pixels and far-edge pixels using an edge detection algorithm. Once the edges are detected, a binary mask is created: a binary image carrying black pixels in the far-edge area and white pixels in the near-edge area, based on the threshold value. Hence there are two classes of image regions (near-edge and far-edge) for which the evolutionary algorithm separately evolves suitable filter coefficients from the given set of training images. The DWT is taken for both images using the respective filter coefficients; the result is then quantized, encoded using a lossless encoding algorithm such as Huffman coding, and transmitted. At the receiving side, the image is reconstructed using the appropriate wavelet filters, and the individual near-edge and far-edge images are combined to form the complete image.
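A rough sketch of the mask-and-combine step is shown below, using a plain gradient-magnitude threshold as a stand-in for the Sobel/Canny detectors named above; the threshold value and helper names are illustrative.

```python
import numpy as np

def edge_mask(img, threshold=30.0):
    """Binary near-edge mask from gradient magnitude (a simple stand-in
    for Sobel/Canny): 1 (white) near edges, 0 (black) far from edges."""
    img = np.asarray(img, float)
    gy, gx = np.gradient(img)          # row-wise and column-wise gradients
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8)

def combine(recon_local, recon_global, mask):
    """Take near-edge pixels from the locally evolved reconstruction and
    far-edge pixels from the globally evolved one."""
    return np.where(mask == 1, recon_local, recon_global)

# Toy image with a sharp vertical edge down the middle.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mask = edge_mask(img)
```

For this toy image the mask lights up only in the two columns straddling the step edge, so the locally evolved filter is consulted exactly where ringing from the global filter would be worst.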

Evolution of Wavelets.
Evolution of wavelets can be carried out in the following ways: (1) under the convolution scheme, by evolving the filter coefficients directly, and (2) under the lifting scheme, by evolving either a single variable (k) or all five lifting variables.

Convolution Scheme
Population Initialization. The initial population includes one chromosome consisting of CDF 9/7 filter coefficients. The remaining individuals are copies of the original wavelet coefficients multiplied by a small random factor. Additionally, 5% of the filter coefficients are negated. The initial configuration of the GA for each scheme is discussed in Table 3. Each chromosome is composed of the following: (i) low pass filter coefficients (9); (ii) high pass filter coefficients (7); (iii) low pass filter reconstruction coefficients (7); (iv) high pass filter reconstruction coefficients (9).

Lifting Scheme
Evolving 5 Variables. In this method, all five lifting parameters (α, β, γ, δ, K) are allowed to evolve randomly.

Image Classification Based on Frequency Content
The DWT filter coefficients evolved for images with smooth regions might not suit edge- and texture-rich images well. Also, it is not practical to construct the optimal wavelet for each image as an online process, in spite of the better compression achieved with the evolved filter coefficients. Hence all the test images are classified according to their complexity (edges and textures), and optimal wavelets are evolved for each class to build a wavelet library offline. The quality of DWT-based compression for remote sensing images has been effectively assessed using a gradient based approach, classifying image pixels according to gradient magnitude and texture complexity, thus proving the importance of edges and textures in an image [27]. Hence we propose a systematic approach to find the edges and textures of an image by using the DWT itself. The high frequency subbands of the transformed image depict the edge and texture content of the image: texture rich images have more significant coefficients in the high frequency subbands, as depicted in Figure 6 (three cases are considered). This implies that images can be classified by examining the high frequency subbands. Thus the Frequency Mean (the summation of the absolute averages of all the high frequency subbands) of an image is taken as the measure to classify the images.
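A compact illustration of this measure follows, using a single-level 2D Haar transform as a stand-in for the paper's DWT; `f_mean` and `haar_2d_level` are our own helper names.

```python
import numpy as np

def haar_2d_level(img):
    """One level of a 2D Haar transform (a simple stand-in for the
    paper's DWT): returns the LL, LH, HL, HH subbands."""
    img = np.asarray(img, float)
    a = (img[0::2] + img[1::2]) / 2.0     # rows: low pass
    d = (img[0::2] - img[1::2]) / 2.0     # rows: high pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def f_mean(img):
    """Frequency Mean: the sum of the absolute averages of the high
    frequency subbands of a single-level decomposition."""
    _, lh, hl, hh = haar_2d_level(img)
    return sum(np.mean(np.abs(s)) for s in (lh, hl, hh))

flat = np.full((8, 8), 100.0)             # smooth image: no high frequencies
checker = np.indices((8, 8)).sum(0) % 2   # texture-rich: alternating pixels
```

A constant image yields F_MEAN = 0 while the pixel-level checkerboard concentrates energy in the HH subband, so the measure orders images by texture content exactly as intended.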

Test Images.
We have taken 50 images, shown in Figure 7, for each run. These 50 images are classified into six groups (G1, G2, . . ., G6) according to their Frequency Mean (F MEAN), and wavelets are evolved separately for each group.

Calculation of F MEAN.
The F MEAN is calculated using the steps shown in Figure 8, and the calculated F MEAN values for the 50 test images are given in Table 4.

Classification of Images.
The images are classified into one of the six groups (G1, G2, . . ., G6) according to their F MEAN value; the corresponding classification rule is shown in Table 5. For clarity, the classified images are categorized by group in Figure 9. Finally, a library is built offline by evolving wavelets for each group separately using the RCGA, with PSNR as the fitness function.
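Selecting a group then reduces to a threshold lookup, as in the sketch below; the cut points are placeholders, since Table 5's actual boundaries are not reproduced here.

```python
import numpy as np

# Illustrative F_MEAN thresholds separating the six groups; the paper's
# actual Table 5 boundaries are not reproduced here.
BOUNDS = [2.0, 4.0, 6.0, 8.0, 10.0]

def classify(f_mean_value):
    """Map an F_MEAN value to one of the six groups G1..G6."""
    group = int(np.searchsorted(BOUNDS, f_mean_value)) + 1
    return f"G{group}"
```

At compression time the same lookup indexes into the prestored wavelet library, so group selection costs one comparison pass rather than a fresh evolutionary run.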

Experimental Results and Discussion
Initially the images are classified according to their edges and textures using the algorithm discussed in Section 3.2.
The initial classification step provides six groups of images with different texture details. The idea is to evolve wavelet filter coefficients for each individual group, both for the near-edge and the far-edge pixels of an image. Based on the output of the edge detection algorithm, a binary mask is created for the considered image, and this mask separates the near-edge and far-edge pixels. The next step is to evolve wavelet filter coefficients for the near-edge pixels, followed by the far-edge pixels. The experiment is repeated for all the images which fall in the same group, and the corresponding evolved filter coefficients are stored in the library. The experiment continues with the next group and concludes after evolving filter coefficients for all six groups. Fifty GA runs are considered for both the convolution scheme and the lifting scheme, and each GA run considers one of the test images shown in Figure 9. The GA configuration followed for the experiment is given in Table 3.
Thus we have created an optimal wavelet library for image compression for each class of images. To compress an arbitrary image, its optimal wavelet filter coefficients are selected from the prestored library based on its F MEAN value, which serves as an index for the selection of wavelets. We evolved the 9 filter coefficients directly for the convolution scheme, as the 2-variable evolver failed in most situations to produce a better wavelet than CDF 9/7. For the lifting scheme a single variable is evolved, as the 5-variable evolver failed because of its NIL constraint situation. The evolved wavelet libraries for both global and local filters are shown in Table 6.
The comparison of the quality measures for the convolution and lifting schemes is shown in Figure 10. Figure 11 shows the images reconstructed using the globally and locally evolved filters, and Figure 12 compares images reconstructed using CDF 9/7 against those reconstructed using the evolved filter coefficients.
The ES based wavelet optimization algorithm discussed by Salvador et al. [7,8] focused on hardware implementation, choosing FPGAs as the base fabric. The existing ES was modified to suit hardware implementation with a hardware efficient mutation operator and was tested for both floating point and fixed point arithmetic. Our focus is to improve the quality of reconstruction (PSNR) by evolving wavelet filter coefficients for image subgroups based on texture and edges. Since the same evolved filter coefficients may not suit all image groups, we evolve different filters for each group even if the improvement is marginal at the first level of decomposition; the improvement becomes more pronounced as the decomposition level increases. Table 7 compares the optimization methodologies and the improvement in results in terms of PSNR.

Conclusion and Future Work
Thus a lossy image compression method with improved performance compared to CDF 9/7 based compression has been designed. The experimental results of the hybrid subband decomposer for the selected test images show significant improvement in the average and maximum PSNR values of reconstructed images subjected to quantization. Evolving wavelets for each group of images, classified according to the F MEAN value, is thus a robust approach to lossy image compression. The evolved wavelets show an average improvement of 0.31 dB and a maximum improvement of 0.39 dB under the convolution scheme; under the lifting scheme they show an average improvement of 0.27 dB and a maximum improvement of 0.35 dB. Apart from using PSNR as the quality metric, the wavelets can also be evolved considering SSIM and EC in the fitness measure to further improve compression performance. As extrinsic evolution of filter coefficients takes a large amount of time, intrinsic evolution can be carried out by implementing an optimized lightweight GA core on an FPGA platform so that filter coefficients can be evolved in less time, making the approach suitable for adaptive systems.