Performance Evaluation of Data Compression Systems Applied to Satellite Imagery



Introduction
High-resolution cameras onboard remote sensing satellites generate data at a rate on the order of hundreds of Mbits/s. Data compression [1] is used in many space missions to reduce onboard storage and telemetry bandwidth requirements. Particularly in remote sensing applications, in which data are acquired at high cost and the information contained in the image data is important for scientific exploration, lossless or near-lossless compression is desirable to preserve the image data.
In 1988, the Brazilian National Institute for Space Research (INPE) and the China Academy of Space Technology (CAST) signed a cooperation agreement for the development of remote sensing satellites, known as CBERS (China-Brazil Earth Resources Satellite) [2]. Due to the success of CBERS-1, CBERS-2, and CBERS-2B, the cooperation was expanded to include the satellites CBERS-3 and CBERS-4.
Given the importance of compression in space missions, a working group at INPE has evaluated some compression systems. The objective is to select an appropriate compression scheme to be implemented in FPGA hardware to meet the minimum requirements of compression ratio and PSNR (peak signal-to-noise ratio) of 4 and 50 dB, respectively. Within this context, we have studied some compression algorithms and conducted their performance evaluation using quantitative rate-distortion measurements.
To compare the algorithms, we assembled a dataset with one hundred test images acquired by the CCD camera (Table 2) onboard CBERS-2B (CBERS-2B CCD). This dataset is representative of a variety of content relevant to remote sensing applications, including agriculture, forest, urban areas, and surface water with different cloud cover.
The selected compression system must meet certain requirements for real-time hardware compression onboard a spacecraft, such as nonframe (push-broom) data processing, high decoded image quality, and packet loss effects that are limited to a small region of the image. In particular, the algorithm complexity must be sufficiently low to make a high-speed hardware implementation feasible.
In this paper, we present a rate-distortion comparison of compression systems suitable for remote sensing images: DPCM, JPEG-LS, CCSDS-IDC, and JPEG-XR. The paper is structured as follows. In Section 2, we give an overview of image compression systems. Prediction- and transform-based compression systems are described in Sections 3 and 4, respectively. The performance results and analysis are presented in Section 5, and the conclusions appear in Section 6.

Image Compression Systems
According to Gonzalez and Woods [5], data compression refers to the process of reducing the amount of data required to represent a given quantity of information. Image compression schemes are divided into two broad categories: lossless and lossy. Lossless image compression allows an image to be compressed and decompressed without any information loss. In lossy compression schemes, some controlled loss of data is tolerated. Although it is impossible to reconstruct the original image exactly using lossy image compression, it provides a much higher compression ratio than lossless compression.
Digital images generally contain a significant amount of redundancy, and image compression techniques take advantage of these redundancies to reduce the number of bits required to represent the image. There are two main kinds of data redundancy in digital images: spatial redundancy and coding redundancy [6].
(a) Spatial Redundancy. Due to the interpixel correlations within the image, the value of any pixel can be partially predicted from the values of its neighbors. To reduce spatial redundancy, the image is usually transformed into a more efficient representation using spatial decorrelation methods, such as prediction or transforms.
(b) Coding Redundancy. Coding redundancy arises from the probability distribution associated with the occurrence of the symbols. To reduce coding redundancy, a variable-length code assigns the shortest codewords to the most frequently occurring symbols and longer codewords to low-probability symbols. This process is also called entropy coding. The main entropy coding schemes are arithmetic coding [7], Huffman coding [8], and Golomb coding [9].
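As a concrete illustration of variable-length entropy coding, the sketch below builds Huffman code lengths from symbol frequencies, assigning shorter codes to more probable symbols (a generic Python example; it is not the code used by any of the compressors evaluated here):

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return {symbol: codeword length} for a Huffman code over `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate one-symbol alphabet
        return {next(iter(freq)): 1}
    # Heap entries: (subtree weight, tie-breaker, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, m1 = heapq.heappop(heap)     # pop the two least probable subtrees,
        w2, _, m2 = heapq.heappop(heap)     # merge them (all depths grow by one),
        merged = {s: d + 1 for s, d in {**m1, **m2}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))  # and reinsert the merger
        tie += 1
    return heap[0][2]
```

For example, `huffman_code_lengths("aaaabbc")` assigns a 1-bit code to 'a' and 2-bit codes to 'b' and 'c', and the resulting lengths always satisfy the Kraft equality for a full binary code tree.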
Generally, a compression system model consists of two distinct structural blocks: an encoder and a decoder. The encoder creates a codestream from the original input data. After transmission over the channel, the decoder generates the reconstructed output data.
A typical encoder consists of three functional modules, as depicted in Figure 1: a prediction module (for prediction-based compression systems) or a forward transform module (for transform-based compression systems) that performs the spatial decorrelation, a quantization module that reduces the dynamic range of the errors, and an entropy encoding module that reduces the coding redundancy. When lossless compression is desired, the quantization step is omitted because it is an irreversible operation.
Basically, the decoder consists of two functional modules: an entropy decoding module and an inverse prediction or transform module. The quantization step results in irreversible information loss, and reconstruction of quantized data is based on the midpoints of each quantization interval.
Different schemes can be used for spatial decorrelation. We consider schemes based on prediction and on transforms. Prediction techniques predict the current pixel value from the values of neighboring pixels. Prediction-based compression methods include DPCM (Differential Pulse Code Modulation) [10], lossless JPEG (Joint Photographic Experts Group) [11], and JPEG-LS [12]. Transform-based systems map the image from the spatial (pixel) domain into a rotated system of coordinates in signal space by applying transforms, such as the DCT (discrete cosine transform) and the DWT (discrete wavelet transform). The baseline JPEG standard [11] is a DCT-based compression scheme. DWT-based compression schemes include JPEG2000 [13], ICER [14, 15], and CCSDS-IDC (Consultative Committee for Space Data Systems-Image Data Compression) [16, 17]. The newer standard JPEG-XR (JPEG Extended Range) uses another transform [18-20].
As reported by Yu et al. [6], prediction- and DCT-based compression are the methods most commonly used onboard satellites. Although prediction-based methods have a low compression ratio, they are still popular in space missions due to their efficacy at achieving lossless data compression and the low complexity of the algorithms. Even though lossy DCT-based compression methods suffer from blocking artifacts, they have been used for a long time. In recent years, however, DWT-based compression schemes have been increasingly used in space missions because they can provide higher image quality.
We evaluated the performance of image compression systems suitable for space missions: DPCM, JPEG-LS, CCSDS-IDC, and JPEG-XR. The considered compression methods are one- or two-dimensional, whereas the considered datasets are multispectral. The details of these prediction- and transform-based compression systems are introduced in Sections 3 and 4, respectively.
All the studies were performed considering that the selected compression scheme is to be developed in hardware and used onboard satellites. Thus, the evaluation must consider practical aspects of the systems, such as processing complexity (speed) and ease of implementation in FPGA hardware.

Prediction-Based Compression Systems
This section reviews the two prediction-based compression techniques evaluated in this paper: DPCM and JPEG-LS. The basic idea behind these schemes is to predict the value of a pixel based on the correlation between certain neighboring pixel values, using certain prediction coefficients. The number of pixels used in the prediction is called the order of the predictor. The difference between the predicted value and the actual value of the pixel gives the difference (residual) image, which is much less spatially correlated than the original image. The difference image is then quantized and encoded. The basic function of the quantization is to map a large range of values onto a relatively smaller set of values. Predictive methods do not require much storage and present a good tradeoff between complexity and efficiency. The main drawback of prediction methods is their susceptibility to error propagation.
3.1. DPCM. Differential Pulse Code Modulation (DPCM), the most common approach to predictive coding, offers the advantages of computational simplicity and ease of parallel implementation in hardware [10].
In this work, we evaluated a particular lossy DPCM compression method used by CAST/China in the PANMUX camera onboard the CBERS 3 and 4 satellites [21]. The encoding process of this first-order predictor has five steps: subtraction, scalar quantization, binary encoding, summation, and prediction, as shown in Figure 2. Lookup tables are used in this particular one-dimensional DPCM algorithm for the prediction, quantization, and binary encoding, as shown in Figure 3, simplifying the implementation. This is a lossy algorithm with a fixed compression ratio of 2, because the quantization introduces error and lowers the sample width from eight bits to four bits.
To ensure that the predictions at both the encoder and decoder are identical, the encoder uses the preceding reconstructed pixel value to predict the next one; that is, x̂_n = ρ · x_{n−1}, where ρ is the prediction coefficient, equal to or smaller than one, which limits the propagation of coding errors. This particular algorithm implements the multiplication as a lookup table (the data in the lookup table also make up a common subtraction function). The prediction error is rounded to an integer and then encoded.
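The encoder/decoder symmetry can be sketched as follows (a generic first-order DPCM in Python; the prediction coefficient rho and the quantizer step are illustrative placeholders, not the CAST lookup-table values):

```python
def dpcm_encode(pixels, rho=0.97, step=16):
    # First-order lossy DPCM: predict from the previous *reconstructed*
    # pixel so the encoder tracks exactly the state the decoder will have.
    codes, recon = [], []
    prev = 0                                  # decoder starts from zero
    for x in pixels:
        pred = round(rho * prev)              # prediction from reconstruction
        q = round((x - pred) / step)          # scalar-quantized residual
        codes.append(q)
        prev = pred + q * step                # mirror of the decoder update
        recon.append(prev)
    return codes, recon

def dpcm_decode(codes, rho=0.97, step=16):
    out, prev = [], 0
    for q in codes:
        pred = round(rho * prev)              # identical prediction rule
        prev = pred + q * step                # dequantize and accumulate
        out.append(prev)
    return out
```

Because both sides apply the same prediction rule to the same reconstructed values, the decoder output matches the encoder's reconstruction exactly, and quantization error cannot accumulate.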
The typical image distortions caused by lossy DPCM coding are granular noise (random fluctuations in the flat areas of the picture), slope overload (blurring of high-contrast edges), and Moiré patterns (in periodic structures).

3.2. JPEG-LS. JPEG-LS [12] provides simple, efficient, and low-complexity lossless and near-lossless image compression. The core of JPEG-LS is based on the LOCO-I (Low Complexity Lossless Compression for Images) algorithm [22], which relies on prediction, error modeling, and context-based encoding of the errors. In the near-lossless mode, the maximum absolute error can be controlled by the encoder. Basically, JPEG-LS consists of two independent stages called modeling and encoding. The main procedures for the lossless and near-lossless encoding processes are shown in Figure 4.

The modeling approach is based on the notion of "context." In context modeling, each sample value is conditioned on a small number of neighboring samples. The template used for context modeling and prediction is shown in Figure 4. The context is determined from four reconstructed neighboring samples at positions a, b, c, and d. From their values, the context first determines whether the information in the sample x should be encoded in the regular or run mode. The run mode is selected when the context estimates that successive samples are likely to be either identical (for lossless coding) or nearly identical within the required tolerances (for near-lossless coding); otherwise, the regular mode is used [12].
Regular Mode: Prediction and Error Encoding. In the regular mode, the context determination procedure is followed by a prediction procedure (decorrelation) and error encoding. The predictor combines the reconstructed values of the three neighboring samples at positions a, b, and c to predict the sample at position x.
(a) Prediction. The prediction approach is a variation of median adaptive prediction, in which the predicted value is the median of the a, b, and c pixels. In the LOCO-I algorithm, primitive detection of horizontal or vertical edges is achieved by examining the pixels neighboring the current pixel. The pixel x is predicted according to the following rule: the b pixel is used in cases where a vertical edge exists left of x, the a pixel is used in cases of a horizontal edge above x, and a + b − c is used if no edge is detected. This simple predictor is called the median edge detection (MED) predictor or LOCO-I predictor [22]. The initial prediction is then refined using the average value of the prediction error in that particular context.
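The MED rule above is small enough to state directly in code (a straightforward Python transcription of the three cases):

```python
def med_predict(a, b, c):
    # a: left neighbor, b: neighbor above, c: upper-left neighbor
    # (all reconstructed values, so encoder and decoder agree).
    if c >= max(a, b):
        return min(a, b)   # edge detected: predict from the smaller neighbor
    if c <= min(a, b):
        return max(a, b)   # edge detected: predict from the larger neighbor
    return a + b - c       # smooth region: planar prediction
```

For instance, with a = 4, b = 6, c = 5 no edge is detected and the planar term 4 + 6 − 5 = 5 is used.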
(b) Error Encoding. The prediction error is computed as the difference between the actual sample value at position x and its predicted value. This prediction error is then corrected by a context-dependent term to compensate for biases in prediction. In the case of near-lossless coding, the prediction error is quantized. The prediction errors are then encoded using a procedure derived from Golomb coding [9], which is optimal for sequences with a geometric distribution. The context modeling procedure determines a probability distribution that is used to encode the prediction errors. During the encoding process, shorter codes are assigned to the more probable events.
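The Golomb-family idea can be sketched with a Rice code, the power-of-two special case commonly used in practice (the signed-to-unsigned error mapping and the fixed parameter k below are illustrative; JPEG-LS selects k adaptively per context):

```python
def rice_encode(e, k):
    # Zigzag-map the signed error to an unsigned index, then Rice-code it:
    # unary-coded quotient, '0' separator, k-bit binary remainder.
    u = 2 * e if e >= 0 else -2 * e - 1
    q, r = u >> k, u & ((1 << k) - 1)
    rem = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":                 # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
    u = (q << k) | r
    return u // 2 if u % 2 == 0 else -(u + 1) // 2   # undo the zigzag map
```

Small errors (the probable events under a geometric distribution) get short codewords, e.g. `rice_encode(-1, 2)` is the 3-bit string "001", while large errors grow only linearly through the unary part.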
Run Mode. If the reconstructed values of the samples at a, b, c, and d are identical (for lossless coding), or if the differences between them are within set bounds (for near-lossless coding), the context modeling procedure selects the run mode and the encoding process skips the prediction and error encoding procedures. In the run mode, a sequence of consecutive samples with identical values (or values within the specified bound, in the case of near-lossless coding) is encoded.
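The run condition can be illustrated by the grouping step alone (the actual JPEG-LS run-length coder, which uses an adaptive Golomb-type code for the run lengths, is omitted here; `near` is the near-lossless tolerance, with 0 reproducing the lossless case):

```python
def run_lengths(samples, near=0):
    # Group consecutive samples whose values stay within +/- near of the
    # run's first sample; each run is reported as (value, length).
    runs = []
    i = 0
    while i < len(samples):
        j = i
        while j + 1 < len(samples) and abs(samples[j + 1] - samples[i]) <= near:
            j += 1
        runs.append((samples[i], j - i + 1))
        i = j + 1
    return runs
```

With near = 0 the row [5, 5, 5, 9, 9, 2] collapses to three runs; with near = 1 nearly identical samples such as [5, 6, 5] merge into a single run, which is why smooth regions compress so well in this mode.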

Transform-Based Compression Systems
Transform-based compression systems are based on the insight that the decorrelated coefficients of a transform can be coded more efficiently than the original image pixels. The transform typically results in some energy compaction (i.e., an energy redistribution of the original image into a smaller set of coefficients). Even though the energy is compacted into fewer coefficients, the total energy is conserved, resulting in a significant number of coefficients with values of zero or near zero.
Several kinds of transforms, with different efficiencies, energy compaction properties, and computational complexities, are useful in compression systems. The most common data compression transforms are the DCT (discrete cosine transform) and the DWT (discrete wavelet transform).

DCT-Based Compression System.
The JPEG (Joint Photographic Experts Group) committee published the JPEG standard (ITU-T T.81 | ISO/IEC 10918-1) in 1992 [23]. The JPEG baseline, a typical DCT compression technique, has been widely used for digital imaging, including digital photography and images on the Internet. JPEG and other DCT-based compression techniques have been employed in many space missions, as discussed in [6].
JPEG is a standard for continuous-tone still images that allows lossy and lossless coding. Lossless JPEG [11] is an independent predictive coding compression technique that includes differential coding, run length, and Huffman coding. Lossless JPEG is not widely used in space missions due to its low compression ratio.
There are several modes defined for JPEG, including baseline, progressive, and hierarchical. The baseline mode, which supports only lossy compression using the DCT, is the most popular. The process flow is shown in Figure 5. The JPEG-baseline encoder performs an 8 × 8 block-based DCT, quantization, zigzag ordering, and entropy coding using Huffman tables. A quality factor is set using the quantization tables. When coarse quality settings are applied to an image, block artifacts are induced by the encoding process and serious quality degradation becomes evident.

DWT-Based Compression System.
The wavelet transform decomposes the original image into a sum of spatially and frequency localized functions, in a way that is similar to subband decomposition. The most important visual information tends to be concentrated in a reduced number of components (coefficients); therefore, the remaining coefficients can be quantized coarsely or truncated to zero with little image distortion. Compression methods based on wavelets avoid the block artifacts that occur in compression methods based on the DCT. This is one of the reasons why wavelet-based compression schemes tend to produce superior image quality.
The CCSDS (Consultative Committee for Space Data Systems) is an international group dedicated to providing technical solutions to common problems faced by member space agencies, such as NASA (National Aeronautics and Space Administration), CAST/China, and INPE/Brazil. Two compression algorithms have been proposed by the CCSDS. The first is CCSDS Lossless Data Compression (CCSDS-LDC), which has been widely used in many missions [24, 25]. The second is the CCSDS Image Data Compression (CCSDS-IDC) algorithm.
The CCSDS-IDC [16, 17] is an image compression recommendation suitable for space applications, established in 2005. The compression technique described in this recommendation can be used to produce lossy and lossless compression.
The CCSDS-IDC compressor consists of two main functional parts: a discrete wavelet transform (DWT) module that performs the decomposition of the image data and a bit-plane encoder (BPE) that encodes the transformed data, as shown in Figure 6. This architecture is similar to the JPEG2000 structure, but it differs from JPEG2000 in certain aspects: (a) it specifically targets the high-rate instruments used onboard spacecraft in space missions; (b) a tradeoff has been made between compression performance and complexity; (c) the lower complexity supports a low-power hardware implementation; (d) it has a limited set of options. According to the literature [17, 26], CCSDS-IDC can achieve performance similar to JPEG2000.
The algorithm decomposes the image with a three-level, two-dimensional, separable DWT, as shown in Figure 7(a). Two wavelets are specified in the recommendation: the 9/7 biorthogonal DWT, referred to as the float DWT, and a nonlinear integer approximation to this transform, referred to as the integer DWT. The float DWT cannot provide lossless compression; therefore, the integer DWT must be used in applications that require perfect image reconstruction. The integer DWT requires only integer arithmetic, while the float DWT requires floating-point calculations. Thus, the integer DWT may be preferable in some applications for complexity reasons. At low bit rates, the float DWT often provides better compression efficacy than the integer DWT [16, 17].
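To illustrate how an integer lifting scheme achieves perfect reconstruction using integer arithmetic alone, the sketch below implements one level of the simpler 1-D CDF 5/3 integer wavelet (the CCSDS recommendation specifies 9/7 filters, so this is only an analogy, with naive edge handling and an even-length input assumed):

```python
def fwd_53(x):
    # One level of 1-D integer CDF 5/3 lifting (even-length input assumed).
    n = len(x)
    d, s = [], []
    for i in range(n // 2):                       # predict step: highpass
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d.append(x[2 * i + 1] - (x[2 * i] + right) // 2)
    for i in range(n // 2):                       # update step: lowpass
        left = d[i - 1] if i > 0 else d[0]
        s.append(x[2 * i] + (left + d[i] + 2) // 4)
    return s, d

def inv_53(s, d):
    # Undo the lifting steps in reverse order: an exact integer inverse.
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):                       # undo update: even samples
        left = d[i - 1] if i > 0 else d[0]
        x[2 * i] = s[i] - (left + d[i] + 2) // 4
    for i in range(len(d)):                       # undo predict: odd samples
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (x[2 * i] + right) // 2
    return x
```

Because each lifting step only adds or subtracts a rounded function of other samples, undoing the steps in reverse order recovers the input bit-exactly, which is the property that makes lossless compression possible with an integer DWT.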
The BPE processes the wavelet coefficients in groups of 64 coefficients referred to as blocks. As shown in Figure 7(b), each block consists of a single coefficient from the lowest spatial frequency subband (referred to as the DC coefficient) and 63 AC coefficients. A segment is defined as a group of S consecutive blocks. The BPE encodes the DWT coefficients segment by segment, and each segment is coded independently of the others.

Other Transform Models.
JPEG-XR (ITU-T T.832 | ISO/IEC 29199-2) is the newest image coding standard from the JPEG committee [18, 19]. It is a block-based image coder that follows the traditional image-coding paradigm and includes transformation and coefficient-encoding stages. JPEG-XR employs a reversible integer-to-integer mapping, called the lapped biorthogonal transform (LBT), as its decorrelation tool [20]. The main operations of JPEG-XR are transform, scalar quantization, and entropy coding, as shown in Figure 8.
JPEG-XR begins with a transform stage that maps the pixel information from the spatial domain to the frequency domain. Then, a quantization stage divides each coefficient by some integer value, rounding to the nearest integer. For lossless compression, a quantization parameter of 0 will not affect the transformed coefficients. Next, the transform coefficients are scanned in order of increasing frequency and finally entropy coded.
The lifting-based reversible hierarchical lapped biorthogonal transform (LBT) converts the image data from the spatial domain to a frequency-domain representation. The transform requires only a small number of integer processing operations for both encoding and decoding. It is exactly invertible in integer arithmetic and hence supports lossless image representation [20].
The two-stage transform is based on two basic operators: the photo core transform (PCT) operator and the optional photo overlap filtering (POT) operator, as shown in Figure 9. The core transform is similar to the widely used discrete cosine transform (DCT) and can exploit spatial correlations within a block-shaped region. The overlap filtering is designed to exploit the correlation across block boundaries and to mitigate blocking artifacts. It can be switched on or off by the encoder.
The smallest element of an image is a pixel. Each 4 × 4 set of pixels is grouped into a block. Then, each 4 × 4 set of blocks is grouped into a macroblock. A set of macroblocks can then be grouped into a tile, although the number of macroblocks included along the width and height may vary between tiles. At the highest level of the hierarchy, the tiles come together to form the complete image. An illustrative example of this partitioning is shown in Figure 10.
The image data is represented in the frequency domain, and the transform coefficients associated with each macroblock are split into three frequency bands: DC coefficients, lowpass coefficients, and highpass coefficients. According to this hierarchy, each macroblock contains 256 transform coefficients: one is DC, 15 are lowpass, and 240 are highpass. This hierarchical division supports direct decompression at three different resolutions.
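This coefficient bookkeeping follows directly from the partitioning, as the small arithmetic check below shows (plain Python; nothing here is JPEG-XR-specific code):

```python
# JPEG-XR partitioning arithmetic: a block is 4x4 pixels and a macroblock
# is a 4x4 grid of blocks, i.e. 16x16 pixels in total.
BLOCK_SIDE = 4
blocks_per_mb = BLOCK_SIDE * BLOCK_SIDE           # 16 blocks per macroblock
coeffs_per_block = BLOCK_SIDE * BLOCK_SIDE        # 16 coefficients per block
coeffs_per_mb = blocks_per_mb * coeffs_per_block  # 256 coefficients in total

# The second transform stage runs over the 16 block-DC values, yielding
# 1 macroblock DC and 15 lowpass coefficients; everything else is highpass.
dc = 1
lowpass = blocks_per_mb - 1                       # 15
highpass = coeffs_per_mb - blocks_per_mb          # 15 per block x 16 = 240
```

The three counts (1 + 15 + 240 = 256) are exactly the three frequency bands that enable decoding at three resolutions.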
The bitstream of a JPEG-XR compressed image is structured as shown in Figure 11. The image may be compressed into either the spatial or the frequency format. The spatial structure stores the coefficients of each macroblock together and stores the macroblocks sequentially in raster scan order (left to right, then top to bottom). The frequency structure groups the coefficients according to frequency band. Each band of coefficients is stored in raster scan order.
To enable the optimization of the quantization, JPEG-XR uses a flexible coefficient quantization approach that is controlled by quantization parameters (QPs). Adaptive coefficient scanning is used to convert the two-dimensional array of transform coefficients within a block into a one-dimensional vector to be encoded. Finally, the transform coefficients are entropy encoded. For this purpose, a variable-length coding (VLC) look-up table approach is used; a VLC table is selected from a small set of fixed predefined tables, with the table being selected adaptively based on the local statistics. JPEG-XR supports a wide range of color formats, including n-channel encodings using fixed- and floating-point numerical representations; the bit depth varieties allow for a wide range of data compression scenarios.

Reference Software.
In this work, we used the reference C++ software implementations listed in Table 1. The DPCM software was implemented by the Brazilian company AMS Kepler, based on the documentation provided by CAST (China) [21]. The JPEG-LS Reference Encoder v.1.00 was originally developed by Hewlett-Packard Laboratories [27]. The CCSDS-IDC reference software [28], described in [16], was developed by the University of Nebraska, Lincoln. A reference software implementation of JPEG-XR has been published as ITU-T Recommendation T.835 and ISO/IEC International Standard 29199-5 [29].
The C++ source codes were slightly modified to support the monochromatic raw image format. Certain encoding parameters must be specified on the command line. The selection of compression options and parameters affects the compression efficacy and implementation complexity. Naturally, two different sets of compression parameters yield different rate-distortion results for the same test image. An appropriate selection of compression parameters must be determined for each application.
The main JPEG-LS parameter is Near (the maximum absolute error), which takes values of 0 (for the lossless mode) or 1, 2, 3, and 4 (for the near-lossless mode). The CCSDS-IDC method uses the following main parameters: BitsPerPixel (Bpp), the desired bit rate in bits/pixel, which takes values of 0.25, 0.5, 1, 2, or 4; the S value, the number of blocks per segment; and the TypeDWT, which takes values of 0 (for the float DWT) or 1 (for the integer DWT). JPEG-XR uses the following main parameters: Mode, which takes values of 0 (for All coefficients), 1 (for NoFlexbits), 2 (for NoHighpass), and 3 (for DC-Only); and the Quantization Parameter (QP), which can take values from 0 (no quantization) up to 255. To reduce complexity, the JPEG-XR method was tested with no overlap filter (POT).

Metrics.
We considered compression ratio and distortion measures to compare the various compression algorithms. Although the processing time is also very important, we did not consider it in this work because it is highly dependent on the processing engine and the implemented algorithm.
Compression ratio (CR) is defined as the number of bits in the original image divided by the number of bits used to represent the compressed image.
In lossy image compression, one common reproduction distortion measure is the mean squared error (MSE), defined as

MSE = (1 / (w · h)) Σ_{i=1..h} Σ_{j=1..w} (x_{i,j} − x̂_{i,j})²,

where x_{i,j} and x̂_{i,j} are the original and reconstructed pixel values in the ith row and jth column and w and h denote the image width and height, respectively. More commonly, (objective) image quality is evaluated in terms of peak signal-to-noise ratio (PSNR), measured in dB and defined as

PSNR = 10 log₁₀ ( (2^B − 1)² / (MSE + 1/12) ),

where B is the dynamic range (in bits) of the original image.
The term 1/12 eliminates the infinite value for PSNR when MSE approaches 0 and represents the MSE associated with the quantization of the original analog data. Since image signals are highly structured, the pixel dependencies carry important structural information about the image content. Although the PSNR is a practical and useful image quality measure, structural information might not be well captured by the PSNR [30]. Therefore, we also used another quality measure known as the structural similarity (SSIM) index [31], defined as

SSIM(x, y) = ( (2 μ_x μ_y + C₁)(2 σ_{xy} + C₂) ) / ( (μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂) ),

where μ_x and μ_y are the local sample means of the windows x and y of common size N × N, σ_x and σ_y are the local sample standard deviations of x and y, and σ_{xy} is the sample cross-correlation of x and y after removing their means. The constants C₁ and C₂ are included to avoid instability when the values in the denominator are very close to zero. The SSIM index is computed locally within a sliding window (e.g., 11 × 11 pixels) that moves pixel by pixel across the image, resulting in an SSIM map [31]. This measure takes into account the luminance, contrast, and structure information in the image. We also use a mean SSIM index to evaluate the overall image quality:

MSSIM(X, Y) = (1 / M) Σ_{j=1..M} SSIM(x_j, y_j),

where X and Y are the reference and the distorted images, respectively, x_j and y_j are the image contents at the jth local window, and M is the number of local windows in the image. MATLAB and C++ implementations of the SSIM index algorithm are available on the Internet [32].
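The rate and distortion metrics are straightforward to compute; the sketch below implements CR, MSE, and the PSNR variant with the 1/12 term as defined above, using flat pixel lists in place of images (SSIM is omitted for brevity):

```python
import math

def compression_ratio(original_bits, compressed_bits):
    # CR: bits in the original image over bits in the compressed image.
    return original_bits / compressed_bits

def mse(orig, recon):
    # Mean squared error over flattened pixel values.
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, bits=8):
    # PSNR in dB; the 1/12 term (quantization noise of the original
    # analog data) keeps the value finite when MSE -> 0.
    peak_sq = (2 ** bits - 1) ** 2
    return 10 * math.log10(peak_sq / (mse(orig, recon) + 1 / 12))
```

For 8-bit data the 1/12 term caps the lossless PSNR at about 58.9 dB, which is the value this definition reports for a perfectly reconstructed image.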

Test Image Dataset.
The scenes captured by the CBERS-2B CCD camera have five spectral bands, as shown in Table 2. Each spectral band is monochromatic raw data with 5812 × 5812 pixels and eight bits per pixel. The algorithms' performance evaluation was carried out using the 20 scenes described in Table 3.
The CBERS-2B CCD images available from the INPE public catalog website [33] were not used in this test because they have radiometric and geometric calibration. The test image set consisted of twenty 5-band images without any calibration processing. These images were chosen to represent a variety of content relevant to different remote sensing applications, such as agriculture, forest, urban areas, surface water, and other scenes with different cloud cover.

Performance Analysis.
The quantitative performance of the algorithms was evaluated using the dataset listed in Table 3. Table 4 shows the compression ratio and PSNR values averaged over the five bands of each scene. The PSNR values, taking into account all 100 band images, are plotted for each band in Figure 13. To avoid distortions at lower bit rates, the PSNR and MSSIM average values for each algorithm, plotted in Figure 14, are calculated over all raw images except scenes 1 and 2.
A fixed-rate coder is a desirable feature to guarantee that the memory will never overflow and that the data is always transmitted through the channel in a fixed time. However, some variable-rate schemes can yield higher compression ratios than fixed-rate coders of comparable complexity. In (a) through (d) of Figure 13, we can see that some compression schemes, such as JPEG-LS and the newly adopted JPEG-XR standard, are variable-rate coders. Although Figure 13(b) shows that CCSDS-IDC provides fixed-rate compression, it can also compress at a variable rate (quality limited) using other parameter choices.
(a) DPCM Performance Analysis. The PSNR charts in Figures 13(a) and 14(a) show that DPCM has poor compression ratio and PSNR performance in comparison with the other algorithms. Due to the basic prediction algorithm of this particular DPCM, the compression ratio is fixed at 2, and the PSNR quality measurement is lower than 50 dB. The MSSIM chart in Figure 14(b) shows that DPCM can reconstruct images with better structural similarity than some lossy JPEG-LS (Near up to 1) and some CCSDS-IDC (Bpp …) settings.

Due to the run mode, the lossy JPEG-LS method can degrade images, generating line effects that increase with the Near parameter. It must be noted that the vertical stripes in the raw images prevented higher degradation in run mode. However, JPEG-LS can generate more intense horizontal line effects in images free of vertical stripes, as we observed in other evaluation tests performed with smoothed images. The MSSIM chart in Figure 14(b), the SSIM map in Figure 16(b) (left and center), and its histogram in Figure 17(b) show that the lossy JPEG-LS algorithm has lower image fidelity than the other algorithms at comparable bit rates.

The advantage of JPEG-LS is its low complexity in comparison with the transform-based algorithms.
(c) CCSDS-IDC Performance Analysis. We evaluated the integer DWT of the CCSDS-IDC for push-broom sensors. The integer DWT is used for applications requiring lossless compression and is preferable in lossy compression for complexity reasons, to avoid floating-point operations in the DWT calculation.
An image with width w and height h generates ⌈w/8⌉ · ⌈h/8⌉ DWT coefficient blocks. The entire image is compressed as a single segment (full-frame compression) by defining S = ⌈w/8⌉ · ⌈h/8⌉. When S = ⌈w/8⌉, each image segment corresponds to a thin horizontal strip of the image (strip compression). We defined a segment of blocks that corresponds to the image width: by setting S = w, each segment consists of eight strips. The advantage of strip compression is that there is no need to store a complete frame of image or DWT data; thus, it can lead to a memory-efficient implementation that is convenient for push-broom sensors. We also imposed fixed-rate constraints on each compressed segment, evaluating the performance at bit rates of 0.25, 0.5, 1, 2, and 4 bits per pixel.
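Under these definitions, the segment-size options for a CBERS-2B band follow from simple arithmetic (a Python sketch; the ceiling division reflects that 5812 is not a multiple of 8):

```python
import math

def segment_sizes(w, h):
    # Each 8x8 pixel block yields one DWT coefficient block (1 DC + 63 AC).
    blocks_per_strip = math.ceil(w / 8)    # blocks in one 8-row strip
    total_blocks = blocks_per_strip * math.ceil(h / 8)
    return {
        "full_frame_S": total_blocks,      # whole image as one segment
        "strip_S": blocks_per_strip,       # one thin strip per segment
        "S_equals_w": w,                   # about eight strips per segment
    }
```

For a 5812 × 5812 band this gives 727 blocks per strip, 727 · 727 blocks for full-frame compression, and S = 5812 corresponds to roughly eight strips per segment.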
The CCSDS-IDC strip-based compression at a fixed rate can distort some image portions because the segments may not be completely encoded within the segment byte limit, as illustrated by the horizontal stripes in the SSIM map in Figure 16(c) (center image).
The CCSDS recommendation specifies several options, such as the integer and float DWTs, strip- and frame-based compression, and rate-limited and quality-limited compression. We did not intend to test the full range of available options. More detailed performance comparisons of CCSDS-IDC can be found in [17, 26].
For future work, we plan to analyze lossy JPEG-XR with different parameters, using a more diverse set of test images (captured by other satellite sensors) with a wider range of bit rates and better spatial and radiometric resolutions (bit depth). We are also planning to conduct a series of subjective tests, in which a group of remote sensing interpreters will be asked to rank lossy decompressed images according to their perceived quality.
This study aimed to evaluate different image compression methods in order to choose a compression scheme suitable to be implemented in FPGA hardware and used onboard the next generation of satellites of the Brazilian Space Program. The next phase is to evaluate practical aspects of the systems, such as processing complexity (speed) and ease of implementation in FPGA hardware.

Figure 1 :
Figure 1: A general block diagram of image compression systems: (a) encoder and (b) decoder.

Figure 16 :
Figure 16: Some reconstructed images compressed by the different compression algorithms. The left (original size) and center (cropped to 400 × 400) images show SSIM maps of the compressed images, where color indicates the local SSIM magnitude. The right image (cropped to 128 × 128) shows details of the decompressed image (enhanced by linear contrast stretch for visualization).
The SSIM map in Figure 16(a) and the SSIM histogram in Figure 17(a) confirm that the original raw image and the DPCM reconstructed image have poorer structural similarity than CCSDS-IDC Int Bpp = 1 and JPEG-XR QP = 10.

(b) JPEG-LS Performance Analysis. The lossless JPEG-LS outperforms the lossless CCSDS-IDC and the lossless JPEG-XR. This is confirmed by the PSNR results in Figures 13(a) and 14(a).

Figure 17 :
Figure 17: ((a)-(d)) The histograms of the SSIM maps. (e) Color map used to show the SSIM maps, where blue indicates an SSIM index equal to 1.0 and red indicates an SSIM index lower than 0.95. The MSSIM index achieves the value 1.0 if and only if the two images being compared are equal.
