Coastal Zone Classification Based on Multisource Remote Sensing Imagery Fusion

The main objective of this paper was to assess the capability of multisource remote sensing imagery fusion for coastal zone classification. Five scenes of Gaofen (GF-) 1 optical imagery and four scenes of synthetic aperture radar (SAR) imagery (C-band Sentinel-1 and L-band ALOS-2) were collected and matched. Note that GF-1 is the first satellite of the China high-resolution earth observation system, which acquires multispectral data with decametric spatial resolution, high temporal resolution, and wide coverage. A comparison of the C- and L-band SAR responses to coastal coverage verified that C band is superior to L band and that the parameter subset of σ⁰_VV, σ⁰_VH, and D_cross can be effectively used for coastal classification. A new fusion method based on the wavelet transform (WT) was also proposed and applied for imagery fusion. Statistical values for the mean, entropy, gradient, and correlation coefficient of the proposed method were 67.526, 7.321, 6.440, and 0.955, respectively; we therefore conclude that the result of our proposed method is superior to the GF-1 imagery and the traditional HIS fusion results. Finally, the classification output was determined along with an assessment of classification accuracy and the kappa coefficient. The kappa coefficient and overall accuracy of the classification were 0.8236 and 85.9774%, respectively, so the proposed fusion method performed satisfactorily for coastal coverage mapping.


Introduction
Coastal zones, typical land-sea ecosystems, play a key role in the sustainable development and environmental protection of shorelines; around 50% of the world's population lives within 60 to 200 km of the coast [1,2]. Increasing human activities impact coastal zones. To reduce vulnerability and improve risk assessments for coastal development, it is important to map and monitor coastal zone status in terms of land cover types and change [3,4].
Remote sensing, with both passive and active sensors, has proven to be a valuable tool for coastal zone classification. Optical sensors, such as the Thematic Mapper (TM) (Landsat 5), Enhanced Thematic Mapper Plus (ETM+) (Landsat 7), High Resolution Visible (HRV) (SPOT 3), High Resolution Visible and Infrared (HRVIR) (SPOT 4), and High Resolution Geometric (HRG) (SPOT 5) sensors and the new Chinese Gaofen (GF-1, -2, and -4) sensors, together with synthetic aperture radar (SAR) sensors such as RADARSAT-1/2, ENVISAT ASAR, ALOS-1/2, TerraSAR-X, COSMO-SkyMed, and GF-3, have been used to map and monitor coastal zones for more than 20 years [5][6][7]. A combination of optical and SAR imagery is expected to perform better for coastal zone classification. Optical imagery has been widely used for coastal zone classification in the literature, based on conventional visual photo-interpretation keys such as tone color, texture, pattern, form, size, and context [8][9][10][11][12]. Xiao et al. [13] used ETM+ remote sensing data from the United States' Landsat-7 satellite to build a coastal wetland classification model based on a backpropagation (BP) neural network. The model was applied to natural wetland cover classification research in the core area of the Yancheng National Natural Reserve for Coastal Rare Birds. Zhang et al. [14] proposed a hybrid approach based on Landsat TM imagery for classifying coastal wetland vegetation. Linear spectral mixture analysis was used to segregate the TM image into four fractional images, which were used for classifying major land cover types through a threshold technique. Although conventional visual photo-interpretation can facilitate different types of coastal classification, it is also limited because optical remote sensing sensors are easily affected by cloud cover and solar illumination. SAR ensures all-day and almost all-weather observations with moderate-to-fine spatial resolution and rapid revisit time, which has been shown to be suitable for accurate and frequent coastal classification mapping [15,16]. Beyond single-polarimetric SAR for coastal detection and classification, Gou et al. [17] proposed an unsupervised method based on three-channel joint sparse representation (SR) classification with fully polarimetric SAR (PolSAR) data. The proposed method utilizes both texture and polarimetric feature information extracted from the HH, HV, and VV channels of a SAR image. Buono et al. [18] demonstrated PolSAR's capability to classify coastal areas of the Yellow River Delta (China) using two well-known unsupervised classification algorithms, H/α-based and Freeman-Durden model-based, applied to a fully polarimetric SAR scene collected by RADARSAT-2 in 2008. Optical and SAR imagery provide abundant spectral and polarimetric features, respectively. Both optical and SAR products have advantages and disadvantages, so a combination of optical and SAR data is a promising approach for coastal classification. Rodrigues and Souza-Filho [19] investigated the capability of Landsat ETM+ and RADARSAT-1 SAR for classifying mangroves, coastal plateaus, alluvial plains, tidal flats, and salt marshes. Supervised classification of Landsat ETM+ imagery and the combined ETM+/SAR product represented a significant advancement for rapid and accurate coastal mapping. Iii et al. [20] developed a k-means clustering algorithm to classify Landsat TM, color-infrared (CIR) photographs, and ERS-1 SAR data. Individually, the green reflective CIR and SAR data identified broad categories of water, marsh, and forest. In combination with TM, the SAR and green CIR bands improved overall accuracy by about 3% and 15%, respectively.
In this paper, optical imagery from Landsat TM and Gaofen (GF-) 1 and SAR imagery from C-band Sentinel-1 and L-band ALOS-2 were collected and matched. Simultaneously, the spectral and polarimetric features of sampled coastal types were analyzed for classification. A wavelet transform (WT) fusion method was also proposed for multisource remote sensing imagery to acquire optimal classification results.
The remainder of this paper is organized as follows: the study area and remote sensing datasets are introduced in Section 2. The methodology for feature extraction and the WT fusion algorithm is described in Section 3. The experimental results, including the comparison of multipolarization features, fusion imagery based on the wavelet transform (WT), and classification results using the maximum likelihood classifier (MLC), are presented in Section 4. Conclusions are given in Section 5.

Experiment
2.1. Study Area. Hangzhou Bay is a representative wetland area located south of the Yangtze River Delta, with a winding shoreline and numerous islands. In addition, the north-south transition of climate and the east-west transition of landforms result in coastal wetland diversity. Five types of wetland are distributed in Zhejiang Province (Table 1), with a total area of 2,467,775 hectares. It is worth noting that natural wetland covers 891,083 hectares (about 36.1%), while artificial wetland covers 1,576,692 hectares (about 63.9%).

Datasets
2.2.1. ALOS-2. Three scenes of ALOS-2 SAR imagery were collected, including two scenes of quad-polarized imagery and one scene of dual-polarized imagery. ALOS-2 (Advanced Land Observing Satellite 2) was launched by the Japan Aerospace Exploration Agency (JAXA) in May 2014. As the name indicates, ALOS-2 is the successor of ALOS, but it is specialized for L-band (1.2 GHz) SAR.

2.2.2. Sentinel-1. Two Sentinel-1 SAR imagery scenes were collected for comparison. Sentinel-1 is the first satellite constellation of the Copernicus Programme, launched by the European Space Agency in April 2014. The Sentinel-1 pair is composed of two satellites, Sentinel-1A and Sentinel-1B, which carry a C-band (5.405 GHz) SAR. The shapefile of survey results is shown in Figure 1, together with the SAR imagery; the survey results were used for study validation. The specifications of the remote sensing datasets are listed in Table 1. The VV and VH polarization channels were used for analysis and comparison between L-band ALOS-2 and C-band Sentinel-1.

Multipolarization Features.
In addition to the normalized radar cross section (NRCS) of the four polarization channels, the multipolarization features considered in this study are shown in Table 2. Processing consisted of radiometric calibration, map reprojection, and generation of the multilook covariance matrix C.
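The parameter derivation above can be sketched as follows. Table 2 is not reproduced here, so the cross-polarization ratio (R_cross) and cross-polarization difference (D_cross) forms below are assumptions, illustrated on calibrated linear backscatter intensities:

```python
import numpy as np

def db(x):
    """Convert linear backscatter intensity to decibels."""
    return 10.0 * np.log10(x)

def polarization_parameters(sigma_vv, sigma_vh):
    """Derive multipolarization parameters from calibrated linear
    backscatter intensities (hypothetical definitions; the exact
    formulas of Table 2 are not reproduced here)."""
    sigma_vv = np.asarray(sigma_vv, dtype=float)
    sigma_vh = np.asarray(sigma_vh, dtype=float)
    return {
        "sigma0_vv_db": db(sigma_vv),
        "sigma0_vh_db": db(sigma_vh),
        # R_cross: linear cross-polarization ratio (dimensionless)
        "R_cross": sigma_vh / sigma_vv,
        # D_cross: cross-polarization difference in dB
        "D_cross": db(sigma_vv) - db(sigma_vh),
    }
```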

Imagery Fusion Method Based on WT.
WT is widely used for imagery fusion through multiresolution analysis in the spatial-frequency domain. The main idea of WT fusion is to retrieve multiresolution signals from the WT and then fuse the images at different scales. It is noteworthy that the WT method is implemented by performing an inverse WT using a low-resolution multispectral approximation image and the details from a high-resolution panchromatic image [21,22].
The computation of the WT of a 2-D image involves recursive filtering and subsampling. At each level, there are four detail images: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). An N-level decomposition finally yields 3N + 1 different frequency bands, comprising 3N high-frequency bands and one LL-frequency band [23]. The 2-D WT thus has a pyramid structure. Figure 2 is a schematic of image fusion based on the WT.

In this study, the MLC method was adopted to achieve high classification accuracy. It relies on the Bayesian maximum likelihood approach, which discriminates different classes under the same a priori occurrence probability [24,25]. This common classification procedure is widely used for polarimetric SAR classification. To evaluate the performance of our proposed WT fusion method against the traditional method, the MLC was applied to the processed data. The classifier labels each fusion image pixel according to its normalized imagery value: once the normalized imagery fusion value is estimated, a proper threshold is set to identify which category the pixel falls into. When the pixel under test is ambiguous, characterized by two or three categories, it is identified as belonging to a mixed category. Then, once pixels are grouped according to their normalized imagery fusion value, pixels within the same group are further divided into 30 small and almost equal-size clusters according to their pixel values. Once the small clusters are generated, they are merged to obtain the user-selected number of output classes according to the Wishart metric [18].
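The decomposition-and-fusion scheme above can be sketched with a single-level 2-D Haar transform, a dependency-free stand-in for the multilevel "db13" basis used later in this paper. The LL approximation is taken from the multispectral band and the LH/HL/HH detail bands from the panchromatic image:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-low: approximation
    lh = (a + b - c - d) / 4.0   # low-high: horizontal details
    hl = (a - b + c - d) / 4.0   # high-low: vertical details
    hh = (a - b - c + d) / 4.0   # high-high: diagonal details
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: reconstruct the image from its sub-bands."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def wt_fuse(ms_band, pan):
    """Fuse a co-registered multispectral band and panchromatic image:
    keep the LL approximation of the multispectral band and the
    detail sub-bands of the panchromatic image, then invert the WT."""
    ll_ms, _, _, _ = haar2d(ms_band)
    _, lh, hl, hh = haar2d(pan)
    return ihaar2d(ll_ms, lh, hl, hh)
```

A multilevel version repeats the decomposition on the LL band, yielding the 3N + 1 bands described above.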
The schematic for our study is shown in Figure 3. Prior to the MLC, two additional steps were implemented according to the flow chart. First, the optimal parameters were selected for both SAR and optical imagery based on the observed performance of C-band Sentinel-1 and L-band ALOS-2 SAR and GF-1 high-resolution optical imagery. Second, the optimal parameters from the SAR and optical imagery were input to the WT fusion after imagery registration (Figure 2). Then, the MLC method was applied to the fused image. Finally, the classification results were compared with the reference data to evaluate precision.
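The MLC step in the flow chart can be sketched as a Gaussian maximum likelihood classifier with equal priors (a minimal illustration; the threshold grouping, 30-cluster splitting, and Wishart merging described above are omitted):

```python
import numpy as np

def train_mlc(samples):
    """Fit per-class Gaussian statistics from training samples,
    given as {class_label: (n_pixels, n_features) array}."""
    stats = {}
    for label, x in samples.items():
        x = np.asarray(x, dtype=float)
        mu = x.mean(axis=0)
        cov = np.atleast_2d(np.cov(x, rowvar=False))
        stats[label] = (mu, np.linalg.inv(cov),
                        np.log(np.linalg.det(cov)))
    return stats

def classify_mlc(stats, pixels):
    """Assign each pixel to the class maximizing the Gaussian
    log-likelihood, with equal a priori probabilities."""
    pixels = np.asarray(pixels, dtype=float)
    labels = list(stats)
    scores = []
    for label in labels:
        mu, cov_inv, log_det = stats[label]
        d = pixels - mu
        # discriminant: -log|C| - (x - mu)^T C^{-1} (x - mu)
        scores.append(-log_det - np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return np.array(labels)[np.argmax(scores, axis=0)]
```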

Comparison of Multipolarization Parameters between Sentinel-1 and ALOS-2

The multipolarization parameter assessment consisted of the mean and standard deviation for both C-band Sentinel-1 and L-band ALOS-2 SAR. For each polarization parameter listed in Table 2, 10,000 pixels were randomly selected from the matched Sentinel-1 and ALOS-2 dataset for different land cover types: silt beach, sand beach, brush, shallow water, aquaculture area, rice field, and so on. The statistical results for the polarization parameters are shown in Figure 4. The four polarimetric parameters are also shown in Figure 5.
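The sampling scheme above (10,000 randomly selected pixels per land cover type, summarized by mean and standard deviation) can be sketched as:

```python
import numpy as np

def class_statistics(param, class_map, n_samples=10000, seed=0):
    """Randomly sample up to n_samples pixels of each land cover class
    from a parameter image and return per-class (mean, std)."""
    rng = np.random.default_rng(seed)
    param = np.asarray(param, dtype=float).ravel()
    class_map = np.asarray(class_map).ravel()
    stats = {}
    for label in np.unique(class_map):
        idx = np.flatnonzero(class_map == label)
        take = rng.choice(idx, size=min(n_samples, idx.size),
                          replace=False)
        stats[label] = (param[take].mean(), param[take].std())
    return stats
```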
From Figure 4, it can be seen that the error for the C-band Sentinel-1 (blue) was much lower than that for the L-band ALOS-2 (red) SAR. In addition, the four polarimetric parameters performed differently in depicting the scattering discrepancy among coastal zone types. Compared with the other three parameters, R_cross values ranged from 0.6 to 0.7 and barely contributed to detecting the discrepancy. Therefore, no single polarimetric parameter could effectively meet the classification demands for the coastal zone types. We recommend adopting the combination of σ⁰_VV, σ⁰_VH, and D_cross of C-band Sentinel-1 as the optimal SAR parameters for coastal land cover classification.

Imagery Fusion Based on the WT Method

Prior to imagery fusion, hue-intensity-saturation (HIS) transforms were applied to the GF-1 and Sentinel-1 SAR imagery to decompose the imagery into the H, I, and S spaces. We selected individual components from the GF-1 optical and Sentinel-1 SAR imagery for fusion; they were then fused into a new single component using the WT. The WT method was used to eliminate distortion of the spectral features in the transform. Finally, inverse HIS transforms were performed to restore the fusion results to the RGB space. The fusion imagery is shown in Figure 6, in comparison with the GF-1 imagery and the results of the traditional HIS fusion method.
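The pipeline above can be sketched as follows. A simple linear intensity/chromaticity decomposition stands in for the full HIS transform (the exact transform variant is not specified here), and the `fuse` argument stands in for the WT fusion of the intensity component with the SAR image:

```python
import numpy as np

def his_forward(rgb):
    """Simplified HIS-style decomposition: intensity as the channel
    mean, plus two chromatic components chosen for exact inversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    v1 = (2.0 * b - r - g) / 6.0
    v2 = (r - g) / 2.0
    return i, v1, v2

def his_inverse(i, v1, v2):
    """Invert his_forward to recover the RGB image."""
    r = i - v1 + v2
    g = i - v1 - v2
    b = i + 2.0 * v1
    return np.stack([r, g, b], axis=-1)

def his_wt_fusion(optical_rgb, sar, fuse):
    """Decompose the optical image, fuse its intensity component with
    the SAR image via the supplied `fuse` function (e.g. a wavelet
    fusion), and restore the RGB space."""
    i, v1, v2 = his_forward(optical_rgb)
    return his_inverse(fuse(i, sar), v1, v2)
```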
Compared with the pseudo-RGB composite imagery from the GF-1 image (Figure 6(a)), the HIS fusion result (Figure 6(b)) contained the most texture details, especially for mountainous areas. The bridge could be clearly distinguished from the complex ocean color in the background. However, spectral information in the HIS fusion result was severely lost. As a compromise between spectral and polarimetric information, we proposed a new fusion method based on the WT. The result of our proposed fusion method is shown in Figure 6(c). The wavelet basis used for decomposition and reconstruction was "db13." With this fusion, optical and polarimetric features can be presented in the same dimension. The fusion imagery proposed in this study contained precise polarimetric scattering information, which was highly sensitive to strong scattering targets such as bridges, ships, and nearshore facilities. In addition, nearshore marine dynamic factors, such as waves and currents, can easily be traced in the fusion imagery. To compare the performance of the traditional HIS fusion method and the proposed WT-based method, we examined five indicators: the mean, standard deviation (Std), entropy, gradient, and correlation coefficient (Cor) [26][27][28][29].
It can be seen in Table 3 that our proposed method performed satisfactorily for optical and SAR imagery fusion. Compared with the traditional HIS fusion results, although the Std of our proposed fusion results was much larger, the other features were superior to those of the GF-1 and HIS fusion results. According to these statistics, we recommend adopting our proposed method for optical and SAR imagery fusion.
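The five indicators in Table 3 can be computed as sketched below; the entropy, gradient, and correlation formulas are common definitions and may differ in detail from those of [26-29]:

```python
import numpy as np

def fusion_metrics(fused, reference, bins=256):
    """Quality indicators for a fused image: mean, standard deviation,
    Shannon entropy of the grey-level histogram, mean gradient
    magnitude, and correlation coefficient with a reference image."""
    fused = np.asarray(fused, dtype=float)
    hist, _ = np.histogram(fused, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    gy, gx = np.gradient(fused)       # gradients along rows, columns
    return {
        "mean": fused.mean(),
        "std": fused.std(),
        "entropy": -np.sum(p * np.log2(p)),
        "gradient": np.mean(np.hypot(gx, gy)),
        "correlation": np.corrcoef(
            fused.ravel(),
            np.asarray(reference, dtype=float).ravel())[0, 1],
    }
```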

Classification Results Using the ML Method

An ML classification was then applied to the fusion results. The procedure was as follows:
(1) In accordance with an expert interpretation diagram (Figure 1), five types of coastal land cover were selected as classification marks. Classes "1"-"5" were assigned to pixels corresponding to sea, intertidal zone, aquaculture zone, buildings, and plant cover. For each type of coastal coverage, about 100,000 pixels were selected for training, and the reference data were selected for validation.
(2) All the training samples were used as inputs for the MLC method.
(3) After training, the validation samples were applied to generate the per-type identification accuracy and the kappa coefficient. The five test areas, corresponding to the five regions defined by the reference map, were manually identified in the classification outputs. The resulting coastal classification map is shown in Figure 7.
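The accuracy assessment in step (3) can be sketched as follows, deriving the overall accuracy and kappa coefficient from a confusion matrix of validation samples:

```python
import numpy as np

def accuracy_assessment(truth, pred, n_classes):
    """Build a confusion matrix from validation samples and derive
    the overall accuracy and Cohen's kappa coefficient."""
    truth = np.asarray(truth)
    pred = np.asarray(pred)
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth, pred), 1)          # accumulate sample counts
    n = cm.sum()
    po = np.trace(cm) / n                    # observed agreement (OA)
    # chance agreement from row/column marginals
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (po - pe) / (1.0 - pe)
    return cm, po, kappa
```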
The classification outputs (Figure 7) showed that multisource remote sensing imagery fusion based on our proposed method performed satisfactorily for coastal coverage classification. The color bars in Figure 7 range from 1 to 5, representing the coverage types of sea, intertidal zone, aquaculture zone, buildings, and plant cover. As shown in Table 4, the kappa coefficient and overall classification accuracy were 0.8236 and 85.9774%, respectively. Aside from the aquaculture zone, the producer accuracy of the other four selected coverage types was greater than 90%. The low accuracy of the aquaculture zone (50.11%) was attributed to the similar spectral and polarimetric signatures of the sea and the aquaculture zone.
The criteria for the expert interpretation diagram are shown in Table 5.

Conclusions
This paper investigated the utility of multisource remote sensing imagery fusion based on the WT for coastal coverage classification. Five scenes of GF-1 optical imagery and four scenes of SAR (C-band Sentinel-1 and L-band ALOS-2) imagery were collected and used to identify the optimal combination of SAR band and polarimetric parameters. A fusion method based on the WT was proposed and applied for imagery fusion. Finally, the classification output was provided, along with a classification accuracy assessment and the kappa coefficient. The conclusions are as follows:
(1) In terms of the response of C- and L-band SAR to coastal coverage, the C-band Sentinel-1 is superior to the L-band ALOS-2 SAR. Moreover, compared with the other three parameters, the R_cross values ranged from 0.6 to 0.7 and hardly contributed to detecting the discrepancy. Therefore, it is verified that C band is superior to L band, and the parameter subset of σ⁰_VV, σ⁰_VH, and D_cross can be effectively used for coastal classification.
(2) In terms of fusion performance, although the Std of our proposed fusion results was much larger, the other features (mean, entropy, and gradient) were superior to those of the GF-1 and HIS fusion results. In addition, the Cor statistics showed that the results of our proposed method were much better than the HIS fusion results.
(3) In terms of the classification assessment of our proposed fusion method, the kappa coefficient and overall accuracy were 0.8236 and 85.9774%, respectively, indicating satisfactory performance for coastal coverage mapping.

2.2.3. GF-1. Five GF-1 optical imagery scenes were also collected for analysis. The Chinese GF-1 is the first satellite of the Major National Science and Technology Project of China known as the China high-resolution earth observation system, launched in April 2013. The GF-1 panchromatic multispectral (PMS) sensor and wide-field-view (WFV) cameras acquire data with high spatial resolution, wide coverage, and high revisit frequency, which are highly valuable data sources for coastal zone dynamic monitoring and classification.

Figure 1: Data coverage of remote sensing imagery and survey shapefile results. Red rectangles represent the coverage area of Landsat optical imagery, yellow rectangles the coverage area of GF-1 optical imagery, blue rectangles the coverage area of ALOS-2 SAR imagery, green rectangles the coverage area of Sentinel-1 SAR imagery, and the grey area the coverage of the survey shapefile results.

Table 1: Specifications of remote sensing datasets.

Table 3: Statistics for imagery features.

Table 4: Kappa coefficient and classification accuracy results.

Table 5: Classification criteria and interpretation signs for coastal visual interpretation.