Wetland Change Detection Using Cross-Fused-Based and Normalized Difference Index Analysis on Multitemporal Landsat 8 OLI

Wetlands are among the most important ecosystems on Earth and play a critical role in regulating regional climate, preventing floods, and reducing flood severity. However, it is difficult to detect wetland changes in multitemporal Landsat 8 OLI satellite images due to the mixed composition of vegetation, soil, and water. The main objective of this study is to quantify changes in wetland cover with an image-to-image comparison change detection method based on the fusion of multitemporal images. Spectral distortion, generated by the spectral and spatial differences between multitemporal images during cross-fusion, is regarded as candidate change information. Meanwhile, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) are extracted from the cross-fused image as a normalized index image to enhance the information about vegetation and water. Then, the modified iteratively reweighted multivariate alteration detection (IR-MAD) is applied to the generally fused images and the normalized difference index images, providing a good evaluation of the spectral distortion. The experimental results show that the proposed method performs better at reducing detection errors in complex areas containing different ground types, especially in cultivated areas and forests. Moreover, the proposed method was quantitatively assessed and achieved an overall accuracy of 96.67% and 93.06% for the interannual and seasonal datasets, respectively. Our method can serve as a tool to monitor changes in wetlands and provide effective technical support for wetland conservation.


Introduction
Wetlands are a unique ecosystem formed by the interaction between water and land, and they cover 6% of the Earth's surface [1]. Due to seasonal changes, the characteristics of wetlands vary among water, soil, and vegetation. This makes the wetland landscape more complex, and it becomes more difficult to extract information about changes in these regions. In addition, the combination of reflectance spectra of the underlying soil, the hydrologic regime, and atmospheric vapor makes optical classification more difficult, and these factors could introduce a reduction in spectral reflectance. Therefore, it is often difficult to achieve the expected results using a single method to extract information about wetland change [2].
Postclassification comparison (PCC), in which two multitemporal images are independently classified and then compared [3], is one of the methods used for wetland change detection. It is applied primarily to detect the trajectories of corresponding wetland cover types. More specifically, it encompasses many classification methods, such as the regression tree algorithm or maximum likelihood classification [4,5]. However, some of these methods require high-accuracy classification and ground truth information [3-6].
The image-to-image (or direct) comparison change detection method, which is another method for wetland change detection, is used to obtain a difference image of spectral changes through the analysis and calculation of the spectral characteristics of multitemporal images; a binary image is then generated in which the changed areas are distinguished from unchanged areas [7]. The advantages of this method are, first, that it provides a faster comparison of images and, second, that it demands no ground truth information; however, it cannot display the change trajectories of wetland cover types [8]. Change detection methods such as change vector analysis (CVA) [9], principal component analysis (PCA) [10], Erreur Relative Globale Adimensionnelle de Synthese (ERGAS) [11], and multivariate alteration detection (MAD) [12] operate directly on multitemporal images.
However, the results of change detection methods based on the difference image largely depend on the spectral characteristics and may include some false positives. For this reason, a change detection method based on cross-fusion and spectral distortion was proposed to improve the accuracy of change detection in flood zones [13]. Successful change detection results have also been achieved in coal-mining subsidence areas; nevertheless, some false detections remain in wetland areas [14].
To mitigate the false positive results, we employ an image-to-image method based on NDVI and NDWI extraction from a cross-fusion image to detect wetland change information. In wetlands, vegetation, soil, and water coexist, and the cross-fusion method is beneficial for improving spatial resolution and enhancing information about wetland change. In addition, the NDVI and NDWI are extracted from the cross-fused image in order to enhance the information about vegetation and water. We then derive the change information using the modified IR-MAD algorithm, which is a well-established change detection method for multitemporal multispectral images [12]. Finally, the changed area of the wetlands is obtained using an automated threshold method.

Study Area and Dataset
In this study, we collected three Landsat 8 OLI multitemporal images covering the Shengjin Lake Nature Reserve area. These images represent the seasonal (25 July 2016 and 16 December 2016) and interannual (6 November 2013 and 16 December 2016) changes in land cover types, as shown in Figures 1(a)-1(c). In the preprocessing, the relevant bands were selected, including the 30 m resolution multispectral (MS) bands 2-7 and the 15 m resolution panchromatic (PAN) band 8. Then, the Shengjin Lake protected area vector data were used to clip the images, with the specific parameters shown in Table 1.
The Shengjin Lake National Nature Reserve (30°15′–30°30′N, 116°55′–117°15′E) is located in Chizhou City, Anhui Province. The protected region, with a total area of 333.40 km², consists of Shengjin Lake, cultivated areas, urban areas, forest, and bare land. The Shengjin Lake wetland ecological environment is well preserved, with rich natural and cultural landscapes. It is one of the most intact wetland ecosystems among the inland freshwater lakes in the lower reaches of the Yangtze River. It connects with the Yangtze River, and the water level of the lake is regulated by the Huangpen sluice. The location of the study area is shown in Figure 1(d). The water level of Shengjin Lake varies between 3.4 and 7.4 m due to the sluice. These water level changes make the lake area largest in the summer wet season and smaller in the winter dry season. During the dry season, the two largest Carex meadows ("upper lake meadow" and "lower lake meadow") provide suitable living environments and food sources for Greater White-fronted Geese and Bean Geese, making the reserve a critical winter habitat for rare birds [15,16].

Methodology
In this section, we detail the process of extracting wetland change information from bitemporal images using a modified IR-MAD. We consider two datasets acquired in the same geographical area at different times t_1 and t_2, each consisting of a high-resolution PAN image and a low-resolution MS image. The process flow is shown in Figure 2, and the details of each step are described below.
3.1. Cross-Fused Image Generation. Generally, an image fusion method fuses a high-resolution PAN image and a low-resolution MS image into a high-resolution MS image. In this paper, the PAN and MS images are first generally fused using Gram-Schmidt adaptive (GSA) fusion to produce the high-resolution multispectral images F^H_t1 and F^H_t2. The GSA algorithm is applied as a representative component substitution- (CS-) based fusion algorithm, and it places no limit on the number of bands of the fused image [17]. The major drawback of CS-based fusion methods is spectral distortion, also called color (or radiometric) distortion. This spectral distortion is caused by the mismatch between the spectral responses of the MS and PAN bands according to their different bandwidths [18]. In this study, the spectral distortion is regarded as a candidate detection feature for wetland cover change [17]. To this end, we use the high-resolution NIR band instead of the high-resolution PAN band. The NIR band has a narrower bandwidth, so the mismatch of spectral responses outside the NIR spectral range increases and the spectral distortion becomes more pronounced [13]. At the same time, the NIR band is a very useful source of information for detecting water or vegetation areas: water appears dark because of its strong absorption, whereas vegetation has the opposite characteristic and appears bright. Thus, the NIR band is useful for extracting change information. Then, the cross-fused image CF^H_1 is generated by fusing the F^H_t1 MS image with the NIR band of F^H_t2 using the GSA image fusion algorithm. CF^H_2 is obtained in the same way as CF^H_1:

CF^H_1 = GSA(F^H_t1, NIR(F^H_t2)), (1)
CF^H_2 = GSA(F^H_t2, NIR(F^H_t1)), (2)

where GSA(·, ·) denotes GSA fusion of an MS image with a high-resolution sharpening band [19,20].
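The cross-fusion step can be illustrated with a simplified component-substitution sketch. This is not the authors' GSA implementation; it is a minimal numpy approximation in which the adaptive intensity is a least-squares combination of the MS bands (the GSA idea) and the high-resolution detail of the sharpening band (here, the NIR band of the other date) is injected per band. The function name `gs_fuse` and the array layout are illustrative assumptions.

```python
import numpy as np

def gs_fuse(ms, pan):
    """Simplified Gram-Schmidt-style fusion sketch.

    ms  : float array of shape (bands, H, W), the (upsampled) MS image.
    pan : float array of shape (H, W), the high-resolution band
          (a PAN band for general fusion, or the other date's NIR
          band for cross-fusion).
    """
    bands, h, w = ms.shape
    # Adaptive intensity: least-squares weights so that a linear
    # combination of the MS bands best matches the sharpening band.
    X = ms.reshape(bands, -1).T                 # pixels x bands
    y = pan.ravel()
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    intensity = (X @ weights).reshape(h, w)     # synthetic low-res intensity
    # Inject the high-resolution detail, scaled per band by its
    # covariance with the intensity component.
    detail = pan - intensity
    fused = np.empty_like(ms, dtype=float)
    for k in range(bands):
        g = np.cov(ms[k].ravel(), intensity.ravel())[0, 1] / (intensity.var() + 1e-12)
        fused[k] = ms[k] + g * detail
    return fused
```

For cross-fusion in the sense of equations (1) and (2), one would call `gs_fuse(F1_ms, nir_of_F2)` and `gs_fuse(F2_ms, nir_of_F1)`.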
The cross-fused image has high spatial and temporal resolution, and the vegetation and water details of the bitemporal images are preserved. By using the NDVI and the NDWI, the change information for water and vegetation is further enhanced, and the distinction among water bodies, wet soil, and vegetation is improved. The spectral difference is enhanced twofold, and the detection sensitivity for vegetation and water increases:

NDVI = (NIR_F − R_F) / (NIR_F + R_F), (3)
NDWI = (green_F − NIR_F) / (green_F + NIR_F), (4)

where NIR_F, R_F, and green_F are the NIR, red, and green bands of the cross-fused image, respectively. According to (3) and (4), the ND^H_CF1 and ND^H_CF2 images are generated from the cross-fused images CF^H_1 and CF^H_2 and include the NDVI and NDWI bands.
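The NDVI/NDWI computation above is a simple per-pixel ratio. A minimal numpy sketch, assuming the fused bands are supplied as float arrays in a dict (the key names are illustrative, not from the paper):

```python
import numpy as np

def normalized_indices(fused):
    """Compute the NDVI and NDWI bands of a cross-fused image.

    `fused` is a dict with float arrays for the "nir", "red", and
    "green" bands; returns a 2-band normalized index image."""
    nir, red, green = fused["nir"], fused["red"], fused["green"]
    eps = 1e-12                            # guard against division by zero
    ndvi = (nir - red) / (nir + red + eps)
    ndwi = (green - nir) / (green + nir + eps)
    return np.stack([ndvi, ndwi])
```

NDVI is high over vegetation (strong NIR reflectance), while NDWI is positive over water (weak NIR, stronger green reflectance), which is why stacking the two enhances the water/vegetation distinction.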

Wetland Change Area Extraction
The IR-MAD algorithm, which is based on canonical correlation analysis, considers two K-band multispectral images R and T of the same area acquired at two different times; it has important applications in multitemporal multispectral image change detection. The random variables U and V are generated by linear combinations of the intensities of the spectral bands using coefficient vectors a and b [21]:

U = a^T R, V = b^T T, (5)
where the superscript T denotes the transpose. The task is to find suitable vectors a and b by maximizing the variance of U − V, which leads to solving two generalized eigenvalue problems for a and b via canonical correlation analysis.
The MAD variate M_k, generated by taking the paired difference between U and V, represents the change information [22]:

M_k = U_k − V_k. (6)
In this study, corresponding to the two generally fused images (F^H_t1 and F^H_t2) and the two multitemporal normalized difference index images (ND^H_CF1 and ND^H_CF2), the MAD variates M_p are generated using the optimal coefficients a and b:

M_1k = a_k^T F^H_t1 − b_k^T F^H_t2, (7)
M_2k = a_k^T ND^H_CF1 − b_k^T ND^H_CF2, (8)

where M_1 is the MAD variate of the generally fused bitemporal images, and M_2 is that of the two normalized index images (ND^H_CF1 and ND^H_CF2) extracted from the cross-fused images.
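The MAD transform itself can be sketched in a few lines of numpy. This is a simplified, single-pass version (without the iterative reweighting of IR-MAD), assuming each image is supplied as a K x N matrix of band values; the generalized eigenproblem is solved by whitening with the Cholesky factor, following the standard canonical correlation derivation.

```python
import numpy as np

def mad_variates(R, T):
    """MAD transform of two co-registered K-band images given as K x N
    matrices (one row per band, one column per pixel).  Returns the K
    MAD variates M = U - V, ordered from most to least correlated pair."""
    n = R.shape[1]
    Rc = R - R.mean(axis=1, keepdims=True)
    Tc = T - T.mean(axis=1, keepdims=True)
    S_rr = Rc @ Rc.T / (n - 1)
    S_tt = Tc @ Tc.T / (n - 1)
    S_rt = Rc @ Tc.T / (n - 1)
    # Generalized eigenproblem  S_rt S_tt^-1 S_tr a = rho^2 S_rr a,
    # solved by whitening with the Cholesky factor of S_rr.
    L = np.linalg.cholesky(S_rr)
    Li = np.linalg.inv(L)
    C = Li @ S_rt @ np.linalg.solve(S_tt, S_rt.T) @ Li.T
    rho2, W = np.linalg.eigh(C)            # ascending canonical correlations
    A = Li.T @ W                           # columns satisfy a^T S_rr a = 1
    B = np.linalg.solve(S_tt, S_rt.T @ A)  # b is proportional to S_tt^-1 S_tr a
    B /= np.sqrt(np.sum(B * (S_tt @ B), axis=0))  # unit variance for V
    U, V = A.T @ Rc, B.T @ Tc
    s = np.sign(np.sum(U * V, axis=1))     # enforce positive correlation
    M = U - s[:, None] * V
    return M[::-1]                         # highest correlation first
```

If the two inputs are identical, every canonical pair is perfectly correlated and all MAD variates vanish, which is a convenient sanity check.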
The probability of change for pixel j, calculated as the sum of the squares of the standardized MAD variates, is defined as

Z_j = Σ_{k=1}^{K} M_kj² / σ²_{M_k}, (9)
where the variable Z_j represents a weight for the probability of change at each pixel (a larger chi-square value indicates a more likely change), M_kj is the MAD variate of the kth band for pixel j, and σ²_{M_k} is the variance of the no-change distribution. These values can be regarded as the weights of the observations. The iteration continues either for a fixed number of iterations or until there is no significant change in the canonical correlations; the latter criterion is used in this study [22]. The optimal vectors a and b are then recalculated with the weight factors. In this study, based on the combination of M_1 and M_2, the final change detection index Z_j is calculated as

Z_j = Σ_{p=1}^{2} Σ_{k=1}^{K_p} (M^p_kj)² / σ²_{M^p_k}, (10)
Here, M^p_kj is the kth-band MAD variate of the pth pair of fused images for pixel j. This method effectively reduces falsely detected changes by accounting for the Z_j values of both image pairs in (10).
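The combined index of equation (10) is a straightforward standardized sum of squares. In the sketch below, the no-change variance σ²_{M^p_k} is approximated by the overall variance of each variate; the full IR-MAD scheme instead re-estimates it iteratively from the no-change pixels.

```python
import numpy as np

def change_index(mad_pairs):
    """Final change index Z_j = sum over pairs p and bands k of
    (M^p_kj)^2 / sigma^2_{M^p_k}.

    `mad_pairs` is a list of K x N MAD-variate matrices, one per pair
    of fused images (general fusion and normalized-index images)."""
    Z = 0.0
    for M in mad_pairs:
        var = M.var(axis=1, keepdims=True) + 1e-12  # approximate no-change variance
        Z = Z + np.sum(M**2 / var, axis=0)
    return Z          # ~ chi-square distributed under no change
```

Under the no-change hypothesis, Z_j follows a chi-square distribution with degrees of freedom equal to the total number of bands summed over, so its mean is about that band count.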
This modified IR-MAD algorithm can not only alleviate the problem of spectral distortion, which causes massive false change alarms when bitemporal images are used to generate the cross-fused images, but also reduce the interaction between bands of the multispectral images [14,21]. Therefore, this algorithm can yield better change detection results for multitemporal images. Finally, the Otsu thresholding algorithm, a histogram-based image segmentation method that is effective and easy to apply [23], was applied to the modified IR-MAD image to obtain a binary map of the changed and unchanged areas.
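Otsu's histogram-based thresholding, used above to binarize the change index, can be sketched with a standard numpy implementation (not tied to any particular library):

```python
import numpy as np

def otsu_threshold(z, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the histogram of `z`."""
    hist, edges = np.histogram(z, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * centers)           # cumulative mean
    mu_t = mu[-1]                         # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)    # empty classes give no valid split
    return centers[np.argmax(sigma_b2)]
```

Pixels with a change index above the returned threshold are labeled changed, the rest unchanged.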

Experimental Result and Discussion
To evaluate the effectiveness of this method, we analyze and discuss the seasonal and interannual variations in the study area. We compare our result with the cross-fused and PCC change detection methods. In the cross-fused change detection method, the original IR-MAD is applied between the CF^H_1 and CF^H_2 images to extract the changed area. In the PCC process, three generally fused images are classified into 7 classes (water, bare land, meadow, cultivated area, city, forest, and mudflats) through maximum likelihood classification.
To quantitatively compare the performance of these methods, ground truths of the seasonal and interannual variation images were generated from the GSA-fused images (F^H_t1 and F^H_t2) by manually digitizing the changed areas of the Shengjin Lake Nature Reserve, as shown in red in Figures 3(a) and 3(e). The results are overlain on the multispectral images of 6 November 2013 and 25 July 2016, respectively. In the quantitative analysis, the confusion matrix method was applied to evaluate the statistical accuracy of the tested methodologies, and indices such as overall accuracy (OA), kappa coefficient (KC), commission error (CE), omission error (OE), and false alarm rate (FAR) were calculated [24]. The detailed quantitative change detection accuracy assessment results for each method are shown in Figure 3 and Table 2. The red color indicates the change pixels extracted from the change detection results of the different methods, and the results are overlain on the multispectral image of 16 December 2016.
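The accuracy measures listed above can be computed from a binary confusion matrix. The sketch below assumes 1 = changed and 0 = unchanged, with FAR taken as the fraction of truly unchanged pixels flagged as changed (one common definition; the paper does not spell out its formulas):

```python
import numpy as np

def change_metrics(pred, truth):
    """Confusion-matrix accuracy measures for binary change maps."""
    pred, truth = np.asarray(pred).ravel(), np.asarray(truth).ravel()
    tp = np.sum((pred == 1) & (truth == 1))   # correctly detected changes
    tn = np.sum((pred == 0) & (truth == 0))   # correctly detected no-change
    fp = np.sum((pred == 1) & (truth == 0))   # false alarms
    fn = np.sum((pred == 0) & (truth == 1))   # missed changes
    n = tp + tn + fp + fn
    oa = (tp + tn) / n                                    # overall accuracy
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe)                          # kappa coefficient
    ce = fp / (tp + fp) if tp + fp else 0.0               # commission error
    oe = fn / (tp + fn) if tp + fn else 0.0               # omission error
    far = fp / (fp + tn) if fp + tn else 0.0              # false alarm rate
    return dict(OA=oa, KC=kappa, CE=ce, OE=oe, FAR=far)
```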
Through observation and analysis of the results in Figure 3, in areas where different ground types coexist (water, bare land, meadow, cultivated area, city, forest, and mudflats), the proposed method can detect the wetland change information more accurately than PCC based on the generally fused image and the cross-fused method, and it effectively reduces the change detection errors for interannual or seasonal wetland cover change. In the results of the PCC and cross-fused methods, some parts of the unchanged area are identified as changed areas. As shown in Table 2, the CE value of PCC reaches 70%. In addition, the results of our study are more accurate than those of PCC; the OA value reaches 90% and the FAR value reaches 0.02.
Figure 3 covers the whole study area, allowing an initial visual assessment of the results of wetland change extent extraction. Figures 4 and 5 show subimages extracted from the "upper lake meadow" and "lower lake meadow" regions of Figure 3. As shown in Figures 4 and 5, the PCC and cross-fused methods are not sensitive to artificially cultivated areas and forest areas affected by seasonal changes, and they detect too many false positives caused by similar spectral characteristics.
The proposed method efficiently detects the changed areas in complex regions with similar spectral characteristics, improves the accuracy of wetland change detection, and minimizes the impacts of seasonality and human activity. The proposed method also performs well in meadow areas. However, it produces false positives in some mudflat edge regions, such as the yellow areas in Figures 4(b) and 5(f). On the one hand, this is because spatial inconsistency occurs due to the different look angles of the bitemporal imagery. On the other hand, the proposed method relies only on the NDWI, which is sensitive to water. As a result, some omission errors can occur in areas where the water level falls and the mudflats are exposed, because the mudflats still contain a certain amount of water.

Conclusions
In this paper, we proposed an image-to-image change detection method using multitemporal images to quantify wetland cover changes; the method is based on a combination of a cross-fusion image and a normalized difference index image. For multitemporal Landsat 8 OLI images, the GSA fusion method is used to generate cross-fusion images, from which the NDVI and NDWI are extracted. The optimal change information is then calculated through the modified IR-MAD, which uses pairs of normalized difference index images and generally fused images. The experimental results showed that the proposed method increases the accuracy of change detection and minimizes detection errors in complex areas with different ground types. Especially in cultivated areas affected by manmade alterations, change information can be identified more accurately, and a lower FAR can be achieved. These results can help wetland managers implement effective management plans. Furthermore, our method provides guidance for monitoring wetland health and supporting wetland conservation.

Figure 1: Multitemporal images of Shengjin Lake used in the experiment.

Figure 2: Workflow of the proposed methodology for wetland vegetation extraction using multitemporal satellite images. The superscripts H and L represent high resolution and low resolution, respectively.

Figure 3: Results of wetland change area by using the tested methods: (a)-(d) interannual change detection result; (e)-(h) seasonal change detection result.