New Approach for Snow Cover Detection through Spectral Pattern Recognition with MODIS Data

1Division of Earth Environmental System Science, Major of Spatial Information Engineering, Pukyong National University, Daeyeon-3 Nam-Gu, Busan 608-737, Republic of Korea 2Korea Aerospace Research Institute (KARI), 169-84 Gwahak-ro, Yuseong-gu, Daejeon 305-806, Republic of Korea 3Division of Earth Environmental System Science, Major of Environmental Atmosphere Sciences, Pukyong National University, Daeyeon-3 Nam-Gu, Busan 608-737, Republic of Korea


Introduction
Approximately 40-50% of the land surface in the Northern Hemisphere is covered with snow during the winter season. Snow cover makes up the largest single component of the cryosphere and plays an important role in Earth's global energy budget due to its high albedo and emissivity [1][2][3][4]. Snow cover on the land surface can affect ecological, geochemical, biogeochemical, and socioeconomic factors [5,6].
Snow cover is a key component driving the dynamics of regional and global climate [7]. Snow cover anomaly is an important measure of global atmospheric circulation [8]. At the regional scale, the extent of snow cover in Europe and Asia during winter influences precipitation in India in the following summer season; thus, variation in snow cover extent can affect regional meteorological patterns [9]. Snowmelt is an important water resource in many areas of the world [10,11]. It plays a key role in the management of water usage and groundwater storage and impacts the surrounding agricultural land and activity through its effect on river flow [12,13]. Because snow cover is important from both scientific and resource-management standpoints, accurate monitoring of snow on the ground is urgently needed, and knowing its exact extent is essential to understanding climate change and managing water resources.
Satellite-based remote sensing data are widely used for monitoring snow cover in many regions because they allow large areas to be observed regularly [14]. For this reason, studies of snow cover mapping using satellite data have been undertaken since 1966, when the first operational snow maps were created by the National Oceanic and Atmospheric Administration (NOAA) [15][16][17][18]. When mapping snow cover from satellite data, discrimination between snow and cloud is the main source of error, because snow and cloud have similar spectral reflectance and emissivity [19][20][21]. Generally, misclassified cloud and snow cover can lead to errors in the retrieval of land surface products, such as land surface temperature (LST), land surface albedo (LSA), and vegetation index (VI) estimates from satellite-based data [22,23].
To solve this problem, previous studies used static multispectral threshold methods to classify cloud and snow cover, which consider differences in the optical properties of snow and cloud [24][25][26][27][28][29]. The Land Surface Analysis Satellite Applications Facility (LSA-SAF) distinguishes snow-covered surfaces, snow-free surfaces, and cloud via 22 threshold tests using visible and infrared (IR) radiance observations from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI), collecting land cover and LST data every 15 min [28]. Thompson et al. [30] also used a static threshold method employing the Normalized Difference Snow/ice Index (NDSI), Normalized Difference Vegetation Index (NDVI), and two spectral bands from the Moderate Resolution Imaging Spectroradiometer (MODIS) to discriminate between cloud and snow cover, to improve the MODIS cloud mask product. However, these threshold methods do not consider that satellite-measured reflectance of snow cover on the ground may vary due to the sun-target-sensor geometry, as well as characteristics of the snow such as moisture content, grain size, grain shape, and impurities [16,17,[31][32][33]. Snow cover is not a Lambertian reflector; thus, remote sensing sensors measure the reflected radiation over a range of viewing and solar angles [34]; as such, static threshold methods of snow cover detection may encounter errors when discriminating between snow cover and clouds. The reflectance of snow cover changes with solar position and contamination; although the shape of the reflectance curve of snow cover does not change, the magnitudes of different portions of the curve vary [35]. This paper presents a simple new algorithm for mapping snow cover using satellite data.
In this study, to overcome the challenges of snow cover detection using static threshold methods, we traced the changes in the reflectance pattern according to wavelength using the dynamic wavelength warping (DWW) method, which is based on dynamic time warping (DTW). DTW is a pattern recognition method used in various fields such as speech recognition and data mining, and it has also been applied in remote sensing: Maus et al. [36] applied a time-weighted version of DTW for land-use and land-cover mapping using remote sensing image time series, and Petitjean and Weber [37] used DTW for satellite image time series analysis. This study presents results of the DWW method using a preconstructed reflectance spectral library of snow cover as reference data and observed reflectance profiles collected at processing time. We applied our algorithm to MODIS level 1B data (MOD021km) to detect snow cover. Then, to evaluate the algorithm, we compared our results with MODIS snow cover product (MOD10 L2) data retrieved using the static threshold method. The data used in this work are described in Section 2. In Section 3, the methodology to detect snow cover and distinguish it from snow-free land and clouds is presented, and the method of snow cover detection and the conventional DTW algorithm are explained. The results of this study and their validation against MOD10 L2 are described in Section 4. Finally, in Section 5, we summarize the findings of this study.

Data
In this study, MODIS data with spatial resolutions ranging from 500 m to 1 km were used, together with digital elevation model (DEM) data at 1 km spatial resolution. These data were used for estimating snow cover, while MODIS snow cover data were used to evaluate the snow cover estimates of our proposed algorithm. Data from 30°N to 80°N and from 60°E to 180°E (Figure 1) collected during the Northern Hemisphere winter seasons (October to February) of 2011 to 2013 were utilized. This area is suitable for a general study because it has various land cover types and a wide range of ground elevations.

MODIS Satellite Data.
MODIS provides many products for various Earth science applications, such as air quality, volcanic eruptions, fires, weather forecasting, and agriculture [34]. The data used in this study were the MOD021km, MOD03, MOD10 L2, and MOD35 L2 products provided by the National Aeronautics and Space Administration (NASA). The MOD021km data include calibrated top of atmosphere (TOA) radiance and reflectance for 36 bands, which are determined by resampling channel data with different spatial resolutions down to 1 km resolution. In this study, we used six shortwave bands (1, 2, 3, 4, 6, and 26) and two IR bands (20 and 31) from MODIS. MOD03 data are a MODIS geolocation product and are provided as a companion dataset to the MOD021km data to enable further processing. This product contains ground location (geodetic latitude and longitude), sensing geometry (sensor zenith and azimuth, solar zenith and azimuth), and land/sea mask data for each 1 km MODIS observation instantaneous field of view (IFOV) [38,39] and is used to remove ocean pixels and normalize the reflectance based on the solar zenith angle (SZA). MOD35 L2 is a MODIS cloud mask product that consists of a global cloud mask, quality assurance data, and other ancillary parameters. It indicates a level of confidence for each pixel (cloudy, uncertain clear, probably clear, and clear sky) that a clear sky scene is being observed, at 1 km and at 250 m spatial resolutions. The cloud mask in the 250 m version is based on the two visible channels (bands 1 and 2) only, whereas the 1 km version is based on various methods, including 14 tests to discriminate between clouds and snow [40]. Therefore, we used the 1 km cloud mask in this study. The MOD10 L2 is a Level 2 swath snow cover product with 500 m resolution.
It is generated from MODIS calibrated radiance data (MOD02), geolocation data (MOD03), and cloud mask products (MOD35 L2) at 5 min intervals and contains snow cover (binary), fractional snow cover, quality assessment (QA) data, latitudes and longitudes in compressed Hierarchical Data Format-Earth Observing System (HDF-EOS) format, along with corresponding metadata. In the analysis and construction of the spectral library for snow cover detection, only good quality data from MODIS snow cover and cloud mask products were used, as determined by the QA information provided.

DEM and In Situ Data.
To understand the effect of terrain, we used hydrologically corrected DEM data from the HYDRO1k dataset, which was developed at the United States Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Data Center and has 1 km resolution. HYDRO1k datasets are currently being developed individually for each continent. The absolute vertical accuracy of the HYDRO1k data, a hydrologically modified version of the GTOPO30 elevation model, is not officially specified, but GTOPO30 has a vertical accuracy of 39 m in terms of linear error at the 90% confidence level, corresponding to a root mean square error of 18 m over the area of interest [41]. The Lambert Azimuthal Equal Area projection was selected for these data. Because this projection exhibits regional distortion in the study area, the data were reprojected to the geographic World Geodetic System 1984 (WGS84) coordinate system to allow for more intuitive analysis.
This study used hourly snow depth data measured at Korea Meteorological Administration (KMA) meteorological stations over Korea for the relative evaluation of our algorithm. The data were extracted from October 2011 to January 2013 for 25 stations. The locations of the 25 stations are shown in Figure 1.

Methods and Analyses
3.1. Synthesis Methodology. Several algorithms developed in the past use various spectral signatures to detect snow cover and distinguish it from snow-free land and clouds. Snow-free land is distinguished from snow cover simply because, unlike snow cover, it has low reflectance in the visible region of the electromagnetic spectrum. However, clouds and snow both exhibit high reflectance in the visible spectrum and have similar spectral properties because both are composed of water or ice. Therefore, discrimination between snow and clouds is not simple; in particular, it is extremely difficult to differentiate them using a single channel [42]. In the shortwave band (1.6 μm), snow cover has lower reflectance than clouds [21]. Thus, reflectance at 1.6 μm is widely used for discrimination between snow cover, snow-free land, and clouds, including in NDSI calculations with other threshold methods [7,12,18,28,42,43]. As mentioned, threshold methods have been widely used in previous studies for detection of snow cover but can cause errors such as misdetection or false detection, because these methods do not consider the variation in reflectance from snow cover due to sun-target-sensor geometry and the characteristics of snow such as contamination, age, and particle size.
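Although the text does not write out the NDSI formula, it is commonly computed from the green and shortwave-IR reflectances (MODIS bands 4 and 6 in the usual formulation); a minimal sketch under that assumption:

```python
def ndsi(green: float, swir: float) -> float:
    """Normalized Difference Snow Index from green (~0.555 um, MODIS
    band 4) and shortwave-IR (~1.629 um, MODIS band 6) reflectance."""
    return (green - swir) / (green + swir)

# Snow reflects strongly in the visible but weakly at 1.6 um, so NDSI
# is high for snow and low for most clouds and snow-free land.
```

Because snow-free land is dark in the visible while clouds stay bright at 1.6 μm, NDSI separates snow from both in many (but not all) cases, which is why the threshold methods above rely on it.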
In this study, we propose a new algorithm for discriminating between snow cover and other surface types (e.g., snow-free land and clouds) based on DWW. This algorithm was developed for MODIS, using eight channels (0.466, 0.554, 0.647, 0.857, 1.382, 1.629, 3.959, and 11.03 μm). Figure 2 shows a flowchart of the snow cover detection algorithm proposed in this study; there are four steps, combining conventional threshold tests and the DWW technique to discriminate between snow cover, snow-free land, and clouds. The conventional thresholds were determined from scatterplot and histogram analyses using spectral reflectances, NDVI, and NDSI for easily detectable snowy pixels, while the DWW process discriminates between snowy and cloudy pixels. A more detailed description of DWW is provided in Section 3.2. The algorithm is applied only during daytime hours (SZA < 80°) because most tests used in our algorithm for discrimination between snow cover and other pixel types (e.g., snow-free land and clouds) depend on reflectance in the shortwave region of the spectrum.
In the first step, snow cover candidate areas are identified. Snow cover has unique optical characteristics, including high reflectance in the visible region and low reflectance in the shortwave spectrum, in contrast to snow-free land [44]. High reflectance in the visible wavelengths characterizes snow cover, water clouds, and ice clouds. However, in the shortwave region, water clouds exhibit high reflectance, ice clouds have moderate reflectance, and snow cover has very low reflectance [16]. Based on a previous study, we use the reflectance anomaly at 1.629 μm (Ano1.6) to discriminate between snow cover candidate areas and nonsnow areas. Figure 3 shows histograms of Ano1.6 for snow cover, snow-free land, and clouds. We determined a threshold value of Ano1.6 based on this analysis, indicated by the dashed black line in Figure 3. In this step, snow-free land and some cloud pixels are removed; snow cover and the remaining clouds stay as snow candidate pixels. The second step is threshold classification using reflectance in MODIS band 6 (1.629 μm) to discriminate between snow cover and clouds among the remaining snow candidate pixels. The 1.629 μm channel on the MODIS instrument is used for both cloud and snow products, as it provides an essential tool for distinguishing clouds from snow [45]. We used reflectance in the 1.629 μm channel (ρ1.6) to classify obvious snow cover and cloud pixels from among the snow candidates. By analyzing ρ1.6 in clouds and snow cover, we observed relatively low reflectance (0-0.24) for snow cover and relatively high reflectance (0.1-0.8) for clouds, consistent with the findings of a previous study (Figure 4). Therefore, we set the threshold for snow cover classification to 0.1 and the threshold for cloud classification to 0.24 (represented by the black dashed line and black dash-dot line in Figure 4, resp.). For ρ1.6 values between 0.1 and 0.24, it is difficult to distinguish between snow and clouds using ρ1.6 alone, so these pixels remain snow candidates and are classified in the next step.
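The step-2 decision just described (snow below 0.1, cloud above 0.24, everything in between deferred to the DWW step) can be sketched as:

```python
def step2_classify(r16: float) -> str:
    """Step 2 sketch: classify a snow-candidate pixel by its 1.629-um
    reflectance, using the thresholds from the histogram analysis
    (snow < 0.1, cloud > 0.24, otherwise left for the DWW step)."""
    if r16 < 0.1:
        return "snow"
    if r16 > 0.24:
        return "cloud"
    return "candidate"  # ambiguous range, passed on to DWW
```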
The third step uses a new method proposed in this study based on DWW. In this step, snow cover and cloud pixels are sorted from the snow candidate pixels that were not classified in the second step. Because they have similar optical properties, it is difficult to differentiate them using threshold classification methods. Therefore, to efficiently discriminate between clouds and snow, we used DWW, because snow cover and clouds exhibit different changes in reflectance with wavelength. In this step, the spectral reflectance profile observed by the satellite sensor at collection time and one of 49 preconstructed snow spectral reflectance libraries, selected based on the SZA and DEM conditions, are used as input data for DWW. Through DWW, we obtain two results: the cost matrix (D) and the warping path. D contains the smallest cumulative cost required to reach any element in the matrix. The warping path indicates the optimal alignment, which is the shortest path through D from the upper-right to the lower-left, following a specific search pattern. Because the variability of snow cover reflectance in the visible and NIR wavelengths is large, it is difficult to use the cost matrix directly. Therefore, we use the warping path to detect snow cover. If the target pixel is covered by snow, the snow reflectance spectral library and the observed spectral reflectance profile should match at each wavelength. Therefore, we classified a pixel as snow cover when the warping path fell exactly along the diagonal. However, if the difference between the snow spectral reflectance library and the observed reflectance profile is larger than the standard deviation in the shortwave IR region, the target pixel is not classified as snow cover, despite a diagonal warping path, because snow cover exhibits low variability of shortwave reflectance.
Additionally, we use the brightness temperature difference (BTD) between MODIS bands 21 (3.9 μm) and 31 (11 μm), as well as NDSI criteria, to remove thin clouds over snow cover. These criteria have been widely used in previous studies to distinguish between snow cover and clouds [28,46]. The last step detects snow in dense forests. Detection of snow cover becomes difficult in areas where snow cover is mixed with canopies that are not covered with snow, because the reflectance in the visible region is reduced significantly by the canopy and the shadows of trees [13,38].
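The BTD screening idea above can be sketched as follows; the threshold values here are illustrative choices of our own, since the text does not state the ones actually used:

```python
def is_thin_cloud(bt39: float, bt11: float, ndsi_val: float,
                  btd_max: float = 15.0, ndsi_min: float = 0.4) -> bool:
    """Flag thin cloud over snow using the 3.9-11 um brightness
    temperature difference (BTD) together with an NDSI criterion.
    btd_max and ndsi_min are hypothetical values, not the paper's."""
    btd = bt39 - bt11
    # Clouds raise the 3.9-um brightness temperature relative to 11 um
    # (reflected solar component), giving a large positive BTD; snow
    # also keeps NDSI high, so a low NDSI is suspicious as well.
    return btd > btd_max or ndsi_val < ndsi_min
```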
To map snow cover in forested areas, previous studies have investigated the relationship between NDVI and NDSI [47,48]. To supplement the detection of snow cover in densely forested areas, this study also used criteria tests incorporating NDSI and NDVI values. Forested areas were identified using MODIS global land cover data (MCD12Q1) with the International Geosphere Biosphere Programme (IGBP) scheme.

Conventional DTW Algorithm and Its Application to Snow Cover Detection.
In this section, we present the DTW similarity measure. DTW is a dynamic programming technique, a nonlinear warping algorithm that compares several temporally successive data points (i.e., without skipping any data) to determine the similarity of two sequences [49,50]. The DTW algorithm is superior to the Euclidean distance method in measuring the similarity of time series data because it matches similar shapes even if there is a temporal offset between the data [51,52]. Thus, it has been employed in many fields, such as human action recognition, speech recognition, fingerprint recognition, anomaly detection, classification, and clustering [53][54][55][56]. DTW was proposed by Sakoe and Chiba [57] and builds on the Euclidean distance, a metric used to measure the difference between two sequences [58]. Assume we have two time series, X = (x1, x2, ..., xn) and Y = (y1, y2, ..., ym). The matrix of pairwise distances d(xi, yj) is called the "local cost matrix" for the alignment of the sequences X and Y [59]. Then, we calculate the cumulative cost matrix (D), an n × m matrix that stores the cumulative smallest cost required to arrive at each element of the matrix when moving from (1, 1) to (n, m) [60,61]. Each element of D is calculated using (1), which also identifies the search direction:

D(i, j) = d(xi, yj) + min{D(i − 1, j), D(i, j − 1), D(i − 1, j − 1)}. (1)
To find the best match between time series X and Y, we determine the path in the cumulative cost matrix that minimizes the cumulative distance between the series. The relationship between X and Y can be represented by the warping path W = ⟨w1, w2, w3, ..., wK⟩, max(n, m) ≤ K ≤ n + m − 1, where wk = (i, j) indicates the alignment and matching relationship between xi and yj [62].
W is determined by moving from (n, m) toward (1, 1), at each step choosing the adjacent cell (vertical, horizontal, or diagonal) with minimal cumulative cost. An example of a warping path from DTW and the corresponding alignment are presented in Figure 5. If the patterns of temporal change between two sequences are similar, the warping path approaches the diagonal; otherwise, it diverges from the diagonal.
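The cumulative cost matrix and warping-path backtracking described above can be sketched in plain Python (1-D sequences, absolute-difference local cost; indices are 0-based here rather than the 1-based notation of the text):

```python
def dtw(x, y):
    """Plain DTW between two 1-D sequences: build the cumulative cost
    matrix D, then backtrack the warping path to (0, 0)."""
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = abs(x[i] - y[j])  # local cost d(x_i, y_j)
            if i == 0 and j == 0:
                D[i][j] = cost
            else:
                D[i][j] = cost + min(
                    D[i - 1][j] if i > 0 else INF,                 # vertical
                    D[i][j - 1] if j > 0 else INF,                 # horizontal
                    D[i - 1][j - 1] if i > 0 and j > 0 else INF,   # diagonal
                )
    # Backtrack from (n-1, m-1) to (0, 0), always taking the adjacent
    # cell with the smallest cumulative cost.
    i, j = n - 1, m - 1
    path = [(i, j)]
    while (i, j) != (0, 0):
        candidates = []
        if i > 0 and j > 0:
            candidates.append((D[i - 1][j - 1], (i - 1, j - 1)))
        if i > 0:
            candidates.append((D[i - 1][j], (i - 1, j)))
        if j > 0:
            candidates.append((D[i][j - 1], (i, j - 1)))
        _, (i, j) = min(candidates)
        path.append((i, j))
    path.reverse()
    return D, path
```

For two identical sequences the path runs exactly along the diagonal, which is the property the DWW classification rule exploits.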
In this study, we used DTW to track the pattern of change in the reflectance profile across wavelengths to discriminate snow cover from nonsnow pixels. Thus, DWW applies the concept of wavelength in place of time in conventional DTW. For classification of snow cover, we used reflectance profiles observed by satellite and a preconstructed snow spectral reflectance library as input data for DWW. If a pixel is covered by snow, its change in reflectance with wavelength should match that of the preconstructed snow spectral reflectance library, even though the absolute reflectance may differ; therefore, snow cover is detected only when the warping path from DWW is diagonal.
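The DWW decision rule can be sketched as below; the function and argument names are ours, and swir_std stands for the per-library shortwave-IR standard deviation assumed to be available:

```python
def dww_is_snow(path, obs, lib, swir_idx, swir_std):
    """DWW decision sketch: classify as snow only when the warping path
    is exactly diagonal (node-to-node wavelength match) AND the observed
    shortwave-IR reflectance lies within one standard deviation
    (swir_std) of the library value at that band (index swir_idx)."""
    diagonal = all(i == j for i, j in path)
    swir_close = abs(obs[swir_idx] - lib[swir_idx]) <= swir_std
    return diagonal and swir_close
```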

Preconstructed Reflectance Spectral Library of Snow Cover.
Reference data are needed to measure the similarity of the reflectance profiles observed by sensors using DWW. Reflectance from snow cover varies depending on the solar zenith angle, because snow cover is not a Lambertian target [11,63]. In addition, the reflectance of snow cover varies with topography. Therefore, we constructed a snow spectral reflectance library in advance to provide reference data for DWW based on SZA and DEM. To construct the library, we divided the data into seven classes each of SZA (less than 50°, 50-55°, 55-60°, 60-65°, 65-70°, 70-75°, and 75-80°) and DEM (0-500 m, 500-1000 m, 1000-1500 m, 1500-2000 m, 2000-2500 m, 2500-3000 m, and over 3000 m). At low SZA (less than 50°), the number of snow-covered pixels was insufficient, as was the case at high altitudes (over 3000 m), so each of these ranges was integrated into a single class. For library construction, we used 1,670,324 sampled pixels that were classified as snow cover by MODIS, with more than 20,000 samples for each class, so that the library represents the reflectance of snow cover under each condition.
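Selecting one of the 7 × 7 = 49 library classes from SZA and DEM can be sketched as follows (class boundaries follow the text; valid only for SZA < 80°, the daytime limit of the algorithm):

```python
def library_index(sza_deg: float, dem_m: float):
    """Map solar zenith angle (deg, < 80) and elevation (m) to the
    (SZA class, DEM class) pair indexing the 49-entry spectral library.
    Class 0 of SZA is 'less than 50 deg'; the last DEM class is
    'over 3000 m'."""
    sza_edges = [50, 55, 60, 65, 70, 75, 80]
    dem_edges = [500, 1000, 1500, 2000, 2500, 3000]
    s = next(k for k, e in enumerate(sza_edges) if sza_deg < e)
    d = next((k for k, e in enumerate(dem_edges) if dem_m < e),
             len(dem_edges))  # >= 3000 m falls in the last class
    return s, d
```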
Reflectance observed by satellite sensors varies according to the geometry of the sun, satellite, and target, causing variation among measurements of the same target. In this study, to remove the effects of atmospheric spherical curvature and refraction, we normalized the observed reflectance with SZA using a simple equation that considers different observation times:

ρ0 = ρ / cos(θ),

where ρ, ρ0, and θ represent the observed reflectance, normalized reflectance, and SZA, respectively. The snow spectral reflectance library was constructed as a simple average of the data sampled in each class. Figure 6 shows the preconstructed snow spectral reflectance library used in this study. It reflects the reflectance of typical snow cover, which is high in the visible and near-infrared (NIR) wavelength range but drops to near zero in the water absorption bands. Different reflectance patterns were observed under different SZA conditions, with the largest differences at visible wavelengths. Reflectance exhibits large variability in the visible and NIR region and small variability in the shortwave region, as shown in Figure 6.
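The SZA normalization above (division by the cosine of the solar zenith angle, the usual form of this correction) can be sketched as:

```python
import math

def normalize_reflectance(rho_obs: float, sza_deg: float) -> float:
    """Normalize observed TOA reflectance by cos(SZA), rho_0 = rho /
    cos(theta), so pixels acquired under different illumination
    geometries become comparable."""
    return rho_obs / math.cos(math.radians(sza_deg))
```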

Assessment of Snow Cover Image Classification Accuracy.
As snow, clouds, and snow-free conditions need to be assessed at the time of image acquisition, verification was based on visual evaluation [64]. The accuracy of snow cover maps from the proposed method was evaluated quantitatively and qualitatively using the MODIS swath snow cover product (MOD10 L2) as reference data. A total of 219 snow cover images were selected randomly for use. In most previous studies, snow cover maps indicate only the presence of snow, and the issue of differentiating snow from clouds is generally ignored [30].
To assess the accuracy of binary snow cover maps, we used a confusion matrix, a common quantitative method for characterizing image classification accuracy using four categories: (1) true positive (detected as snow cover by both MOD10 L2 and our method, a); (2) false negative (detected as snow cover by MOD10 L2 but as nonsnow by our method, b); (3) false positive (detected as nonsnow by MOD10 L2 but as snow cover by our method, c); and (4) true negative (detected as nonsnow by both MOD10 L2 and our method, d). In this study, we used producer's accuracy (PA), user's accuracy (UA), and overall accuracy (OA) as accuracy indices. PA quantifies the errors of omission; UA characterizes the errors of commission. These indices are used to assess underestimation and overestimation, respectively. OA is defined as the number of pixels classified correctly divided by the total number of pixels in the reference image. PA, UA, and OA are expressed by the following equations:

PA = a / (a + b), UA = a / (a + c), OA = (a + d) / (a + b + c + d).
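Using the category counts a-d defined above (a: snow in both reference and result; b: snow in the reference only; c: snow in our result only; d: snow in neither), the three indices can be computed as in this sketch:

```python
def accuracy_indices(a: int, b: int, c: int, d: int):
    """PA, UA, OA from confusion-matrix counts:
    a = snow in both reference and result, b = snow in reference only,
    c = snow in our result only, d = snow in neither."""
    pa = a / (a + b)                 # producer's accuracy (1 - omission)
    ua = a / (a + c)                 # user's accuracy (1 - commission)
    oa = (a + d) / (a + b + c + d)   # overall accuracy
    return pa, ua, oa
```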

Quantitative Assessment of Our Results with MOD10 L2.
To quantitatively evaluate the proposed algorithm, we used MOD10 L2, which is widely used and known to be accurate [64]. For this evaluation, we used 219 scenes, randomly selected from October to February of 2011 to 2013. MODIS detects snow cover when the SZA is below 85°, whereas we detected snow cover only when the SZA was below 80°; therefore, data collected when the SZA was above 80° were excluded from the evaluation. Because the results of this study have 1 km spatial resolution while MOD10 L2 has 500 m spatial resolution, one pixel in our results corresponds to four pixels in MOD10 L2. For rigorous validation, when three or more of the four corresponding MOD10 L2 pixels were covered by snow, we categorized the 1 km pixel as snow cover. Table 1 shows the classification results merged into one confusion matrix. The PA, UA, and OA were 92.92%, 78.41%, and 92.24%, respectively. Compared with MOD10 L2, this study had a false detection rate of 6.08% and a misdetection rate of 1.68%. The main source of false detection is misclassification between clouds and snow. Among the validation dataset, 23.77% of pixels were detected as snow cover by MODIS, and 28.17% were categorized as snow cover using our method; thus, our results tended to be slight overestimates compared with MOD10 L2. These errors may be caused by a combination of the following issues: problems detecting snow cover in shadowed areas, misclassification errors at cloud borders, difficulty detecting snow in dense forest, and detection of snow cover at large viewing zenith angles (VZA) and SZA. Figure 7 shows PA, UA, and OA percentages for each scene used for validation. PA, UA, and OA have variabilities of 5%, 7%, and 5%, respectively. PA varied between 65% at 05:00 UTC on October 20, 2011 (scene number 31), when the percentage of snow cover on the surface was 3.6%, and 98.4% at 22:15 UTC on January 16, 2012 (scene number 93), when the percentage of snow cover on the surface was 75.4%.
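The 3-of-4 aggregation rule used to compare the 1 km results against the 500 m MOD10 L2 pixels can be sketched as:

```python
def snow_at_1km(subpixels) -> bool:
    """3-of-4 rule: a 1-km pixel is labeled snow when at least three of
    its four 500-m MOD10_L2 subpixels (1 = snow, 0 = not snow) are
    snow-covered."""
    return sum(subpixels) >= 3
```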
PA was affected by the ratio of snow-covered land: when the ratio of snow cover was more than 10%, PA was greater than 80%, and when that ratio was below 10%, PA decreased gradually to 65%. The range of UA was wider than that of PA. In most scenes, the OA was over 80%, with minimum and maximum values of 72.4% and 99.8%, respectively. The accuracy of snow cover detection is influenced by the geometric relationship between the sun, Earth's surface, and the satellite, because the reflectance of snow cover depends on this geometry, especially in the visible spectrum [21]. The performance of our results was therefore evaluated over intervals of SZA and VZA. SZA and VZA were categorized into 7 and 14 classes, respectively, at 5° intervals. Figures 8 and 9 present the total PA, UA, and OA of the snow cover classification for the various SZA and VZA classes. As mentioned, we integrated the range of SZA from 0° to 50° into one class because the snow-covered land area was insufficient. PA was below 80% for the lowest SZA class and increased with increasing SZA, up to 95.3%. UA was around 78% in all SZA classes, with a range of 77.5-79.2%. These results indicate that as SZA increases, the underestimation error decreases while the overestimation error remains constant. OA tended to decrease as SZA increased, with values of 97.9% in the lowest SZA class and 88.8% in the highest SZA class. The larger the SZA, the greater the change in PA, but OA was not significantly affected as the proportion of snow cover increased.
The variation of the accuracy indices with VZA class showed that OA was hardly influenced by VZA and was greater than 90% in all VZA classes (91.9%-98.6%), while PA and UA tended to decrease as VZA increased. PA decreased slightly with increasing VZA, from 94.9% in the lowest VZA class to 85.8% in the highest. UA did not decrease significantly when VZA was below 30° but dropped drastically as VZA increased beyond 30°, to 51.5% in the largest VZA class. These findings show that the method proposed in this study overestimates snow cover compared with MOD10 L2 at high VZA. MOD10 L2 has an accuracy range of 93% to 100% under ideal conditions of illumination angle, cloud-free sky, and several centimeters of snow on a flat surface; however, the accuracy can vary under conditions of warm surface temperature, low illumination, and snow-cloud confusion, which affect the accuracy of the MODIS cloud mask [65]. In snow cover mapping, larger viewing angles can lead to errors. Snow is not a Lambertian reflector and exhibits strong forward scattering, unlike other land cover types such as vegetation, and this effect becomes stronger as the snow ages. The increase in forward scattering is greater at NIR wavelengths than at visible wavelengths. Hall and Riggs (2011) noted that, at viewing angles greater than about 30° from nadir, the amount of reflected solar irradiance varies with viewing angle due to the anisotropy of the snow cover, so the reflectance measured by the satellite will differ slightly from that at nadir. For this reason, the VZA threshold for the MODIS snow cover and snow grain size model was set at 25°, calculated based on the area of a MODIS pixel [66,67]. We compared ground snow depth measurements from the 25 stations located in Korea with our results and the MOD10 L2 product for five relatively clear days. Ground measurements are provided only when snow is present on a given date.
Therefore, we consider dates with no ground measurement as snow-free. The PA, UA, and OA calculated from the ground snow depth measurements are shown in Table 2. The PA, UA, and OA of our results are about 85.1%, 97.5%, and 93.6%, respectively, while those of MOD10 L2 for the same points are about 78.7%, 97.3%, and 91.2%. Our results and MOD10 L2 show similar OA and UA; however, the PA of our method is about 6.4 percentage points higher than that of MOD10 L2. The remaining error may be due to misclassification between clouds and snow cover.

Qualitative Assessment of Our Results with MOD10 L2.
Comparing the results of this study with MOD10 L2, the PA, UA, and OA were 92.92%, 78.41%, and 92.24%, respectively. MOD10 L2 are based on a snow-mapping algorithm that uses NDSI and other criteria tests to discriminate between snow cover and clouds in the MODIS cloud mask algorithm. Because the MODIS snow cover algorithm developed by Hall et al. (1995) is based on a static threshold method and does not account for variability in the reflectance of snow cover due to properties of the snow (grain size, moisture content, impurity, age, etc.), terrain effects, or geometric factors such as SZA and VZA, this algorithm may cause errors such as misclassification. Therefore, we carried out image-based qualitative assessment of snow cover estimates determined using the method proposed in this study. For this analysis, we used MODIS true color image composites, including MODIS bands 3 (0.469 m), 4 (0.555 m), and 1 (0.645 m) of the target scene, as well as data from the previous or next day, because clouds exhibit very high spatial variability at the scale of a hundred meters to a few kilometers, while snow cover exhibits low spatial variability over short time scales [68,69]. The meaning of each color is indicated by the color bar. As a whole, errors occur in the boundary between clouds and snow cover, but our results are similar to MOD10 L2. The results of this analysis, like those of the quantitative assessment, indicate that the overestimation error (green) is greater than the underestimation error (red) compared with MOD10 L2. In the overestimated area indicated as green in Figure 10(a), MODIS classified the pixels as clouds; thus, this error is caused by the confusion between snow and clouds. However, when comparing two RGB images taken on different days, the overestimated region is considered snowcovered land, not clouds, because this region shows brightness features with similar shape on both days, as shown in Figures 10(d) and 10(g). 
Figures 10(b) and 10(c) also show the overestimation error. Most overestimation error occurs at high VZA, more than 25° from nadir. In MOD10 L2, the overestimated area was classified as clouds. However, comparing an RGB image taken at the same time with an RGB image offset by one day, these areas also appear to be snow-covered, because their shape was almost unchanged within a day.

Performance of DWW for Snow Cover Detection.
Most snow cover detection algorithms use static threshold methods based on reflectance in the visible and NIR regions of the spectrum. In particular, NDSI, which plays an important role in snow cover detection in many previous studies, is highly dependent on the viewing angle. The MODIS snow cover algorithm used to derive MOD10 L2 applies NDSI, NDVI, and other thresholds. In this algorithm, MOD35 L2 is used to remove cloud pixels; the MODIS cloud mask algorithm itself applies various threshold methods, including 14 tests for discriminating between clouds and snow. The accuracy of MOD10 L2 is therefore influenced by the accuracy of MOD35 L2 and, consequently, depends on several factors at acquisition time.
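For reference, the static threshold approach described above can be sketched as follows. This is a minimal illustration, assuming the common NDSI formulation (green − SWIR)/(green + SWIR) with MODIS band 4 (0.555 μm) as green and band 6 (1.640 μm) as SWIR; the NDSI > 0.4 and NIR > 0.11 criteria follow Hall et al. (1995), while the function names and sample reflectances are hypothetical.

```python
def ndsi(green, swir):
    """Normalized Difference Snow Index; for MODIS, 'green' is band 4
    (0.555 um) and 'swir' is band 6 (1.640 um)."""
    return (green - swir) / (green + swir)

def static_threshold_snow(green, swir, nir):
    """Static-threshold snow test in the spirit of Hall et al. (1995):
    snow if NDSI > 0.4 and NIR (band 2) reflectance > 0.11, the latter
    excluding dark water bodies that can also have high NDSI."""
    return ndsi(green, swir) > 0.4 and nir > 0.11

print(static_threshold_snow(0.80, 0.10, 0.70))  # True: snow-like pixel
print(static_threshold_snow(0.12, 0.25, 0.30))  # False: snow-free land
```

The fixed 0.4 and 0.11 cut-offs are exactly what makes the method "static": they do not adapt to SZA, VZA, or snow condition, which is the limitation the proposed DWW method addresses.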
To compensate for this limitation in mapping snow cover, we propose a new method using DWW. We performed two case studies to confirm the ability of DWW to detect snow cover. For this purpose, we selected two pixels, one observed at low VZA (blue circle) and the other at high VZA (red circle), as shown in Figure 10(b). Both pixels were identified as snow cover in the qualitative comparison of RGB true-color images. The MODIS snow cover algorithm classified these pixels as clouds, whereas the proposed algorithm categorized them as snow cover.
Figures 11(a) and 11(b) show the reflectance profile observed by the satellite together with snow reflectance profiles selected from the spectral library under different SZA and DEM conditions (left), and the cumulative cost matrix with the warping path indicated by black arrows and a circle (right), for each sample pixel at its respective VZA (16.16° and 51.23°). As shown in Figures 11(a) and 11(b), these sampled pixels exhibit reflectance profiles typical of snow cover (high reflectance at visible wavelengths and low reflectance in the shortwave region of the electromagnetic spectrum), unlike those of snow-free land. Moreover, these pixels were observed under clear-sky conditions, as confirmed by the RGB true-color images. As mentioned above, the reflectance observed in the sampled pixels varies at the same wavelength due to several factors. The difference in reflectance at the same wavelength between the two sample pixels is large in the visible region (from 0.168 at 0.466 μm to 0.246 at 0.647 μm) and smaller around 1.6 μm (from 0.003 at 1.382 μm to 0.041302 at 1.629 μm); thus, the NDSI also differs. However, in both cases, the reflectance profile varies with wavelength in a pattern similar to that of the spectra selected from the snow reflectance spectral library, although there were some differences between the observed profile and the library spectra. Therefore, the corresponding pixel was categorized as snow cover because the warping path, which depends on the cumulative cost matrix, is diagonal, indicating that each node of the reflectance profile is matched with the same node of the snow reflectance spectral library. As mentioned, MOD10 L2 classified these pixels as clouds despite NDSI values greater than 0.4, the threshold of the MODIS snow cover algorithm. Such pixels affect the accuracy of MOD35 L2.
This analysis shows that DWW is more useful than static threshold methods when the reflectance of snow cover exhibits variability due to conditions such as SZA, VZA, and the state of snow at observation time.
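The DWW matching described above can be sketched with the standard DTW recursion applied over wavelength instead of time. This is an illustrative assumption of the mechanics, not the paper's exact implementation: the function names, the absolute-difference local cost, the diagonal-preference tie-breaking, and the sample profiles are all hypothetical.

```python
import numpy as np

def dww(profile, library_spectrum):
    """Dynamic wavelength warping: the classic DTW recursion, but the two
    sequences are reflectance profiles indexed by wavelength (band)
    rather than by time. Returns the cumulative smallest-cost matrix
    and the optimal warping path as 1-indexed (i, j) pairs."""
    n, m = len(profile), len(library_spectrum)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(profile[i - 1] - library_spectrum[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack the warping path from (n, m) to (1, 1), preferring the
    # diagonal step on ties.
    path, (i, j) = [(n, m)], (n, m)
    while (i, j) != (1, 1):
        candidates = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((c for c in candidates if c[0] >= 1 and c[1] >= 1),
                   key=lambda c: D[c])
        path.append((i, j))
    return D[1:, 1:], path[::-1]

def is_diagonal(path):
    """A purely diagonal warping path means each band of the observed
    profile is matched with the same band of the library spectrum,
    i.e. the spectral shapes agree (the snow criterion)."""
    return all(i == j for i, j in path)

# Two profiles with the same snow-like spectral shape but different
# magnitudes (e.g. due to differing SZA/VZA): the path stays diagonal.
obs = np.array([0.85, 0.80, 0.60, 0.10, 0.05])
lib = np.array([0.80, 0.75, 0.55, 0.08, 0.04])
D, path = dww(obs, lib)
print(is_diagonal(path))  # True
```

This illustrates why DWW is robust where NDSI is not: a uniform brightening or dimming of the whole profile changes the reflectance values (and hence any static threshold) but not the shape of the profile, so the warping path remains diagonal.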

Summary and Conclusion
In this study, we proposed a simple new algorithm for mapping snow cover from MODIS observations. Most snow cover algorithms, including the MODIS snow cover algorithm, use a static threshold method, which does not account for variations in the reflectance of snow cover due to conditions such as moisture content, impurities, and the geometric relationship between the sun, the target, and the sensor. These methods can lead to errors such as confusion between cloud and snow. To improve upon the static threshold method, we used DWW in the proposed snow cover algorithm; DWW is based on DTW but uses wavelength instead of time. For DWW processing, 49 snow reflectance spectral libraries were constructed to provide a reference dataset of about 1.6 million data points collected under various SZA and DEM conditions.
To account for different observation times, we normalized the observed reflectance profiles using the cosine of the SZA. The snow cover algorithm proposed in this study performs several steps sequentially. First, a threshold value of Ano 1.6 was used to distinguish snow-free land, which removed snow-free land and some clouds. Second, threshold values at 1.6 μm were used to discriminate easily detectable clouds from snow cover. Then, for the pixels that had not yet been classified as either snow cover or clouds in the second step due to their similar optical characteristics, DWW processing was carried out to distinguish between clouds and snow cover. When the warping path that results from DWW processing is diagonal on the cumulative cost matrix, the corresponding pixel is classified as snow cover, because the snow reflectance spectral library then matches the observed spectral reflectance profile at each wavelength. In addition, we used the BTD between MODIS bands 21 (3.9 μm) and 31 (11 μm) and NDSI criteria to remove thin clouds over snow cover.
Figure 11: Performance of dynamic wavelength warping (DWW) for detection of snow cover under low (a) and high (b) VZA conditions: (left) reflectance profile observed by satellite and snow reflectance spectral library of the corresponding pixel; (right) cumulative smallest cost matrix and warping path computed by DWW between the reflectance profile and the snow reflectance spectral library.
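The sequential decision steps summarized above can be sketched as follows. All threshold constants are hypothetical placeholders for illustration; apart from the widely used NDSI > 0.4 criterion, the paper's actual Ano 1.6, 1.6 μm, and BTD threshold values are not reproduced here, and the function name and sample inputs are assumptions.

```python
import math

# Hypothetical placeholder thresholds (illustration only).
T_LAND_16 = 0.30   # step 1: 1.6 um test separating snow-free land
T_CLOUD_16 = 0.25  # step 2: 1.6 um test flagging obvious clouds
T_BTD = 12.0       # step 4: BTD (K, band 21 - band 31) for thin cloud
T_NDSI = 0.4       # step 4: NDSI criterion

def classify_pixel(r_vis, r_16, ndsi, btd, sza_deg, dww_path_is_diagonal):
    """Sequential decision logic of the proposed algorithm (sketch)."""
    # Step 0: normalize observed reflectance by cos(SZA) to compensate
    # for different illumination at acquisition time.
    mu0 = math.cos(math.radians(sza_deg))
    r_vis, r_16 = r_vis / mu0, r_16 / mu0

    # Step 1: snow-free land is bright at 1.6 um relative to the visible.
    if r_16 > T_LAND_16 and r_vis < r_16:
        return "land"
    # Step 2: easily detectable clouds remain bright at 1.6 um.
    if r_16 > T_CLOUD_16:
        return "cloud"
    # Step 3: ambiguous pixels are resolved by DWW pattern matching.
    if not dww_path_is_diagonal:
        return "cloud"
    # Step 4: BTD and NDSI tests remove thin clouds over snow.
    if btd > T_BTD or ndsi < T_NDSI:
        return "cloud"
    return "snow"

print(classify_pixel(0.8, 0.05, 0.7, 5.0, 30.0, True))   # snow
print(classify_pixel(0.1, 0.35, -0.3, 2.0, 30.0, False))  # land
```

The ordering matters: the cheap 1.6 μm threshold tests dispose of the unambiguous pixels first, so the comparatively expensive DWW matching runs only on the cloud/snow-ambiguous remainder.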
To evaluate the proposed snow cover algorithm, we quantitatively compared snow cover estimates from the proposed method with the MOD10 L2 product for 219 randomly selected scenes. In this quantitative comparison, PA and UA were 92.92% and 78.41%, respectively, while OA was 92.24%, indicating good overall classification accuracy (i.e., above 80%) [70]. When accuracy was evaluated by SZA and VZA class, OA was greater than 88% for all SZA classes and over 91% for all VZA classes. The results of this study overestimate snow cover compared with MOD10 L2, with false detection and misdetection ratios of 6.08% and 1.68%, respectively. The main cause of this error is misclassification due to confusion between snow cover and clouds. Compared with in situ measurements, the PA, UA, and OA of the proposed snow cover algorithm were improved by 6.383%, 0.2%, and 2.4%, respectively, relative to the MODIS snow cover algorithm, indicating that our algorithm is feasible for discriminating snow cover from other surfaces. To further evaluate the results of this study, a qualitative comparison was performed using RGB true-color images. We found overestimation of snow areas, especially at high VZA. These areas were classified as cloud by the MODIS snow cover algorithm and as snow cover by the proposed algorithm; they should be considered snow cover because a brightness feature of the same shape appears in two RGB true-color images collected on different dates, and because their reflectance profiles have patterns similar to those in the snow reflectance spectral library. Comparison of our snow cover classification results with MOD10 L2 data indicates good overall agreement. The DWW-based snow cover detection algorithm proposed in this study is likely to be more useful for snow cover detection than static threshold methods when the reflectance of snow cover varies with conditions such as SZA, VZA, and the state of the snow at observation time.
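The accuracy measures quoted above follow the standard confusion-matrix definitions, which can be sketched as below; the snow class is taken as positive, and the pixel counts in the example are illustrative only, not the study's actual tallies.

```python
def accuracy_metrics(tp, fp, fn, tn):
    """Producer's (PA), user's (UA), and overall (OA) accuracy from a
    2x2 confusion matrix, taking snow as the positive class."""
    total = tp + fp + fn + tn
    pa = tp / (tp + fn)            # omission side: reference snow found
    ua = tp / (tp + fp)            # commission side: mapped snow correct
    oa = (tp + tn) / total
    false_detection = fp / total   # snow mapped where reference has none
    misdetection = fn / total      # reference snow missed by the map
    return pa, ua, oa, false_detection, misdetection

# Illustrative counts only, not the study's actual pixel tallies.
pa, ua, oa, fd, md = accuracy_metrics(tp=930, fp=256, fn=71, tn=2943)
print(f"PA={pa:.2%}  UA={ua:.2%}  OA={oa:.2%}")
```

Note how a high PA with a noticeably lower UA, as reported in the comparison with MOD10 L2, is the signature of overestimation: few reference snow pixels are missed (low FN), but extra pixels are mapped as snow (high FP).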

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.