
The present study demonstrates the application of artificial neural networks (ANNs) to predicting weekly spring discharge. The study was based on the weekly discharge of a spring located near Ranichauri in the Tehri Garhwal district of Uttarakhand, India. Five models were developed for predicting the spring discharge at a weekly interval using rainfall, evaporation, and temperature with specified lag times. All models were developed with both one and two hidden layers. Each model was developed through many trials with different network architectures and different numbers of hidden neurons; finally, the best-predicting model is presented for each developed model. The models were trained with three different algorithms, that is, the quick-propagation algorithm, the batch backpropagation algorithm, and the Levenberg-Marquardt algorithm, using weekly data from 1999 to 2005. The best model for the simulation was selected from the three algorithms using statistical criteria such as the correlation coefficient (

Discharge simulation for a spring is a very complex, highly nonlinear process with temporal and spatial variability. Weekly spring discharge modeling has a vital role in better management of water resources. Many models, such as black-box, conceptual, and physically based models, have been developed, especially for rainfall, runoff, and sediment processes. On the other hand, very few models are available for accurate estimation of spring discharge, and, in many situations, simple tools such as linear theoretical models or black-box models have been used with advantage. However, these models fail to represent nonlinear processes such as rainfall, runoff, and sediment yield [

The artificial neural network, a soft computing tool, is basically a black-box model and has its own limitations [

Artificial neural networks are highly simplified mathematical models of biological neural networks with the ability to learn and provide meaningful solutions to problems of high complexity and nonlinearity. The ANN approach is faster than conventional techniques, is robust in noisy environments, and can solve a wide range of problems. Owing to these advantages, ANNs have been used in numerous real-time applications. The neural networks most commonly used in hydrology are three- and four-layered, having an input layer, where the input is fed to the network; hidden layer(s), where the data are processed; and an output layer, where the output is presented, as shown in Figure

Three-layered feed-forward artificial neural network configuration.

The processing elements in each layer are called neurons or nodes. Information flows and is processed in the network from the input layer to the hidden layer and from the hidden layer to the output layer. The number of neurons and hidden layers in the network is problem dependent and is decided by trial and error. A synaptic weight is assigned to each link to represent the relative strength of the connection between the two nodes at its ends in predicting the input-output relationship. The output, y_j, of node j is obtained by applying an activation function f to the weighted sum of its inputs, y_j = f(∑_i w_ij x_i + b_j), where w_ij is the weight of the link from node i, x_i is the corresponding input, and b_j is a bias term.

Sigmoid function is continuous and differentiable everywhere, and a nonlinear process can be mapped with it [
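As an illustration of the feed-forward computation described above, the following is a minimal numpy sketch, assuming a hypothetical [3-4-1] architecture with random weights; the trained weights of the study's models are not reproduced here:

```python
import numpy as np

def sigmoid(x):
    # Continuous, differentiable squashing function mapping R -> (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # Input -> hidden: weighted sum of inputs passed through the sigmoid
    hidden = sigmoid(inputs @ w_hidden + b_hidden)
    # Hidden -> output: single discharge estimate in (0, 1)
    return sigmoid(hidden @ w_out + b_out)

rng = np.random.default_rng(0)
x = rng.random(3)                  # e.g. rainfall, evaporation, temperature (normalized)
w_h = rng.standard_normal((3, 4))  # [3-4-1]: 3 inputs, 4 hidden nodes, 1 output
b_h = np.zeros(4)
w_o = rng.standard_normal((4, 1))
b_o = np.zeros(1)
y = feed_forward(x, w_h, b_h, w_o, b_o)
```

With sigmoid activations throughout, the output always lies in (0, 1), which is one reason the input-output data must be normalized to the same range.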

The study area is located near the College of Forestry and Hill Agriculture, Hill Campus Ranichauri of the G. B. Pant University of Agriculture and Technology, Pantnagar, in the

Location of the study area.

The watershed of the study area drains into the Henval river. It is located between 78°22′28′′ and 78°24′57′′E longitude and 30°17′19′′ and 30°18′52′′N latitude. The elevation varies from 960 to 2000 m above mean sea level (MSL). The study spring is located at 78°24′34′′E and 30°18′47′′N, at an elevation of 1844 m above MSL, in a dense forest area. For the present study, weekly data on spring discharge, rainfall, evaporation, and temperature were collected for seven years, from 1999 to 2005. The study area was surveyed with a GPS receiver, and the location (latitude and longitude) of the Hill Campus spring was recorded. The same location was marked on the map in a GIS environment, as shown in Figure

The output from the model is spring discharge at the time step,

Selection of the best model from the batch backpropagation algorithm.

Model inputs | No. of inputs | Algorithm | Architecture | No. of hidden layers | Training | Testing | Validation

| | | | | R | DC | R | DC | R | DC

R_{t}, E_{t}, T_{t} | 3 | [3-4-1] | One | 0.671 | −0.235 | 0.600 | −0.133 | 0.594 | −0.883 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | [7-2-1] | One | 0.979 | 0.956 | 0.979 | 0.941 | 0.986 | 0.961 |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | [11-28-1] | One | 0.959 | 0.907 | 0.921 | 0.809 | 0.953 | 0.893 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-21-1] | One | 0.943 | 0.851 | 0.945 | 0.865 | 0.967 | 0.928 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | Batch backpropagation | One |

R_{t}, E_{t}, T_{t} | 3 | [3-2-2-1] | Two | 0.623 | −0.575 | 0.623 | −1.378 | 0.554 | −1.322 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | [7-6-5-1] | Two | 0.982 | 0.960 | 0.970 | 0.935 | 0.975 | 0.944 |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | [11-4-3-1] | Two | 0.988 | 0.975 | 0.981 | 0.960 | 0.967 | 0.907 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-7-3-1] | Two | 0.980 | 0.958 | 0.960 | 0.901 | 0.985 | 0.963 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | Two |

In the present study, feed-forward quick-propagation, batch backpropagation, and Levenberg-Marquardt ANN models were used for the simulation of the spring discharge. The input-output datasets were first normalized by the maximum value of each series, reducing the individual variables to the range 0 to 1 to avoid the saturation effect that can occur with the sigmoid activation function.
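The normalization step can be sketched as follows; the variable values are illustrative, not measured data from the study:

```python
import numpy as np

def normalize(series):
    # Scale each variable by its series maximum so values fall in (0, 1],
    # avoiding saturation of the sigmoid activation function
    return np.asarray(series, dtype=float) / np.max(series)

discharge = [12.0, 30.0, 7.5, 24.0]   # hypothetical weekly discharge values
scaled = normalize(discharge)
```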

The sigmoid function was used as the activation function in the present study, and a constant quick-propagation coefficient of 1.75 and a learning rate of 0.8 were selected by trial and error in the quick-propagation algorithm for better optimization. The weights were updated after presenting each pattern from the learning dataset, rather than once per iteration. In the batch backpropagation algorithm, a constant learning rate (
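The quick-propagation update scales each weight step from the previous step by the ratio of current to previous error gradients. A minimal sketch for a single weight, using the paper's coefficient (1.75) and learning rate (0.8) as defaults, and assuming Fahlman's standard formulation rather than the exact implementation used in the study:

```python
def quickprop_step(grad, prev_grad, prev_delta, mu=1.75, lr=0.8):
    # One weight update following Fahlman's quick-propagation rule (a sketch).
    if prev_delta == 0.0 or prev_grad == grad:
        # No usable previous step: fall back to plain gradient descent
        return -lr * grad
    # Secant step toward the parabola's minimum along this weight axis
    delta = prev_delta * grad / (prev_grad - grad)
    # Growth is limited by the quick-propagation coefficient mu
    limit = abs(mu * prev_delta)
    return max(-limit, min(limit, delta))
```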

The number of input nodes in the input layer was taken equal to the number of input variables. Since no guideline is yet available on the number of hidden nodes of the hidden layer (Vemuri, 1992) [

In developing Linear Multiple Regression (LMR) models, the spring discharge at time

The developed regression models are described as follows.

Model: LMR-1:

The graphical representations, along with the corresponding scatter plots, of the developed LMR models are shown in Figures
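The LMR models are fitted by ordinary least squares; the following sketch, using hypothetical predictor data rather than the study's observations, shows how the regression coefficients of such a model are estimated:

```python
import numpy as np

# Hypothetical predictors for an LMR-style model: three input columns
# (e.g. current-week rainfall, evaporation, temperature), with the target
# constructed as an exact linear combination for illustration.
rng = np.random.default_rng(1)
X = rng.random((52, 3))                        # one year of weekly records
y = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1] + 0.25 * X[:, 2]

A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit
```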

It can be seen from Table

Hence, ANN2 and ANN8 are the best-performing models for the study spring. Finally, model ANN8 was selected, on the basis of overall performance, for the spring discharge simulation with the quick-propagation algorithm. It can also be seen from Tables

Selection of the best model from the quick-propagation algorithm.

Model inputs | No. of inputs | Algorithm | Architecture | No. of hidden layers | Training | Testing | Validation

| | | | | R | DC | R | DC | R | DC

R_{t}, E_{t}, T_{t} | 3 | [3-2-1] | One | 0.710 | 0.039 | 0.582 | −0.179 | 0.586 | −0.626 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | One |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | [11-14-1] | One | 0.984 | 0.967 | 0.981 | 0.963 | 0.965 | 0.922 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-15-1] | One | 0.996 | 0.993 | 0.977 | 0.952 | 0.974 | 0.944 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | Quick propagation | [19-30-1] | One | 0.997 | 0.995 | 0.931 | 0.854 | 0.970 | 0.936

R_{t}, E_{t}, T_{t} | 3 | [3-2-2-1] | Two | 0.626 | −0.499 | 0.611 | −0.187 | 0.646 | −0.233 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | [7-4-5-1] | Two | 0.988 | 0.975 | 0.975 | 0.941 | 0.985 | 0.970 |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | Two |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-11-14-1] | Two | 0.998 | 0.996 | 0.979 | 0.948 | 0.979 | 0.950 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | [19-10-13-1] | Two | 0.998 | 0.960 | 0.990 | 0.978 | 0.978 | 0.965

Selection of the best model from the Levenberg-Marquardt algorithm.

Model inputs | No. of inputs | Algorithm | Architecture | No. of hidden layers | Training | Testing | Validation

| | | | | R | DC | R | DC | R | DC

R_{t}, E_{t}, T_{t} | 3 | [3-8-1] | One | 0.606 | −0.957 | 0.491 | −1.892 | 0.623 | −1.449 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | [7-3-1] | One | 0.995 | 0.990 | 0.966 | 0.932 | 0.961 | 0.909 |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | One |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-10-1] | One | 0.983 | 0.960 | 0.950 | 0.894 | 0.971 | 0.940 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | Levenberg-Marquardt | [19-13-1] | One | 0.982 | 0.956 | 0.989 | 0.969 | 0.975 | 0.928

R_{t}, E_{t}, T_{t} | 3 | [3-2-2-1] | Two | 0.528 | −210441.73 | 0.521 | −168494.734 | 0.392 | −214100.83 |

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | [7-3-2-1] | Two | 0.985 | 0.967 | 0.970 | 0.935 | 0.168 | 0.934 |

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | [11-3-2-1] | Two | 0.986 | 0.972 | 0.983 | 0.961 | 0.957 | 0.914 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3} | 15 | [15-3-3-1] | Two | 0.982 | 0.941 | 0.93 | 0.89 | 0.986 | 0.91 |

Q_{t−1}, Q_{t−2}, Q_{t−3}, Q_{t−4}, R_{t}, R_{t−1}, R_{t−2}, R_{t−3}, R_{t−4}, E_{t}, E_{t−1}, E_{t−2}, E_{t−3}, E_{t−4}, T_{t}, T_{t−1}, T_{t−2}, T_{t−3}, T_{t−4} | 19 | Two |

The aim of the current study is to identify a representative algorithm, with a particular ANN model, for discharge simulation of the study spring. In this context, all the presented models from the three algorithms are representative for the current spring discharge simulation, but the performance of model ANN8 with the quick-propagation algorithm is quite good among all the best presented models. Hence, the ANN8 model with the quick-propagation algorithm was finally selected for the simulation of spring discharge in this study.

Comparative plots of the observed and estimated spring discharges, together with the corresponding scatter plots, for the best representative models for the study location during training, testing, and validation are presented in Figures

Observed and predicted (ANN2) weekly spring discharge—quick-propagation algorithm.

Training

Testing

Validation

Observed and predicted (ANN5) weekly spring discharge—batch backpropagation algorithm.

Training

Testing

Validation

Observed and predicted (ANN3) weekly spring discharge—Levenberg-Marquardt algorithm.

Training

Testing

Validation

Observed and predicted (LMR-1) weekly spring discharge.

Observed and predicted (LMR-2) weekly spring discharge.

Table

Comparison of the best ANN and LMR models.

Model inputs | No. of inputs | ANN quick propagation | LMR model

| | R | DC | R | DC

Q_{t−1}, R_{t}, R_{t−1}, E_{t}, E_{t−1}, T_{t}, T_{t−1} | 7 | 0.983 | 0.964 | 0.970 | 0.941

Q_{t−1}, Q_{t−2}, R_{t}, R_{t−1}, R_{t−2}, E_{t}, E_{t−1}, E_{t−2}, T_{t}, T_{t−1}, T_{t−2} | 11 | 0.990 | 0.960 | 0.976 | 0.948
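The R and DC statistics reported in the tables are not defined explicitly in this excerpt; the sketch below assumes R is Pearson's correlation coefficient and DC is the Nash-Sutcliffe coefficient of efficiency, an assumption consistent with the negative DC values reported for the poorer models:

```python
import numpy as np

def correlation_coefficient(obs, sim):
    # Pearson's R between observed and simulated discharge
    return np.corrcoef(obs, sim)[0, 1]

def determination_coefficient(obs, sim):
    # Nash-Sutcliffe-style DC: 1 minus the ratio of the squared simulation
    # error to the variance of the observations about their mean
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Under this definition, DC becomes strongly negative when a model performs worse than simply predicting the mean of the observations, as seen in the three-input models of the tables.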