Neural Network for WGDOP Approximation and Mobile Location



Introduction
Mobile positioning is becoming an increasingly important problem, and its solutions can generally be divided into two major categories: handset-based methods and network-based methods [1]. When equipped with a global positioning system (GPS) receiver, handset-based location schemes require modifications to the handset so that it can calculate its own position. Network-based location schemes can be used in situations where GPS signals are unavailable, for example, in indoor environments, or when GPS-equipped handsets are not available. Network-based methods estimate the mobile location from the signals received between the mobile station (MS) and a set of base stations (BSs). For many applications in wireless sensor networks (WSNs), such as environmental sensing and activity monitoring, it is crucial to know the locations of the sensor nodes; this is known as the "localization problem" [1].
Geometric dilution of precision (GDOP) can be applied as a criterion for choosing an appropriate geometric configuration of the measurement units. Different stations can form different combinations, and randomly selected stations may yield relatively poor accuracy. Reference [2] proposed a method based on fuzzy clustering to analyze the positioning distribution of each combination; the key idea is to use only those stations with good GDOP and time-of-arrival (TOA) measurements. Reference [3] defined a time-varying function based on the GDOP curved surface, which is quite complex, making it difficult to effectively track a mobile target while maintaining good GDOP; its target localization scheme uses the best mobile sensor node (MSN) locations, which can effectively mitigate the GDOP effect and provide more accurate location estimates in mobile sensing systems. Reference [4] proposed a hybrid algorithm that combines pseudorange differencing with a ridge regression technique to improve the positioning accuracy of a pseudolite-only system and effectively reduce GDOP effects. In contrast, the method proposed in this paper can be applied in conditions such as indoor navigation system design, where GPS satellite signals are not available.
This paper considers both network-based and handset-based methods, employing the concept of GDOP, which was originally developed to select the optimal geometric configuration of satellites in GPS. When enough measurements are available, selecting the optimal measurements can reduce adverse geometry effects and thereby improve location accuracy. However, excessive or redundant measurements increase the computational overhead and may not improve location accuracy significantly. It is therefore important to rapidly select a subset containing the most suitable measurement units before positioning.
Implementation of the GDOP method assumes that all pseudorange errors are independent and identically distributed [5], but in practice the measurement errors usually have different variances [6]. Generally, the quality of a satellite signal is characterized by combining values such as user range accuracy, carrier-to-noise ratio, elevation angle, and the ephemeris. In [7], Sairo et al. proposed a method that takes the different error variances into account. In [8], the elevation angle and the receiver's signal-to-noise ratio (SNR) are used to weight GDOP and provide the positioning solution. When baro-altitude measurements or a priori terrain elevation information is used, the conventional GDOP formula cannot be applied and must be modified [9]. Combining the GPS and Galileo satellite constellations provides more visible satellites with better geometric distribution and significantly improved accessibility. A weighted GDOP (WGDOP) algorithm was proposed in [10] for the combined GPS-Galileo navigation receiver. Reference [11] considered the WGDOP value, the number of visible satellites, and the constellation cost as three objective functions of navigation constellation performance; simulation results show that the resulting optimal solutions provide more visible satellites and better WGDOP. In addition, several methods based on WGDOP have been proposed to improve GPS positioning accuracy [12-18]. Most, if not all, of these methods require matrix inversion to calculate WGDOP. Matrix inversion is rather time consuming and imposes a considerable computational burden, so the performance gains come at the expense of increased computational complexity, which is often too high to be practical.
Simon and El-Sherief [19, 20] employed the backpropagation neural network (BPNN), a supervised learning neural network [21], to approximate the GDOP function. The BPNN was trained to "learn" the relationship between the entries of a measurement matrix and the eigenvalues of its inverse. Three other input-output relationships were later proposed and compared through simulations [22]. However, BPNN generally converges slowly and tends to get trapped in local minima. Considering both effectiveness and efficiency, this paper presents two novel architectures based on an alternative artificial neural network method, namely the resilient backpropagation (Rprop) method [21], to approximate WGDOP. Rprop is an algorithm with good convergence speed, accuracy, and robustness to the training parameters [23]. Compared with BPNN, Rprop converges faster and needs less training; fast convergence shortens the training time and allows predictive neural network models to be built quickly. Collecting training data, however, costs money, time, and resources. A training pattern consists of a set of input vectors and the corresponding output vectors, and in many situations sufficient training data are not available, which may degrade prediction accuracy. At the same time, the more training data we have, the more training time and resources are required. It is therefore critical to achieve high accuracy when training data are limited, and both fast convergence and a small number of training iterations are important. Simulation results show that the proposed Rprop-based architectures provide faster convergence and fewer training iterations, so they can be applied to select the location measurement units in GPS, WSN, and cellular communication systems. In these systems, the measurement units are satellites, sensors, and BSs, respectively.
To select the most appropriate set of BSs and achieve the minimum positioning error in cellular communication systems, we must consider not only the GDOP effect but also the non-line-of-sight (NLOS) error statistics of each BS. The reciprocal of the square root of an upper bound on the NLOS errors is used as the weight coefficient. In this paper, we select a subset of four measurements in the location process: we first select the serving BS and then combine it with three other measurements to form the WGDOP subsets. After calculating the WGDOP values, the subset with the minimum WGDOP is used to estimate the MS location. Simulation results show that Rprop provides a much better estimate of the average WGDOP residual than BPNN, requires fewer epochs, converges faster, and needs much less convergence time. The proposed Rprop-based architectures for WGDOP approximation consistently yield better location results than the other architectures, and they provide MS location estimates nearly identical to those of the matrix inversion method. Four randomly selected BSs with poor WGDOP give poor location estimates, since positioning accuracy is strongly affected by the geometric configuration of the BSs and the MS. The proposed BS selection criterion greatly reduces the number of subsets while providing a comparable level of location accuracy. We therefore conclude that the proposed algorithm can be applied in practical situations.
To improve positioning accuracy even further, the higher-precision MS locations obtained from the first several minimum-WGDOP subsets can be used as inputs to a second Rprop network. After training, this Rprop network predicts the final MS location from these higher-precision MS locations. The simulation results confirm that the proposed methods, employing at most two such higher-precision MS locations, always perform better than using all seven BSs. In essence, these methods greatly reduce the NLOS error and effectively enhance the MS location estimate.
The remainder of this paper is organized as follows. Section 2 presents the calculation of GDOP and WGDOP. Section 3 briefly describes the BPNN and Rprop methods. The six types of mapping for WGDOP approximation based on Rprop are proposed in Section 4. Section 5 presents the proposed BS selection criterion and the location methods. Simulation results are given in Section 6, followed by conclusions in Section 7.

Calculation of GDOP and WGDOP
The concept of GDOP is commonly used to determine the geometric effect of GPS satellite configurations. It has a simple form if all measurements have the same error variance. The smaller the GDOP value, the more accurate the positioning. To improve positioning accuracy, we should minimize the GDOP of the selected measurement units.
Using a three-dimensional (3D) Cartesian coordinate system, the pseudorange between satellite $i$ and the user can be expressed as
$$ r_i = \sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2} + c\,t_b + v_i, \tag{1}$$
where $(x, y, z)$ and $(x_i, y_i, z_i)$ are the locations of the user and satellite $i$, respectively, $c$ is the speed of light, $t_b$ denotes the time offset, and $v_i$ is the pseudorange measurement noise. Equation (1) can be linearized with a Taylor series expansion about the approximate user position $(\hat{x}, \hat{y}, \hat{z})$, neglecting the higher-order terms. Defining $\delta r_i$ as $r_i - \hat{r}_i$ at $(\hat{x}, \hat{y}, \hat{z})$, we obtain
$$ \delta r_i = e_{i1}\,\delta x + e_{i2}\,\delta y + e_{i3}\,\delta z + c\,t_b + v_i, \tag{2}$$
where $\delta x$, $\delta y$, $\delta z$ are, respectively, the coordinate offsets of $x$, $y$, $z$, and
$$ e_{i1} = \frac{\hat{x} - x_i}{\hat{r}_i}, \qquad e_{i2} = \frac{\hat{y} - y_i}{\hat{r}_i}, \qquad e_{i3} = \frac{\hat{z} - z_i}{\hat{r}_i} \tag{3}$$
are the direction cosines from the user to the $i$th satellite. The linearized equations can be expressed in vector form as
$$ \delta \mathbf{r} = \mathbf{H}\,\delta \mathbf{x} + \mathbf{v}, \tag{4}$$
where $\mathbf{H}$ is the geometry matrix whose $i$th row is $[\,e_{i1} \; e_{i2} \; e_{i3} \; 1\,]$.
The vector $\delta\mathbf{x}$ in (4) can be solved with the least-squares (LS) algorithm, namely,
$$ \delta\hat{\mathbf{x}} = (\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\,\delta\mathbf{r}. \tag{5}$$
Assuming that the pseudorange errors are uncorrelated with equal variance $\sigma^{2}$, the error covariance matrix can be expressed as
$$ \operatorname{cov}(\delta\hat{\mathbf{x}}) = \sigma^{2}\,(\mathbf{H}^{T}\mathbf{H})^{-1}. \tag{6}$$
The position error variances are given by the diagonal elements of $(\mathbf{H}^{T}\mathbf{H})^{-1}$. GDOP, a measure of accuracy for positioning systems, is defined as
$$ \text{GDOP} = \sqrt{\operatorname{tr}\!\left[(\mathbf{H}^{T}\mathbf{H})^{-1}\right]}. \tag{7}$$
In practice, the measurement errors do not have the same variance, especially when different systems are combined. The covariance matrix then has the form
$$ \operatorname{cov}(\mathbf{v}) = \operatorname{diag}\!\left(\sigma_{1}^{2}, \sigma_{2}^{2}, \ldots, \sigma_{n}^{2}\right). \tag{8}$$
Now define a weight matrix $\mathbf{W}$ as
$$ \mathbf{W} = \operatorname{diag}\!\left(w_{1}, w_{2}, \ldots, w_{n}\right), \qquad w_{i} = \frac{1}{\sigma_{i}^{2}}, \quad i = 1, 2, \ldots, n, \tag{9}$$
where $\sigma_{i}^{2}$, $i = 1, 2, \ldots, n$, are the variances of the measurement errors.
With the weight matrix defined above, we need to solve a weighted least-squares (WLS) problem, whose solution is given by
$$ \delta\hat{\mathbf{x}} = (\mathbf{H}^{T}\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{W}\,\delta\mathbf{r}. \tag{10}$$
To select the set of measurement units that renders the minimum positioning error, we must consider not only the GDOP effect but also the ranging error statistics. In this paper, we therefore employ WGDOP, instead of GDOP, to select measurement units and improve location accuracy. The optimal subset is the one with the minimum WGDOP, which is given by the trace of the inverse of the $\mathbf{H}^{T}\mathbf{W}\mathbf{H}$ matrix:
$$ \text{WGDOP} = \sqrt{\operatorname{tr}\!\left[(\mathbf{H}^{T}\mathbf{W}\mathbf{H})^{-1}\right]}. \tag{11}$$
The conventional method for calculating WGDOP uses matrix inversion for all subsets, requiring a great deal of computational effort; as the number of measurement units increases, the computation time grows rapidly.
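The matrix-inversion computation of WGDOP described above can be sketched numerically. The following is a minimal example, assuming a hypothetical 2D geometry of four BSs placed at 90-degree intervals around the user; with equal unit weights, WGDOP reduces to GDOP.

```python
import numpy as np

def wgdop(H, W):
    """WGDOP = sqrt(trace((H^T W H)^-1)) via direct matrix inversion."""
    M = H.T @ W @ H
    return np.sqrt(np.trace(np.linalg.inv(M)))

# Hypothetical 2D geometry: four BSs at 90-degree intervals around the user.
# Each row of H holds the direction cosines plus a 1 for the clock-offset term.
angles = np.deg2rad([0.0, 90.0, 180.0, 270.0])
H = np.column_stack([np.cos(angles), np.sin(angles), np.ones(4)])

W = np.eye(4)       # equal unit weights: WGDOP reduces to GDOP
print(wgdop(H, W))  # sqrt(0.5 + 0.5 + 0.25) ~ 1.118
```

Here `H.T @ W @ H` is diagonal, diag(2, 2, 4), so the trace of its inverse is 1.25 and WGDOP is about 1.118; less symmetric geometries give larger values.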
In this paper, we employ an artificial neural network learning algorithm to obtain approximate WGDOP.

Traditional BPNN Algorithm and Rprop Algorithm
It is well known that BPNN is capable of learning and realizing both linear and nonlinear functions [21]. The learning process of BPNN can be viewed as a gradient descent method that minimizes some measure, for example, the mean-square value of the difference between the actual network output vector and the desired output vector. Define an error function
$$ E = \frac{1}{2}\sum_{k}\left(d_{k} - o_{k}\right)^{2}, \tag{12}$$
where $o_{k}$ is the output vector of the network and $d_{k}$ is the desired output vector. The gradient descent algorithm is then employed to adapt the weights (namely, synapses) as follows:
$$ w_{ij}(t+1) = w_{ij}(t) - \eta\,\frac{\partial E}{\partial w_{ij}}, \tag{13}$$
where $\eta$ is a predetermined learning rate and $w_{ij}$ denotes the weight connecting neuron $j$ to neuron $i$. The major drawbacks of traditional BPNN are its slow learning process and its tendency to become trapped in local minima.
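As a toy illustration of the gradient-descent rule above (not the paper's network), a single linear neuron trained on a one-dimensional target recovers the slope of the data; the learning rate and the training data are arbitrary choices for this sketch.

```python
# Toy gradient-descent training of one linear neuron o = w*x with the
# squared error E = 0.5*(d - o)^2, so dE/dw = (o - d)*x.
def train_neuron(xs, ds, eta=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, d in zip(xs, ds):
            o = w * x                 # forward pass
            w -= eta * (o - d) * x    # gradient-descent weight update
    return w

w = train_neuron([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])  # data follow d = 2x
print(w)  # converges to ~2.0
```

With a fixed learning rate the step size never adapts to the local error surface, which is exactly the weakness Rprop addresses in the next section.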
Compared with the traditional BPNN algorithm, the Rprop algorithm offers faster convergence and is usually more capable of escaping from local minima. Rprop is a first-order algorithm whose time and memory requirements scale linearly with the number of parameters, and in practice it is easier to implement than BPNN; a hardware implementation of Rprop has been presented in [24]. Briefly, Rprop performs a direct adaptation of the weight step based on local gradient information. The main idea is to reduce the potentially spurious effect of the magnitude of the partial derivative on the weight update by retaining only the sign of the derivative as an indication of the direction in which the error function changes. For each weight we introduce an individual update-value $\Delta_{ij}(t)$, which solely determines the size of the weight update. This adaptive update-value evolves during the learning process based on its local view of the error function $E$, according to the following learning rule [25]:
$$ \Delta_{ij}(t) = \begin{cases} \eta^{+}\,\Delta_{ij}(t-1), & \text{if } \dfrac{\partial E}{\partial w_{ij}}(t-1)\cdot\dfrac{\partial E}{\partial w_{ij}}(t) > 0, \\[2mm] \eta^{-}\,\Delta_{ij}(t-1), & \text{if } \dfrac{\partial E}{\partial w_{ij}}(t-1)\cdot\dfrac{\partial E}{\partial w_{ij}}(t) < 0, \\[2mm] \Delta_{ij}(t-1), & \text{otherwise,} \end{cases} \tag{14}$$
where $0 < \eta^{-} < 1 < \eta^{+}$. The adaptation rule can be described simply: every time the partial derivative of the error function $E$ with respect to the weight $w_{ij}$ changes sign, the update-value $\Delta_{ij}$ is decreased by the factor $\eta^{-}$; if the derivative retains its sign, the update-value is slightly increased in order to accelerate convergence in shallow regions.
Once the update-value for each weight is adapted, the weight update itself follows a very simple rule: if the derivative is positive (increasing error), the weight is decreased by its update-value; if the derivative is negative, the update-value is added to the weight:
$$ \Delta w_{ij}(t) = \begin{cases} -\Delta_{ij}(t), & \text{if } \dfrac{\partial E}{\partial w_{ij}}(t) > 0, \\[2mm] +\Delta_{ij}(t), & \text{if } \dfrac{\partial E}{\partial w_{ij}}(t) < 0, \\[2mm] 0, & \text{otherwise.} \end{cases}$$
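The two Rprop rules above can be sketched for a vector of weights as follows. The constants $\eta^{+} = 1.2$, $\eta^{-} = 0.5$ and the step bounds are the commonly used defaults, not values taken from this paper, and updating the weight even after a sign change is a simplification in the spirit of the Rprop- variant.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    """One Rprop iteration: adapt the per-weight update-values, then move
    each weight against the sign of its current gradient."""
    same = grad * prev_grad
    # Same sign: grow the step to accelerate in shallow regions.
    step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
    # Sign flip: the previous step jumped over a minimum, so shrink.
    step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
    return w - np.sign(grad) * step, step

# Minimize E(w) = ||w||^2 (gradient 2w) from an arbitrary starting point.
w = np.array([4.0, -3.0])
step = np.full(2, 0.5)
prev = np.zeros(2)
for _ in range(100):
    grad = 2.0 * w
    w, step = rprop_step(w, grad, prev, step)
    prev = grad
print(w)  # both components oscillate toward 0
```

Note that only the sign of the gradient enters the weight move; the gradient magnitude influences nothing but the sign products used to adapt the step sizes.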

Proposed Network Architectures for WGDOP Approximation
Researchers have employed the conventional BPNN to estimate GDOP; see, for example, [19, 20, 22]. This reduces the computational complexity of the matrix inversion otherwise required to calculate GDOP. Since the error statistics of different location measurement units are in general not equal, WGDOP serves as an index of location precision in different networks, such as GPS, WSN, and cellular communication systems. In this paper, the original four BPNN types, defined by four different input-output mappings for GDOP calculation [19, 20, 22], are extended to WGDOP with the employment of Rprop. In addition, we propose two new mapping architectures.
To further reduce the computational overhead and improve location performance, the selection of optimal measurement units is necessary. Instead of using all visible satellites, four satellites are usually sufficient for GPS positioning. Accordingly, we take only the four BSs with better geometry from among the seven to estimate the MS location in cellular communication networks. Different structures of four location measurement units were implemented to illustrate the applicability of Rprop for WGDOP prediction in 3D environments. In this case, $\mathbf{H}^{T}\mathbf{W}\mathbf{H}$ is a $4 \times 4$ matrix with four eigenvalues $\lambda_{i}$, $i = 1, 2, 3, 4$.
Therefore, the four eigenvalues of $(\mathbf{H}^{T}\mathbf{W}\mathbf{H})^{-1}$ are $\lambda_{i}^{-1}$, $i = 1, 2, 3, 4$, and WGDOP can be expressed as
$$ \text{WGDOP} = \sqrt{\lambda_{1}^{-1} + \lambda_{2}^{-1} + \lambda_{3}^{-1} + \lambda_{4}^{-1}}. $$
In two-dimensional (2D) environments, the geometry matrix and weight matrix composed of four location measurement units are
$$ \mathbf{H} = \begin{bmatrix} e_{11} & e_{12} & 1 \\ e_{21} & e_{22} & 1 \\ e_{31} & e_{32} & 1 \\ e_{41} & e_{42} & 1 \end{bmatrix}, \qquad \mathbf{W} = \operatorname{diag}\!\left(w_{1}, w_{2}, w_{3}, w_{4}\right), $$
respectively. In this case, $\mathbf{H}^{T}\mathbf{W}\mathbf{H}$ is a $3 \times 3$ matrix and
$$ \text{WGDOP} = \sqrt{\lambda_{1}^{-1} + \lambda_{2}^{-1} + \lambda_{3}^{-1}}. $$
In the following, we present six types of Rprop mapping architectures for WGDOP prediction in 3D and 2D environments, each with a three-layer "input, hidden, output" structure. These six architectures are described by the block diagram shown in Figure 1.

Type 1. (A) 3D: four inputs are mapped to four outputs.
The network has the input-output pairs $(\lambda_{1}, \lambda_{2}, \lambda_{3}, \lambda_{4}) \mapsto (\lambda_{1}^{-1}, \lambda_{2}^{-1}, \lambda_{3}^{-1}, \lambda_{4}^{-1})$. One can see that the mapping from $\lambda_{i}$ to $\lambda_{i}^{-1}$ is nonlinear, which is usually difficult to determine analytically. After the training period, this mapping relationship can be approximated quite well by the neural network. WGDOP is estimated by taking the square root of the sum of the outputs.
(B) 2D: three inputs are mapped to three outputs.
The network has the input-output pair $(\lambda_{1}, \lambda_{2}, \lambda_{3}) \mapsto (\lambda_{1}^{-1}, \lambda_{2}^{-1}, \lambda_{3}^{-1})$. The sum of the outputs gives the square of WGDOP.
Given a number of known input vectors and corresponding output vectors, Rprop is employed to train the network until it produces accurate approximate WGDOP values. After training, the elements of the matrices $\mathbf{H}$ and $\mathbf{W}$ used as input data not only pass through the trained Rprop network quickly but also predict WGDOP accurately. The simulation results show that the proposed Type 5 and Type 6 need fewer hidden neurons and fewer training iterations; they therefore have a much reduced computational load and are more practical. Note that all of the above architectures for obtaining WGDOP are applicable regardless of the number of location measurement units.
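The network inputs can be formed from easily computed scalar functions of $\mathbf{H}^{T}\mathbf{W}\mathbf{H}$ rather than from its eigenvalues directly. The sketch below uses the four invariants commonly cited in the GDOP-approximation literature following [19, 20] (traces of the first three matrix powers and the determinant); the exact input sets of Types 1-6 follow this paper's definitions.

```python
import numpy as np

def mapping_inputs(H, W):
    """Four scalar network inputs formed from M = H^T W H: the traces of
    M, M^2, M^3 and the determinant of M. These equal the sums of the
    first three powers of the eigenvalues and their product, so they
    carry the same information as the eigenvalues without an
    eigendecomposition."""
    M = H.T @ W @ H
    return np.array([np.trace(M),
                     np.trace(M @ M),
                     np.trace(M @ M @ M),
                     np.linalg.det(M)])

# For a diagonal M = diag(2, 2, 4) the inputs are (8, 24, 80, 16).
H = np.diag(np.sqrt([2.0, 2.0, 4.0]))
print(mapping_inputs(H, np.eye(3)))
```

Because these invariants avoid the eigendecomposition, the trained network's forward pass stays cheap relative to inverting $\mathbf{H}^{T}\mathbf{W}\mathbf{H}$ for every candidate subset.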

Proposed BS Selection Criterion and Location Methods Using Next Rprop

The measurements are divided into a number of subsets, and the location estimate rendered by the minimum-WGDOP subset can then be determined. In this paper, we consider only subsets of four measurements, so the seven measurements can be divided into $\binom{7}{4} = 35$ possible subsets. The BS that serves a particular MS is called the serving BS, and it can provide more accurate measurements. To further simplify the process, the proposed BS selection criterion first chooses the serving BS and then selects three measurements from the other six BSs to form a subset. The number of possible subsets is thus reduced from $\binom{7}{4} = 35$ to $\binom{6}{3} = 20$, which greatly reduces the computational load. WGDOP is computed for the 20 possible subsets, and the one with the smallest WGDOP is selected.
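The selection criterion above can be sketched as follows; `wgdop_of` is a placeholder for any WGDOP evaluator (matrix inversion or the trained Rprop network), and BS index 0 denotes the serving BS.

```python
from itertools import combinations

def best_subset(num_bs, wgdop_of, serving=0):
    """Enumerate the subsets containing the serving BS plus three of the
    remaining BSs, and return the subset with the smallest WGDOP."""
    others = [i for i in range(num_bs) if i != serving]
    subsets = [(serving,) + c for c in combinations(others, 3)]
    return min(subsets, key=wgdop_of)

# With 7 BSs, only C(6, 3) = 20 subsets are evaluated instead of C(7, 4) = 35.
print(best_subset(7, wgdop_of=sum))  # toy evaluator: favors small indices
```

Fixing the serving BS removes from consideration the 15 subsets that exclude it, which is acceptable here because the serving BS provides the most accurate measurements.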

Proposed Location Methods Using Next Rprop

Based on the above BS selection criterion, the simplest location method employs the BSs with the minimum WGDOP to estimate the MS location. In general, subsets with smaller WGDOP provide more accurate MS location results. The proposed method calculates the MS locations of the subsets with the first $k$ minimum WGDOP values; the corresponding higher-precision MS locations of these subsets are defined as the candidate points. We then employ another Rprop network to estimate the final MS location: the candidate points are fed into this next Rprop network as inputs, with the MS location as the output. During the training period, this Rprop network is trained to establish the relationship between the candidate points and the true MS location. After training, the candidate points are passed through the trained network to predict the final MS location.
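The input formation of this second stage can be sketched as follows, assuming 2D candidate points: the $k$ candidate estimates, ordered by increasing WGDOP, are flattened into one input vector, and during training the target output is the true MS location.

```python
import numpy as np

def candidate_input(candidates):
    """Flatten the k candidate MS estimates, ordered by increasing WGDOP
    of their subsets, into a single 2k-element input vector for the
    second Rprop network."""
    return np.asarray(candidates, dtype=float).ravel()

# Two candidate points (k = 2) become one four-element input vector.
print(candidate_input([(1.0, 2.0), (3.0, 4.0)]))  # [1. 2. 3. 4.]
```

Ordering the candidates by WGDOP keeps the mapping consistent across training patterns, so the network can learn how much weight each candidate deserves.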

Simulation Results
We consider the problem of mobile location using TOA measurements and attempt to improve the MS location estimate in 2D environments. Computer simulations are performed to investigate the improvement in location accuracy. We consider a center hexagonal cell (where the serving BS resides) with six adjacent hexagonal cells of the same size, as shown in Figure 2. The serving BS, BS$_1$, is located at (0, 0). Each cell has a radius of 5 km, and the MS locations are uniformly distributed in the center cell [26]. The dominant error in wireless location systems is usually due to the NLOS propagation effect, and the NLOS error statistics can vary significantly from one BS to another. In this paper, the NLOS propagation model is the uniformly distributed noise model [27], in which the TOA NLOS errors from the BSs are different and assumed to be uniformly distributed over $(0, U_i)$ for $i = 1, 2, \ldots, 7$, where $U_i$ is an upper bound. The upper bounds are chosen as $U_1 = 200$ m, $U_2 = 400$ m, $U_3 = 350$ m, $U_4 = 700$ m, $U_5 = 300$ m, $U_6 = 500$ m, and $U_7 = 350$ m. The reciprocal of the square root of the upper bound of the NLOS errors is set as the diagonal elements of the weight matrix $\mathbf{W}$.
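One draw of this NLOS error model and the resulting weight matrix can be sketched as follows (the random seed is an arbitrary choice for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility only

# Upper bounds U_i (in metres) of the uniformly distributed TOA NLOS errors.
U = np.array([200.0, 400.0, 350.0, 700.0, 300.0, 500.0, 350.0])

# One realization of the NLOS errors: e_i ~ Uniform(0, U_i) for each BS.
nlos = rng.uniform(0.0, U)

# Weight matrix: diagonal elements are the reciprocal of sqrt(U_i), so the
# BS with the largest NLOS bound (BS 4) receives the smallest weight.
W = np.diag(1.0 / np.sqrt(U))
print(nlos)
print(np.diag(W))
```

Weighting by $1/\sqrt{U_i}$ down-weights the BSs whose NLOS errors can be largest, which is what makes WGDOP a better subset-selection index than plain GDOP here.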
In the simulation, we consider only a single hidden layer, which is the most common choice. Figure 3 shows how the converged WGDOP residual varies as the number of training iterations (epochs) increases; here, the WGDOP residual is defined as the difference between the actual WGDOP and the estimated WGDOP. The residual decreases as the number of epochs increases: at the beginning of the training period the error decreases rapidly, and when the number of epochs exceeds 2000 this reduction slows down. This suggests that Rprop offers much faster convergence than the traditional BPNN [22].
The number of hidden neurons is also critical. Too few hidden neurons may lead to larger errors, whereas too many can slow down convergence. Some general rules for determining the number of hidden neurons, for $n$ input neurons and $m$ output neurons, are: (i) half of the sum of the numbers of input and output neurons, $(n + m)/2$; (ii) the same as the number of input neurons, $n$; (iii) two times the number of input neurons plus one, $2n + 1$; (iv) three times the number of input neurons plus one, $3n + 1$. Figure 4 shows the WGDOP residual for various numbers of hidden neurons. A hidden layer with $2n + 1$ neurons provides reasonably accurate results.
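The four rules of thumb can be written out directly. For a hypothetical four-input architecture ($n = 4$, $m = 1$), rule (iii) gives the $2n + 1 = 9$ hidden neurons adopted above.

```python
def hidden_layer_sizes(n_inputs, n_outputs):
    """The four common rules of thumb for sizing a single hidden layer."""
    return {
        "half_sum":  (n_inputs + n_outputs) / 2,  # rule (i)
        "same":      n_inputs,                    # rule (ii)
        "2n_plus_1": 2 * n_inputs + 1,            # rule (iii)
        "3n_plus_1": 3 * n_inputs + 1,            # rule (iv)
    }

print(hidden_layer_sizes(4, 1))
```

These are heuristics, not guarantees; the figures referenced above show that the $2n + 1$ choice balances residual accuracy against convergence speed for this problem.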
Based on the optimized neural network structure described above, the Rprop algorithm can be applied to predict the WGDOP value after the training period. Figures 5 and 6 show the mean-square error (MSE) of WGDOP over time when using BPNN and Rprop, respectively; Rprop offers a much faster convergence time. The Rprop algorithm was used to train the neural networks and was shown to converge faster than BPNN. Rprop-based Type 5 with 1000 epochs gives better performance than BPNN-based Types 1, 3, and 5 with 2000 epochs. We also found that with only 500 input-output patterns as training data, the proposed Rprop-based Type 6 still works better than BPNN-based Types 2, 4, and 6 with 2000 epochs. These results are very promising and confirm the quality of the Rprop algorithm with respect to both convergence time and the number of epochs.
We can observe that the Rprop-based method outperforms the BPNN-based one: Rprop yields more accurate estimates of the average WGDOP residual than BPNN with a learning rate of 0.01, demonstrating its superior WGDOP precision. We therefore apply Rprop in the algorithm proposed in this paper. Figure 9 shows the CDFs of the WGDOP residual for the six types of mapping architectures with $2n + 1$ hidden neurons after 2000 training iterations, where $n$ is the number of input neurons. WGDOP is equal to the square root of the sum of $\lambda_{i}^{-1}$, $i = 1, 2, 3$, which are the outputs of the three-output architectures. The one-output architectures approximate WGDOP with much better accuracy than the three-output architectures. The Type 1 mapping architecture, which predicts the eigenvalue inverses and then obtains the WGDOP value, has poor accuracy. The results show that the proposed Type 5 yields the best performance among the three-output architectures, and the proposed Type 6 provides much better accuracy than all the other one-output architectures. The proposed Type 6 with $n$ hidden neurons and 500 epochs renders performance superior to the other architectures with $2n + 1$ hidden neurons, and is sufficient to select the first two higher-precision candidate points.

Conclusion
This paper presents novel Rprop-based architectures for both WGDOP approximation and location estimation. The proposed architectures can be applied to GPS, WSN, and cellular communication systems. Since the WGDOP index depends on both a priori NLOS information and BS geometry, the reciprocal of the square root of an upper bound on the NLOS errors is set as the weight coefficient. The traditional BPNN architectures for approximating GDOP are extended to WGDOP based on Rprop, and the results show that the proposed architectures for predicting WGDOP yield much improved accuracy and significantly reduced computation. We also propose to combine the serving BS with three other measurements, reducing the number of possible subsets while achieving comparable performance; the proposed Rprop-based architectures perform as robustly as the matrix inversion method. To further improve location accuracy, we employ another Rprop network that takes the higher-precision MS locations of the first several minimum-WGDOP subsets as inputs to determine the final MS location estimate. Although the matrix inversion method using all seven BSs gives good MS location estimation, the simulation results confirm that the proposed methods, employing at most two higher-precision MS locations, perform better than this traditional method. In essence, the proposed methods effectively improve MS location estimation and are applicable to all positioning techniques.

Figure 1: The input-output relationships for six types of mapping using Rprop.

Figure 3: WGDOP residual reduction according to the number of epochs.
Figure 4: WGDOP residual for various numbers of hidden neurons.

Figure 5: Behavior of the average learning time for BPNN and Rprop for Types 1, 3, and 5.
Figure 6: Behavior of the average learning time for BPNN and Rprop for Types 2, 4, and 6.

Figure 12: CDFs of the location error of using the candidate points based on Rprop.
5.1. Proposed BS Selection Criterion. The proposed BS selection criterion with minimum WGDOP can be modified for application in cellular communication systems. Incorporating not only the BS geometry but also the NLOS error statistics, we use the reciprocal of the square root of the upper bound of the NLOS errors to determine the weight matrix.
The proposed Rprop algorithm with its six types can yield smaller MSE in a very short time. From Figures 7 and 8, one can see the cumulative distribution function (CDF) curves for the six types of mapping architectures based on BPNN and Rprop. The number of epochs required by Rprop is smaller than that of the traditional BPNN. Even