Journal of Applied Mathematics, vol. 2019, Article ID 8450905. doi:10.1155/2019/8450905

Research Article

Alternating Projections Filtering Algorithm to Track Moving Objects

Youssef Qranfal, Department of Applied Mathematics, Wentworth Institute of Technology, Boston, MA 02115, USA

Academic Editor: Oluwole D. Makinde

Received 8 November 2018; Revised 24 March 2019; Accepted 23 April 2019; Published 2019

Copyright © 2019 Youssef Qranfal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An interest is often present in knowing evolving variables that are not directly observable; this is the case in aerospace, engineering control, medical imaging, or data assimilation. What is at hand, though, are time-varying measured data, a model connecting them to the variables of interest, and a model of how the variables evolve over time. However, both models are only approximations, and the observed data are tainted with noise. This is an ill-posed inverse problem. Methods such as the Kalman filter (KF) have been devised to extract the time-varying quantities of interest. Applied to this inverse problem, however, these methods are computationally slow, since they require large matrix-matrix multiplications and even matrix inversions. Furthermore, these methods are not well suited to imposing constraints. This article introduces a new iterative filtering algorithm based on alternating projections. Experiments were run with simulated moving projectiles, and the results were compared with those obtained using KF. The new optimization algorithm proves to be slightly more accurate than KF, but, more to the point, it is much faster in terms of CPU time.

1. Introduction

Tracking moving objects or a substance is a common optimization and control problem in various fields such as aerospace [2, 3], medical imaging, data assimilation, and video surveillance. Methods, such as the Kalman filter, have been devised to solve this problem. However, for large systems, these methods come at the price of slow computation, since they involve matrix-matrix multiplications and even matrix inversions. Remedies, such as the Expectation Maximization Filter (EMF) algorithm, have been proposed, including when nonnegative results are desired. A new remedy is introduced here.

The novel filtering algorithm presented in this article aims, as does KF, to find an estimate x̂_k, k = 1, …, K, of the unknown x_k of the problem modeled by the two linear state-space equations
(1) x_k = A_k x_{k−1} + μ_k,   z_k = H_k x_k + ν_k.
The error vector μ_k has E(μ_k) = 0 and E(μ_k μ_k^⊤) = Q_k; the latter is the covariance of the error in modeling the transition from x_{k−1} to x_k. E(ν_k) = 0 and E(ν_k ν_k^⊤) = R_k are the mean and covariance, respectively, of the noise vector ν_k. The assumption is that the error and noise are white, so that
(2) Q_k = σ_k² I,   R_k = diag(γ_k),
where I is the identity matrix with order known from context, and diag(a) denotes the square matrix that has the entries a_i > 0 on its main diagonal and 0 elsewhere. Both matrices Q_k and R_k are assumed to be diagonal. The new algorithm is then numerically tested on a reconstruction problem arising in object tracking.

Tracking a moving object (or moving objects) with large unknown variables and few given data, as in aerospace, medical imaging, or data assimilation, is an ill-posed inverse problem. This ill-posedness is further inflated by physical degradation of the acquired data causing noise. Filtering algorithms, as the one presented in this article, are therefore very suitable tools.

The remainder of the paper is organized as follows. Section 2 describes the stochastic modeling of the state evolution and projection in space. The state evolution and observation linear models are the basis for applying the new method. Then Section 3 reviews KF, and Section 4 develops the new filtering algorithm based on alternating projections. The application of the new filter to tracking moving objects is covered in Section 5. Section 6 details the simulations and numerical experiments of tracking moving objects using both KF and the new filter; these corroborate the effectiveness of this new algorithm in terms of convergence and CPU time. Section 7 closes the paper by summarizing the findings.

2. Problem Formulation

2.1. A Stochastic Model of Moving Objects

A new filtering algorithm is introduced. Without loss of generality, it is illustrated with an example of tracking moving objects. It could be applied to any instance where KF has been or is used, whenever the problem is modeled by the two linear state-space equations (3) and (9). For instance, the number of photons in [4, 6] is the unknown x_k of both these equations.

To illustrate the new algorithm, tracking moving objects is studied here, where detectors, such as radars or cameras, measure their positions over time. The goal is to estimate their positions, velocities, and/or accelerations, all at once, given a few noisy measurements of their positions. The Kalman filter can average out the noise of the measured data and yields somewhat smooth tracks of these moving objects in space over time [11, 17]. Filtering is a widely used method in engineering, where a filtering algorithm, such as KF, reduces the noise in signals while keeping the useful information. As in the case of KF, both a process/evolution model and a measurement equation are used to estimate the unknowns. Note that the unknown vectors x_k of both state-space equations (3) and (9) in this illustrating example include the positions, velocities, and accelerations of the moving objects.

Velocity and acceleration can be easily derived from the position. One could be interested in finding only the positions and could easily derive the velocities and accelerations therefrom; however, having velocities and accelerations helps to build the state evolution of (3). In addition, having more than the positions in the unknown x_k adds more components, thereby increasing the size of the problem used to illustrate the algorithm.

Detectors are exploited to register the positions of the moving objects. The problem is then modeled as follows. Let t_k, k = 1, …, K, be an index of a sequence of acquisition times, J the total number of unknowns, and I the total number of detectors during the time t_k. The variables x_k ∈ R^J and z_k ∈ R^I denote the state vector containing the unknowns of the moving objects and the measured data during the time t_k = (Δt)k, respectively. The unknowns include the 2D or 3D components of positions, velocities, and/or accelerations. The jth entry of the vector x_k is a component of either the position, velocity, or acceleration during the time t_k. The observations z_1, z_2, …, z_K are random vectors. The ith entry of the vector z_k measures the component of position of a moving object registered at the ith detector during the time t_k. Furthermore, each observation z_k depends on x_k only. The sequence x_1, x_2, …, x_K satisfies the Markov property with unknown time-varying transition matrix A_k ∈ R^{J×J}. That is to say, the present state x_k depends only on the previous one, x_{k−1}, according to the following linear equation:
(3) x_k = A_k x_{k−1} + μ_k,
where μ_k is the random error vector in modeling the transition from x_{k−1} to x_k, with E(μ_k) = 0 and covariance matrix E(μ_k μ_k^⊤) = Q_k.

2.2. Example

To make sense of this linear evolution model, consider the following basic 2D example with a constant time increment (Δt)_k = Δt. Assume that the state of a moving object is the vector
(4) x_k = [s_{k,x}, s_{k,y}, v_{k,x}, v_{k,y}, a_{k,x}, a_{k,y}]^⊤,
where s_{k,x} and s_{k,y} are the horizontal and vertical components of position, respectively, with corresponding velocity components v_{k,x} and v_{k,y} and acceleration components a_{k,x} and a_{k,y}. That is, s_k = [s_{k,x}, s_{k,y}]^⊤, v_k = [v_{k,x}, v_{k,y}]^⊤, and a_k = [a_{k,x}, a_{k,y}]^⊤. The basic kinematic equations with constant acceleration are
(5) s_k = s_{k−1} + v_{k−1}Δt + 0.5 a_{k−1}Δt²,
(6) v_k = v_{k−1} + a_{k−1}Δt,
(7) a_k = a_{k−1}.
Equations (5)-(7) can be written in matrix form, x_k = A_k x_{k−1}, as
(8)
⎡s_{k,x}⎤   ⎡1 0 Δt 0  0.5Δt² 0     ⎤ ⎡s_{k−1,x}⎤
⎢s_{k,y}⎥   ⎢0 1 0  Δt 0      0.5Δt²⎥ ⎢s_{k−1,y}⎥
⎢v_{k,x}⎥ = ⎢0 0 1  0  Δt     0     ⎥ ⎢v_{k−1,x}⎥
⎢v_{k,y}⎥   ⎢0 0 0  1  0      Δt    ⎥ ⎢v_{k−1,y}⎥
⎢a_{k,x}⎥   ⎢0 0 0  0  1      0     ⎥ ⎢a_{k−1,x}⎥
⎣a_{k,y}⎦   ⎣0 0 0  0  0      1     ⎦ ⎣a_{k−1,y}⎦
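The evolution matrix of (8) can be assembled directly from the kinematic equations (5)-(7). The following sketch (the time step and state values are illustrative, not taken from the paper's experiments) builds A and advances the state one step:

```python
import numpy as np

def constant_acceleration_transition(dt: float) -> np.ndarray:
    """Build the 6x6 evolution matrix A_k of equation (8);
    state ordering is [s_x, s_y, v_x, v_y, a_x, a_y]."""
    A = np.eye(6)
    A[0, 2] = A[1, 3] = dt            # s += v*dt
    A[0, 4] = A[1, 5] = 0.5 * dt**2   # s += 0.5*a*dt^2
    A[2, 4] = A[3, 5] = dt            # v += a*dt
    return A

dt = 0.1
A = constant_acceleration_transition(dt)
x0 = np.array([0.0, 0.0, 1.0, 2.0, 0.5, -0.3])  # arbitrary demo state
x1 = A @ x0  # x1[0] equals s_x + v_x*dt + 0.5*a_x*dt^2, per (5)
```

This is the noiseless version of (3); the error vector μ_k would be added to `x1` in a simulation.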

2.3. Assumptions

Let (h_k)_{ij} be the quantity that links the unknown in position j during the acquisition time t_k to detector i. The time-varying matrix defined by H_k = [(h_k)_{ij}] is called the projection or observation matrix. This matrix H_k accounts for the relationship between the measurement vector z_k and the state vector x_k. The observation/measurement and state vectors are related by the linear equation
(9) z_k = H_k x_k + ν_k,
where E(ν_k) = 0 and E(ν_k ν_k^⊤) = R_k are the mean and covariance, respectively, of the noise vector ν_k.

For instance, consider Example 2.2, where the state vector x_k is of size 6. Two noisy measurements, z_k = [z_{k,x}, z_{k,y}]^⊤, namely, the horizontal and vertical components of the position, are known. Then the 2×6 matrix H_k is
(10) H_k = ⎡1 0 0 0 0 0⎤
           ⎣0 1 0 0 0 0⎦.
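A measurement of (9) with the observation matrix (10) can be simulated as follows; the state values and noise level here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observation matrix of equation (10): only the two position components are measured.
H = np.zeros((2, 6))
H[0, 0] = H[1, 1] = 1.0

x = np.array([3.0, 4.0, 1.0, 2.0, 0.5, -0.3])   # illustrative state vector x_k
noise_std = 0.1                                  # assumed noise level (gamma_k)
z = H @ x + noise_std * rng.standard_normal(2)   # z_k = H_k x_k + nu_k, equation (9)
```

The velocity and acceleration components of `x` never reach `z`; they are recoverable only through the evolution model, which is why the filter must combine both equations.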

Note that this is just an illustrating example with very small sizes; x_k, H_k, and A_k have sizes 6×1, 2×6, and 6×6, respectively. In practice, the sizes are in the hundreds, thousands, or even more.

The optimal x^k is to be found given the data zk, the evolution matrix Ak, and the observation system matrix Hk at each time step k. Equations (3) and (9) are the state-space form of a particular case of a more general filtering problem [18, 19]. The actual model is a linear dynamic system for which the analytic filtering solution is given by the KF .

The covariance matrices Qk and Rk being diagonal and the error as well as the noise being white are three assumptions adopted in this paper. In case one assumption or more are not satisfied, a prewhitening process  can be used to bring a general setting to the one of this paper.

3. Filtering and Reconstruction

The index k is dropped whenever convenient. For instance, at each time instant k, the state vector x_k, the data vector z_k, and the matrix H_k are referred to as just x, z, and H, respectively. The moving objects problem amounts to finding x ∈ R^J, a solution of the linear equation
(11) z = Hx + ν,
where z ∈ R^I and H ∈ R^{I×J} are the given observation data vector and the observation system matrix, respectively. The vector ν ∈ R^I represents the additive noise in recording the data z. A new method, based on alternating projections onto convex sets and compared to KF, is introduced; but, first, a review of the KF follows.

3.1. Kalman Filter

The KF update [13, 20] is the following. Given an unbiased estimate x̂_{k−1} of the state vector x_{k−1}, the prior estimate of x_k based solely on the activity dynamics is
(12) y_k = A_k x̂_{k−1}.
The estimate x̂_k will have the form
(13) P_k = A_k P_{k−1} A_k^⊤ + Q_k,
(14) K_k = P_k H_k^⊤ (H_k P_k H_k^⊤ + R_k)^{−1},
(15) x̂_k = y_k + K_k (z_k − H_k y_k).
P_k and P_{k−1} in (13) are the covariances of the estimated activity x̂ at times k and k−1, respectively. As mentioned before, the sizes of the matrices involved in (13) and (14) are of the order of hundreds, thousands, or even more. Therefore, a major drawback of KF is the matrix-matrix multiplications involved in (13) and (14) and the matrix inversion involved in (14). Attempts have been made to rectify these two shortcomings; see, for instance, [18, 19] for more details. The goal of the new filter is manifold, but without imposing nonnegativity as in [13, 14]. It is intended to find a substitute algorithm for KF that filters out errors from modeling the dynamical system as well as the noise from the data. It will ensure temporal regularization and will be an optimal recursive estimate. In addition, it does not require the storage of past measurement data, use matrix-matrix multiplications, or necessitate any matrix inversion. Furthermore, it does not need to calculate, update, or store any covariance matrix. The aim is then to keep the properties of KF while improving on it by requiring a few more. Each recursive step in the new approach is an iterative operation that involves only matrix-vector multiplication. This should then handle problems with a huge number of variables, such as is the case in aerospace.
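One KF recursion (12)-(15) can be written compactly as below. The posterior covariance update (I − K_k H_k)P_k is the standard textbook form and is an assumption here, since the paper lists only (13)-(15); the demo sizes and values are illustrative:

```python
import numpy as np

def kalman_step(x_prev, P_prev, z, A, H, Q, R):
    """One KF recursion following (12)-(15)."""
    y = A @ x_prev                                  # (12) prediction
    P = A @ P_prev @ A.T + Q                        # (13) prior covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # (14) Kalman gain
    x_hat = y + K @ (z - H @ y)                     # (15) corrected estimate
    P_post = (np.eye(len(x_prev)) - K @ H) @ P      # standard posterior covariance
    return x_hat, P_post

# Tiny demo: position-velocity state, position measured.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x_hat, P_post = kalman_step(np.array([0.0, 1.0]), np.eye(2),
                            np.array([0.1]), A, H, Q, R)
```

Note the matrix-matrix products in (13) and the inverse in (14): for J in the thousands these dominate the cost, which is exactly what the APF below avoids.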

4. Alternating Projections Minimization

The approach here to solving both linear state-space equations (3) and (9) is to first solve, in this section, a regularization problem (16) that involves a regularization parameter α. Once the scheme for solving problem (16) is developed, both (3) and (9) are put together, at each time step k, into one regularization problem (30), similar to problem (16), using a regularization parameter α_k updated at each time step k. The variable y of problem (16) is then replaced by the state-transition vector A_k x_{k−1} of (3) and (12). This process of solving (3) and (9) with the scheme developed in this section is carried out in Section 5. Thus, a regularization problem is introduced and solved next.

The authors in [13, 14, 21] studied a regularization problem. This paper considers the following similar problem, for 0 ≤ α ≤ 1, using the Euclidean distance instead:
(16) min F(x) = α‖z − Hx‖² + (1 − α)‖y − x‖².
The new filtering algorithm takes care of both the underdetermined and overdetermined cases. Moreover, an iterative method, within the framework of alternating projections, is developed.

First, two convex sets of IJ-dimensional vectors are defined, similar to what is done in , as
(17) R = {r = (r_ij) ∈ R^{IJ} : for all i, z_i = Σ_{j=1}^J r_ij},
     Q = {q = q(x) = (q_ij) : q_ij = H_ij x_j for some x}.
The sets R and Q are nonempty, closed, and convex in R^{IJ}. Recall the function F in (16); d(r, q) is also defined as
(18) d(r, q) = F(x) = α‖q − r‖² + (1 − α)‖y − x‖².
Expanding the rightmost side of expression (18) gives
(19) d(r, q) = α Σ_{i=1}^I Σ_{j=1}^J (H_ij x_j − r_ij)² + (1 − α) Σ_{j=1}^J (y_j − x_j)².
The alternating projections minimization method works as follows. First start with an initial guess x̂^0. Then, having obtained the lth iterate x^l, set q^l = q(x^l). Afterwards perform the following two steps:

Step 1.

Minimize d(r,ql) with respect to r to get rl+1, which is in the convex set R.

Step 2.

Minimize d(rl+1,q(x)) with respect to x to get xl+1. The corresponding q(xl+1) is in the convex set Q.

The sequence {x^l} converges to a limit x_∞, which is the estimate x̂ we are looking for. This iterative scheme is applied to the problem at hand. Performing Step 1 using (19),
(20) ∂d/∂r_ij = 0 ⟹ r_ij = H_ij x_j + c_i, for some c ∈ R^I.
However, r ∈ R, so that Σ_{j=1}^J r_ij = z_i; therefore z_i = (Hx)_i + J c_i, where (Hx)_i = Σ_{j=1}^J H_ij x_j, which leads to
(21) c_i = (1/J)(z_i − (Hx)_i),   r_ij = H_ij x_j + (1/J)(z_i − (Hx)_i).
Starting with a guess x̂^0, the update expression is
(22) r_ij^{l+1} = H_ij x_j^l + (1/J)(z_i − (Hx^l)_i).
Now, Step 2 is performed using (19):
(23) ∂d/∂x_j = 0 ⟹ α Σ_{i=1}^I H_ij (r_ij − H_ij x_j) + (1 − α)(y_j − x_j) = 0.
Therefore,
(24) α Σ_{i=1}^I H_ij r_ij − α Σ_{i=1}^I H_ij² x_j + (1 − α) y_j − (1 − α) x_j = 0,
so that
(25) α Σ_{i=1}^I H_ij r_ij + (1 − α) y_j = x_j (α Σ_{i=1}^I H_ij² + (1 − α)).
This leads to the update formula
(26) x_j^{l+1} = [α Σ_{i=1}^I H_ij r_ij^{l+1} + (1 − α) y_j] / [α Σ_{i=1}^I H_ij² + (1 − α)].
Both update formulas (22) and (26) can be combined into one:
(27) x_j^{l+1} = [α(Σ_{i=1}^I H_ij² x_j^l + N Σ_{i=1}^I H_ij(z_i − (Hx^l)_i)) + (1 − α) y_j] / [α Σ_{i=1}^I H_ij² + (1 − α)],
where N = 1/J. Starting with a guess x̂^0, the sequence {x^l} converges to a limit x_∞, which is the desired estimate x̂. Let us apply the above update formula (27) to the problem of tracking moving objects.
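The combined update (27) uses only matrix-vector products and the column sums Σ_i H_ij². A minimal sketch (the fixed iteration count is an arbitrary choice; a convergence tolerance could be used instead):

```python
import numpy as np

def alternating_projections_update(z, H, y, alpha, n_iters=300):
    """Iterate the combined update (27): each pass blends the prediction y
    with a data-consistency correction; no matrix-matrix products needed."""
    J = H.shape[1]
    N = 1.0 / J
    col = (H**2).sum(axis=0)              # sum_i H_ij^2, one value per column j
    denom = alpha * col + (1.0 - alpha)   # denominator of (27)
    x = y.copy()                          # start from the predicted state
    for _ in range(n_iters):
        resid = z - H @ x                 # z_i - (H x^l)_i
        x = (alpha * (col * x + N * (H.T @ resid)) + (1.0 - alpha) * y) / denom
    return x
```

With α = 0 the update leaves the prediction y untouched; with α = 1 it reduces to the Landweber-type iteration discussed in Section 5.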

5. Application to Moving Objects

The goal here is to solve both linear state-space equations (3) and (9) by using the findings of Section 4. It goes as follows. The error and noise covariance matrices in the tracking problem are modeled as diagonal matrices with nonnegative entries; refer to Section 2.1. That is,
(28) Q_k = σ_k² I,
(29) R_k = γ_k² I,
where I is the identity matrix, its dimension known from context, and σ_k ≥ 1. In case 0 < σ_k < 1, it suffices, for instance, to multiply the matrix Q_k by σ_k⁴ and use 1/σ_k² instead of σ_k². In the case where R_k = diag(a) in expression (29), R_k^{−1/2} = diag(1/√a_i) is used instead of (1/γ_k)I (γ_k > 0).

Recall that the aim is to find an estimate x̂_k, k = 1, …, K, of the unknown x_k of the problem modeled by the two linear state-space equations (3) and (9).

The functional to be minimized at each recursion step k is
(30) F(x_k) = ‖z_k − H_k x_k‖²_{R_k^{−1}} + ‖y_k − x_k‖²_{Q_k^{−1}}
 = ‖(1/γ_k) z_k − (1/γ_k) H_k x_k‖² + ‖(1/σ_k) y_k − (1/σ_k) x_k‖²
 = ‖(1/γ_k) z_k − (1/γ_k) H_k x_k‖² + (1/σ_k²)‖y_k − x_k‖²
 = ((σ_k² − 1)/σ_k²) ‖(σ_k/(γ_k√(σ_k² − 1))) z_k − (σ_k/(γ_k√(σ_k² − 1))) H_k x_k‖² + (1/σ_k²)‖y_k − x_k‖²,
so that
F(x_k) = α_k ‖(σ_k/(γ_k√(σ_k² − 1))) z_k − (σ_k/(γ_k√(σ_k² − 1))) H_k x_k‖² + (1 − α_k)‖y_k − x_k‖²,
where α_k = (σ_k² − 1)/σ_k². Note that 0 ≤ α_k ≤ 1 since σ_k ≥ 1, which makes F(x_k) the same functional to be minimized as in (16).

It now suffices to use the change of variables given in the first step of the Alternating Projections Filter (APF) Algorithm 5.1 that follows, and to recall that the predicted state vector y_k is A_k x̂_{k−1}. The iterative method based on alternating projections, given by the update formulas (22) and (26), or the combined formula (27), is used at each time step k. The cluster point x_∞ will be the state estimate x̂_k that solves (3) and (9). Thus the following APF algorithm is obtained.

5.1. Alternating Projections Filter Algorithm

To solve the linear state-space problem given by (3) and (9), start with an initial guess x̂_0 and set N = 1/J. For k = 1, …, K, execute the following steps.

Assume the recursive steps up to time k−1 are done; carry out the change of variables
(31) α = (σ_k² − 1)/σ_k²,   P = (σ_k/(γ_k√(σ_k² − 1))) H_k = (1/(γ_k√α)) H_k,   b = (σ_k/(γ_k√(σ_k² − 1))) z_k = (1/(γ_k√α)) z_k.

Set c = y_k = A_k x̂_{k−1}.

Compute r_ij^{l+1} = P_ij s_j^l + N(b_i − (P s^l)_i).

Compute
(32) s_j^{l+1} = [α Σ_{i=1}^I P_ij r_ij^{l+1} + (1 − α) c_j] / [α Σ_{i=1}^I P_ij² + (1 − α)],   l = 0, 1, …

The update formula for the next estimate is x̂_k = s_∞, where s_∞ is the cluster point of the sequence (s^l)_{l∈N}.

Observe that the two update steps for r and s could be combined into one, as was done before in (27):
(33) s_j^{l+1} = [α(Σ_{i=1}^I P_ij² s_j^l + N Σ_{i=1}^I P_ij(b_i − (P s^l)_i)) + (1 − α) c_j] / [α Σ_{i=1}^I P_ij² + (1 − α)]
 = [α(s_j^l Σ_{i=1}^I P_ij² + N Σ_{i=1}^I P_ij(b_i − (P s^l)_i)) + (1 − α) c_j] / [α Σ_{i=1}^I P_ij² + (1 − α)],   l = 0, 1, …
Note how the iterate s_j^{l+1} in (33) is formed as a ratio of two convex combinations. The numerator is a convex combination of the predicted (y_k)_j, associated with the same coefficient 1 − α as in problem (16) and relying only on the evolution model, and of the calculated APF iterate, associated with the same coefficient α as in problem (16) and relying only on the observation model. The denominator is also a convex combination; it is a weighting factor for the updated vector estimate. The APF does not involve any matrix-matrix multiplication or any matrix inversion. It is iterative, involves only matrix-vector multiplication, and does not need to calculate, update, or store any covariance matrix.
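The whole APF recursion can be sketched as below. The constant σ and γ, the inner iteration count, and the demo sizes are illustrative assumptions, not values from the paper's experiments:

```python
import numpy as np

def apf_filter(zs, A, H, sigma, gamma, x0, n_inner=100):
    """Sketch of APF Algorithm 5.1 with constant sigma > 1 and gamma > 0:
    at each time step the prediction A @ x_hat is refined by the combined
    update (33) in the rescaled variables (31)."""
    alpha = (sigma**2 - 1.0) / sigma**2         # temporal regularization weight
    scale = 1.0 / (gamma * np.sqrt(alpha))      # change of variables (31)
    P = scale * H
    J = H.shape[1]
    N = 1.0 / J
    col = (P**2).sum(axis=0)
    denom = alpha * col + (1.0 - alpha)
    x_hat = np.asarray(x0, dtype=float).copy()
    estimates = []
    for z in zs:
        b = scale * np.asarray(z, dtype=float)  # rescaled data
        c = A @ x_hat                           # predicted state y_k
        s = c.copy()
        for _ in range(n_inner):                # combined update (33)
            resid = b - P @ s
            s = (alpha * (col * s + N * (P.T @ resid)) + (1.0 - alpha) * c) / denom
        x_hat = s
        estimates.append(x_hat)
    return np.array(estimates)
```

Per recursion the cost is a few matrix-vector products per inner iteration, with no covariance update and no inversion, in contrast with the KF step (13)-(14).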

The temporal regularization parameter α is well defined and takes values between 0 and 1 as σ_k varies between 1 and ∞. If α = 0 at each time step k, so that σ_k = 1, then x̂_k is A_k x̂_{k−1} in (33). That is, the observations are discarded completely, to the extent that only the evolution model is relied on. This defeats the purpose of using the full power of the detectors. When α = 1 at each time step k, the predicted y_k is not needed in the update (33). Indeed, using (33),
(34) s_j^{l+1} = [α(s_j^l Σ_{i=1}^I P_ij² + N Σ_{i=1}^I P_ij(b_i − (P s^l)_i)) + (1 − α) c_j] / [α Σ_{i=1}^I P_ij² + (1 − α)]
 = [s_j^l Σ_{i=1}^I P_ij² + N Σ_{i=1}^I P_ij(b_i − (P s^l)_i)] / [Σ_{i=1}^I P_ij²],
so that
s_j^{l+1} = s_j^l + [Σ_{i=1}^I P_ij(b_i − (P s^l)_i)] / [J Σ_{i=1}^I P_ij²].
Thus, with the above update (34), the Landweber iteration is retrieved in the static case. Surely, choosing α = 1 means that (σ_k² − 1)/σ_k² = 1, or simply σ_k = ∞. That is, the covariance matrix in the transition equation (3) is infinitely large, which implies there is no confidence at all in the evolution model. In other words, the evolutionary state of the variable is discarded. Only the observations z_k are meaningful in finding x̂_k = x̂, which is then a stationary state, as it should be.
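The α = 1 limit (34) is a column-scaled Landweber iteration. A small self-contained check on a consistent system (the matrix and right-hand side are illustrative) converges to the exact solution:

```python
import numpy as np

# Landweber-type iteration (34): s <- s + H^T(b - H s) / (J * sum_i H_ij^2),
# applied columnwise; here on a tiny consistent system with known solution.
H = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
b = H @ np.array([1.0, -1.0])       # consistent right-hand side
J = H.shape[1]
col = (H**2).sum(axis=0)            # per-column scaling sum_i H_ij^2
s = np.zeros(J)
for _ in range(2000):
    s = s + (H.T @ (b - H @ s)) / (J * col)
```

Since the system is consistent, the fixed point satisfies H^⊤(b − Hs) = 0, i.e., the least-squares normal equations.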

6. Numerical Experiment

The simulated data used to support the findings of this study are described as follows. The APF Algorithm 5.1, presented here in the linear case, is validated on moving objects by modifying the setting of Example 2.2 to obtain a fairly large problem. To achieve this goal, the APF algorithm is applied to an example in which there are J − 1 different moving objects starting at the same position with the same velocity, which varies over time.

The experiment is done with different large values of J, the number of unknowns in the state vector x_k, to compare results with KF for accuracy and efficiency. Indeed, the goals of this comparison are twofold. First, it verifies that the APF algorithm gives results similar to those of KF; this serves to compare the quality of the outputs of both algorithms for accuracy. Second, it compares the CPU running times of both APF and KF for efficiency. The KF algorithm is used here in its conventional/classical formulation. Furthermore, the matrices involved in this illustrative example are quite sparse. Both algorithms, APF and KF, as used here, do not exploit this sparsity to decrease the running times. Thus, this approach shows and compares the capacity of both algorithms in a more general case, including when the matrices might not be sparse.

The time step is constant, t_k = (Δt)k with (Δt)_k = Δt, and the test is run with different values of Δt. The I simulated camera detectors measure only the positions, with some error, but not the velocity, so that I = J − 1. The evolution matrices are taken as A_k = A of size J×J. This matrix is
(35) A =
⎡ 1  0 ⋯ 0  Δt ⎤
⎢ 0  1 ⋯ 0  Δt ⎥
⎢ ⋮      ⋱  ⋮  ⎥
⎢ 0  0 ⋯ 1  Δt ⎥
⎣−1  0 ⋯ 0  1  ⎦.
The value −1 in the (J, 1) position of the matrix A is arbitrary; its sole purpose is to make sure that the velocity changes over time. The unknown J×1 state vector is represented as
(36) x_k = [s_{k,1}, …, s_{k,J−1}, v_k]^⊤,
and the projection/measurement matrices are H_k = H, where H is of size I×J:
(37) H =
⎡1 0 ⋯ 0 0⎤
⎢0 1 ⋯ 0 0⎥
⎢⋮     ⋱ ⋮⎥
⎣0 0 ⋯ 1 0⎦.
The experiment is run with an initial state vector of size J where (x_0)_1 = ⋯ = (x_0)_{J−1} = 3 m and (x_0)_J = v_0 = 5 m/s; both values 3 and 5 are arbitrary, so that
(38) x_0 = [3, 3, …, 3, 5]^⊤.
The measured data are simulated with the Gaussian noise process in (39). It accounts for a constant and arbitrary acceleration, accel = 0.6 m/s², added to each position measurement, so that
(39) ProcessNoise = accel · (Δt²/2) · randn1,
where randn1 is a J×1 random vector drawn from the standard normal distribution. The current velocity is updated from the previous one, v_{k−1} = (x_{k−1})_J, as
(40) (x_k)_J = v_k = (x_{k−1})_J + accel · Δt.
The current x_k is then updated according to
(41) x_k = A x_{k−1} + ProcessNoise.
The state vector x_k in (41) was updated both with ProcessNoise as defined in (39) and with ProcessNoise equal to zero; no significant change was noticed. The data or measurements z_k at time t_k are simulated from the state vector, without its last component, which contains the velocity, with added Gaussian noise according to
(42) z_k = H x_k + DataNoise · randn2,
where DataNoise is some arbitrary value and randn2 is an I×1 random vector drawn from the standard normal distribution. As for KF, the covariance matrices Q_k are set equal to Q, with both Q = σ²I and Q = accel²σ²I tried; a slight improvement for KF in terms of accuracy is noticed with the latter. Recall that we set α = (σ² − 1)/σ²; choosing this regularization parameter depends on how much trust is put in each of the two state-space equations. The example presented in Table 1 has α = 0.7. The results presented in Figure 1 over 100 s are obtained using the parameters summarized in Table 1.

Table 1. Experiment parameters.

parameter                     value
x_0                           [3 ⋯ 3 5]^⊤
(Δt)_k                        0.05
accel_k                       0.6
DataNoise_k                   10
α_k                           0.7
σ_k²                          10/3
γ_k²                          100
Q_k = accel_k² σ_k² I         1.2 I
J                             500
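The setup of (35)-(42) can be sketched as follows; the sizes are scaled down from Table 1 for illustration, and the ordering of the velocity update (40) relative to (41) is an assumption of this sketch:

```python
import numpy as np

# Simulation sketch: J-1 positions sharing one velocity, evolution matrix (35)
# and observation matrix (37); constants are illustrative, not Table 1's values.
J = 6
I_ = J - 1
dt = 0.05
accel = 0.6

A = np.eye(J)
A[:I_, J - 1] = dt       # each position advances by v*dt
A[J - 1, 0] = -1.0       # arbitrary (J,1) entry so the velocity varies over time

H = np.zeros((I_, J))
H[:, :I_] = np.eye(I_)   # only positions are observed, equation (37)

rng = np.random.default_rng(1)
x = np.concatenate([np.full(I_, 3.0), [5.0]])                 # x_0 of (38)
zs = []
for _ in range(10):
    x = A @ x + accel * dt**2 / 2 * rng.standard_normal(J)    # (39), (41)
    x[J - 1] += accel * dt                                    # velocity update (40)
    zs.append(H @ x + 0.1 * rng.standard_normal(I_))          # (42), DataNoise = 0.1
zs = np.array(zs)
```

The resulting `zs` plays the role of the measured data fed to both KF and APF in the comparison.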

Figure 1: Tracking of moving objects: simulated positions in blue, noisy measurements of positions in black, Kalman filter (KF) recovery of positions in red, and Alternating Projections Filter (APF) recovery of positions in green.

The three curves in Figure 1 are averages of their corresponding J − 1 positions. The latter represent the simulated moving objects, the reconstructed positions using KF, and the reconstructed positions using the APF Algorithm 5.1. Taking averages of the positions is for plotting and error-comparison purposes only. The averages are not taken over the time interval Δt; they are rather averages of all the J − 1 positions at every time step k. The sampling interval has no impact on the resulting averages.

The relative error between the simulated curve and each of the reconstructed curves has a median around 30%. The running CPU times of both algorithms depend on the size J and on the size of the time step Δt. As J increases and/or Δt decreases, the running time of both APF and KF increases; however, APF is much faster than KF. Indeed, APF is over 25 times faster than KF with size J = 2000 of the unknown state vector and time step Δt = 0.05 s, corresponding to K = 100, over 100 s.

7. Conclusion

A novel filtering algorithm, APF, applicable to the linear case, was presented; it could be implemented in aerospace and other fields. It could also be applied, for instance, anytime tracking of moving objects is desired, such as in medical imaging and data assimilation, where large systems are involved. Experiments were run comparing APF and KF in terms of accuracy and computing speed. The quality of the reconstructed curves is about the same for both algorithms, although APF performs slightly better than KF. More importantly, APF is up to 25 times faster than KF, taking seconds instead of minutes. As is the case with KF, the APF algorithm filters out errors from modeling the dynamical system as well as the noise from the data. Both algorithms ensure temporal regularization and output an optimal recursive estimate. However, APF does not use any matrix-matrix multiplication and does not necessitate any matrix inversion. Furthermore, APF does not need to calculate, update, or store any covariance matrix. This is not the case for KF regarding these last three properties. Indeed, these three properties are at the heart of making APF take much less CPU time than KF, so that APF is very suitable for large-scale systems such as the ones in aerospace. APF could be used in any discipline that has used, for instance, KF, or in any field interested in time-varying variables, such as financial risk assessment/evaluation and forecasting or control. The results substantiate the efficiency of this novel APF algorithm.

Nomenclature

k: Time parameter
K: Maximum value of k
x_k: Position vector of a moving object at time k
A_k: Evolution matrix from position x_{k−1} to position x_k
z_k: Measured data vector at time k
H_k: System matrix relating x_k to z_k
E(w): Expected value of a vector w
E(ww^⊤): Covariance matrix of a vector w
μ_k: Error vector in modeling the transition from x_{k−1} to x_k
Q_k: Covariance of μ_k
ν_k: Noise vector of measured z_k
R_k: Covariance of ν_k

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

1. Ding J., Zhang G., Zhao L., "Urban and indoor weak signal tracking using an array tracker with MVA and nonlinear filtering," Journal of Applied Mathematics, vol. 2014, Article ID 107156, 10 pages, 2014. doi:10.1155/2014/107156
2. Schmidt S. F., "The Kalman filter—its recognition and development for aerospace applications," Journal of Guidance, Control, and Dynamics, vol. 4, no. 1, pp. 4–7, 1981. doi:10.2514/3.19713
3. Grewal M. S., Andrews A. P., "Applications of Kalman filtering in aerospace 1960 to the present," IEEE Control Systems Magazine, vol. 30, no. 3, pp. 69–78, 2010. doi:10.1109/MCS.2010.936465
4. Kervinen M., Vauhkonen M., Kaipio J. P., Karjalainen P. A., "Time-varying reconstruction in single photon emission computed tomography," International Journal of Imaging Systems and Technology, vol. 14, no. 5, pp. 186–197, 2004. doi:10.1002/ima.20023
5. Qranfal J., Tanoh G., "Regularized Kalman filtering for dynamic SPECT," Journal of Physics: Conference Series, vol. 124, 012042, 2008. doi:10.1088/1742-6596/124/1/012042
6. Qranfal J., "Incorporating nonnegativity and spatial regularization constraints using proximal projections," International Journal of Pure and Applied Mathematics, vol. 81, no. 2, pp. 267–304, 2012.
7. Houtekamer P. L., Mitchell H. L., "Data assimilation using an ensemble Kalman filter technique," Monthly Weather Review, vol. 126, pp. 796–811, 1998.
8. Houtekamer P. L., Mitchell H. L., Pellerin G., Buehner M., Charron M., Spacek L., Hansen B., "Atmospheric data assimilation with an ensemble Kalman filter: results with real observations," Monthly Weather Review, vol. 133, no. 3, pp. 604–620, 2005. doi:10.1175/mwr-2864.1
9. Evensen G., Data Assimilation: The Ensemble Kalman Filter, 2nd ed., Springer-Verlag, Berlin, Germany, 2009. doi:10.1007/978-3-642-03711-5
10. Czyzewski A., Dalka P., "Examining Kalman filters applied to tracking objects in motion," in Proceedings of the 9th International Workshop on Image Analysis for Multimedia Interactive Services, Klagenfurt, Austria, May 2008, pp. 175–178. doi:10.1109/WIAMIS.2008.23
11. Suliman C., Cruceru C., Moldoveanu F., "Kalman filter based tracking in a video surveillance system," Advances in Electrical and Computer Engineering, vol. 10, no. 2, pp. 30–34, 2010. doi:10.4316/aece.2010.02005
12. Agarwal S., Goyal A., "An automated approach for video surveillance using Kalman filter," International Journal on Recent and Innovation Trends in Computing and Communication, vol. 4, no. 8, 2016.
13. Qranfal J., Byrne C. L., "Smart filter for dynamic SPECT image reconstruction," International Journal of Pure and Applied Mathematics, vol. 73, no. 4, pp. 69–78, 2011.
14. Qranfal J., Byrne C., "EM filter for time-varying SPECT reconstruction," International Journal of Pure and Applied Mathematics, vol. 73, no. 4, pp. 379–403, 2011.
15. Hu S., Xu S., Wang D., Zhang A., "Optimization algorithm for Kalman filter exploiting the numerical characteristics of SINS/GPS integrated navigation systems," Sensors, vol. 15, no. 11, pp. 28402–28420, 2015. doi:10.3390/s151128402
16. Pimentel S., Qranfal Y., "Data assimilation with filtering methods that use Kullback-Leibler distance," 2016. http://www.isda2016.net/assets/posters/PimentelDataassimilation.pdf
17. Humpherys J., Redd P., West J., "A fresh look at the Kalman filter," SIAM Review, vol. 54, no. 4, pp. 801–823, 2012. doi:10.1137/100799666
18. Anderson B. D. O., Moore J. B., Optimal Filtering, Prentice-Hall, 1979.
19. Simon D., Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches, Wiley-Interscience, 2006. doi:10.1002/0470045345
20. Kalman R. E., "A new approach to linear filtering and prediction problems," Transactions of the ASME, Series D, Journal of Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960. doi:10.1115/1.3662552
21. Byrne C. L., "Iterative image reconstruction algorithms based on cross-entropy minimization," IEEE Transactions on Image Processing, vol. 2, no. 1, pp. 96–103, 1993. doi:10.1109/83.210869
22. Byrne C. L., Alternating Minimization and Alternating Projection Algorithms: A Tutorial, 2011. http://faculty.uml.edu/cbyrne/CandT.pdf
23. Csiszár I., Tusnády G., "Information geometry and alternating minimization procedures," Statistics & Decisions, pp. 205–237, 1984.
24. Vardi Y., Shepp L. A., Kaufman L., "A statistical model for positron emission tomography," Journal of the American Statistical Association, vol. 80, no. 389, pp. 8–37, 1985. doi:10.2307/2288030
25. Landweber L., "An iteration formula for Fredholm integral equations of the first kind," American Journal of Mathematics, vol. 73, pp. 615–624, 1951. doi:10.2307/2372313