Object Tracking via 2DPCA and ℓ 2 -Regularization

We present a fast and robust object tracking algorithm that combines 2DPCA and ℓ 2 -regularization in a Bayesian inference framework. Firstly, we model the challenging appearance of the tracked object using 2DPCA bases, which exploit the strength of subspace representation. Secondly, we adopt ℓ 2 -regularization to solve the proposed representation model and remove the trivial templates used in sparse tracking methods, which yields much faster tracking. Finally, we present a novel likelihood function that considers the reconstruction error derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Experimental results on several challenging image sequences demonstrate that the proposed method achieves favorable performance against state-of-the-art tracking algorithms.


Introduction
Visual tracking is one of the fundamental topics in computer vision and plays an important role in numerous research and practical applications such as surveillance, human-computer interaction, robotics, and traffic control. Existing object tracking algorithms can be divided into two categories: discriminative and generative. Discriminative methods treat tracking as a binary classification problem with local search, estimating the decision boundary between an object image patch and the background. Babenko et al. [1] proposed online multiple instance learning (MIL), which treats ambiguous positive and negative samples as bags to learn a discriminative classifier. Zhang et al. [2] proposed a fast compressive tracking algorithm that employs nonadaptive random projections which preserve the structure of the image feature space.
Generative methods typically learn a model to represent the target object and incrementally update the appearance model to search for the image region with minimal reconstruction error. Inspired by the success of sparse representation in face recognition [3], super-resolution [4], and inpainting [5], sparse representation based visual tracking [6][7][8][9] has recently attracted increasing interest. Mei and Ling [10] first extended sparse representation to object tracking, casting the tracking problem as determining the likeliest patch under a sparse representation of templates. The method can handle partial occlusion by treating the error term as sparse noise. However, it requires solving a series of complicated ℓ 1 -norm minimization problems many times, and the time complexity is significant. Although some modified ℓ 1 -norm methods have been proposed to speed up the tracker, they are still far from real time.
Recently, many object tracking algorithms have been proposed to exploit the power of subspace representation from different perspectives. Ross et al. [11] presented a tracking method that incrementally learns a PCA low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target. However, this method is sensitive to partial occlusion. Zhong et al. [8] proposed a robust object tracking algorithm via a sparse collaborative appearance model that exploits both holistic templates and local representations to account for appearance changes. Zhuang et al. [12] cast the tracking problem as finding the candidate that scores highest in an evaluation model based upon a matrix called the discriminative sparse similarity map. Qian et al. [13] exploit an appearance model based on extended incremental nonnegative matrix factorization for visual tracking. Wang and Lu [14] presented a novel online object tracking algorithm using 2DPCA and ℓ 1 -regularization. This method achieves good performance in many scenes. However, the coefficients and the sparse error matrix used in this method require an iterative algorithm to compute, and the space and time complexity are too large for real-time tracking.
Motivated by the aforementioned work, this paper presents a robust and fast ℓ 2 -norm tracking algorithm with an adaptive appearance model. The contributions of this work are threefold: (1) we exploit the strength of 2DPCA subspace representation using ℓ 2 -regularization; (2) we remove the trivial templates from the sparse tracking method; (3) we present a novel likelihood function that considers the reconstruction error derived from the orthogonal left-projection matrix and the orthogonal right-projection matrix. Both qualitative and quantitative evaluations on video sequences demonstrate that the proposed method handles occlusion, illumination changes, scale changes, and nonrigid appearance changes effectively with lower computational complexity and can run in real time.

Object Representation via 2DPCA and ℓ 2 -Regularization
An observed image matrix B is represented in the 2DPCA subspace by minimizing the reconstruction error ‖B − U A V^T‖_F^2, where ‖ ⋅ ‖_F denotes the Frobenius norm, U represents the orthogonal left-projection matrix, and V represents the orthogonal right-projection matrix. The cost function is set as an ℓ 2 -regularized quadratic function:

J(A) = ‖B − U A V^T‖_F^2 + λ‖A‖_F^2. (2)

Here, λ is a constant. The solution of (2) is easily derived as

vec(A) = (I_1 ⊗ I_2 + λI)^{−1} vec(U^T B V) = P vec(U^T B V), (3)

where I_1 and I_2 denote identity matrices, ⊗ stands for the Kronecker product, and vec(A) means the vectorized version of the matrix A. Therefore, we can obtain the projection coefficients matrix A.
Obviously, the projection matrix P is independent of B, so we can precompute it once per frame, before looping over all candidates. When a new candidate comes, we simply calculate P vec(U^T B V) to obtain vec(A), which makes the proposed method very fast.
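As a concreteness check, the closed-form coding step can be sketched in NumPy. This is a reconstruction, not the authors' code; the function names `precompute_projection` and `code_candidate` and the regularization weight `lam` are assumptions:

```python
import numpy as np

def precompute_projection(U, V, lam):
    """Precompute P = (V^T V kron U^T U + lam*I)^-1, which does not depend on B."""
    k1, k2 = U.shape[1], V.shape[1]
    G = np.kron(V.T @ V, U.T @ U)  # Gram matrix of the Kronecker basis (V kron U)
    return np.linalg.inv(G + lam * np.eye(k1 * k2))

def code_candidate(B, U, V, P):
    """Solve min_A ||B - U A V^T||_F^2 + lam ||A||_F^2 via vec(A) = P vec(U^T B V)."""
    k1, k2 = U.shape[1], V.shape[1]
    a = P @ (U.T @ B @ V).reshape(-1, order="F")  # column-major vec()
    A = a.reshape(k1, k2, order="F")
    E = B - U @ A @ V.T                           # error (residual) matrix
    return A, E
```

Since U and V have orthonormal columns, P reduces to I/(1 + λ), so the per-candidate cost is a couple of small matrix products, which is what makes the method fast.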
Here, we abandon the trivial templates completely, which allows the target to be fully represented by the 2DPCA subspace. After we obtain the projection coefficients matrix A from (3), the error matrix is given by E = B − U A V^T. So, the error matrix can be calculated in a single step.

Tracking Framework Based on 2DPCA and ℓ 2 -Regularization
Visual tracking is treated as a Bayesian inference task in a Markov model with hidden state variables. Given a series of image matrices B_{1:t} = [B_1, B_2, . . ., B_t], we aim to estimate the hidden state variable x_t recursively:

p(x_t | B_{1:t}) ∝ p(B_t | x_t) ∫ p(x_t | x_{t−1}) p(x_{t−1} | B_{1:t−1}) dx_{t−1},

where p(x_t | x_{t−1}) is the motion model that represents the state transition between two consecutive states and p(B_t | x_t) is the observation model, which indicates the likelihood function. Let x_t^i denote the ith sample of the state x_t. Thus, we obtain A_t^i, and the likelihood can be measured by the reconstruction error:

p(B_t | x_t^i) ∝ exp(−‖B_t^i − U A_t^i V^T‖_F^2).
It is noted, however, that by also penalizing the level of the error matrix, the precise localization of the tracked target can be improved. Therefore, we present a novel likelihood function that considers both the reconstruction error and the level of the error matrix:

p(B_t | x_t^i) ∝ exp(−(‖B_t^i − U A_t^i V^T‖_F^2 + γ‖E_t^i‖_1)),

where the error matrix is E_t^i = B_t^i − U A_t^i V^T and A_t^i is calculated by (3).
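A minimal sketch of such a likelihood in NumPy, assuming the two terms are combined additively; the weight `gamma` is an assumption, not a value given in the paper:

```python
import numpy as np

def likelihood(B, U, V, A, gamma=0.1):
    """Unnormalized p(B | x): reconstruction error plus the l1 level of the
    error matrix E. `gamma` (the trade-off weight) is an assumed value."""
    E = B - U @ A @ V.T                      # error matrix of this candidate
    recon = np.linalg.norm(E, "fro") ** 2    # reconstruction error term
    level = gamma * np.abs(E).sum()          # magnitude (level) of the error matrix
    return np.exp(-(recon + level))
```

A perfectly reconstructed candidate gets likelihood 1; any residual, whether diffuse or concentrated (as under occlusion), lowers the score.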
Online Update. In order to handle the appearance changes of the tracked target, it is necessary to update the observation model. If imprecise samples are used for updating, the tracked model may degrade. Therefore, we present an occlusion-ratio-based update mechanism. After obtaining the best candidate state of each frame, we compute the corresponding error matrix and the occlusion ratio. Two thresholds thr 1 = 0.1 and thr 2 = 0.6 are introduced to define the degree of occlusion. If the ratio is below thr 1, the tracked target is not occluded, or only a small part of it is corrupted by noise; the model is updated with the sample directly. If the ratio lies between thr 1 and thr 2, the tracked target is partially occluded; the occluded part is replaced by the average observation, and the recovered candidate is used for the update. If the ratio exceeds thr 2, most of the tracked target is occluded; the sample is discarded without update. After enough samples are accumulated, we use an incremental 2DPCA algorithm to update the tracker (the left- and right-projection matrices).
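The three-way update rule can be sketched as follows. The per-pixel error threshold `tol` used to declare a pixel occluded is an assumption; the paper specifies only the two ratio thresholds thr1 = 0.1 and thr2 = 0.6:

```python
import numpy as np

def select_update_sample(candidate, E, mean_obs, tol=0.5, thr1=0.1, thr2=0.6):
    """Return the sample to use for the model update, or None to skip.
    `tol` (occluded-pixel error threshold) is an assumed value."""
    occluded = np.abs(E) > tol          # pixels with large reconstruction error
    ratio = occluded.mean()             # occlusion ratio
    if ratio < thr1:                    # little or no occlusion: use sample as-is
        return candidate
    if ratio < thr2:                    # partial occlusion: patch occluded pixels
        patched = candidate.copy()
        patched[occluded] = mean_obs[occluded]  # replace with average observation
        return patched
    return None                         # heavy occlusion: discard, no update
```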

Experiments
The proposed tracking algorithm is implemented in MATLAB and runs on a computer with an Intel i5-3210 CPU (2.5 GHz) and 4 GB of memory. The regularization parameter λ is set to 0.05. The image observation is resized to a fixed size in pixels for the proposed 2DPCA representation. For each sequence, the location of the tracked target object is manually labeled in the first frame. 600 particles are adopted for the proposed algorithm, accounting for the trade-off between effectiveness and speed. Our tracker is incrementally updated every 5 frames.
To demonstrate the effectiveness of the proposed tracking algorithm, we select six state-of-the-art trackers for comparison: the ℓ 1 tracker [10], the PN tracker [16], the VTD tracker [17], the MIL tracker [1], the Frag tracker [18], and the 2DPCAℓ 1 tracker [14]. The comparison is conducted on several challenging image sequences, including Occlusion 1, DavidOutdoor, Caviar 2, Girl, Car 4, Car 11, Singer 1, Deer, Jumping, and Lemming. The challenging factors include severe occlusion, pose change, motion blur, illumination variation, and background clutter.

Qualitative Evaluation
Severe Occlusion. We test four sequences (Occlusion 1, DavidOutdoor, Caviar 2, and Girl) with long-time partial or heavy occlusion and scale change. Figure 1(a) demonstrates that the ℓ 1, Frag, 2DPCAℓ 1, and our algorithms perform better, since these methods take partial occlusion into account. The ℓ 1, 2DPCAℓ 1, and our algorithms handle occlusion by avoiding updating occluded pixels into the PCA and 2DPCA bases, respectively. The Frag algorithm works well on some simple occlusion cases (e.g., Figure 1(a), Occlusion 1) via its part-based representation. However, it performs poorly on more challenging videos (e.g., Figure 1(b), DavidOutdoor). The MIL tracker is not able to track the occluded target in DavidOutdoor and Caviar 2, since the Haar-like features the MIL method adopts are less effective in distinguishing similar objects. For the Girl video, the in- and out-of-plane rotation, partial occlusion, and scale change make it difficult to track. It can be seen that the Frag and the proposed tracker work better than the other methods.
Illumination Change. Figures 1(e), 1(f), and 1(g) present tracking results on the Car 4, Car 11, and Singer 1 sequences, which contain significant changes of illumination and scale as well as background clutter. The ℓ 1 tracker, the 2DPCAℓ 1 tracker, and the proposed tracker perform well in the Car 4 sequence, whereas the other trackers drift away when the target vehicle goes underneath the overpass or the trees. For the Car 11 sequence, the 2DPCAℓ 1 and the proposed tracker achieve robust tracking results, whereas the other trackers drift away when drastic illumination change occurs or a similar object appears. In the Singer 1 sequence, the drastic illumination and scale changes make it difficult to track. It can be seen that the proposed tracker performs better than the other methods.
Motion Blur. It is difficult for tracking algorithms to predict the location of the tracked objects when the target moves abruptly. Figures 1(h) and 1(i) demonstrate the tracking results in the Deer and Jumping sequences. In the Deer sequence, the animal's appearance is almost indistinguishable due to the fast motion, and most methods lose the target right at the beginning of the video. At frame 53, the PN tracker locates a similar deer instead of the right object. From the results, we can see that the VTD tracker and our tracker perform better than the other algorithms. The 2DPCAℓ 1 tracker may be able to track the target again by chance after failure. The appearance changes of the Jumping sequence are so drastic that the ℓ 1, Frag, and VTD trackers drift away from the object. Our tracker successfully keeps track of the object with small errors, whereas the MIL, PN, and 2DPCAℓ 1 trackers can track the target only in some frames.

Background Clutter. Figure 1(j) illustrates the tracking results in the Lemming sequence, with scale and pose change as well as severe occlusion in a cluttered background. The Frag tracker loses the target object at the beginning of the sequence, and when the target object moves quickly or rotates, the VTD tracker fails too. In contrast, the proposed method can adapt to the heavy occlusion, in-plane rotation, and scale change.

Quantitative Evaluation.
To conduct quantitative comparisons between the proposed tracking method and the other state-of-the-art trackers, we compute the center location error in pixels and the overlap rate, which are the most widely used quantitative evaluation measures. The center location error is defined as the Euclidean distance between the center locations of the tracked objects and their corresponding labeled ground truth. Figure 2 demonstrates the center error plots, where a smaller center error means a more accurate result in each frame. The overlap rate is defined as score = area(R_T ∩ R_G)/area(R_T ∪ R_G), where R_T is the tracked bounding box of each frame and R_G is the corresponding ground truth bounding box. Figure 3 shows the overlap rates of each tracking algorithm for all sequences. Generally speaking, our tracker performs favorably against the other methods.
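Both metrics are standard and can be sketched directly; boxes are assumed to be axis-aligned (x, y, w, h) tuples:

```python
import numpy as np

def center_error(box_t, box_g):
    """Euclidean distance between the centers of the tracked and ground-truth boxes."""
    dx = box_t[0] + box_t[2] / 2 - (box_g[0] + box_g[2] / 2)
    dy = box_t[1] + box_t[3] / 2 - (box_g[1] + box_g[3] / 2)
    return float(np.hypot(dx, dy))

def overlap_rate(box_t, box_g):
    """score = area(R_T intersect R_G) / area(R_T union R_G)."""
    x1 = max(box_t[0], box_g[0])
    y1 = max(box_t[1], box_g[1])
    x2 = min(box_t[0] + box_t[2], box_g[0] + box_g[2])
    y2 = min(box_t[1] + box_t[3], box_g[1] + box_g[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # zero when boxes are disjoint
    union = box_t[2] * box_t[3] + box_g[2] * box_g[3] - inter
    return inter / union
```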

Computational Complexity.
The most time-consuming part of a generative tracking algorithm is computing the coefficients with the basis vectors. For the ℓ 1 tracker, the computation of the coefficients using the LASSO algorithm costs O(d^2 + dk), where d is the dimension of the subspace and k is the number of basis vectors. The load of the 2DPCAℓ 1 tracker [14] with ℓ 1 -regularization is O(τd), where τ stands for the number of iterations (e.g., 10 on average). For our tracker, the trivial templates are abandoned and square templates are not used, so the load of our tracker is O(d). The tracking speeds of the ℓ 1 tracker, the 2DPCAℓ 1 tracker, and our method are 0.25 fps, 2.2 fps, and 5.2 fps (frames per second), respectively. Therefore, our tracker is more effective and much faster than the aforementioned trackers.

Conclusion
In this paper, we present a fast and effective tracking algorithm. We first clarify the benefits of utilizing 2DPCA basis vectors. Then, we formulate the tracking process with ℓ 2 -regularization. Finally, we update the appearance model to account for partial occlusion. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed method outperforms several state-of-the-art trackers.

Figure 1: Sampled tracking results of the evaluated algorithms on ten challenging sequences.

Representation via 2DPCA and ℓ 2 -Regularization
Motion Model. We apply an affine image warp to model the target motion between consecutive states. Six parameters of the affine transform are used to model p(x_t | x_{t−1}) of a tracked target. Let x_t = {t_x, t_y, θ, s, α, φ}, where t_x, t_y, θ, s, α, and φ denote the x and y translations, rotation angle, scale, aspect ratio, and skew, respectively. The state transition is formulated by a random walk; that is, p(x_t | x_{t−1}) = N(x_t; x_{t−1}, Σ), where Σ is a diagonal covariance matrix whose entries are the variances of the affine parameters.

Observation Model. If no occlusion occurs, an image observation B_t can be generated by a 2DPCA subspace (spanned by U and V and centered at the mean observation μ). Here, we consider partial occlusion in the appearance model for robust tracking. Thus, we assume that the centered image matrix (B_t − μ) can be represented by a linear combination of the projection matrices U and V. Then, we draw N candidates in the state x_t. For each observed image matrix, we solve an ℓ 2 -regularization problem: min_A ‖(B_t − μ) − U A V^T‖_F^2 + λ‖A‖_F^2.
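The random-walk motion model amounts to sampling candidate affine states around the previous state. A minimal sketch, assuming per-parameter standard deviations `sigma` (the paper states only that Σ is diagonal, not its values):

```python
import numpy as np

def sample_candidates(prev_state, sigma, n=600, rng=None):
    """Draw n candidate states x_t ~ N(x_{t-1}, diag(sigma^2)).
    prev_state = (t_x, t_y, theta, s, alpha, phi); `sigma` values are assumed."""
    rng = np.random.default_rng() if rng is None else rng
    prev_state = np.asarray(prev_state, dtype=float)
    noise = rng.normal(size=(n, 6)) * np.asarray(sigma, dtype=float)
    return prev_state + noise
```

With n = 600 particles (the count used in the experiments), each candidate state is then warped into an image patch and scored by the likelihood function.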