Research on Moving Target Detection and Tracking Technology in Sports Video Based on SIFT Algorithm

Sports video moving target detection and tracking play an important role in increasing the popularity of sports and promoting sports events. This paper applies the SIFT algorithm to the study of moving target detection and tracking in sports video, identifies sports features, and improves the sports feature detection algorithm. Moreover, this paper divides the point cloud data into multiple cube grids in its coordinate system, finds the center of gravity of the data points in each grid, and replaces the coordinates of all points in the grid with the coordinates of that center of gravity. In addition, this paper combines data analysis to verify the algorithm and build a sports video moving target detection system. The experimental results verify that the sports video target detection and tracking technology based on the SIFT algorithm proposed in this paper performs well.


Introduction
In recent years, sports behavior recognition technology has been increasingly integrated into daily sports video analysis. More and more researchers are engaged in sports behavior recognition, and research on behavior recognition technology is in full swing. In addition, the emergence of the latest methods and theories and the introduction of many new algorithms from other fields have led to great progress in behavior recognition [1]. The main process of sports behavior recognition technology can be roughly divided into four steps: feature extraction, feature representation, behavior modeling, and behavior classification [2]. According to the specific research goals and needs, these steps can be adapted. For example, some algorithms combine feature extraction and representation into one step, and some methods do not even require behavior modeling, sending the descriptors from feature extraction and representation directly into the classifier for recognition. At the same time, some methods incorporate iterative feedback processes such as deep learning. In addition, some methods further process the descriptors after feature representation (such as dimensionality reduction) to make the features more distinguishable [3]. The models used in time series modeling can be regarded as further expressions after feature representation: the time series information extracted by these models is not presented in an intuitive form but is expressed through the parameters of the model after modeling. Existing methods for time series modeling include hidden Markov models, conditional random fields, linear dynamic systems, and the recently popular recurrent neural network models.
This article combines the SIFT algorithm to study moving target detection and tracking in sports video and to identify sports features, which provides a theoretical reference for the application of dynamic recognition technology in sports competitions and sports training.

Related Work
The visual SLAM system estimates its own position in the environment (self-localization), relying mainly on the visual odometry module [4]. A binocular visual odometer calculates depth information from the parallax between the left and right cameras, while a monocular visual odometer cannot obtain absolute scale information on its own and must rely on other sensors or environmental information [5]. Although advanced visual odometers can now run in real time with high positioning accuracy, almost all methods assume operation in a static environment [6]. If moving objects interfere in the camera's field of view, the visual odometer will produce large estimation errors or even fail. For the problem of moving objects in the scene interfering with the visual odometer, the random sample consensus (RANSAC) algorithm is currently the most mature and effective method [7]. This method fits a model and eliminates the data points inconsistent with the model as outliers. When the moving object occupies only a small part of the camera's field of view, the RANSAC method can filter the feature points, removing the feature points on the moving object as outer points, and, combined with the motion estimation in the visual odometer, obtain better positioning results [8]. But when the moving object occupies a large part of the camera's field of view, the RANSAC algorithm may treat the feature points on the moving object as inliers; relying on this method alone then fails to eliminate the interference [9]. Literature [10] uses pretraining to segment the feature points in the image into dynamic and static feature points, but this method is difficult to implement in actual real-time applications.
Literature [11] relies on dense scene flow to segment dynamic objects, but the scene flow calculation itself needs the visual odometry method to compensate for the pose. Literature [12] uses image segmentation technology to divide the image into a static background and motion regions, performs motion estimation separately on the partitioned regions, and then performs a global fusion. Although this algorithm has high accuracy, it is difficult to meet real-time requirements. Literature [13] uses IMU information as a prior to segment the dynamic feature points in the image and realizes visual positioning in a dynamic environment by combining the information of the inertial navigation system and the depth visual odometer.
Literature [14] uses a single-chip microcomputer as the controller to design a high-speed positioning control system for an image-monitoring dynamic bracket and dynamically calculates the rotation angle of the pan/tilt based on the control motor. Literature [15] analyzes the sports trajectory tracking strategy, designs a dynamic fuzzy PID controller for point-line trajectory tracking, and studies the visual pan-tilt pose calculation and dynamic trajectory tracking system control technology. The vigorous development of machine vision across compatible devices also marks the rise of vision requirements. Literature [16] analyzes the coordination strategy of a multimovement sports system for capturing a moving target and uses stereo vision motion detection to estimate the motion parameters of the moving target. At present, three-dimensional object positioning based on binocular stereo vision has become one of the hot spots in vision measurement research [17]. Compared with the monocular camera motion measurement technology in the large tank discharge hole visual positioning control system, stereo vision can recover more information once the parallax is obtained; it is not only compatible with the characteristics of a monocular camera but can also be used to construct three-dimensional object models with high accuracy and applicability [18].

Moving Target Detection Based on SIFT Algorithm
This article analyzes the moving target detection algorithm; this part mainly combines the SIFT algorithm to identify and track the moving target. The essence of the Nonmaximum Suppression (NMS) algorithm is to retain, from the key points initially extracted by the algorithm, only the most or least significant key points within a certain range and to discard the others. The specific steps are as follows. For a point kp i in the key point set KP, (1) the algorithm first takes its neighborhood nbhd(kp i ) and judges whether its saliency value is the largest or smallest in the neighborhood; if it takes the maximum or minimum value, it is marked as a true key point, otherwise it is marked as a surplus key point. (2) The algorithm traverses the entire key point set KP and removes all key points marked as surplus points. The saliency in the above process can be chosen according to the algorithm or the requirements; features such as curvature, the interval length of the normal vector distribution of the neighborhood points, and the shape index value can be used.
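The two steps above can be sketched as follows. The paper's experiments use MATLAB, but a Python sketch is given here for concreteness; taking the neighborhood as a fixed-radius ball and using an exhaustive distance search are illustrative assumptions.

```python
import numpy as np

def nms_keypoints(points, saliency, radius):
    """Keep a candidate key point only if its saliency is the maximum
    (or minimum) among all candidates within `radius` of it; all other
    candidates are discarded as surplus points."""
    points = np.asarray(points, dtype=float)
    saliency = np.asarray(saliency, dtype=float)
    keep = []
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbhd = saliency[d <= radius]          # neighborhood includes the point itself
        if saliency[i] == nbhd.max() or saliency[i] == nbhd.min():
            keep.append(i)                    # marked as a true key point
    return keep                               # indices of surviving key points
```

With curvature as the saliency, this realizes exactly the keep-extremum, drop-the-rest filtering described above.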
According to the above method, if a neighborhood of size k = 3 is taken, it is determined whether the curvature of each key point in the graph is the maximum value in its neighborhood, and the nonmaximum points are removed, as shown in Figure 1. This example uses curvature as the saliency, which not only retains the high-quality key points but also greatly reduces redundancy. The Intrinsic Shape Signatures (ISS) algorithm was proposed by Zhong et al. The ISS algorithm uses the difference in dispersion between the three main directions of the local reference coordinate system of the neighborhood nbhd(p) of a point p as the evaluation index of the saliency of p and extracts the points with a large dispersion difference by comparison with a preset threshold.
First, we use the PCA algorithm to calculate the three eigenvalues λ 1 , λ 2 , λ 3 of the covariance matrix cov(p) of nbhd(p) in descending order. The three eigenvalues obtained from the eigenvalue decomposition of the covariance matrix of nbhd(p) represent, respectively, the degree of dispersion along the three eigendirections v 1 , v 2 , and v 3 . Therefore, the ratio of two eigenvalues can be used to express the difference in dispersion between two principal axis directions.
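A minimal sketch of this eigenvalue computation, together with a formula (1) style ratio test, is given below; the threshold values th12 = th23 = 0.8 are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def neighborhood_eigenvalues(nbhd):
    """Eigenvalues lambda1 >= lambda2 >= lambda3 of the covariance matrix
    of a neighborhood, the dispersion measure used by the ISS saliency."""
    nbhd = np.asarray(nbhd, dtype=float)
    centered = nbhd - nbhd.mean(axis=0)        # move to the centroid system
    cov = centered.T @ centered / len(nbhd)    # 3x3 covariance matrix
    lam = np.linalg.eigvalsh(cov)              # returned in ascending order
    return lam[::-1]                           # descending: l1, l2, l3

def is_iss_keypoint(nbhd, th12=0.8, th23=0.8):
    """Formula (1) style criterion: keep p when the dispersion along the
    three principal directions differs enough (thresholds are assumptions)."""
    l1, l2, l3 = neighborhood_eigenvalues(nbhd)
    return l2 / l1 < th12 and l3 / l2 < th23
```

A flat symmetric patch fails the test (equal dispersion along the two largest axes), while an elongated patch passes, matching the saliency intuition above.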

Advances in Multimedia

Formula (1) shows the criterion for extracting key points, where Th 12 and Th 23 are preset thresholds, and the size of the threshold determines how many key points are extracted: the smaller the threshold, the more key points are extracted.
For any point p i in the point cloud, according to formula (2), the algorithm calculates the centroid of the neighborhood nbhd(p i ), transforms the neighborhood to the centroid system, calculates the covariance matrix cov(p i ), and then performs eigenvalue decomposition of cov(p i ), as shown in formula (2) [19]. Then, the Hotelling transform is performed on each neighborhood point q j ∈ nbhd(p i ), projecting the neighborhood coordinates of the point p i onto the three principal axes, as shown in formula (3), where q j is the coordinate of the neighborhood point before the transformation and q j ′ is the coordinate after it. Then, the algorithm calculates the ratio δ between the coordinate distribution ranges of nbhd(p i ) on the x and y axes (the first and second largest principal axes), as shown in formula (4), where X = {x qj | q j ∈ nbhd(p i )} and Y = {y qj | q j ∈ nbhd(p i )} [20].
For symmetrical surfaces (that is, when the neighbors are distributed identically along the largest and second largest axes), the value of δ equals 1; for asymmetric surfaces, δ is greater than 1. The algorithm sets the threshold th 1 (th 1 > 1), determines whether δ > th 1 is satisfied, and if so records the point as a key point. The algorithm traverses each point in the point cloud P to complete the preliminary screening of the key points, which is marked as KP.
For a preliminarily selected key point kp i , a quadric surface is fitted to its neighborhood to obtain a parametric surface S. Subsequently, a uniform n × n grid (n = 20) is used to sample the fitted surface; the principal curvatures k 1 , k 2 and Gaussian curvature K at the sampling points are calculated; and the parameter d is used to evaluate the quality of the key points according to formulas (5) and (6). In this paper, to simplify the calculation, the algorithm uses the neighborhood points instead of uniform sampling points to calculate Q k , and n is the number of neighborhood points [21].
Finally, with Q k as the saliency parameter, this article uses the NMS algorithm to perform nonmaximum suppression, where only the maximum value is retained, to complete the screening of the key points.
The Local Surface Patch (LSP) algorithm uses the least squares method to fit a local point cloud into a parametric surface. It calculates the first and second fundamental forms of the surface, constructs the shape index Si(p i ), and filters out the points whose Si(p i ) satisfies certain conditions as key points. Finally, the NMS algorithm, with Si as the saliency parameter, further filters the initially selected key points to complete the final key point detection. The specific process is as follows.
For the neighborhood nbhd(p i ) of any point p i in the point cloud P, the algorithm first establishes the LRF and rotates the neighborhood nbhd(p i ) to the three principal axis directions of the LRF to eliminate the influence of the initial pose on further calculations. Then, the quadric surface s(p i ) is fitted, and the principal curvatures k 1 , k 2 at the point p i on the surface s(p i ) are calculated according to the following formula [22]. It can be seen that the value range of the shape index Si defined by this formula is [0, 1]: when the Si value is large, the corresponding local surface is convex; when the Si value is small, the corresponding local surface is concave.
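A small sketch of a shape index in [0, 1] built from the principal curvatures follows; note that which end of the range corresponds to convex surfaces depends on the sign convention of the particular formulation, so the code stays neutral on that point.

```python
import math

def shape_index(k1, k2):
    """Shape index Si in [0, 1] computed from the principal curvatures
    k1 >= k2 via an arctangent of their sum over their difference
    (one common LSP-style formulation; sign conventions vary)."""
    if k1 == k2:                       # umbilical point: Si is 1/2 by convention
        return 0.5
    return 0.5 - (1.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))
```

Since atan maps into (-π/2, π/2), the result always lies in (0, 1), with a perfect saddle (k1 = -k2) landing exactly at 0.5.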
After completing the Si calculation of all points in the point cloud, the preliminary screening of the key points can be completed according to the following formula.

[Figure 1: Key points after nonmaximum suppression.]

Among them, μ Si(p i ) is the mean value of Si in the neighborhood of p i , and α and β are preset parameters whose values should be between 0 and 1.
Then, Si is used as the saliency parameter. Using the NMS algorithm, this paper judges point by point whether the Si value is the maximum or minimum among the Si values of the points in the neighborhood; if it is, the point is kept, otherwise it is deleted. Finally, the key point set KP is obtained.
The Histogram of Normal Orientation (HoNO) algorithm first calculates, for each point p i in the point cloud P, the angle between the normal vector of each point in the neighborhood nbhd(p i ) and the normal vector of the target point p i and then counts these angles to form a histogram. According to the histogram characteristics, flat areas are excluded and feature-salient areas are detected. Then, by evaluating the properties of the histogram and the neighborhood covariance matrix, key points are extracted from the salient regions.
First, for each point p i ∈ P, this paper uses the PCA algorithm to estimate the normal vector n i . Then, for every point q j ∈ nbhd(p i ) other than p i , the angle between the normal vector of q j and n i is calculated according to formula (10) and counted into the histogram H i containing N bins, where each bin spans 10 degrees. In order to eliminate the influence of the neighborhood point density on the algorithm, the histogram is normalized after the normal vector angles of all neighborhood points have been counted.
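A sketch of the normalized normal-angle histogram described above, with 10 degree bins and normalization by the neighbor count as stated; the handling of the boundary angle of exactly 180 degrees is an implementation assumption.

```python
import numpy as np

def normal_angle_histogram(n_i, nbhd_normals, bin_deg=10):
    """Histogram of angles between the target point's normal n_i and each
    neighbour's normal, binned every `bin_deg` degrees and normalised so
    the bins sum to 1 (HoNO-style)."""
    n_i = np.asarray(n_i, float) / np.linalg.norm(n_i)
    hist = np.zeros(int(np.ceil(180 / bin_deg)))
    for n_j in nbhd_normals:
        n_j = np.asarray(n_j, float) / np.linalg.norm(n_j)
        cos_a = np.clip(np.dot(n_i, n_j), -1.0, 1.0)   # guard rounding error
        angle = np.degrees(np.arccos(cos_a))           # angle in [0, 180]
        hist[min(int(angle // bin_deg), len(hist) - 1)] += 1
    return hist / hist.sum()                           # density normalisation
```

For a planar neighborhood every angle falls into the first bin, reproducing the "first bin high, rest near zero" signature discussed next.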
Obviously, the histogram H i of a point p i whose neighborhood is approximately planar has the characteristic that the first bin has a high value and the remaining bin values are approximately 0. Conversely, an area with a larger degree of curvature has a large normal vector distribution range, so most of the bin values in its histogram are nonzero. Therefore, it is necessary to design parameters to describe the distribution of values in the histogram.
As shown in formula (11), Kurt is used to express the kurtosis and dispersion of the histogram distribution. If the kurtosis parameter Kurt(H i ) of the histogram of the point p i is less than the preset parameter Th, there is no obvious peak in the histogram; that is, the values in the histogram are distributed over a wide range, and p i is retained as a key point; otherwise, it is removed.
Finally, after the Kurt parameter of every point has been calculated and the preliminary key points have been determined, redundancy is removed from the key point set by applying NMS with Kurt(H) as the saliency parameter, yielding the final key point set KP.
The Harris operator is extended to three-dimensional space; the specific steps are as follows.
First, for the point p i in the point cloud P, the algorithm queries the neighborhood nbhd(p i ) and establishes the LRF, and the neighborhood nbhd(p i ) is translated into the LRF coordinate system so that p i is the origin.
After establishing the LRF, the algorithm sets the parameters according to formula (13) and performs quadric surface fitting on the neighborhood to obtain the fitted quadric surface parameters p 1 , p 2 , . . . , p 6 .
For parametric surfaces, adding more high-order terms means adapting to more complex surfaces. However, more complex surfaces may not have well-defined derivatives at certain points of the domain. Moreover, when the neighborhood radius is not large, the vicinity of the target point can be approximated as a quadric surface. Therefore, the directional derivatives can easily be obtained according to the following equations. Considering the influence of noise, the Gaussian function originally proposed by Harris and Stephens can be applied, as shown in the following equations; substituting the quadric surface equation simplifies the expressions. Then, the analysis matrix E is constructed, and by analyzing the determinant and trace of E, the Harris corner response value h(p i ) at the point p i is obtained. As shown in formula (23), k is a nonnegative preset parameter.
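Given the 2 × 2 analysis matrix E, the response of formula (23) takes the classical determinant-and-trace form of Harris and Stephens; a sketch follows, where k = 0.04 is the conventional choice, assumed here rather than taken from the paper.

```python
import numpy as np

def harris_response(E, k=0.04):
    """Harris corner response h = det(E) - k * trace(E)^2 for the 2x2
    analysis matrix E built from the smoothed surface derivatives.
    k is the nonnegative preset parameter of formula (23)."""
    E = np.asarray(E, dtype=float)
    return np.linalg.det(E) - k * np.trace(E) ** 2
```

Large positive responses indicate that the dispersion is strong along both principal directions, i.e. a corner-like point.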
The specific steps to implement the 3D-SIFT algorithm in this paper are as follows. (1) The algorithm constructs the point cloud scale space in a three-dimensional coordinate system. The scale space of the point cloud is L(x, y, z, σ): the point with coordinates (x, y, z) and the pixel values I(x, y, z) in its neighborhood are convolved with a three-dimensional Gaussian function G(x, y, z, σ), whose scale can be changed by varying σ, as shown in the following formula. Among them, the specific expression of the Gaussian function is shown in the following formula, and the definition of the cube grid size during downsampling is shown in the following formula. (2) The algorithm builds the DoG space of the 3D point cloud. The downsampling process is equivalent to performing local mean filtering on the point cloud set: the local features in each cube grid disappear, replaced by the center-of-gravity coordinates, and discontinuities appear between the grids. Therefore, in order for the algorithm to find feature points in the point cloud stably, a DoG space must be constructed for the point cloud.
The construction formula is shown in the following formula, in which the scale-variable Gaussian function G(x, y, z, σ) is convolved with the coordinate data I(x, y, z) of each point. This is equivalent to smoothing the point cloud data layer by layer, and each layer needs to be divided into several small scales separated by a certain step length, as shown in formula (28), where T is a preset parameter for calculating the Gaussian scale space.
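The downsampling-by-center-of-gravity step described above (each cube grid replaced by the centroid of the points it contains, as also stated in the abstract) can be sketched as:

```python
import numpy as np

def voxel_downsample(points, grid_size):
    """Divide the cloud into cubic grids of side `grid_size` and replace
    all points falling in one grid with their centre of gravity."""
    points = np.asarray(points, dtype=float)
    keys = np.floor(points / grid_size).astype(int)   # integer grid index per point
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)           # bucket points by grid cell
    return np.array([np.mean(v, axis=0) for v in cells.values()])
```

This is the local mean filtering the text refers to: fine structure inside each grid disappears into the centroid, which is why the DoG space is needed afterwards.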
(3) The algorithm calculates the Gaussian filter response value of the sampling point in the Gaussian scale space.
When calculating the Gaussian filter response value, in order to improve the efficiency of data processing, the effect of the distance between points on the characteristics of the sampling point can be taken into account (here, the curvature c is used as the main geometric feature): the closer a point is to the sampling point p 0i , the greater its contribution to the curvature. Therefore, the points at distance dist < 3σ from p 0i are weighted by a coefficient w j . Then, the Gaussian filter response value F can be calculated according to formula (29), where c j is the curvature value of the neighboring point p 0j of the sampling point p 0i . The calculation formula of the weighting coefficient w j is shown in formula (30), where dist 2 (p 0i , p 0j ) represents the square of the distance between the sampling point p 0i and the neighboring point p 0j .
According to the above formulas, the Gaussian filter response value of each data point in the 3D point cloud can be calculated. By computing the difference between the Gaussian filter response value F of a sampling point at the current scale and the response value F_last at the previous scale, the DoG value of the sampling point at the current scale is obtained, as shown in the following formula.
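A sketch of the distance-weighted response of formulas (29) and (30): here the weight is assumed to be the Gaussian w j = exp(-dist²/(2σ²)) restricted to dist < 3σ, which matches the description but may differ from the paper's exact coefficient.

```python
import numpy as np

def gaussian_filter_response(p0, nbhd_pts, curvatures, sigma):
    """Distance-weighted curvature response F at a sampling point p0.
    Neighbours within 3*sigma contribute their curvature c_j scaled by
    a Gaussian weight of the squared distance (weight is an assumption)."""
    p0 = np.asarray(p0, dtype=float)
    F = 0.0
    for q, c in zip(np.asarray(nbhd_pts, dtype=float), curvatures):
        d2 = np.sum((q - p0) ** 2)                 # dist^2(p0i, p0j)
        if d2 < (3 * sigma) ** 2:                  # only points with dist < 3*sigma
            F += c * np.exp(-d2 / (2 * sigma ** 2))
    return F
```

The DoG value of the next step is then simply F minus F_last across adjacent scales.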

Research on Sports Video Moving Target Detection and Tracking Based on SIFT Algorithm
The data set of this paper mainly comes from the network and is shown in Figure 2.
The key point detection experiments in this paper are based on the model point clouds of the above data set. The evaluation index is calculated on each data set and averaged; finally, the average value over the data sets is computed and the corresponding curve is drawn.
This paper chooses the relative repetition rate, the accuracy rate of the descriptor matching experiment, and the operating efficiency as the evaluation indicators.
(1) Relative repeatability (repetition rate). The repetition rate represents the consistency between the key points detected after the point cloud P changes and those detected before the change. As shown in formula (32), KP is the set of key points detected in the point cloud P, KP ′ is the set of key points detected in the changed point cloud P ′ , and R KP represents the common part of KP and KP ′ .
The repetition rate is used to evaluate the robustness of the algorithm to factors such as spatial transformation, noise, and resolution. The noise repetition rate r noise represents the proportion of key points detected both before and after adding noise. The outlier repetition rate r outlier represents the proportion of key points detected both before and after adding outliers. The grid resolution repetition rate r density represents the proportion of key points detected both before and after downsampling the point cloud.
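All three repetition rates share one computation: the fraction of key points that survive the change. A sketch with an explicit matching tolerance follows (the tolerance value and the nearest-neighbor matching are assumptions):

```python
import numpy as np

def repetition_rate(kp_before, kp_after, tol=1e-6):
    """Fraction of key points detected before a change (noise, outliers,
    downsampling, ...) that reappear within `tol` among the key points
    detected after the change."""
    kp_before = np.asarray(kp_before, dtype=float)
    kp_after = np.asarray(kp_after, dtype=float)
    if len(kp_before) == 0:
        return 0.0
    hits = 0
    for p in kp_before:
        d = np.linalg.norm(kp_after - p, axis=1)   # distances to all new key points
        if d.min() <= tol:
            hits += 1                              # p survived the change
    return hits / len(kp_before)
```

Calling this with the key point sets detected before and after adding noise, outliers, or resampling yields r noise, r outlier, and r density, respectively.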
In order to set the neighborhood radius r quantitatively, for each data set the algorithm first calculates the diagonal length r i of the spatial bounding box of each model point cloud P i and traverses all models to obtain the maximum length r max . Then, the algorithm scales down all point coordinates in the point cloud by a factor of r max , and in the subsequent evaluation experiments, the neighborhood radius r is uniformly set to 0.02. The evaluation experiment includes three parts: the repetition rate experiment, the running time experiment, and the descriptor matching experiment. The algorithms and test programs are written in MATLAB. The repetition rate experiment mainly includes two modules: the parameter change module and the repetition rate calculation module.
(1) The point cloud quality change module has three modes: adding Gaussian noise, changing the point cloud density, and adding outliers. They correspond, respectively, to the repeatability of the key point detection algorithm under noise, under grid resolution changes, and under outliers. In order to quantitatively set the amplitude of noise following a Gaussian distribution, this paper sets the parameter k and, combined with the neighborhood radius r used by the key point detection algorithm, takes kr as the maximum amplitude N of the Gaussian noise; the noise amplitude distribution is calculated by formula (33), where rand(a, b) denotes a random number in the range [a, b].
After the algorithm calculates the noise amplitude n, it chooses random direction angles θ = rand(0, 2π) and φ = rand(0, π/2) and calculates the unit vector l according to formula (34). Finally, the algorithm calculates the coordinate p ′ = p + nl after the noise is added, where p is the point coordinate before the noise is added and p ′ is the coordinate afterwards.
l = (cos(θ)sin(φ), sin(θ)sin(φ), cos(φ)). (34)

Outliers are a common type of noise in point clouds obtained by three-dimensional scanning, appearing as noise points far from the surface of the object. The algorithm uses random sampling to select a certain proportion of points p o in the point cloud P as outliers. For each point p o , the algorithm first calculates the normal vector n o and then increases the coordinate value along the direction of n o , with the increment set to 1 times the neighborhood radius. As shown in formula (35), the outlier set p o ′ is obtained. For more intuitive observation, it is displayed in the form of a patch.
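The outlier construction of formula (35), pushing a randomly sampled subset of points one neighborhood radius along their normals, can be sketched as follows; the random generator seeding is illustrative.

```python
import numpy as np

def make_outliers(points, normals, ratio, radius, rng=None):
    """Turn a random `ratio` of points into outliers by pushing them
    one neighbourhood `radius` along their normal: p' = p + radius * n."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float).copy()
    normals = np.asarray(normals, dtype=float)
    n_out = int(len(points) * ratio)
    idx = rng.choice(len(points), size=n_out, replace=False)  # sampled outliers
    points[idx] += radius * normals[idx]                      # formula (35)
    return points, idx
```

The returned indices identify which points became outliers, which is what the r outlier computation compares against.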
In addition, in the repeatability experiments of the key point algorithms under spatial transformation, the point cloud needs to be spatially transformed. The spatial transformation can be realized by rotation and translation, as shown in equation (36). By rotating and translating all the points in the point cloud P, the transformed point cloud P ′ is obtained.
Among them, P Trans represents the transpose of P, and the rotation matrix R and translation matrix T are obtained by formulas (37) and (38), respectively.
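The rotation-plus-translation of equation (36), with R composed from per-axis rotations as in formulas (37) and (38), can be sketched as below; the composition order Rz · Ry · Rx is an assumption.

```python
import numpy as np

def transform_cloud(P, angles, t):
    """Rotate a point cloud by Euler angles (rx, ry, rz) about the x, y,
    and z axes, then translate by t: P' = R @ P^T + T (equation (36) sketch)."""
    rx, ry, rz = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                               # combined rotation matrix
    return (R @ np.asarray(P, dtype=float).T).T + np.asarray(t, dtype=float)
```

Applying this with each of the rotation angles used in the experiments produces the transformed cloud P ′ whose key points are compared against those of P.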

(2) After detecting the key points KP of the original point cloud and KP ′ of the transformed point cloud, the common part of the two key point sets must be calculated; this is the repetition rate calculation module. As Figure 3 shows, the algorithm selects a data set and, on the models of that data set, tests how the repetition rate of the selected key point algorithm changes as the chosen condition changes. Taking the noise repetition rate r noise as an example: noise is first added to the model point cloud, then the key point detection algorithm under test is run on the point cloud before and after the noise is added, and finally Algorithm 1 or Algorithm 2 is used to calculate the repetition rate of the two sets of key points.
Descriptor matching experiments need to be combined with the PRC drawing process used to evaluate the descriptive ability of feature descriptors. The calculation process of the PRC covers the complete target recognition pipeline, including feature matching, verification, and other steps. When different key point detection algorithms are combined with the same descriptor, higher feature matching accuracy indicates higher-quality extracted key points. First, we detect the key points of the scene point cloud P scene and the model point cloud P model , respectively, and establish the feature descriptors F scene and F model . Then, we establish the correspondence between the scene features and the model features and calculate the matching accuracy rate. From the trend of the PRC curve, we can determine the descriptive strength, that is, the combined effect of the key point detection algorithm and the feature descriptor.

Spatial Transformation Repetition Rate r trans .
The original point cloud is rotated along the three coordinate axes x, y, z by π/8, π/4, π/3, π/2, π, 3π/2, and 2π; the key points before and after the rotation are detected; and the repetition rate is calculated. The repetition rates calculated on the 4 data sets are shown in Table 1, which gives the average spatial transformation repetition rate; the rotation angle-repetition rate curve shown in Figure 4 is drawn based on the average value over the 4 data sets. The six algorithms tested have a repetition rate of 1 in all tests; that is, they are invariant to spatial transformation. Some of these six algorithms achieve this invariance by selecting features that are independent of the choice of coordinate system for the subsequent description. For example, the ISS algorithm uses the three eigenvalues of the covariance matrix, 3D-SIFT uses the layers constructed from curvature, and HoNO uses the normal vector angle. Others describe features directly in the local reference coordinate system; for example, LSP and 3D-Harris both fit the quadric surface in the local reference coordinate system.

Gaussian Noise Repetition Rate r noise .
The original point cloud is corrupted with Gaussian noise with maximum amplitudes of 0.05r, 0.08r, 0.1r, 0.15r, and 0.2r. The key points before and after adding the noise are detected, and the repetition rate is calculated. The r noise values on the 4 data sets are shown in Tables 2-5. The Gaussian noise-repetition rate curve shown in Figure 5 is drawn based on the average value of r noise over the 4 data sets. From these experimental results, it can be seen that 3D-Harris uses the parameter characteristics of local quadric surface fitting and 3D-SIFT uses the parameter characteristics of the covariance matrix; both methods suppress Gaussian noise to a certain extent. The ISS algorithm also uses the eigenvalues of the covariance matrix to determine key points, but it does not apply further smoothing with a scale space as 3D-SIFT does, so it is more sensitive to noise. The LSP algorithm and the KPQ algorithm combine the principal curvatures and Gaussian curvature at the key points to form the shape index Si and the key point quality Q as rating indicators. Like the ISS algorithm, they do not consider the neighborhood distribution characteristics when selecting key points and apply no smoothing against noise, so their repetition rate is lower when the noise intensity is high.

Resolution Repetition Rate r density .
The original point cloud is downsampled at reduction rates of 50%, 70%, 80%, 90%, and 95%. The key points before and after the reduction are detected, and the repetition rate is calculated. The r density values on the 4 data sets are shown in Tables 6-9, respectively. The reduction rate-repetition rate curve shown in Figure 6 is drawn based on the average value of r density over the 4 data sets.
From the data in the table, it can be found that increasing the neighborhood radius can make the algorithm more robust to resolution changes.
Outlier Repetition Rate r outlier . Outliers are added to the original point cloud; the key points before and after the addition are detected, and the repetition rate is calculated. The r outlier values on the 4 data sets are shown in Tables 10-13, respectively. The outlier proportion-repetition rate curve shown in Figure 7 is drawn based on the average value of r outlier over the 4 data sets.
In the above experiment, the repetition rate of the key point detection algorithms decreases as the proportion of outliers increases; the methods based on covariance matrix eigenvalue decomposition show stronger stability to outliers. The above research verifies that the sports video moving target detection and tracking based on the SIFT algorithm proposed in this paper performs well.

Conclusion
In sports behavior recognition systems, many algorithms lack timing information. To make up for this deficiency, many researchers establish a time series model for the feature descriptors to further describe the behaviors and to reflect how different types of behaviors change in chronological order, so that the feature representation is more accurate and distinguishable and the classification accuracy is improved. This paper uses SIFT to recognize and analyze sports video images and to recognize and track moving targets. Finally, this paper verifies through experiments that the sports video moving target detection and tracking based on the SIFT algorithm proposed in this paper performs well.

Data Availability
The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest
The authors declare no competing interests.