Online Education Classroom Intelligent Management System Based on Tensor CS Reconstruction Model

To develop a high-efficiency intelligent online classroom management system, this article builds an artificial-intelligence classroom management system based on the tensor CS (compressed sensing) reconstruction model. This study uses a cosine function to represent the data-energy fitting of the traditional active contour model and proposes a local cosine-fitting-energy active contour model based on partial image restoration, which is used for the segmentation of images and composite images. It also proposes a new super-resolution algorithm: a low-resolution image is Fourier-transformed into the frequency domain, the frequency-domain representation is expanded and inverse-transformed to obtain an initial high-resolution image, and a new super-resolution image is then reconstructed from the frequency-domain compressed data of the high-resolution image. Finally, the performance of the model is verified and analyzed through experiments. The research results are broadly consistent with the expectations of the model.


Introduction
Smart teaching is the inevitable trend of future education.
There is a certain difference between the classroom management of smart teaching and traditional classroom management, and the corresponding teaching system needs to be combined with the smart classroom teaching mode. Based on this, this article builds an online classroom intelligent teaching management system based on machine learning algorithms [1]. The difference between the traditional classroom and the smart classroom is that the smart classroom makes the traditional classroom more "smart" [2].
That is, a smart classroom can intelligently perceive situational information such as the classroom, students, and environment through related equipment and can make corresponding "actions" by judging and processing the feedback situational information, such as various reminders to students, online questions from students, teachers' timely feedback on students' questions, and recommendations to students matched to their learning level [3]. Whether it is a smart classroom or a traditional classroom, the classroom serves as a place for students to attend classes and for students and teachers to exchange knowledge and academic ideas. Given the differences in the location and learning status of students, classroom situation information can be roughly divided into the following parts: classroom location situation information, classroom time situation information, classroom environment situation information, student situation information, and classroom equipment situation information [4].
With the popularization of computers and networks, China's online education has also developed fully, and many Chinese universities have successively opened online education classes, providing a sufficient foundation for the development of distance online teaching. At the same time, major colleges and universities have begun to develop teaching software suited to their own characteristics, which has also accelerated the development of online education to a certain extent [5]. The network teaching platform is the concrete manifestation of modern informatization in teaching. In fact, it is a kind of teaching environment: it includes not only various computer and multimedia equipment in hardware but also teaching software and operating systems in software [6]. Its purpose is to assist daily teaching, and its content mainly includes course introduction and inquiry, teaching arrangement, and announcements. Moreover, it is a comprehensive teaching system that can achieve multiple functions. Nowadays, various sectors have related online teaching platforms, including schools, hospitals, and enterprises. With the development of technology, these teaching platforms are constantly updated and upgraded, and they have achieved considerable development in both function and performance.

Related Work
Low-rank models have richer mathematical properties than sparse models. In high-dimensional data, the rank of a matrix indicates the number of nonzero singular values of the matrix, and low rank means that fewer vectors can be used to represent the structure of the matrix. The literature proposed a low-rank representation-based subspace clustering algorithm (LRR) [7]. This model considers the joint multi-subspace clustering problem, divides the sample data into corresponding representative subspaces, and combines subspace segmentation and noise recognition in one framework. Sparse representation (SR) and low-rank representation (LRR) are the two most important ways of representing a matrix. In data mining, SR is often combined with clustering; LRR can not only be used for clustering but is also commonly used in matrix recovery applications [8]. At present, sparse and low-rank subspace clustering algorithms have been extensively studied.
There are dozens of subspace clustering algorithms based on sparse and low-rank representations. On the basis of SSC, the literature extended the a priori condition of subspace independence to subspace disjointness and proposed a sparse subspace segmentation algorithm [9]. The literature required the coefficient matrix to be sparse while satisfying the positive definite condition and proposed a quadratic programming subspace division algorithm (SSQP) [10].
According to the development process of feature selection algorithms, current work tends toward combining feature correlation with multiple algorithms. The more classic cluster-based feature selection algorithms are as follows. The literature proposed the multi-cluster feature selection method (MCFS). This method uses all the input features to represent the data structure, embeds the high-dimensional data into a low-dimensional space through sparse features, sorts the features according to a regression method, and selects features that best preserve the local manifold structure [11]. The literature proposed an unsupervised discriminative feature selection (UDFS) method that makes a partial judgment on each sample and obtains the highest-scoring feature subset by solving a regularized problem [12]. However, when expressing the relationships between data, this method uses a distance function between samples, and once the function parameters are determined, all relationships use the same function, which does not conform to the law of data distribution. To solve this problem, a local learning clustering feature selection method (LLCFS) was proposed, which introduces correlated features into a regularized local learning model so that the model can be optimized iteratively [13]. However, this method actually optimizes two separate objective functions, one for structure learning and one for feature selection, and neither its theoretical convergence nor its practical results are good. Most existing unsupervised feature selection methods cannot accurately estimate the data structure: on the one hand, the real structure of the data is required for feature recognition, and on the other hand, the features are required to accurately estimate the real data structure. Based on this, an adaptive learning feature selection method is proposed.
This method first extracts the global and local structure of the data, then obtains relevant features through unsupervised feature learning, and finally builds a sparse map from the obtained relevant features [14]. The adaptive learning feature selection algorithm integrates structural features and unsupervised learning into the same framework; it is a feature selection algorithm that can adapt to the data structure [15].

Construction of the Adaptive Learning Dictionary
The i-th noise-containing image block u_i, of assumed size √n × √n, in the monitored image Y is considered, and u_i is arranged into a column vector y_i ∈ R^n. To establish a sparse model, it is necessary to construct an over-complete dictionary D ∈ R^(n×k), where k ≫ n. For the image Y, the sparse representation coefficients α_ij over the dictionary D are obtained from

{α̂_ij, X̂} = arg min_(α_ij, X) λ‖X − Y‖²_2 + Σ_ij μ_ij ‖α_ij‖_0 + Σ_ij ‖Dα_ij − R_ij X‖²_2, (1)

where i and j are the data row and column indices, respectively; that is, the image block is located at position (i, j). The first term on the right measures the similarity between the noisy image Y and the denoised image X, with λ a parameter controlling the strength of this regularization term. The second term is the sparsity constraint, where μ_ij is the penalty factor. In the third term, R_ij is a cropping operator that extracts the small image block at pixel (i, j) and converts it into the column vector R_ij X; Dα_ij is the small image block reconstructed from the corresponding coefficient vector, and the difference between Dα_ij and R_ij X should be as small as possible.

Solving Sparse Model.
We assume that the dictionary D in the above formula is known; the two unknowns, the output image X and the sparse coefficients α_ij, cannot be computed simultaneously. Therefore, to solve for α_ij and X, we first initialize X = Y and then find the optimal coefficient α_ij:

α̂_ij = arg min_α μ_ij ‖α‖_0 + ‖Dα − R_ij X‖²_2. (2)

In the above formula, the coefficient column vector α̂_ij of R_ij X is calculated by selecting the value of μ_ij. Once α̂_ij is obtained, X is updated as

X̂ = arg min_X λ‖X − Y‖²_2 + Σ_ij ‖Dα̂_ij − R_ij X‖²_2. (3)

The above formula is a simple quadratic problem, and its closed-form solution is

X̂ = (λI + Σ_ij R_ij^T R_ij)^(−1) (λY + Σ_ij R_ij^T Dα̂_ij), (4)

where I is the identity matrix.
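As a concrete illustration, the closed-form image update above reduces to per-pixel weighted averaging, because Σ R_ij^T R_ij is a diagonal matrix that counts how many patches cover each pixel. The following NumPy sketch makes that explicit; the function and variable names are ours, not from the paper:

```python
import numpy as np

def update_image(Y, patches_hat, patch_size, lam):
    """Closed-form update X = (lam*I + sum R^T R)^(-1) (lam*Y + sum R^T D alpha).
    Since R^T R is diagonal (per-pixel patch coverage counts), the matrix
    inverse is an elementwise division."""
    p = patch_size
    num = lam * Y.copy()            # lam * Y term
    den = lam * np.ones_like(Y)     # lam * I term
    # patches_hat maps top-left corner (i, j) -> reconstructed p x p block D @ alpha_ij
    for (i, j), block in patches_hat.items():
        num[i:i + p, j:j + p] += block
        den[i:i + p, j:j + p] += 1.0
    return num / den
```

Each output pixel is thus a weighted average of the noisy observation and all overlapping denoised patches covering it.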

Dictionary Learning
Once the dictionary D is known, the orthogonal matching pursuit (OMP) algorithm can be used to solve equation (2) and estimate the optimized coefficient matrix {α̂_ij}_(j=1)^M. Moreover, the dictionary D can be adjusted according to the different characteristics of the band images, which may contain noise.
To build a dictionary, formula (1) is redefined with the dictionary D as an additional unknown:

{D̂, α̂_ij, X̂} = arg min_(D, α_ij, X) λ‖X − Y‖²_2 + Σ_ij μ_ij ‖α_ij‖_0 + Σ_ij ‖Dα_ij − R_ij X‖²_2. (5)

First, the dictionary D and the image X to be denoised are initialized. Then, the OMP algorithm is used to estimate the sparse representation coefficients α_ij, as in formula (2). Second, based on the estimated coefficients α_ij and the initial denoised image X, the dictionary D in the above formula is updated using the K-SVD algorithm. Finally, formula (4) is used to update the denoised image X.
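The OMP step of this loop can be sketched in a few lines of NumPy. This is a generic greedy pursuit with a fixed sparsity budget, not the paper's exact implementation:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k
    atoms (columns) of D, refitting the coefficients at every step."""
    alpha = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit of the coefficients on the chosen support
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    alpha[support] = coeffs
    return alpha
```

The K-SVD dictionary update then refits one atom at a time via an SVD of the residual restricted to the patches that use that atom.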
All visible and near-infrared bands of 13 Landsat-7 ETM+ SLC-ON images are used, and the K-SVD algorithm is used for redundant dictionary training. To save space, only 5 of the 13 images are displayed for each band. In Figure 1, from left to right, the first to fifth columns correspond to ETM+ images of the Flathead Lake area of Montana, USA, acquired on 08/03/1999, 09/20/1999, 10/08/2000, 07/07/2001, and 10/14/2002. For each column, the rows from top to bottom correspond to bands 1, 2, 3, 4, 5, and 7, respectively. The last column displays the 6 trained dictionaries corresponding to the 6 bands of the Landsat-7 image.

Image Restoration Model Based on Sparse Representation
To use the observed image y ∈ C^M to repair the missing data, the pixel-missing model can be expressed as

y = Φx, (6)

where Φ ∈ C^(M×N) is the missing (sampling) operator, x is the image without any data loss (the repaired image), and y is the observed image with missing data. The missing pixel data are effectively estimated using the nonlocal and global information of the Landsat-7 ETM+ SLC-ON image. At the same time, a mathematical model is established through the relationship between the image y to be repaired and the repaired image x. The sparse model for image restoration can be expressed as

y = Φx + ε, (7)

where ε is additive noise. For the over-complete dictionary D, the image to be repaired can be sparsely decomposed as

x = Dα, (8)

where α ∈ C^M is the sparse representation (SR) coefficient; its sparsity is measured by ‖α‖_0, the l_0 pseudo-norm, i.e., the number of nonzero elements of α. Substituting formula (8) into formula (7) yields

α̂ = arg min_α ‖y − ΦDα‖²_2 s.t. ‖α‖_0 ≤ k, (9)

where k is the sparsity threshold. However, minimizing the l_0 norm of α is a non-convex, NP-hard optimization problem and is difficult to solve directly; moreover, when the image contains noise, the solution of the above formula is unstable.

[Figure 1 caption: the 6 dictionaries trained from the Landsat-7 ETM+ SLC-ON image of Flathead Lake, Montana, USA. The rows from top to bottom correspond to bands 1, 2, 3, 4, 5, and 7; in each row, the first to fifth images from the left are 5 of the 13 images used to train the dictionary, and the trained dictionary is displayed in the last column.]
To avoid this problem, the convex l_1 norm is used to replace the non-convex l_0 norm; that is, the non-convex optimization problem above is transformed into the convex optimization problem

α̂ = arg min_α ‖α‖_1 s.t. ‖y − ΦDα‖_2 ≤ ε.

Algorithms such as the alternating direction method (ADM) can optimize this formula effectively. By properly selecting the regularization parameter λ, the above formula is equivalent to the unconstrained optimization problem

α̂ = arg min_α ‖y − ΦDα‖²_2 + λ‖α‖_1.

According to the expanded form of the above formula, the structured sparse (SS) image repair model can be realized as

Â = arg min_A ‖Y − ΦDA‖²_F + λ‖A‖_1,

where A is the sparse coefficient matrix, Â is the estimated sparse coefficient matrix, and Y is the restored image. Algorithms such as the alternating direction method of multipliers (ADMM) and split Bregman can minimize this formula effectively. In image restoration theory based on structured sparse representation, nonlocal self-similarity methods are often used, such as the nonlocal method of Dong et al. [16] and the simultaneous sparse coding (SSC) method of Banerjee and Chatterjee [17].
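The l_1-relaxed inpainting problem can also be solved with simple proximal-gradient iterations. Below is an ISTA sketch (ADMM or split Bregman, as mentioned above, would converge faster); the masking model, function names, and parameter values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_inpaint(y, mask, D, lam=0.01, n_iter=300):
    """ISTA for min_a 0.5*||y - M D a||^2 + lam*||a||_1, where the missing
    operator Phi is modeled as a 0/1 diagonal mask M and y holds zeros at
    the missing positions."""
    A = mask[:, None] * D                    # M D: rows of D at missing pixels zeroed
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ alpha - y)
        alpha = soft_threshold(alpha - step * grad, lam * step)
    return D @ alpha                         # restored signal x = D alpha
```

With an over-complete D, atoms supported on observed pixels propagate structure into the masked positions; with the trivial identity dictionary the missing entries simply stay at zero.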

Image Restoration Algorithm Based on Nonlocal Low-Rank Regularization
For Landsat-7 ETM+ SLC-OFF images, a non-convex nonlocal low-rank regularization model is proposed. The non-convex regularization model combines a group of self-similar feature blocks with a low-rank approximation of the sparse representation. Nonlocal self-similarity means intercepting a window in the image and selecting a small image block in the window as a sample block. The sample block is compared with the other image blocks in the window, and the m − 1 most similar blocks are found, so that there are m very similar image blocks in the entire window. The sample block and its m − 1 similar blocks are converted into column vectors and arranged into a matrix, and the rank of this matrix is very low. This low rank is important prior information and is of great significance to the establishment and solution of the image restoration model.
Since the US Landsat-7 ETM+ image data form a large data set, there are a sufficient number of similar image patches of size √n × √n in the image x, and a given sample patch x_i^(\Ω) is grouped according to similarity. Here, \Ω denotes the effective part; that is, the missing pixels in the vector x_i are set to 0, and the remaining set of nonzero elements is denoted x_i^(\Ω). The effective part of an image patch does not need to be updated during patch repair. A given sample patch should contain no more than 3 missing pixels, whose values are set to 0.
In the image window x, small image blocks are intercepted. If the pixel data corresponding to the effective part of the block at position j are similar to the effective part of the sample block, the similar block is denoted x_ij^(\Ω) ∈ C^n; pixels outside the effective part keep the value 0. For each sample block x_i^(\Ω) in a local window, for example 70 × 70 or 90 × 90, the K-nearest-neighbor (KNN) algorithm is used for preliminary classification:

G_i = { j : ‖x_i^(\Ω) − x_ij^(\Ω)‖²_2 ≤ T },

where T is the similarity threshold and G_i collects the positions of the patches similar to the sample patch x_i^(\Ω). When more than m − 1 patches are similar to the sample patch, only the m − 1 most similar are selected. Therefore, we obtain a matrix.
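The KNN grouping of valid (zero-filled) patch vectors can be sketched as follows; `patches` is an assumed array whose rows are vectorized blocks, and the threshold T and group size m follow the text:

```python
import numpy as np

def group_similar_patches(patches, ref_idx, m, T):
    """Group a reference patch with its nearest neighbours under squared
    Euclidean distance (computed on the zero-filled valid entries), keeping
    at most m patches whose distance to the reference is within T."""
    ref = patches[ref_idx]
    d = np.sum((patches - ref) ** 2, axis=1)   # distances to the reference patch
    order = np.argsort(d)                      # most similar first (ref itself has d=0)
    keep = [i for i in order if d[i] <= T][:m]
    # columns of the returned n x m matrix are the grouped patch vectors
    return np.stack([patches[i] for i in keep], axis=1)
```

The returned matrix is the X_i whose columns are the sample patch and its most similar neighbours; its approximate low rank is what the next step exploits.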

[Figure: wireless network structure, with wireless AP1 and AP2 serving Classroom 1 and Classroom 2.]

The resulting matrix X_i ∈ C^(n×m) contains the sample patch and its m − 1 most similar patches; X_i^(\Ω) denotes its effective part. Each column of X_i^(\Ω) represents a small image block similar to the sample block x_i^(\Ω). Since the data volume of a Landsat-7 ETM+ image may be very large, for efficient operation of the subsequent singular value decomposition (SVD) the matrix X_i^(\Ω) needs to be of low rank; that is, several adjacent image patches in the local window must be very similar. Therefore, the following similarity discrimination method is established:

H_i = { j : ‖x_i^(\Ω) − x_j^(\Ω)‖²_2 ≤ T_1 },

where T_1 is the similarity threshold and H_i records the positions of similar sample patches. After regrouping, similar image patches are merged using the following rule; that is, two image patches are also defined as similar if

‖x_ij^(\Ω) − x_jk^(\Ω)‖²_2 ≤ T_1,

where x_ij^(\Ω) and x_jk^(\Ω) are image patches in X_i and X_j, respectively. Among all image patches that meet these conditions, only the most similar m patches are selected; if no image block meets the conditions, the matrices are not merged. After regrouping the image patches, the merged matrix of the two sample patches x_i^(\Ω) and x_j^(\Ω) is obtained, with Y_i ∈ C^(n×m), and this process is repeated until all qualifying image blocks are merged. The merged image blocks have similar structural features, and the matrix Y_i is low rank.
In actual situations, Y_i^(\Ω) may be corrupted by noise, which leads to deviations from the expected low-rank structure. The method used here is to decompose the matrix as

Y_i^(\Ω) = Z_i^(\Ω) + W_i^(\Ω),

where Z_i^(\Ω) and W_i^(\Ω) represent the low-rank matrix and the Gaussian noise matrix, respectively. Then, the low-rank matrix Z_i^(\Ω) can be calculated by solving the minimization problem

Ẑ_i^(\Ω) = arg min_Z rank(Z) s.t. ‖Y_i^(\Ω) − Z‖²_F ≤ σ²_w, (21)

where ‖·‖_F represents the Frobenius norm and σ²_w is the Gaussian noise variance. However, since this rank minimization is an NP-hard problem, the equation cannot be solved directly. To obtain an approximate solution, the convex nuclear norm ‖·‖_* (the sum of singular values) is usually used in place of the rank. Although the nuclear norm model is theoretically mature, many references show that a smooth, non-convex low-rank surrogate produces better denoising results. Recently, a logdet-regularized non-convex surrogate model was proposed. Comparison of the non-convex logdet surrogate with the nuclear norm surrogate under standard conditions shows that, when solving the rank minimization problem, the logdet model approximates the rank function better than the nuclear norm model.
Generally speaking, the matrix Z_i^(\Ω) is neither a square matrix nor a positive semi-definite matrix. The above formula can therefore be rewritten through the eigen-decomposition (Z_i^(\Ω))^T Z_i^(\Ω) = UΣU^T, where Σ is a diagonal matrix whose diagonal elements are the eigenvalues of (Z_i^(\Ω))^T Z_i^(\Ω); since Σ^(1/2) is also diagonal, its diagonal elements are the singular values σ_j of Z_i^(\Ω). Applying the logdet model to Z_i^(\Ω), a low-rank approximation model is obtained:

Ẑ_i^(\Ω) = arg min_Z Σ_j log(σ_j(Z) + ε) s.t. ‖Y_i^(\Ω) − Z‖²_F ≤ σ²_w.

In fact, this minimization problem is equivalent to the unconstrained optimization problem

Ẑ_i^(\Ω) = arg min_Z ‖Y_i^(\Ω) − Z‖²_F + λ Σ_j log(σ_j(Z) + ε).

For each sample image block in the monitoring image, the low-rank matrix Z_i^(\Ω) approximating Y_i^(\Ω) can be obtained by solving the above formula.
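A common way to minimize such a logdet-type surrogate is iteratively reweighted singular-value shrinkage: large singular values receive small weights (and are kept), while small ones receive large weights (and are suppressed). The following NumPy sketch uses illustrative parameter choices, not values from the paper:

```python
import numpy as np

def logdet_lowrank(Y, lam=0.2, eps=0.01, n_iter=5):
    """Iteratively reweighted singular-value shrinkage for the logdet
    surrogate sum_j log(sigma_j + eps). Each weight w_j = 1/(sigma_j + eps)
    is small for large singular values and large for small ones, so noise
    directions are shrunk toward zero while structure survives."""
    U, s0, Vt = np.linalg.svd(Y, full_matrices=False)
    s = s0.copy()
    for _ in range(n_iter):
        w = 1.0 / (s + eps)                # reweighting from the current estimate
        s = np.maximum(s0 - lam * w, 0.0)  # weighted soft-thresholding of singular values
    return (U * s) @ Vt                    # low-rank reconstruction Z
```

On a grouped patch matrix Y_i this yields the low-rank estimate Z_i; the non-convex weighting is what distinguishes it from plain nuclear-norm (uniform) thresholding.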

Model Building
As the gateway layer of the entire system, the wireless router is responsible for the identification of mobile devices, the judgment of entering and leaving the classroom, the recording of attendance time, and the transmission of attendance data.
The MAC address is used as the unique identifier of each device; it can be associated with the student's identity via the MAC information of the mobile phone, so the student's identity can be confirmed by obtaining the MAC information of the device. In this system, a wireless router running the OpenWrt system captures the probe request frames sent by devices at the data link layer, and the MAC information can be parsed according to the frame format.
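As a minimal illustration of that parsing step (not the paper's actual router code), the transmitter address addr2 of an 802.11 probe request sits at a fixed offset in the management-frame header:

```python
def parse_probe_request_mac(frame: bytes):
    """Return the transmitter MAC (addr2) from a raw 802.11 probe-request
    frame, or None if the frame is not a probe request.
    A probe request is a management frame (type 0) with subtype 4, so the
    first frame-control byte is 0x40; after the 2-byte frame control,
    2-byte duration, and 6-byte addr1, addr2 occupies bytes 10..15."""
    if len(frame) < 16 or frame[0] != 0x40:
        return None
    return ":".join(f"{b:02x}" for b in frame[10:16])
```

In practice the router would capture such frames on a monitor-mode interface and feed the raw bytes to a parser like this.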
The wireless network structure model of this system is shown in Figure 1.
The camera module uses a Hikvision DS-2CD893PF-E camera. The camera is connected directly to the router, and the camera network is configured. A Linux system timer controls the camera to photograph the students in the classroom during class time and stores the pictures on the server. The overall structure of the camera module is shown in Figure 2. The overall server architecture, shown in Figure 3, is mainly divided into the Web server, the Linux server, and the database. A Tomcat server is built on the Web server side, and all data interaction takes the form of URL interfaces. The Linux server runs Ubuntu and is mainly used to control camera snapshot storage: using the crontab timer, the start and end times of each course are used as the trigger times of a scheduled camera-snapshot task. During class time, the timer starts, controls the camera to take pictures, and uploads the picture information to the Web server. The Web server accurately crops out each student's classroom picture by querying the location information of the student's mobile-phone attendance, generates personal attendance picture information, and stores it in the database. The relational database MySQL serves as the server database, storing student information, teacher information, course information, course selection information, attendance information, picture information, etc.
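The course-time-driven crontab scheduling described above can be sketched by generating one crontab entry per course start time; the script path is an illustrative assumption:

```python
def crontab_line(start_hour, start_minute, script="/usr/local/bin/snap.sh"):
    """Build a crontab entry (minute hour day-of-month month day-of-week)
    that fires at a course's start time on weekdays, triggering the
    camera-snapshot script."""
    return f"{start_minute} {start_hour} * * 1-5 {script}"
```

Writing one such line per course into the crontab reproduces the behavior described: the snapshot task starts exactly at class time.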
Around classroom attendance management, router-based attendance is designed, which mainly records the times when students arrive at and leave class. The generated attendance information is stored in the database; the timer controls the camera to take pictures during class time, and the pictures are stored in the database. The server calculates each student's course conduct score from the router's attendance data and crops the camera shots according to the student's mobile-phone sign-in location. This makes traditional classroom attendance more streamlined and standardized and enables all-around classroom management. The data processing relationship between the modules is shown in Figure 4.
The main process receives the message, obtains the MAC information, queries the student information table stdinfo in the database, and looks up the student ID through the MAC information to confirm the student's identity. It then uses a structure linked list to store the association between the MAC information, the student ID, and the scan status of the student's mobile phone. The main process traverses the scan-status linked list every ten seconds to determine whether the student is in sign-in or sign-out status. Once the conditions are met, the sign-in or sign-out information is generated and sent to the server using cURL, and the server stores it in the attendance information table signinfo in the database. The process by which the router obtains the mobile-phone MAC information and generates the attendance information is shown in Figure 5. The client functions are provided by the management module, the teacher-student interaction module, and the recommendation reminder module. The server-side function is mainly the system administrator's maintenance of the system, completed by the information management module. The overall system function module diagram is shown in Figure 6, and each module is described in detail as follows: (1) management module: manages the user information involved and user-related information, including the registration, modification, and maintenance of user information; the designation, modification, and recording of attendance information; the calculation and viewing of score information; and the arrangement, submission, and inspection of assignment information.
(2) Teacher-student interaction module: the interaction between students and teachers in the classroom, including teacher teaching behavior and student feedback behavior. (3) Recommendation module: the recommendation reminder service introduced above, which mainly includes time reminders, location reminders, learning-efficiency reminders, and learning-material recommendations. (4) Information management module: the management and maintenance functions of the system administrators, which ensure the stable, safe, and real-time operation of the system; it mainly includes the entry, modification, and deletion of class information, course information, and user information, as well as certain system maintenance.
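The router's sign-in/sign-out logic described above (a scan-status list swept every ten seconds) can be sketched as a small state tracker; the sign-out timeout is an assumption, not a value from the paper:

```python
import time

class AttendanceTracker:
    """Sketch of the router's scan-status list: a device is marked present
    when its probe requests are observed and signed out after a period of
    silence. The sweep would run on a periodic (e.g. 10 s) timer."""

    def __init__(self, signout_after=60.0):
        self.last_seen = {}     # MAC -> timestamp of the last observed probe
        self.present = set()    # MACs currently considered in class
        self.signout_after = signout_after

    def observe(self, mac, now=None):
        """Record a probe request from `mac` (sign-in if not yet present)."""
        now = time.time() if now is None else now
        self.last_seen[mac] = now
        self.present.add(mac)

    def sweep(self, now=None):
        """Periodic pass over the scan-status list: sign out every device
        not seen within the timeout and return those sign-out events."""
        now = time.time() if now is None else now
        gone = [m for m in self.present if now - self.last_seen[m] > self.signout_after]
        for m in gone:
            self.present.discard(m)
        return gone
```

The real system would forward each sign-in/sign-out event to the server (via cURL in the paper) instead of just returning it.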

System Performance Verification
Next, this article analyzes the performance of the constructed model. The model mainly uses image recognition to identify the characteristics of students and formulates corresponding strategies based on the recognition results. First, the accuracy of student image feature recognition is analyzed using 96 sets of data. The results are shown in Table 1 and Figure 7.
It can be seen from the above chart that the model constructed in this study performs well in the accuracy of teaching image recognition. Next, this article scores the model's decision-making effect, and the results are shown in Table 2 and Figure 8.

Conclusion
This study uses the tensor CS reconstruction model to construct an online education classroom intelligent management system. It uses a cosine function to represent the data-energy fitting of the traditional active contour model and proposes a local cosine-fitting-energy active contour model based on partial image restoration, used for the segmentation of images and composite images. The model can segment composite images with uneven intensity and extract the region of interest. The proposed model is compared with the convex Mumford-Shah and thresholding model (CVMST), the local binary fitting (LBF) model, and the L0-regularized Mumford-Shah (L0MS) model. The results show that the model is more efficient and robust for segmenting noisy and blurred images, with computation time close to or faster than these advanced models. In addition, this study describes the model in a discrete form, which makes it easier to add a regularization term to control the segmentation. Finally, the proposed improved algorithm is used to segment the image and obtain three-dimensional visualization results. The experimental results show that the algorithm proposed in this study achieves a certain teaching effect.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.