Identification of Coupled Map Lattice Based on Compressed Sensing

A novel approach to the parameter identification of coupled map lattices (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated topics and identify the weighted parameters using the recovery algorithms of compressed sensing. Specifically, we first transform the parameter identification problem of a CML into the sparse recovery problem of an underdetermined linear system. Compressed sensing provides a feasible method for solving underdetermined linear systems if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element in the CML system is sparse, our proposed approach can recover all the weighted parameters using only about M samplings, far fewer than the number of lattice elements N. Another significant advantage is that our approach remains effective when the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and of noise on the recovery rate.


Introduction
Coupled map lattice (CML), developed by Kaneko [1][2][3], is a dynamical system that can be described as an array of smaller finite-dimensional subsystems endowed with local interactions. The evolution of the subsystem at one lattice element depends not only on the state of the element itself, but also on the states of the other elements. If the coupling interaction is a function of all the elements, then the CML is called a globally coupled map lattice. So far, a very large number of remarkable results on this topic have been obtained by researchers working in different areas of physics, biology, mathematics, and engineering. However, most papers have concentrated on analysing the evolutionary behaviours and properties of CML [4][5][6], while the parameter identification or reconstruction of CML from measured data has received relatively little attention. Over the past fifteen years, several methods for the identification problem have been presented [7][8][9][10][11]. Billings and Coca [8] introduced a novel approach to the identification of CML that exploits the regularity of the CML model. Subsequently, Billings et al. [10] pointed out some drawbacks of the methods in [7, 9] and presented a multiresolution approximation of the underlying spatiotemporal dynamics using B-spline wavelet and scaling functions. Ning et al. [11] first transformed the nonlinear spatiotemporal system into a class of multi-input multi-output (MIMO) partially linear systems and then proposed an effective online identification algorithm for this nonlinear spatiotemporal system. Later, Ning et al. [12] proposed a novel kernel-based learning algorithm that can effectively achieve accurate derivative estimation for nonlinear functions in the time domain. Kohar et al.
[13] studied the usefulness of coupled redundancy as a mechanism for the reduction of local noise in coupled map lattices and investigated the role of network topology, coupling strength, and iteration number in this mechanism. Sometimes, however, in order to improve the efficiency of parameter identification, we need to reduce the number of observations or samplings while still identifying the parameters accurately. In addition, from an engineering perspective, we cannot ensure that the received data are truly generated by the CML system, because the sampled data may be contaminated with noise during indispensable tasks such as data sampling and data transmission. To the best of our knowledge, most papers on this topic (including those mentioned above) did not consider these two significant problems, and the parameter identification of CML from a very small number of observations or samplings is still regarded as a very intractable problem. Another challenging problem is how to accurately identify all the weighted parameters when the observed data are contaminated with noise.
The compressed sensing theory, introduced in [14-17], breaks the limitation of the Nyquist-Shannon sampling theorem and is widely used in various fields, such as information theory, medical imaging, earth sciences, and astronomy. The standard model of compressed sensing is an underdetermined linear system y = Ax, where A ∈ R^{M×N}, x ∈ R^N, y ∈ R^M, and M < N. Clearly, given a coefficient matrix A and an observation vector y, there exist infinitely many x such that y = Ax. In order to recover the unique vector x from the observed vector y, two conditions are needed: the vector x must be sparse, and the matrix A must satisfy some special properties, such as the restricted isometry property (RIP) or mutual coherence [18]. In other words, if the matrix A satisfies the RIP, then we can recover the N-dimensional sparse vector x from the M-dimensional observed vector y. Further, the RIP is sufficient for a variety of algorithms to successfully recover a sparse vector from noisy samplings. These two useful theoretical results about compressed sensing provide insights for solving the above stubborn problems in the identification of CML.
Different from the existing methods for the parameter identification of CML systems, we present a new approach using compressed sensing theory in this paper. Specifically, we first transform the parameter identification problem into the reconstruction problem of an underdetermined linear system through a simple mathematical transformation. Subsequently, we give a lower bound on the mutual coherence of the sensing matrix using the inverse formula of the Cauchy-Schwarz inequality and then prove that it satisfies the RIP with suitable parameters. Finally, we exploit the classic Orthogonal Matching Pursuit algorithm from compressed sensing to recover the weighted parameters of the CML system. The result shows that if the weighted vector of every lattice element has at most s nonzero entries, then we need only M observations, far fewer than N, to accurately identify the weighted vector. Another critical advantage of our approach is that the reconstruction algorithm still works if the sampled data are contaminated with certain types of noise (e.g., Gaussian noise, uniform noise). We investigate the effects of the coupling parameter and of noise on the recovery rate in our simulations. Interestingly, we found that when the sampled data are contaminated with uniform noise, the recovery effects are even better than when the observed data are noise-free.
The remainder of this paper is organized as follows. We recall the standard CML model in Section 2. In Section 3 we introduce some background on compressed sensing theory. We present the theoretical results about our proposed identification method based on compressed sensing in Section 4 and show our simulations in Section 5. Finally, we draw our conclusions in Section 6.

The CML Model
We give a brief review of the standard CML model in this section. Generally speaking, the widely used CMLs can be divided into two categories: the diffusive (DCML) model and the global (GCML) model. The difference between the two models is whether the evolution of the subsystem at a given lattice element depends on the states of all the other elements. If we set some weighted parameters of a standard GCML model to 0, it can be seen as a DCML model. Without loss of generality, we consider the following GCML model:

x_i(t+1) = (1 − ε) f(x_i(t)) + (ε/N) Σ_{j=1}^{N} c_{ij} g(x_j(t)),  (1)

where x_i(t) defines the state of the lattice element i (1 ≤ i ≤ N) at a discrete time step t (1 ≤ t ≤ M), ε ∈ (0, 1) is the coupling parameter, f and g are maps describing the local dynamics and the nonlocal coupling, and c_i = (c_{i1}, c_{i2}, ..., c_{iN}) ∈ {0, 1}^{1×N} is the unknown weighted vector of the lattice element i. The two widely used forms of g(x), namely, g(x) = f(x) and g(x) = x, correspond to synchronous and asynchronous updates of the lattice, respectively. For the sake of simplicity, we consider only the synchronous case in this paper; that is, g(x) = f(x). In many scenarios, f(x) is the standard logistic map, which can be written as

f(x) = 4x(1 − x),  (2)

where x ∈ (0, 1) and the range eventually lies in the interval (0, 1).
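The synchronous GCML update above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; it assumes the logistic local map f(x) = 4x(1 − x), g = f, and a 0/1 weight matrix C whose ith row is the weighted vector c_i.

```python
import numpy as np

def logistic(x):
    # Local map f(x) = 4x(1 - x), which keeps states inside [0, 1].
    return 4.0 * x * (1.0 - x)

def gcml_step(x, C, eps):
    """One synchronous update of the GCML.

    x   : state vector of the N lattice elements at time t
    C   : N x N weight matrix, row i holds the weighted vector c_i
    eps : coupling parameter in (0, 1)
    """
    fx = logistic(x)
    # x_i(t+1) = (1-eps) f(x_i(t)) + (eps/N) sum_j c_ij f(x_j(t))
    return (1.0 - eps) * fx + (eps / x.size) * (C @ fx)

rng = np.random.default_rng(0)
N = 8
C = (rng.random((N, N)) < 0.3).astype(float)   # sparse 0/1 weighted vectors
x = rng.random(N)                               # initial states in (0, 1)
for _ in range(50):
    x = gcml_step(x, C, 0.1)
```

Because f maps [0, 1] into [0, 1] and the coupling term is a convex-type combination, the lattice states remain bounded in [0, 1] under this update.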

Compressed Sensing
In the field of compressed sensing, researchers mainly concentrate on designing good sensing matrices and efficient reconstruction algorithms. Compressed sensing provides a method for obtaining a unique solution of an underdetermined linear system by taking advantage of the prior knowledge that the true solution is sparse. In mathematical terms, we consider the following underdetermined linear system:

y = Ax,  (3)

where A ∈ R^{M×N} (M < N), x ∈ R^N, and y ∈ R^M. This equation depicts the process in which the original N-dimensional data x is compressed into the M-dimensional data y. In this formula, A denotes the sensing matrix and y is called the observed vector or observed values. A vector x is said to be s-sparse when it has at most s nonzero entries. For an s-sparse original vector x ∈ R^N, if the sensing matrix satisfies some suitable condition, then there exist algorithms that recover the vector x from the observed vector y. Next, we recall some fundamental results about sensing matrices and reconstruction algorithms in the following two subsections.
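A toy example illustrates why sparsity singles out one solution among infinitely many. The 2×4 system below is hypothetical and chosen only for illustration: the minimum-norm solution returned by the pseudoinverse is dense, while the generating solution is 1-sparse.

```python
import numpy as np

# An underdetermined system y = Ax with M = 2 equations and N = 4 unknowns.
A = np.array([[1., 0., 1., 0.],
              [0., 1., 1., 1.]])
x_sparse = np.array([0., 0., 2., 0.])   # the 1-sparse solution we hope to find
y = A @ x_sparse

# The classic minimum-norm solution solves the same equations but is dense.
x_min_norm = np.linalg.pinv(A) @ y
```

Both x_sparse and x_min_norm satisfy y = Ax exactly, so without the sparsity prior there is no principled way to prefer one over the other.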

Sensing Matrix.
In this subsection, we address two significant concepts frequently used in compressed sensing theory: the RIP and mutual coherence. The famous RIP was first presented in [14] and is defined as follows.
Definition 1. A matrix A satisfies the RIP of order s if there exists a δ_s ∈ (0, 1) such that

(1 − δ_s)‖v‖₂² ≤ ‖Av‖₂² ≤ (1 + δ_s)‖v‖₂²  (4)

holds for all s-sparse vectors v.
In fact, the RIP guarantees that the original vector can be exactly recovered from the observed values. Furthermore, if the matrix A satisfies the RIP, this is also sufficient for a variety of algorithms to recover the sparse vector from observed data contaminated with noise. The following lemma [19] addresses noiseless recovery when the matrix A satisfies the RIP.
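The RIP cannot be certified efficiently for a specific matrix, but the inequality in Definition 1 can be probed empirically by drawing random sparse vectors. The sketch below is an illustrative experiment, not part of the paper's method: it estimates the smallest δ_s consistent with the sampled vectors for a Gaussian sensing matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, s = 64, 256, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)   # columns have unit norm on average

ratios = []
for _ in range(1000):
    v = np.zeros(N)
    v[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
    # this ratio must stay in [1 - delta_s, 1 + delta_s] if the RIP of order s holds
    ratios.append(np.linalg.norm(A @ v) ** 2 / np.linalg.norm(v) ** 2)

delta_emp = max(max(ratios) - 1.0, 1.0 - min(ratios))
```

Note that delta_emp only lower-bounds the true δ_s (the RIP quantifies the worst case over all s-sparse vectors), but it shows how tightly a Gaussian matrix preserves the norms of sparse vectors.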

Lemma 2.
Let A be the sensing matrix in (3) and suppose it satisfies the RIP of order 2s with δ_{2s} < √2 − 1. Then the original s-sparse vector can be exactly recovered.

Baraniuk et al. [20] have proved that a random sensing matrix, whose entries are identically and independently sampled from a Gaussian or sub-Gaussian distribution, satisfies the RIP with overwhelming probability. Similar to a Gaussian random matrix, a Bernoulli matrix or a Random Discrete Fourier Transform (RDFT) matrix can also be used as a sensing matrix. However, given a specific matrix A, it is very hard to verify whether A satisfies the RIP or not. The concept of mutual coherence provides a simpler alternative [18, 21].

Definition 3. Given a matrix A ∈ R^{M×N} (M < N), let a_j denote the jth column of A. The mutual coherence μ(A) of the matrix A is the largest absolute normalized inner product between any two distinct columns of A; namely,

μ(A) = max_{1≤i<j≤N} |⟨a_i, a_j⟩| / (‖a_i‖₂ ‖a_j‖₂).  (5)

From the Cauchy-Schwarz inequality |⟨a_i, a_j⟩| ≤ ‖a_i‖₂ ‖a_j‖₂, it is clear that the mutual coherence of any matrix A is bounded by 1. In order to complete our proofs, we also recall the Gershgorin circle theorem [22].

Lemma 4 (Gershgorin circle theorem). Let M = (m_{hl}) be an n × n matrix. Then every eigenvalue of M lies in at least one of the discs {z : |z − m_{hh}| ≤ Σ_{l≠h} |m_{hl}|}, 1 ≤ h ≤ n.
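Unlike the RIP, the mutual coherence of Definition 3 is directly computable. A short sketch, using a hypothetical random matrix for illustration:

```python
import numpy as np

def mutual_coherence(A):
    """mu(A): largest |<a_i, a_j>| / (||a_i|| * ||a_j||) over distinct columns."""
    G = A / np.linalg.norm(A, axis=0)   # normalize every column to unit length
    C = np.abs(G.T @ G)                 # |cosines| between all column pairs
    np.fill_diagonal(C, 0.0)            # ignore the trivial i = j pairs
    return float(C.max())

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
mu = mutual_coherence(A)
```

By Cauchy-Schwarz, mu never exceeds 1, and mu = 0 only for matrices with mutually orthogonal columns; the recovery guarantees in Section 4 depend on this single number.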

Recovery Algorithm.
A straightforward approach to obtaining the original s-sparse vector x from the underdetermined system (3) is the optimization problem

min ‖x‖₀  subject to  y = Ax,  (6)

which is called the ℓ₀ optimization problem. This problem is very difficult to solve and is NP-hard in the general case [14]. Owing to the convexity of the ℓ₁ norm, a classic method used in compressed sensing is to replace ‖x‖₀ with ‖x‖₁; that is,

min ‖x‖₁  subject to  y = Ax.  (7)

In fact, Candès [19] has pointed out that the solution of (6) coincides with that of (7) when A satisfies the RIP with δ_{2s} < √2 − 1. Although convex optimization techniques are powerful for sparse vector reconstruction, there also exists a variety of greedy algorithms [23][24][25] for solving this problem. We describe the classic Orthogonal Matching Pursuit (OMP) algorithm (Algorithm 1), which our simulations use.
In Algorithm 1, Λ denotes the support set and x_Λ is the vector obtained by collecting the entries of x corresponding to the set Λ; A_Λ is the submatrix of A composed of the columns of A corresponding to the support set Λ. During each pass through the loop, the algorithm first chooses the optimal index j and updates the support set Λ, and then computes the vector x that minimizes ‖Ax − y‖₂² subject to supp(x) = Λ.
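Algorithm 1 itself is not reproduced here, so the following is a standard OMP sketch consistent with the description above (greedy index selection followed by a least-squares fit on the current support); the variable names and the Dirac-Hadamard test dictionary are ours, not the paper's. That dictionary has mutual coherence 1/8, which is known to guarantee exact OMP recovery of any 4-sparse vector.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit: recover an s-sparse x with y ≈ Ax."""
    r = y.copy()                  # residual
    support = []                  # the support set
    for _ in range(s):
        # choose the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # least-squares fit on the support: minimize ||A_S x_S - y||_2
        x_sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_sub
    x = np.zeros(A.shape[1])
    x[support] = x_sub
    return x

# Sanity check on a Dirac-Hadamard dictionary: mutual coherence 1/8,
# so exact recovery of any s-sparse vector with s <= 4 is guaranteed.
H = np.array([[1.0]])
for _ in range(6):                       # Sylvester construction, 64 x 64 Hadamard
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
A = np.hstack([np.eye(64), H / 8.0])     # 64 x 128, all columns unit norm

x_true = np.zeros(128)
x_true[[3, 17, 70, 100]] = [1.0, -2.0, 1.5, 0.7]
x_hat = omp(A, A @ x_true, 4)
```

The least-squares step makes the residual orthogonal to every chosen column, which is why OMP never re-selects an index already in the support.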

Parameter Identification of GCML Based on Compressed Sensing
In this section, we first transform the parameter identification problem of GCML into the reconstruction problem of compressed sensing. Then we give a lower bound on the mutual coherence of the corresponding sensing matrix using the inverse formula of the Cauchy-Schwarz inequality. Finally, we prove that the sensing matrix satisfies the RIP.
Theorem 5. Given a GCML model described by (1) and M observed values x_i(t) of every lattice element i, where 1 ≤ i ≤ N, 1 ≤ t ≤ M, the identification problem of (1) can be transformed into the reconstruction problem of an underdetermined linear system.

Proof. Rearranging (1), for each lattice element i and each time step t we have

x_i(t+1) − (1 − ε) f(x_i(t)) = (ε/N) Σ_{j=1}^{N} c_{ij} f(x_j(t)).

If we sample M times for every element i in the lattice, we therefore have

y_i(t) = (ε/N) Σ_{j=1}^{N} c_{ij} f(x_j(t)),  t = 1, 2, ..., M,  (11)

where y_i(t) = x_i(t+1) − (1 − ε) f(x_i(t)). Denote

y_i = (y_i(1), ..., y_i(M))ᵀ,  c_i = (c_{i1}, ..., c_{iN})ᵀ (the transpose of the weighted vector of element i),

B = (ε/N) (f(x_j(t)))_{1≤t≤M, 1≤j≤N} ∈ R^{M×N},  Y = (y_1, y_2, ..., y_N),  C = (c_1, c_2, ..., c_N).

Then (11) can be expressed as an underdetermined linear system Y = BC. So the identification of GCML can be transformed into the reconstruction problem of an underdetermined linear system.
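The transformation in Theorem 5 can be carried out mechanically from sampled snapshots. The sketch below is illustrative, under our own assumptions (the synchronous model (1) with local map f(x) = 4x(1 − x)): it builds B and Y from snapshots and checks Y = BC on data generated by the model itself.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def build_linear_system(X, eps):
    """Turn M+1 snapshots of the lattice into the system Y = B C^T.

    X   : (M+1) x N array, row t holds the lattice state x(t)
    eps : coupling parameter used when the data were generated

    Returns B (M x N) and Y (M x N); column i of Y pairs with the
    unknown weighted vector c_i of lattice element i.
    """
    N = X.shape[1]
    FX = logistic(X[:-1])                 # f(x_j(t)) for t = 1..M
    B = (eps / N) * FX                    # coefficient matrix
    Y = X[1:] - (1.0 - eps) * FX          # y_i(t) = x_i(t+1) - (1-eps) f(x_i(t))
    return B, Y

# sanity check on synthetic data generated by the model itself
rng = np.random.default_rng(3)
N, M, eps = 16, 10, 0.1
C = (rng.random((N, N)) < 0.25).astype(float)   # rows are the weighted vectors c_i
X = np.empty((M + 1, N))
X[0] = rng.random(N)
for t in range(M):
    fx = logistic(X[t])
    X[t + 1] = (1 - eps) * fx + (eps / N) * (C @ fx)

B, Y = build_linear_system(X, eps)
# Y should equal B @ C.T up to floating-point error
```

The same matrix B serves every lattice element; only the right-hand side column of Y changes from element to element.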
Let b_j be the jth column of the matrix B. We define the sensing matrix related to the GCML as A = (1/η)B, where η = max_{1≤j≤N} ‖b_j‖₂². Note that the scalar 1/η is used for normalization. In what follows, we assume that each column of C has at most s nonzero entries. In order to give a lower bound on the mutual coherence of A, we need the following two useful lemmas.

Lemma 6. Let a_k and b_k, 1 ≤ k ≤ n, be positive real numbers with 0 < m₁ ≤ a_k ≤ M₁ and 0 < m₂ ≤ b_k ≤ M₂. Then

Σ_{k=1}^{n} a_k² + (m₁M₁)/(m₂M₂) Σ_{k=1}^{n} b_k² ≤ (m₁m₂ + M₁M₂)/(m₂M₂) Σ_{k=1}^{n} a_k b_k.

Proof. For any k, we have m₁/M₂ ≤ a_k/b_k ≤ M₁/m₂. Namely,

(M₁/m₂ − a_k/b_k)(a_k/b_k − m₁/M₂) ≥ 0.

That is,

a_k²/b_k² + (m₁M₁)/(m₂M₂) ≤ (m₁/M₂ + M₁/m₂)(a_k/b_k).

Thus,

a_k² + (m₁M₁)/(m₂M₂) b_k² ≤ (m₁m₂ + M₁M₂)/(m₂M₂) a_k b_k.

Summing over k from 1 to n, we obtain the result.
Lemma 7 (the inverse formula of the Cauchy-Schwarz inequality). Let a_k and b_k, 1 ≤ k ≤ n, be positive real numbers with 0 < m₁ ≤ a_k ≤ M₁ and 0 < m₂ ≤ b_k ≤ M₂. Then

(Σ_{k=1}^{n} a_k²)(Σ_{k=1}^{n} b_k²) ≤ ((m₁m₂ + M₁M₂)² / (4 m₁m₂M₁M₂)) (Σ_{k=1}^{n} a_k b_k)².

Proof. Consider the arithmetic-geometric mean inequality

2 √((m₁M₁)/(m₂M₂)) √(Σ a_k²) √(Σ b_k²) ≤ Σ a_k² + (m₁M₁)/(m₂M₂) Σ b_k².

Thus, from Lemma 6, we obtain

2 √((m₁M₁)/(m₂M₂)) √(Σ a_k²) √(Σ b_k²) ≤ (m₁m₂ + M₁M₂)/(m₂M₂) Σ a_k b_k.

Squaring both sides, we complete the proof.
Theorem 8. Let max_A be the maximal element of A and min_A the minimal element of A. Then

μ(A) ≥ 2 min_A √(min_A max_A) / (max_A (max_A + min_A)).  (21)

Proof. Let a_j be the jth column of the matrix A. From (1), every entry of A, being a positive multiple of some f(x_j(t)), is positive. Fix two columns i and j and set m₁ = min_{1≤t≤M} a_{ti}, M₁ = max_{1≤t≤M} a_{ti}, m₂ = min_{1≤t≤M} a_{tj}, and M₂ = max_{1≤t≤M} a_{tj}. From Lemma 7 we obtain

⟨a_i, a_j⟩ / (‖a_i‖₂ ‖a_j‖₂) ≥ 2 √(m₁m₂M₁M₂) / (m₁m₂ + M₁M₂).

If the ith column contains the maximal element max_A of A and the jth column contains the minimal element min_A, that is, M₁ = max_A and m₂ = min_A, then bounding the remaining quantities by min_A ≤ m₁ and M₂ ≤ max_A yields

⟨a_i, a_j⟩ / (‖a_i‖₂ ‖a_j‖₂) ≥ 2 min_A √(min_A max_A) / (max_A (max_A + min_A)).

If the ith column contains the minimal element and the jth column the maximal one, the result holds by symmetry. Otherwise, if i = j, that is, the maximal and the minimal element both lie in a single column of A, then the normalized inner product of that column with any other column is also greater than 2 min_A √(min_A max_A) / (max_A (max_A + min_A)). Since μ(A) is the maximum of these normalized inner products, the theorem follows.
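The bound of Theorem 8 is easy to probe numerically. The sketch below is illustrative: it draws a hypothetical strictly positive matrix (mimicking the positive entries of A), computes all normalized inner products between columns, and checks them against the stated lower bound.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.2, 0.9, size=(30, 50))   # strictly positive entries, like A in the paper

G = A / np.linalg.norm(A, axis=0)          # unit-norm columns
cosines = G.T @ G                           # normalized inner products of all pairs

m, Mx = A.min(), A.max()
# Theorem 8 lower bound: 2 min_A sqrt(min_A max_A) / (max_A (max_A + min_A))
bound = 2 * m * np.sqrt(m * Mx) / (Mx * (Mx + m))
```

Because every entry is positive, even the least-correlated pair of columns has a normalized inner product above the bound, so μ(A) is bounded well away from zero; this is exactly what limits the provable sparsity level s < 1/μ(A) + 1 below.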

Theorem 9. Let A be the sensing matrix defined above and suppose s < 1/μ(A) + 1. Then A satisfies the RIP of order s with δ_s = (s − 1)μ(A), where (2 min_A √(min_A max_A)/(max_A(max_A + min_A)))(s − 1) ≤ δ_s < 1.
Proof. For any s-sparse vector v, let Λ be its support set with |Λ| = s. A_Λ denotes the submatrix of A composed of the columns of A indexed by Λ, and v_Λ is the vector obtained by collecting the entries of v indexed by Λ. Thus,

‖Av‖₂² = v_Λᵀ (A_Λᵀ A_Λ) v_Λ.

We consider the Gram matrix G = A_Λᵀ A_Λ. Evidently it is positive semidefinite. We denote its minimal eigenvalue by λ_min and its maximal eigenvalue by λ_max. From Lemma 4, there exist h, l ∈ Λ such that

λ_min ≥ G_hh − Σ_{k∈Λ∖{h}} |G_hk|,  λ_max ≤ G_ll + Σ_{k∈Λ∖{l}} |G_lk|.

Because η = max_{1≤j≤N} ‖b_j‖₂², μ(B) = μ(A), and |Λ∖{h}| = |Λ∖{l}| = s − 1, we get

1 − (s − 1)μ(A) ≤ λ_min ≤ λ_max ≤ 1 + (s − 1)μ(A).

If (s − 1)μ(A) < 1, then we let δ_s = (s − 1)μ(A). Therefore, when s < 1/μ(A) + 1, there exists δ_s such that (1 − δ_s)‖v‖₂² ≤ ‖Av‖₂² ≤ (1 + δ_s)‖v‖₂². Note that, from Theorem 8, we know that (2 min_A √(min_A max_A)/(max_A(max_A + min_A)))(s − 1) ≤ δ_s < 1. Thus, we complete the proof.

Simulation Results
Theorem 9 shows that if s < 1/μ(A) + 1, the sensing matrix of GCML satisfies the RIP of order s, and thus the weighted vector c_i of every element i can be exactly recovered. In this section, we give the experimental results of the parameter identification of this special type of CML using the OMP algorithm described above. We assume that the weighted parameter matrix C is s-sparse in our model; namely, every column vector of C contains at most s nonzero entries, where s is much smaller than N. The simulations fall into two types: noiseless recovery and noisy recovery.

Noiseless Recovery.
In this subsection we consider the noiseless model; that is, the observed data are truly produced by (1). Figure 1 shows the distribution of f(x_j(t)), where the weighted matrix is 25-sparse and all the nonzero entries of the matrix are independently drawn from a normal distribution with mean 0 and variance 0.5. The result shows that it can be roughly regarded as a uniform distribution on the interval from 0 to 1. In Figure 2, we randomly choose two lattice elements and study the identification accuracy of their weighted vectors. Figures 2(a) and 2(b), respectively, depict the identification results for the 35th and the 162nd lattice elements of our GCML model. The experimental results show that the reconstruction algorithm from compressed sensing theory identifies the parameters accurately. Figure 3 depicts the overall relationship among the sparsity, the required number of samplings, and the recovery rate. In our simulations, the recovery rate is defined as SE/N², where SE stands for the number of entries shared by the original and the reconstructed weighted matrices. In Figure 3, the recovery rate is close to 1 thanks to the sparsity of the weighted matrix. However, when M < 2s, the recovery rate is equal to 0. This phenomenon is reasonable because if M < 2s, the underdetermined linear system y = Ax does not have a unique s-sparse solution. In Figure 4, we study the effects of the sparsity, the number of samplings, and the coupling parameter on the recovery rate. Figure 4(a) shows that, whatever coupling parameter we choose, fewer than 100 samplings suffice to completely recover the whole weighted matrix. Figure 4(b) shows that the sparsity has a negative effect on the recovery rate. In general, the smaller the coupling parameter, the higher the recovery rate.
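The noiseless experiment can be reproduced in outline as follows. This is an illustrative sketch under our own assumptions (logistic local map f(x) = 4x(1 − x), synchronous coupling, 0/1 weights with s nonzeros per element, and a plain OMP solver); the dimensions are deliberately far smaller than those behind the figures.

```python
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

def omp(A, y, s):
    # OMP with column normalization in the greedy selection step.
    norms = np.linalg.norm(A, axis=0)
    r, support = y.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r) / norms))
        if j not in support:
            support.append(j)
        x_sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_sub
    x = np.zeros(A.shape[1])
    x[support] = x_sub
    return x

rng = np.random.default_rng(0)
N, M, eps, s = 64, 40, 0.1, 3

# sparse 0/1 weighted vectors: s nonzeros per lattice element
C = np.zeros((N, N))
for i in range(N):
    C[i, rng.choice(N, s, replace=False)] = 1.0

# iterate the GCML, discard a transient, then record M + 1 snapshots
x = rng.random(N)
for _ in range(100):
    fx = logistic(x)
    x = (1 - eps) * fx + (eps / N) * (C @ fx)
X = np.empty((M + 1, N))
X[0] = x
for t in range(M):
    fx = logistic(X[t])
    X[t + 1] = (1 - eps) * fx + (eps / N) * (C @ fx)

# rearrange into y_i = B c_i and recover every weighted vector with OMP
FX = logistic(X[:-1])
B = (eps / N) * FX
Y = X[1:] - (1 - eps) * FX
C_hat = np.vstack([omp(B, Y[:, i], s) for i in range(N)])

# recovery rate SE / N^2: fraction of entries recovered correctly
recovery_rate = np.mean(np.isclose(C_hat, C, atol=1e-6))
```

Whether the recovery rate reaches 1 for given (M, s, eps) depends on the coherence of the trajectory-generated matrix B, which is exactly the trade-off that Figures 3 and 4 explore.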

Noisy Recovery.
It is very difficult to obtain accurate observations in practice, so we consider the case in which the observed data are contaminated with noise. Here we mainly consider two classic types of noise: Gaussian noise and uniform noise. Figure 5 shows that if the observed data are contaminated with Gaussian noise, the recovery rate is lower than that of accurate observations. However, if the data are contaminated with uniform noise, the recovery rate is surprisingly higher than that of accurate observations. From a statistical point of view, this counterintuitive phenomenon happens because the data contaminated with uniform noise are more uniform and random; in fact, the sensing matrix produced by these mixed data has a stronger restricted isometry property.
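The effect of measurement noise on a single recovery can be illustrated on a generic sparse system. This is a sketch with hypothetical noise levels, not the paper's experiment; to keep the clean case provably recoverable we reuse a Dirac-Hadamard dictionary with mutual coherence 1/8.

```python
import numpy as np

def omp(A, y, s):
    r, support = y.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        x_sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_sub
    x = np.zeros(A.shape[1])
    x[support] = x_sub
    return x

# Dirac-Hadamard dictionary (coherence 1/8) and a 4-sparse 0/1 signal
H = np.array([[1.0]])
for _ in range(6):
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
A = np.hstack([np.eye(64), H / 8.0])
x_true = np.zeros(128)
x_true[[3, 17, 70, 100]] = 1.0

rng = np.random.default_rng(0)
y_clean = A @ x_true
y_gauss = y_clean + 0.01 * rng.standard_normal(64)   # Gaussian measurement noise
y_unif = y_clean + rng.uniform(-0.01, 0.01, 64)      # uniform measurement noise

err_gauss = np.linalg.norm(omp(A, y_gauss, 4) - x_true)
err_unif = np.linalg.norm(omp(A, y_unif, 4) - x_true)
```

At these small noise levels OMP still finds the correct support, and the residual error is on the order of the noise, which mirrors the graceful degradation reported in Figure 5.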

Conclusions
In this paper we establish a bridge between the parameter identification of CML and compressed sensing theory. Specifically, we first transform the identification problem into the sparse reconstruction problem of an underdetermined linear system. We then prove that the sensing matrix A of the CML satisfies the RIP if the sparsity of the original signals is smaller than 1/μ(A) + 1. In our simulations, we consider the effects of various factors on the recovery rate, such as the number of samplings, the sparsity, the coupling parameter, and noise. In general, our proposed approach has the following advantages. Compared with some existing identification approaches, it requires only very few observations for accurate identification. In addition, it still works if the sampled data are contaminated with certain types of noise.

Figure 3: The relationship among the sparsity, the number of samplings, and the recovery rate (ε = 0.01 and N = 128).