Pattern Synthesis for Sparse Arrays by Compressed Sensing and Low-Rank Matrix Recovery Methods

Antenna array pattern synthesis plays a vital role in the field of smart antennas. Pattern synthesis for homogeneous (uniformly spaced) arrays is the core topic of this technology, but it requires a large number of array elements to meet the antenna requirements. We therefore propose a novel pattern synthesis technology for sparse arrays based on the compressed sensing (CS) and low-rank matrix recovery (LRMR) methods. The proposed technology comprises the design of the sparse array, the recovery of the homogeneous array, and the synthesis of the antenna array pattern. The simulation results show that an antenna array with a low side lobe level and strong directivity can be built from a small number of sparse array elements, which benefits the miniaturization and economy of the antenna system.


Introduction
An antenna is a device that radiates and receives radio waves. An array antenna has strong directivity and high gain, satisfies the required radiation characteristics, and is well suited to engineering applications. In an array antenna, a set of identical antenna elements is distributed on a line or a plane, uniformly or nonuniformly, according to the actual requirement; each unit constituting the array is called an antenna element. Compared with a single antenna, an array antenna offers advantages such as multiple beams and low-side-lobe pattern synthesis. The emergence of the array antenna was a great step forward in the history of antennas.
Antenna array pattern synthesis is the technology that satisfies the various indicators of an antenna system pattern by changing the antenna characteristics according to specified requirements, using various optimization algorithms. As an indispensable part of antenna technology, pattern synthesis plays an important role in the field of traditional antennas [1].
The antenna pattern shows the spatial distribution of the electromagnetic field energy; the radiation characteristic of an antenna is usually described by its field intensity or power pattern. In radar and communication, antenna pattern synthesis is used to satisfy performance indices including narrow beams, deep null steering, and low side lobes [2].
A sparse array is obtained by randomly removing some antenna elements from a homogeneous array. Compared with the homogeneous array, the sparse array offers a lower side lobe level, fewer array elements, a lower failure rate, and lower cost [3]. After the elements are sparsely distributed, the main lobe width remains essentially unchanged while the mutual coupling between elements is reduced, which lowers the failure rate and cost of the antenna system. In theory, the sparse array has a larger solution space and more degrees of freedom, so the optimal combination of element positions comes closer to the optimal solution, which can reduce the side lobe level markedly; the side lobe level can be reduced further by optimizing the element positions and amplitudes simultaneously. Optimization algorithms are then used to restore the homogeneous array from the sparse array, after which the pattern synthesis result is obtained.
In general, an antenna pattern has two or more lobes: the lobe with the largest radiation intensity is called the main lobe, and the others are called side lobes. To control null steering and suppress the side lobe radiation intensity, the PSO algorithm is applied to the pattern synthesis of the linear array [4] and the Chebyshev algorithm to the pattern synthesis of the rectangular array [5], which yields the best side lobe level for a fixed main lobe radiation intensity.
The uniform linear array was the earliest subject of pattern synthesis research. Wang et al. [6] put forward several low-side-lobe pattern synthesis algorithms for uniform linear arrays. Shi et al. proposed an array pattern synthesis method based on the adaptation principle [7]. The modified algorithms in the literature improve convergence speed and accuracy; however, uniform linear arrays still require many elements to satisfy the pattern requirements.
The rise of compressed sensing (CS) [8] and low-rank matrix recovery (LRMR) [9] in recent years provides a new solution. These developments have made sparse representation an effective and popular way to represent data; compared with the traditional subspace learning model, sparse representation is more robust to heavily noisy data [10]. With the CS method we can recover the homogeneous array from the sparse array and then obtain the pattern synthesis result. In the LRMR algorithm, the data matrix is first expressed as the sum of a low-rank matrix and a sparse noise matrix, and the original matrix is then recovered by solving a nuclear norm optimization problem. As a result, the pattern synthesis effect of a homogeneous array can be obtained from a sparse array, which benefits the miniaturization and economy of the antenna system.
Compressed sensing theory shows that a signal can be sampled with the fewest observations while retaining enough information to restore the original signal, thus reducing the dimension of the original signal and saving substantial sampling and transmission cost.
In this paper, we apply compressed sensing (CS) and low-rank matrix recovery (LRMR) to pattern synthesis: CS to the sparse linear array and LRMR to the sparse matrix (rectangular) array. CS exploits one-dimensional sparsity, while LRMR exploits the low rank of a matrix, that is, the sparsity of its singular values. In essence, both sparsity and low rank mean that a signal can be represented in a simple way.
Here, CS is applied to the linear array and the LRMR method to the rectangular array, while the 2D pattern is obtained by the particle swarm optimization (PSO) algorithm and the 3D pattern by the Chebyshev algorithm.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 formulates the problem, and Section 4 presents the proposed method. Section 5 describes the experiments and simulation results, and Section 6 concludes the paper with future research directions.

Related Works
In the past half century, antenna array pattern synthesis has attracted many scholars, and many different calculation and optimization methods have been proposed, each with its own advantages and disadvantages. For antenna pattern synthesis, the methods fall into three types: analytical methods, swarm intelligence algorithms, and convex optimization algorithms.
First, the traditional analytical methods include the Chebyshev algorithm and the Kaiser-Taylor synthesis algorithm. A Chebyshev array achieves the narrowest main lobe for a given side lobe level, or the best side lobe level for a fixed main lobe width, but its feed system becomes difficult to realize when the number of array elements is large. The Kaiser-Taylor synthesis algorithm overcomes this disadvantage; the side lobe of a Taylor array is not optimal for a fixed main lobe width, but it is close to optimal, and the side lobe level decreases with distance from the main lobe. Bucci et al. used the density taper method in pattern synthesis, and Kumar and Branner proposed a simple inverse algorithm [11] to optimize the pattern synthesis result.
Second, with the development of VLSI computers since the 1990s, intelligent optimization algorithms have been increasingly applied to array antenna synthesis, such as the genetic algorithm based on biological evolution, the particle swarm optimization (PSO) algorithm that imitates a flock of birds searching for food, and the simulated annealing algorithm based on the annealing of steel. The genetic algorithm is a nonlinear global optimization algorithm that can constrain the element excitation amplitudes and phases [12]; applied to the excitation optimization of an antenna array, it can obtain a pattern with deep nulling. The stochastic PSO algorithm was proposed by Kennedy and Eberhart, modeled on a flock of birds searching for food, and it has been continually improved and applied to linear array pattern synthesis with control of deep nulls and side lobes.
Third, solving the pattern synthesis problem by convex optimization has also been a research focus in recent years. When the main lobe width is fixed, pattern synthesis for any antenna array, homogeneous or inhomogeneous, can be expressed as a convex optimization problem. Lebret and Boyd applied convex optimization to pattern synthesis for the first time. Compared with stochastic optimization, convex optimization finds the optimal solution of the target function, and that solution is the global optimum [13].
Nowadays, most sampling systems based on the traditional Shannon sampling theorem can represent a signal accurately, but with much waste, so a better method is desirable. The number of elements in an antenna array influences the performance, the computational load, and the cost [14]; we therefore want to sample information at a rate lower than the Nyquist frequency. CS theory tells us that the sampling signal can be obtained with a minimal number of observations while keeping all the information required to restore the original signal, thus reducing the dimension of the signal processing. Sampling and compression can then be realized at the same time, clearly saving sampling and transmission cost. The CS theory was proposed by Candes in 2004 [15] and received great attention from researchers in various fields. Later, the LRMR algorithm generalized the sparse representation of a vector to the low-rank matrix, becoming an important way to acquire and represent information. A matrix is low rank when its rank is small compared with its numbers of rows and columns; stacking the singular values of the matrix into a vector, the sparsity of that vector corresponds to the low rank of the matrix, so low rank can be seen as the extension of sparsity to matrices. Minimizing the matrix rank means reconstructing the matrix by exploiting the low rank of the original matrix, which involves minimizing the rank function; the LRMR algorithm reconstructs the data matrix by exploiting the low rank of the data matrix and the sparsity of the noise matrix at the same time. The problem can be converted into a convex optimization problem, and the conditions and theory for solving it have been proved [9]. Because of its high degree of freedom and its advantages over the homogeneous array, the sparse array has developed rapidly in recent years. In this paper, a sparse antenna array is obtained by removing some elements from the array, which satisfies the design criteria with fewer elements and clearly reduces production cost. According to the CS and LRMR theories, the sparse array can be restored to a homogeneous array and the pattern synthesis result obtained afterwards, which means that the approximate pattern synthesis result of the homogeneous array is obtained from fewer sparse elements, with good effect on equipment miniaturization and cost. To the best of our knowledge, this is the first work to incorporate both compressed sensing and low-rank matrix recovery methods for pattern synthesis, obtaining the results with fewer sparse elements, which is the main requirement for smart antennas.

Problem Formulation
A homogeneous array is one in which the distance between adjacent array elements is fixed and all elements have the same amplitude and phase. A sparse array is obtained by removing some array elements from the homogeneous array. The distribution of the elements must follow a rule, such as a normal distribution, and the spacing between adjacent elements is half or a quarter of the wavelength, but whether a specific element is removed is random.
The pattern of an antenna array is determined by five parameters: the number of elements, the form of distribution, the spacing between elements, the amplitudes, and the phases. The pattern is changed, and the required pattern obtained, by optimizing these five parameters. According to the arrangement mode, antenna arrays are divided into linear, planar, and stereoscopic arrays; this paper considers the linear array and the rectangular array.
The pattern synthesis problem for sparse arrays can be described as follows. Consider a homogeneous linear array with 2N elements. Set the current phase difference to 0, and let the current amplitudes be center symmetric. It can be shown that the pattern synthesis function is

F(θ) = 2 Σ_{n=1}^{N} I_n cos((2n − 1) kd cos θ / 2),

where I_n is the current amplitude of the nth element, d is the spacing between elements, k = 2π/λ is the wave number, λ is the wavelength, and θ is the angle of the incident ray measured from the array axis, so that broadside corresponds to θ = 90° [16]. The current amplitudes are symmetric. When only the side lobe level is constrained, the fitness function is

fit = |MSLL − SLVL|,

where the maximum side lobe level is

MSLL = max_{θ ∈ S} F(θ),

SLVL (side lobe valid level) is the expected level, 2θ_0 is the main lobe width, and S = {θ | 0° ≤ θ ≤ 90° − θ_0 or 90° + θ_0 ≤ θ ≤ 180°} is the side lobe region of the antenna pattern. F(θ) describes how the radiated power is distributed among the elements of the array: the fields created by the elements interfere, so the radiation is enhanced in some directions and reduced (possibly to zero) in others, redistributing the radiation power in space and thereby creating the main lobe and side lobes. The fitness function is the absolute difference between MSLL and SLVL; the smaller its value, the closer the actual side lobe level is to the expected side lobe level.
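As a concrete illustration, the array factor and side-lobe fitness above can be evaluated numerically. The sketch below is in Python/NumPy rather than the Matlab used in the paper, and the function names are our own: it samples F(θ) on a grid and computes |MSLL − SLVL|.

```python
import numpy as np

def array_factor(I, d_lam, theta_deg):
    """Pattern of a 2N-element center-symmetric linear array.

    I         : amplitudes I_1..I_N of one half of the array
    d_lam     : element spacing in wavelengths (d / lambda)
    theta_deg : angles measured from the array axis, in degrees
    """
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    n = np.arange(1, len(I) + 1)          # element index 1..N
    kd = 2 * np.pi * d_lam                # k*d with k = 2*pi/lambda
    # F(theta) = 2 * sum_n I_n * cos((2n-1) * k*d * cos(theta) / 2)
    phase = np.outer(np.cos(theta), (2 * n - 1) / 2 * kd)
    return 2 * (np.cos(phase) @ np.asarray(I, dtype=float))

def fitness(I, d_lam, slvl_db, theta0_deg, grid_deg=0.5):
    """|MSLL - SLVL| over the side lobe region S = [0, 90-theta0] U [90+theta0, 180]."""
    th = np.arange(0.0, 180.0 + grid_deg, grid_deg)
    side = (th <= 90.0 - theta0_deg) | (th >= 90.0 + theta0_deg)
    F = np.abs(array_factor(I, d_lam, th))
    F_db = 20 * np.log10(F / F.max() + 1e-12)   # normalize main lobe to 0 dB
    msll = F_db[side].max()
    return abs(msll - slvl_db)
```

For a uniform excitation (all I_n = 1), the first side lobe sits near −13 dB, so the fitness against a −35 dB target is large, which is what the optimization then drives down.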
Sometimes we also want deep nulls in several specific directions with the value NLVL (nulling lobe valid level). In an antenna array there may be noise interference from some directions; to reduce this interference, we minimize the radiation in those directions, which is called deep nulling, and NLVL represents the side lobe level desired in the nulling directions. The fitness function then becomes

fit = |MSLL − SLVL| + Σ_i |F(θ_i) − NLVL|,

where the θ_i are the desired null directions. The array response at each angular location depends on the fitness function: different values of the function place different emphasis on the array responses in the pertinent directions and therefore yield different array patterns. Making the function small ensures that the side lobe peaks stay below a certain value, so that the signal is received clearly through the main lobe while noise from other directions is suppressed.
When the problem is extended to the rectangular array, the pattern synthesis can be analyzed with the linear array function: the result is the product of two linear array patterns along the x-axis and y-axis.

Proposed Method
The CS and LRMR methods are applied to sparse array pattern synthesis. The greedy pursuit algorithms of CS choose a locally optimal solution in every iteration so as to approach the original signal increasingly closely; they include the matching pursuit (MP), orthogonal matching pursuit (OMP), and regularized orthogonal matching pursuit (ROMP) algorithms. In each iteration of MP, we choose from the overcomplete dictionary the atom that best matches the signal and compute the residual after the iteration; we then choose the atom that best matches the residual, until the residual meets the accuracy requirement, after which the signal can be represented linearly by the selected atoms. The disadvantage of MP is that the projection of the signal onto the selected atoms is not orthogonal, so the solution in each iteration is not guaranteed to be optimal and the number of iterations needed to meet the convergence condition increases. The OMP algorithm arises naturally from this observation.
The OMP algorithm [17] is a classical compressed sensing reconstruction algorithm. In OMP, we first choose from the overcomplete atom set the atom that best matches the current residual. The selected atoms are orthogonalized in each iteration, and the sampled signal is projected onto the space spanned by these orthogonal atoms to obtain the weights and the residual. The residual is then decomposed in the same way, so it decreases rapidly as the decomposition proceeds, which reduces the number of iterations.
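To make the OMP step concrete, here is a minimal NumPy sketch (our own simplified implementation, not the paper's code): at each iteration it picks the atom most correlated with the residual, then re-solves a least-squares problem over all selected atoms, which realizes the orthogonalization step described above.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-9):
    """Orthogonal matching pursuit: recover a K-sparse x from y = Phi @ x."""
    M, N = Phi.shape
    r, support = y.astype(float), []
    for _ in range(K):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in support:
            support.append(j)
        # least-squares projection onto all chosen atoms (orthogonalization)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(N)
    x[support] = coef
    return x
```

Because the residual is re-orthogonalized against the whole selected set, an atom is never picked twice and the residual norm is nonincreasing, which is exactly the advantage over plain MP.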
An improved compressed sensing reconstruction algorithm, the orthogonal multimatching pursuit (OMMP) algorithm, is proposed in this paper in accordance with the principle of multiple matching. The idea is to select an atom using both the current candidate set and the candidate set of the previous iteration. Let Λ be the index set of the column vectors of the measurement matrix Φ, so that Λ_k is the index set of the atoms selected after k iterations. In each iteration, the new atom is obtained after two different stages of calculation. As in the OMP algorithm, the correlation between the current residual and the remaining atoms is obtained in the first stage by computing the inner product of the current residual and the remaining atoms.
The steps of OMMP algorithm are shown in Algorithm 1.
The iteration of the OMMP algorithm stops when the number of atoms in Λ_k reaches a fixed value. Experiments show that the iteration should be stopped when the number of atoms in Λ_k reaches half the number of columns of the measurement matrix.
Algorithm 1 (OMMP).
Input: the measurement matrix Φ of dimension M × N, whose columns are the atoms; the sparsity level K; the sampled signal y.
Output: the index set Λ_k, the residual r_k, and the reconstructed signal x.
(1) Initialize the residual r_0 = y, the estimated signal sparsity K, the iteration counter n = 1, the index set Λ of all atoms, and the selected index sets Λ_0 = ∅, J = ∅.
(2) Calculate the inner product of the current residual with every atom in the index set and choose a candidate set H_k = {j : |⟨r_{k−1}, φ_j⟩| ≥ α max_l |⟨r_{k−1}, φ_l⟩|}, where α is the relaxing factor, 0 < α ≤ 1, typically close to 1.
(3) Calculate the intersection of the current candidate set and the candidate set of the previous iteration.
(4) If H_k ∩ H_{k−1} = ∅, select from H_k the atom that minimizes the residual, i_k = arg min_j ‖y − P_{span{φ_l, l ∈ Λ_{k−1} ∪ {j}}} y‖_2, and join it to the index set Λ_k; otherwise select the atom from the intersection.
(5) Reset the index values and update the residual r_k = y − P_{span{φ_l, l ∈ Λ_k}} y.
(6) Compare the preset error threshold with r_k, or check whether the number of atoms in the index set meets the required value; if the stop condition is not met, set n = n + 1 and return to step (2).

Through the basic steps above, we see that the core difference between the OMP and OMMP algorithms is that OMP chooses the atom with the largest inner product in every iteration, while OMMP chooses the atom with the largest correlation over two successive iterations. In this manner, OMMP is more robust than OMP and is faster when selecting atoms.
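The two-stage selection can be sketched as follows. This is our own reading of the description above in Python/NumPy, not the authors' code: a relaxed threshold α builds the candidate set, atoms persisting across two consecutive candidate sets are preferred, and the final choice among candidates minimizes the residual.

```python
import numpy as np

def ommp(Phi, y, K, alpha=0.9, tol=1e-9):
    """Sketch of the OMMP idea: relaxed candidate set + two-iteration stability."""
    M, N = Phi.shape
    r = y.astype(float)
    support, H_prev = [], set()
    while len(support) < K and np.linalg.norm(r) > tol:
        corr = np.abs(Phi.T @ r)
        # candidate set: atoms within a factor alpha of the best correlation
        H = {j for j in range(N) if corr[j] >= alpha * corr.max()}
        stable = (H & H_prev) - set(support)        # atoms seen twice in a row
        pool = stable if stable else (H - set(support))
        # among the pool, take the atom that minimizes the new residual
        best_j, best_r = None, None
        for j in pool:
            S = support + [j]
            coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
            res = y - Phi[:, S] @ coef
            if best_r is None or np.linalg.norm(res) < np.linalg.norm(best_r):
                best_j, best_r = j, res
        support.append(best_j)
        r, H_prev = best_r, H
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(N)
    x[support] = coef
    return x
```

Selecting by minimum residual rather than maximum inner product is what makes each OMMP choice locally stronger than the OMP choice.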
In the problem of antenna array pattern synthesis, compressed sensing suits the one-dimensional linear array; when the problem is extended to two-dimensional rectangular arrays, the advantages of low-rank recovery emerge [18]. We extend the problem from one dimension to two, recover the homogeneous matrix from the sparse matrix by LRMR, and obtain the pattern synthesis effect of the homogeneous elements with fewer sparse elements, which greatly benefits the simplicity and economy of the equipment.
Low-rank matrix recovery is also known as the decomposition into a sparse matrix and a low-rank matrix. In the LRMR algorithm, the data matrix is first expressed as the sum of a low-rank matrix and a sparse noise matrix, and the original data are then restored by solving a nuclear norm optimization problem. All the elements that are lost or destroyed can be identified automatically and recovered in the low-rank matrix. Moreover, the original matrix is low rank and only a few elements are lost or destroyed, which means the noise is sparse. Low-rank matrix recovery can now be written as the optimization problem

min_{A,E} rank(A) + λ‖E‖_0  subject to  D = A + E,

where E represents the noise matrix, A represents the low-rank matrix, and D is the given data [19]. In the formula, ‖E‖_0 is the l_0 norm of the sparse matrix, the number of its nonzero elements. This is a computationally intensive NP-hard problem, so a suitable method to approximate it is needed. Candes has theoretically proved that the solution obtained by minimizing the l_1 norm (the sum of the absolute values of the elements of the matrix) is very close to the one obtained by minimizing the l_0 norm, so the l_0 norm minimization can be relaxed to l_1 norm minimization. The rank function above is a nonconvex discrete function, so the rank of the matrix is approximated by its nuclear norm, giving the convex problem

min_{A,E} ‖A‖_* + λ‖E‖_1  subject to  D = A + E.    (5)
Here ‖A‖_* = Σ_{k=1}^{n} δ_k(A) is the nuclear norm of the matrix, where δ_k(A) is the kth singular value of A and λ is a weight parameter, usually set as λ = 1/√(max(m, n)). The essence of (5) is to replace an equation with multiple solutions by a uniquely solvable one, replacing the rank of the matrix by the sum of its singular values (the main diagonal of Σ in its singular value decomposition). Recht et al. have theoretically proved the feasibility of solving the problem by convex optimization, and we can assume that most interference and noise are sparse relative to the data itself.
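Both relaxations rely on two proximal operators: entrywise soft thresholding for the l_1 norm and singular value thresholding for the nuclear norm. A minimal NumPy sketch (the helper names are our own):

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft thresholding S_tau: proximal operator of tau*||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding D_tau: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, tau)) @ Vt
```

Shrinking the singular values is the matrix analogue of shrinking entries: it is exactly where the "low rank = sparse singular values" view of the paper becomes operational.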
The LRMR algorithms include APG (accelerated proximal gradient) and ALM (augmented Lagrange multipliers) [20]; ALM divides into two algorithms, EALM (exact augmented Lagrange multipliers) and IALM (inexact augmented Lagrange multipliers). The three algorithms are described below.

Accelerated Proximal Gradient (APG).
The APG algorithm obtains the result quickly by introducing gradient steps on A and E and a relaxation parameter μ.
The constrained LRMR problem is first relaxed to the unconstrained form

L(A, E, μ) = g(A, E, μ) + f(A, E),

where g(A, E, μ) = μ(‖A‖_* + λ‖E‖_1) is not differentiable and f(A, E) = (1/2)‖D − A − E‖_F^2 is smooth with a Lipschitz continuous gradient; ∇f(A, E) is the Fréchet gradient [21] of f(A, E), and L_f = 2 is used in the simulation.
Next, a quadratic approximation of the Lagrange function is made at the current points (Y_A, Y_E), and A and E are updated by the proximal operators of the nuclear norm and the l_1 norm, that is, by singular value thresholding and entrywise soft thresholding.
To obtain the step length for updating Y_A and Y_E, t_{k+1} is computed as

t_{k+1} = (1 + √(4 t_k^2 + 1)) / 2,

so the iterative formulas for Y_A and Y_E are

Y_A^{k+1} = A_{k+1} + ((t_k − 1)/t_{k+1}) (A_{k+1} − A_k),
Y_E^{k+1} = E_{k+1} + ((t_k − 1)/t_{k+1}) (E_{k+1} − E_k).

The iterative formula for μ is

μ_{k+1} = max(η μ_k, μ̄),

where μ̄ is a predetermined positive value and 0 < η < 1.

Augmented Lagrange Multiplier (ALM).
The Lagrange multiplier Y is introduced in the ALM algorithm; Y has the same dimensions as A, and Y_0 = 0. The augmented Lagrange function is

L(A, E, Y, μ) = ‖A‖_* + λ‖E‖_1 + ⟨Y, D − A − E⟩ + (μ/2)‖D − A − E‖_F^2.

In each iteration, set

(A_{k+1}, E_{k+1}) = arg min_{A,E} L(A, E, Y_k, μ_k),

update the multiplier

Y_{k+1} = Y_k + μ_k (D − A_{k+1} − E_{k+1}),

and update μ at last:

μ_{k+1} = ρ μ_k,

with ρ > 1 and the stopping tolerance ε > 0.
The IALM algorithm was proposed as a refinement of the EALM algorithm [22]. The exact solution of min_{A,E} L(A, E, Y_k, μ_k) is not necessary in IALM; A and E are each updated only once per iteration:

A_{k+1} = D_{1/μ_k}(D − E_k + Y_k/μ_k),
E_{k+1} = S_{λ/μ_k}(D − A_{k+1} + Y_k/μ_k),

where D_τ and S_τ denote singular value thresholding and entrywise soft thresholding with threshold τ. Experiments show that the IALM algorithm is clearly faster than the EALM algorithm.
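The IALM iteration can be sketched in a few lines. This Python/NumPy sketch follows the common reference formulation of inexact ALM for robust PCA; the initialization of Y and the ρ, μ defaults are conventional choices, not values specified in this paper.

```python
import numpy as np

def ialm_rpca(D, lam=None, tol=1e-7, max_iter=500):
    """Inexact ALM sketch for D = A + E with A low rank and E sparse."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    # conventional dual initialization of the multiplier
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    A, E = np.zeros_like(D), np.zeros_like(D)
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    for _ in range(max_iter):
        # A-step: singular value thresholding with threshold 1/mu
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-step: entrywise soft thresholding with threshold lam/mu
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # multiplier and penalty updates
        Z = D - A - E
        Y = Y + mu * Z
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z, 'fro') / norm_D < tol:
            break
    return A, E
```

A and E are each updated once per outer iteration, which is exactly the "inexact" shortcut that makes IALM faster than EALM.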
Table 1 shows the meaning of all the notations used in this paper.

Experiments and Simulation Results
We simulate several pattern synthesis examples using our algorithms. The hardware platform is a Lenovo G460 microcomputer with an Intel(R) Core(TM) i3-380M 2.53 GHz CPU, 2.00 GB memory, and a 500 GB hard drive; the software platform is Windows 7 (32-bit operating system) with the simulation software Matlab 7.0. The simulation experiment has two parts. The first part shows the sparse linear array synthesis pattern under the CS theory, where the OMP and OMMP algorithms are applied to obtain the recovered linear array. The second part shows the sparse rectangular array synthesis pattern under the LRMR theory, where the APG, EALM, and IALM algorithms are used to obtain the recovered rectangular array.
In the sparse linear array, we focus on SLVL and NLVL; the expected values of SLVL and NLVL are −35 dB and −80 dB, respectively. Because relative amplitudes are used, the amplitude of the main lobe is 0 dB. We compare OMP and OMMP in terms of null steering and side lobe level.
In the sparse rectangular array, we focus on SLVL, with an expected value of −30 dB. We compare the three algorithms (APG, EALM, and IALM) in terms of the number of sparse elements, running time, iterations, and average error.
In order to control the null steering and suppress the radiation intensity of side lobes, we apply the PSO algorithm for the pattern synthesis of the linear array and the Chebyshev algorithm for the pattern synthesis of the rectangular array.
The reconstruction algorithm is then used in pattern synthesis; a pattern synthesis effect similar to that of the homogeneous linear array can be obtained with a sparse linear array that has only half as many elements as the homogeneous linear array.
The PSO (particle swarm optimization) algorithm is used to obtain the pattern synthesis of the linear array antenna.
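The PSO used here follows the standard global-best scheme. Below is a generic minimizer sketch (our own simplified implementation with conventional inertia and acceleration constants, not the exact configuration of the paper) that could be applied to the element-amplitude fitness of the problem formulation.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=1.0, seed=0):
    """Minimal global-best particle swarm optimizer over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])           # personal bests
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For pattern synthesis, f would be the fitness |MSLL − SLVL| (plus the null terms) evaluated on the candidate amplitude vector.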
From Figure 1, we can observe that the highest side lobe of the low-side-lobe pattern with weights reconstructed by the OMMP algorithm is −35 dB, while that with weights reconstructed by the OMP algorithm is −20 dB. The OMMP algorithm meets the requirement, but the OMP algorithm does not.
From Figure 2, we can observe that the deepest null of the deep-nulling pattern with weights reconstructed by the OMMP algorithm is −60 dB, similar to the result of the OMP algorithm, while the OMMP algorithm outperforms the OMP algorithm on the side lobe level.

Design of the Sparse Rectangular Array by LRMR.
We form a homogeneous matrix of 10 × 10 array elements, make the matrix sparse randomly, and then reconstruct the array based on the LRMR technology. With the help of the three algorithms, the Chebyshev pattern [14] result can be obtained (Figure 3). We use the sparse rectangular antenna array to obtain the Chebyshev pattern of the homogeneous rectangular antenna array, with SLVL = −30 dB, d = λ/2, and the weights obtained by the Chebyshev method. Elements are removed from the homogeneous array at random, the array is then recovered by the three different low-rank reconstruction methods, and the pattern synthesis result is obtained, until the remaining elements just meet the requirements.
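The Chebyshev weights of the rectangular array can be generated as the outer product of two one-dimensional Dolph-Chebyshev tapers. For illustration, the sketch below uses SciPy's `chebwin` window, which implements the Dolph-Chebyshev design; using SciPy here is our substitution for the corresponding Matlab routine.

```python
import numpy as np
from scipy.signal.windows import chebwin

def chebyshev_weights_2d(n, sll_db):
    """Amplitude weights of an n x n rectangular array with the given
    side lobe attenuation (dB), as the outer product of two 1D tapers."""
    w = chebwin(n, at=abs(sll_db))   # Dolph-Chebyshev taper, peak value 1
    return np.outer(w, w)

# 10 x 10 array with -30 dB side lobes, as in the experiment above
W = chebyshev_weights_2d(10, -30)
```

The resulting weight matrix is symmetric and separable, consistent with the product form of the rectangular array pattern noted in the problem formulation.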
In the sparse distribution, a, b, and c stand for the array elements removed by EALM, IALM, and APG, respectively, and 1 stands for the array elements that cannot be removed. Figures 4-6 show the results of the three LRMR algorithms on the sparse rectangular array. The APG algorithm obtains the lowest side lobe level, −30 dB, while the EALM and IALM algorithms reach −25 dB. In Table 4, we can observe the comparison of the three algorithms, all of which reconstruct the homogeneous array from the sparse array and obtain a pattern meeting the requirements. The APG algorithm has the advantage in accuracy and running time, and the IALM algorithm is clearly faster than the EALM algorithm.
The low-rank reconstruction methods are applied to the sparse rectangular array pattern, and 20%-30% of the elements can be deleted while a pattern similar to that of the initial array is obtained, so low-rank reconstruction methods are well suited to sparse rectangular array patterns.

Conclusion
Pattern synthesis for sparse antenna arrays is a challenging issue. The improved OMMP compressed sensing algorithm, using half the number of elements of the homogeneous array, obtains the pattern synthesis effect of the homogeneous linear array with the sparse linear array. Similarly, the low-rank matrix recovery methods recover the homogeneous rectangular array from a sparse rectangular array with 20%-30% of the elements removed.


In the EALM algorithm, when Y = Y_k and μ = μ_k, the subproblem min_{A,E} L(A, E, Y_k, μ_k) is solved by iterating A and E alternately until the termination conditions are met, letting the inner iterates A_{k+1}^{j+1} and E_{k+1}^{j+1} converge to A*_{k+1} and E*_{k+1} separately. The update formula for Y is then

Y_{k+1} = Y_k + μ_k (D − A_{k+1} − E_{k+1}).

Table 1: The meaning of all the notations used.

Table 2: Comparison of the reconstruction results of the two algorithms.

Table 3: Reconfigurable sparse linear array element position distribution.

Table 4: The results of the three algorithms.