A Highly Efficient Method for Synthesizing Multiple Antenna Array Radiation Patterns Simultaneously Based on a Convolutional Neural Network

This paper proposes a highly efficient method that utilizes deep learning technology to synthesize multiple antenna array radiation patterns simultaneously. More specifically, the mathematical feasibility of using neural networks to optimize and synthesize radiation patterns of antenna arrays is demonstrated. Boundary functions are designed to capture the important characteristics of the target radiation patterns and transform them into a two-channel mask matrix, allowing multiple target radiation patterns to be input to the neural network simultaneously without sacrificing computational efficiency. During training, the cost function is designed to represent the difference between each synthesized radiation pattern and the corresponding target radiation pattern, guiding self-learning. The main framework of the method is a convolutional neural network, where the convolutional layers reduce the expansion of input parameters caused by the simultaneous input of multiple mask matrices. Simulation experiments were conducted to synthesize multiple incoherent target radiation patterns simultaneously on a patch antenna array layout, and the computation time is compared with the combined time required to compute each pattern individually. The results demonstrate that this method offers a computational efficiency advantage for the simultaneous synthesis of multiple incoherent radiation patterns.


Introduction
Antenna arrays are an effective means for radar and communication electronic systems to obtain antenna beams with strong directionality, low sidelobes, and easy scanning and beamforming capabilities [1][2][3][4]. The radiation pattern of an antenna array with excellent performance can be obtained by solving for the excitations of each antenna element. On this basis, many radiation pattern synthesis methods have been proposed and developed, including mathematical optimization synthesis methods [5][6][7], global optimization algorithms [8][9][10], and convex optimization algorithms [11][12][13]. The rapid development of deep learning technology [14][15][16] has drawn the attention of numerous antenna researchers, resulting in the creation of several neural network frameworks aimed at addressing challenges in the antenna field [1,17]. For radiation pattern synthesis in particular, a variety of neural network methods have been proposed and developed [18][19][20], which can be broadly classified into two main types. The first category comprises methods driven by large numbers of training samples: the neural network is pretrained on a large dataset to learn the relationship between target radiation patterns and element excitations, so that the trained network gains the ability to generalize and solve radiation pattern synthesis problems. The second category comprises optimal approximation methods for radiation patterns: the neural network continually adjusts its own parameters during training to ultimately approximate the mapping between the target radiation patterns and the element excitations.
For synthesizing multiple radiation patterns, almost all optimization and synthesis techniques require each pattern to be synthesized individually, and both the computational efficiency and the results are affected by the initial evolutionary sample. Inspired by the fact that neural networks can effectively approximate almost any mapping, a highly efficient method based on convolutional neural networks is proposed for synthesizing multiple antenna array radiation patterns simultaneously. The universal approximation theorem for nonlinear input-output mappings is used to demonstrate mathematical feasibility and to establish a link between deep learning networks and the radiation pattern synthesis problem, which is rare in similar research. Moreover, a multiparameter control expression is proposed to constrain the radiation characteristics of the target radiation patterns, so that multiple targets can be input simultaneously without compromising computational efficiency. Through simulation experiments, we successfully synthesized multiple incoherent radiation patterns on a patch antenna array layout and compared the computation time with the combined time required to compute each pattern individually.

Feasibility Proof of Solving Radiation Pattern Synthesis by Neural Network
Assume that an antenna array consisting of n radiating elements is arranged in the XOY plane. The far-field pattern FF(θ, φ) in any direction (θ, φ) is

FF(θ, φ) = Σ_{i=1}^{n} w_i a_i(θ, φ) AF_i(θ, φ),  (1)

where the excitation weight vector is denoted as w and a_i(θ, φ) represents the element pattern of the i-th antenna. The array factor AF_i(θ, φ) is determined by the free-space wave number β and the position (x_i, y_i) in the coordinate system:

AF_i(θ, φ) = exp[jβ(x_i sin θ cos φ + y_i sin θ sin φ)].  (2)

If the array factor and element pattern of each antenna are predefined, varying the excitation weight vector w generates different far-field patterns FF(θ, φ). Therefore, the problem of synthesizing the antenna array radiation pattern can be recast as finding the excitation weight vector w that maps to the target radiation pattern FF(θ, φ), i.e., evaluating the inverse function w = F^{−1}(FF(θ, φ)). Evidently, a general mapping exists between the excitation weight vector w and the desired radiation pattern FF(θ, φ), and, interestingly, a trainable neural network can serve as a tool to realize this general mapping [21]. Hence, a neural network method is proposed to solve the inverse problem above. According to the universal approximation theorem of nonlinear input-output mapping, let σ(·) be a nonconstant, bounded, and monotonically increasing continuous function, let I_{m0} denote the m0-dimensional unit hypercube [0, 1]^{m0}, and let C(I_{m0}) denote the space of continuous functions on I_{m0}. Then, given any function f ∈ C(I_{m0}) and ε > 0, there exist an integer m1 and sets of real constants a_k, b_k, and ω_{kq}, where k = 1, ..., m1 and q = 1, ..., m0, such that

F(s_1, ..., s_{m0}) = Σ_{k=1}^{m1} a_k σ(Σ_{q=1}^{m0} ω_{kq} s_q + b_k)  (3)

is an approximate realization of the function f(·); that is,

|F(s_1, ..., s_{m0}) − f(s_1, ..., s_{m0})| < ε  (4)

for all (s_1, ..., s_{m0}) that lie in the input space. The linear summation in (3) is analogous to the forward superposition principle used for synthesizing radiation patterns. The universal approximation theorem is directly applicable to neural networks [21]: interpreting σ(·) as an activation function, (3) represents the output of a neural network in which (i) the network has m0 input nodes and a single hidden layer consisting of m1 neurons, with inputs s_1, ..., s_{m0}; (ii) hidden neuron k has synaptic weights ω_{k1}, ..., ω_{km0} and bias b_k; and (iii) the network output is a linear combination of the hidden-neuron outputs, with a_1, ..., a_{m1} defining the synaptic weights of the output layer.
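For concreteness, the single-hidden-layer approximator just described can be written directly in code. The following is a minimal sketch of F(s) = Σ_k a_k σ(Σ_q ω_kq s_q + b_k) with a logistic-sigmoid activation; all sizes and weight values are illustrative, not taken from the paper:

```python
import numpy as np

def sigma(x):
    # bounded, monotonically increasing activation (logistic sigmoid)
    return 1.0 / (1.0 + np.exp(-x))

def F(s, omega, b, a):
    """Single-hidden-layer approximator: F(s) = sum_k a_k * sigma(omega_k . s + b_k)."""
    return a @ sigma(omega @ s + b)

# Trivial check of the form: one input, one hidden neuron,
# omega = 0, b = 0, a = 2  =>  F = 2 * sigma(0) = 1
s = np.array([0.7])
omega = np.array([[0.0]])
b = np.array([0.0])
a = np.array([2.0])
print(F(s, omega, b, a))  # 1.0
```

With trained values of omega, b, and a, the same forward pass realizes the approximate mapping the theorem guarantees.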
The depth of the neural network can be enhanced by increasing the number of hidden layers, in which case (3) takes the form of nested activation functions:

F(s_1, ..., s_{m0}) = Σ_k a_k σ(Σ_j ω_kj σ(Σ_q ω_jq s_q + b_j) + b_k).  (5)

In this paper, the target pattern FF(θ, φ) is treated as the function f(·), and the excitation weight vector w is taken as the output of the neural network. Then, substituting the network output (5) into (1), the radiation pattern FF_R(θ, φ) produced by the neural network during training is obtained:

FF_R(θ, φ) = Σ_{i=1}^{n} w_i^{NN} a_i(θ, φ) AF_i(θ, φ),  (6)

where w^{NN} denotes the excitation weight vector output by the network. Based on the universal approximation theorem and (3), the radiation pattern FF_R(θ, φ) output by the trained neural network will be an approximate realization of the target radiation pattern FF(θ, φ).
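The far-field forward superposition above is straightforward to compute directly. The sketch below evaluates the pattern for a given excitation vector, assuming isotropic elements (a_i = 1); the array layout, spacing, and names are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketch of the far-field forward computation (illustrative names;
# isotropic elements assumed, i.e., a_i(theta, phi) = 1).
def far_field(w, positions, theta, phi, beta):
    """FF(theta, phi) = sum_i w_i * AF_i, with AF_i = exp(j*beta*(x_i*u + y_i*v))."""
    u = np.sin(theta) * np.cos(phi)
    v = np.sin(theta) * np.sin(phi)
    af = np.exp(1j * beta * (positions[:, 0] * u + positions[:, 1] * v))
    return np.sum(w * af)

# 4-element linear array, half-wavelength spacing (beta = 2*pi / lambda, lambda = 1)
pos = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [1.5, 0.0]])
w = np.ones(4, dtype=complex)
ff_broadside = far_field(w, pos, theta=0.0, phi=0.0, beta=2 * np.pi)
print(abs(ff_broadside))  # 4.0 (all elements add in phase at broadside)
```

A neural network that outputs w need only be followed by this (differentiable) superposition to produce FF_R(θ, φ) during training.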

Design of the Neural Network Input and Cost Function.
In the target radiation patterns, the main beam direction, sidelobe level, and main beam width are typically the most important radiation characteristics. Therefore, it is not necessary to constrain all scanning angles; only the angular regions containing the radiation properties of interest need to be limited. In this paper, we express the target radiation pattern FF(θ, φ) as a function f(θ_1, φ_1, θ_2, φ_2, ..., θ_d, φ_g) and relax it to a space bounded by an upper boundary UB(θ, φ) and a lower boundary LB(θ, φ). The upper boundary UB(θ, φ) confines the desired upper radiation features, such as the sidelobe level SLL and the −3 dB main-beam width 2θ_w, while the lower boundary LB(θ, φ) constrains the desired lower radiation features, including the main beam direction (θ_0, φ_0) and, if necessary, a lowest level value V; both boundaries are defined by these parameters. To facilitate parallel data processing and improve computational efficiency, the two boundaries are transformed into a two-channel mask matrix with θ and φ as coordinate axes, as shown in Figure 1. Under these conditions, the neural network's output radiation pattern FF_R(θ, φ) will satisfy

LB(θ, φ) ≤ FF_R(θ, φ) ≤ UB(θ, φ).

If S desired radiation patterns are to be synthesized simultaneously, the s-th cost function l_s(w) is computed as the difference between the radiation pattern FF_R^s(θ, φ) and its corresponding mask matrix:

l_s(w) = ν_1 · Uloss_s + ν_2 · Vloss_s,

where Uloss_s aggregates, over all sampling directions, the amount by which FF_R^s(θ, φ) exceeds the upper-boundary channel, Vloss_s aggregates the amount by which it falls below the lower-boundary channel, and D and G are the total numbers of sampling directions along θ and φ in the angular space. Different attention weight coefficients ν_1 and ν_2 are assigned to Uloss and Vloss for effective self-learning.
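One way to realize the two-channel mask and the boundary penalties above is sketched below with one-sided squared (hinge-style) penalties. The exact penalty form and the helper names are our assumptions; the paper specifies only that Uloss and Vloss measure violations of UB and LB:

```python
import numpy as np

# Assumed hinge-style squared penalties against each mask channel (illustrative).
def make_mask(D, G, sll_db, beam, half_width, floor_db):
    ub = np.full((D, G), sll_db, dtype=float)     # channel 1: upper boundary UB
    lb = np.full((D, G), -np.inf, dtype=float)    # channel 2: lower boundary LB (-inf = unconstrained)
    d0, g0 = beam
    ub[d0 - half_width:d0 + half_width + 1,
       g0 - half_width:g0 + half_width + 1] = 0.0  # relax UB inside the main-beam region
    lb[d0, g0] = floor_db                          # force the beam peak above floor_db
    return np.stack([ub, lb])                      # two-channel mask matrix

def cost(ff_db, mask, nu1=1.0, nu2=500.0):
    ub, lb = mask
    uloss = np.mean(np.maximum(ff_db - ub, 0.0) ** 2)  # excess over UB (sidelobes)
    vloss = np.mean(np.maximum(lb - ff_db, 0.0) ** 2)  # deficit below LB (main beam)
    return nu1 * uloss + nu2 * vloss

mask = make_mask(D=360, G=90, sll_db=-18.0, beam=(180, 45), half_width=3, floor_db=-1.0)
ok = np.full((360, 90), -20.0)   # everywhere below the sidelobe bound...
ok[180, 45] = 0.0                # ...with the beam peak at 0 dB
print(cost(ok, mask))            # 0.0 (pattern satisfies both boundaries)
```

A pattern that violates either channel produces a strictly positive cost, which is what steers the self-learning.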
The pooling layer provides spatial and translational invariance to convolutions, allowing features to be detected regardless of their location in the input. However, for radiation pattern synthesis, the main beam direction is critical information that the neural network must learn: the output must be sensitive to the main beam direction described in the input. Thus, the pooling layers between convolutional layers are removed. After multiple convolutional layers, the critical information in the input has been extracted. Finally, a few dense layers are constructed to integrate the global information output by the convolutional structure and map it to the sampling space. The detailed structure of the neural network is presented in Figure 1.
The number of convolutional layers and dense layers can be chosen based on the following factors: (1) Characteristics of the antenna array, such as the number of array elements and the element spacing. For complex antenna arrays, more convolutional layers may be necessary to extract additional information, and more dense layers may be required for information mapping. (2) Design objectives of the radiation pattern: for high-accuracy objectives, such as the beam direction and peak radiation intensity of the antenna array, more convolutional layers may be necessary to extract more precise information. (3) Computing resources: the number of convolutional and dense layers must also respect the limitations of available computing resources; more layers may require more resources, such as GPUs, and longer training times.
In conclusion, the number of convolutional and dense layers should be selected according to the specific antenna design problem. Multiple experiments and adjustments may be required to determine the optimal number of convolutional and dense layers and the other hyperparameters.
In this work, the structural parameters of the convolutional and dense layers are based on those of the AlexNet neural network, but all pooling layers are removed. The number of neurons in the output layer is tied to the number of antenna elements. Additionally, a batch normalization layer is added after each convolutional layer to enhance learning. To accelerate convergence and prevent vanishing gradients, the activation function of every layer is set to "relu", except for the output layer, which uses the "sigmoid" activation function to bound the range of the excitation weight vector w.
It is noteworthy that during training, the role of the backpropagation technique is to reduce the difference between the output radiation pattern FF_R(θ, φ) and the target radiation pattern FF(θ, φ). During the calculation, the output includes the amplitude and phase excitations of all antenna elements in the s-th group (s ∈ {1, ..., S}), as shown in Figure 1. Forward propagation entails computing all of the cost functions l_s. The neural network continuously updates its parameters by identifying the direction of gradient reduction for all S cost functions. Ultimately, all S synthesized radiation patterns meet their respective predefined requirements. For the backpropagation process, we used the "Adam" optimization algorithm to update the parameters of the neural network and integrated it seamlessly into the TensorFlow framework.
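The overall "forward pattern, compare against target, step downhill" loop can be illustrated with a much-simplified stand-in. Below, plain gradient descent with a numerical gradient directly optimizes the excitation vector of a small linear array against a least-squares target; the paper instead trains a CNN with TensorFlow's Adam optimizer, and the array size, target, and step size here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D = 8, 181                                   # 8 elements, 181 angular samples
theta = np.linspace(-np.pi / 2, np.pi / 2, D)
steer = np.exp(1j * 2 * np.pi * 0.5 * np.arange(n)[:, None] * np.sin(theta))

target = np.zeros(D)
target[D // 2] = 1.0                            # illustrative broadside pencil beam

def loss(wr):
    # real parametrization: first n entries = Re(w), last n = Im(w)
    w = wr[:n] + 1j * wr[n:]
    ff = np.abs(w @ steer) / n                  # normalized pattern magnitude
    return np.mean((ff - target) ** 2)

wr = 0.1 * rng.standard_normal(2 * n)
initial = loss(wr)
for _ in range(100):                            # "forward + backward" iterations
    grad = np.zeros_like(wr)
    for k in range(2 * n):                      # numerical gradient (sketch only)
        e = np.zeros_like(wr); e[k] = 1e-6
        grad[k] = (loss(wr + e) - loss(wr - e)) / 2e-6
    wr -= 0.1 * grad                            # plain gradient step (paper uses Adam)
print(loss(wr) < initial)  # True: the excitations move toward the target pattern
```

In the paper's framework the gradient is obtained by automatic differentiation through the cost functions l_s rather than by finite differences, and all S patterns share one backward pass.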

Numerical Examples
In this section, we demonstrate the successful simultaneous synthesis of five radiation patterns, each with a different main beam direction, using the proposed neural network framework. The numerical examples were implemented in the TensorFlow framework with GPU-based acceleration, resulting in efficient and effective computation.
In this example, the radiation patterns are synthesized in the upper half-space of the array for directions (θ, φ), and the layout of the antenna array is shown in Figure 2. The constructed mask matrices have a resolution of 1° in both θ and φ over the upper half-space, resulting in D = 360 and G = 90. To optimize all radiation patterns, weight coefficients ν_1 = 1 and ν_2 = 500 are applied with a learning rate of μ = 5 × 10^−6.

International Journal of Antennas and Propagation
The requirements for the five target radiation patterns in this paper are as follows: each main beam must point precisely in a different direction, and lower sidelobe levels are enforced by setting the sidelobe level value to SLL = −18 dB. The normalized amplitude and phase results of the radiation patterns with different main beam directions are shown in Figures 3 and 4, respectively, and the main radiation properties of the different results are listed in Table 1.
All of the above results were synthesized simultaneously, taking only 87 seconds, less than half of the combined time required to compute each pattern individually (185 seconds). This demonstrates the high efficiency of the proposed neural network framework in synthesizing multiple incoherent radiation patterns simultaneously.

Discussion
We believe that this high efficiency stems from the computational approach unique to neural networks. First, backpropagation is one reason for the high efficiency of neural networks. Second, we add a convolutional layer structure as an important improvement over our previous work. The weight sharing and local connections in the convolutional layers significantly reduce the number of network parameters, which in turn allows the network to be deepened while preserving computational efficiency. Furthermore, when synthesizing multiple antenna array radiation patterns simultaneously, the convolutional layer structure also curbs the expansion of input parameters caused by the simultaneous input of multiple mask matrices, further safeguarding computational efficiency. It is also worth noting that an appropriate learning rate, determined through several experiments, not only accelerates learning but also leads to satisfactory results. Moreover, in the target radiation patterns presented in this paper, the main beam direction is the most important radiation characteristic, and thus the weight ν_2 associated with controlling the main beam direction is given a higher value. Increasing the weight ν_1 of Uloss, which controls the sidelobe level, can further reduce the sidelobe level in the results.

The proposed neural network framework is flexible enough to synthesize more than five incoherent radiation patterns. However, increasing the number of synthesized radiation patterns enlarges the input and output of the framework. A neural network's capacity to process data concurrently is related to the number of neurons in each layer, so the number of neurons per layer must also grow with the number of radiation patterns synthesized simultaneously. Furthermore, if this method is applied to a larger antenna array, the number of neurons in the output layer must match the number of antenna elements. Under these circumstances, more capable computing equipment and sufficient computing resources are required.
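The weight-sharing argument is easy to make concrete. The following back-of-the-envelope comparison (layer sizes are illustrative, not the paper's actual architecture) counts trainable parameters for a small convolutional layer versus a dense layer reading the same 360 × 90 two-channel mask:

```python
def conv2d_params(k, c_in, c_out):
    # weight sharing: one k x k kernel per (input, output) channel pair, plus biases
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # fully connected: every input feeds every output, plus biases
    return n_in * n_out + n_out

conv = conv2d_params(3, 2, 64)          # 3x3 kernels over the 2-channel mask
dense = dense_params(360 * 90 * 2, 64)  # same input flattened into a dense layer
print(conv, dense)  # 1216 4147264
```

The convolutional layer's parameter count is independent of the mask's angular resolution, which is why stacking more mask channels for more simultaneous targets does not blow up the early layers of the network.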

Conclusions
In this paper, we present a method for synthesizing multiple antenna array radiation patterns simultaneously using a convolutional neural network. Numerical examples demonstrate that the proposed method is highly efficient. Multiple parameters control the upper and lower boundary functions, providing the flexibility to reshape the various requirements of the target radiation patterns. Additionally, the mask matrix transformation not only facilitates parallel computation but also offers a way for the neural network to handle multiobjective optimization problems. The authors believe that using the proposed neural network to simultaneously synthesize a large number of radiation patterns will offer even greater computational efficiency advantages and can be extended to address many other electromagnetic field problems.

Figure 2: Layout of the antenna array.
Figure 1: Structure diagram of the neural network and transformation of the mask matrix.

Figure 3: The amplitude results of the normalized radiation patterns.

Figure 4: The phase results of the radiation patterns.

Table 1: Main radiation properties of the results.