Ground Attack Strategy of Cooperative UAVs for Multitargets

Humans have a fundamental ability to share vision with each other to fulfill common goals, an ability that cooperative UAVs do not have. The difficulties mainly lie in a homologous mathematical description of human cooperation and in elusive experimental practice. This paper proposes parallel multiview splicing on clouds; it first reviews both theoretical and practical studies of UAVs. These are then reconsidered from the perspective of humans' vision sharing. Next, a conceptual model of parallel multiview splicing on clouds is proposed and its mathematical deduction is fulfilled. Furthermore, an experimental cooperative UAV platform is built to implement the algorithms in practice. Both the simulated and the practical results validate the feasibility of our method. Finally, a general discussion and proposals for addressing future issues are given.


Introduction
The unmanned aerial vehicle (UAV) combat system plays an important role in acquiring information superiority, implementing precise strikes, and completing fast combat tasks in modern rapid, information-centric warfare [1]. In particular, the intelligent UAV, which integrates artificial intelligence to perceive the environment, make attack strategies, assess tasks, etc., can secure the initiative and victory in war [2].
However, the task accomplishment of a single UAV is often unsatisfactory. When a single UAV invades an enemy-occupied area, it often fails to complete an effective attack due to its own load limitation, enemy interference, and interception.
Therefore, cooperation among multiple UAVs is needed to guarantee task completion [3, 4].
With the development of technology and equipment, air confrontation between major powers will be of high intensity.
Traditional doctrine considered manned vehicles the main body of future aerial combat, until the significance of cooperative UAVs (Co-UAVs) was discovered.
Cooperative UAVs show a new type of combat effectiveness [5], with the following advantages.
(a) Intelligence advantage: Co-UAVs have distributed sensors, which can cooperate with each other to achieve precise target positioning. Networked operations can share information among UAVs, achieving "any one knows, everyone knows" in the swarm. This intelligence sharing lays the foundation for cooperative attack.
(b) Speed advantage: Co-UAVs can automatically decompose tasks online according to the battlefield situation and give the subtasks to corresponding vehicles. The assigned UAVs can react quickly and coordinate operations such as interference suppression, fire strike, and damage assessment, which shortens the "perception-decision-action" cycle and speeds up the combat process.
(c) Cooperation advantage: UAVs can cooperate autonomously and adaptively, which makes the swarm act as a single unit. As a result, uniform intensive attack and dense defense can be achieved.
(d) Quantity advantage: Co-UAVs usually use low-cost unmanned platforms, small in size and large in quantity. They can maintain a high-pressure situation and continuous attack, rapidly paralyzing the opposing defense system and achieving the operational purpose in the shortest time.
As a subversive modern attack strategy against the enemy, cooperative UAVs have been regarded as the core of triumph. In particular, the swarm intelligence (SI) of Co-UAVs is widely applied as the key technology to win future combat [6].
In theory, Suresh and Ghose [7] proposed a self-adapting ground attack strategy for UAVs by establishing a path function within the detection range.
They combined reconnaissance, interference, and autonomous attack to build an adaptive ground attack strategy for Co-UAVs. Luo et al. [8] proposed an online-offline integrated cooperation strategy of UAVs: offline expert decision-making analyzes the battlefield environment to establish an environmental impact map, and an online robust decision-making model evaluates the scenarios faced by each UAV to adopt the best robust attack action. Wang et al. [9] tried to find the best strategy of Co-UAVs using a Radial Basis Function Neural Network (RBF-NN) and to evaluate the performance of cooperation; an alterable neural network is also introduced to search the candidate feasible solution set precisely, which improves the efficiency of the RBF-NN. In [10], an interval consistency model based on an auction algorithm is proposed to solve the consistency problem of Co-UAVs, making the UAVs reach the target at the same time.
In practice, as the Research Laboratory of the United States Air Force (USAF) showed in 2002, the key to success on future complex battlefields is to use multi-UAVs for searching and attacking, investigation and suppression, psychological warfare, and tactical restraint [11]. Co-UAVs are the breakthrough point of future unmanned warfare. In subsequent USAF research, hundreds of simulation experiments modeled the interception of Co-UAV attacks by the Aegis air defense system [12]. The results show that the defense system could hardly intercept all the UAVs and was repeatedly broken through, which indicates the superior attack performance of Co-UAVs. In 2015, the Defense Advanced Research Projects Agency (DARPA) published the "Gremlins" project, which plans to develop partially recoverable Co-UAVs for reconnaissance and electronic warfare [13].
The Gremlins can defeat the enemy by suppressing the missile defense system, cutting off communication, and attacking the enemy's data network with a large number of UAVs. In 2016, China Electronics Technology Corporation (CETC) established the first Co-UAV test prototype in China and verified the cooperative principle with 67 UAVs. In 2017, a flight test of 119 fixed-wing UAVs was completed by CETC [14].
Both the theoretical and practical research indicates that Co-UAVs have become the winning force of the battlefield, with the ability to change the rules of the game in the future [15]. However, former research mainly focuses on preplanned strategies, meaning the ground attack strategy is established before the UAVs arrive at the battlefield. It is very hard to preplan all scenarios exhaustively, for the battlefield is unknown (or partially unknown) in advance.
Here, a human-cooperation-inspired approach for Co-UAVs is presented. We first explain the goal-oriented cooperation of humans, especially strategy making based on vision sharing.
Then, a human-like model called parallel multiview splicing on clouds (PMVSC) is built, which incorporates these biobehavioral-science insights in a structured cooperative system of UAVs. In addition to the development of PMVSC, we applied the model to a variety of ground attack tasks for multitargets that required mutual cooperation of UAVs. Finally, PMVSC is tested in a real scenario (in which there are two distinct kinds of objects to test the precise processing performance of Co-UAVs for multitargets) on the experimental multi-UAV platform.

Goal-Oriented Cooperation of Humans Based on Vision Sharing

The cooperation of humans (CoH) has been illustrated by the social psychologist Lewin et al. [16, 17]. He pointed out that humans' cooperation is a complex group behavior (B) which is affected by internal individuals (I) and the external environment (E):

B = f(I, E),  (1)

where B = [B_1, B_2, ..., B_n]^T represents the behavior set of individuals and n is the total number of individuals in the group. The Lewin CoH model reveals the general principles of human behavior to some extent. However, it is a passive cooperation model with no clear goals. Goal-oriented behavior is the process of seeking to achieve the general goals of a group. In a cooperative mission, every individual has his own task; they work independently as well as in parallel to fulfill the general goal. So, equation (1) can be revised as follows:

B = f(I, E, G),  (2)

where G = [G_1, G_2, ..., G_n]^T represents the group goals, composed of each individual goal. Take a typical scenario, as shown in Figure 1, for example. The general goal is to find all the objects (the red circles in Figure 1) in the environment, but there are obstacles blocking the sight. Each individual can only see local objects and environment (the translucent vision). They share their visions to obtain the overall environment and then consult together to reach a proper object assignment.

Goal-Oriented Cooperation Mechanism Based on Vision Sharing

Outline of Parallel Multiview Splicing on Clouds.
A graphical representation of the proposed architecture is given in Figure 2. Perceiving, cognizing, and assigning targets in the UAV cooperation system mirrors goal-oriented human cooperation based on vision sharing: each individual is responsible for a specific target, and together they fulfill the overall goal.
In PMVSC, the targets (both true and false) are first perceived by the UAVs, and each UAV only knows the targets in its own field of vision (FoV). There are several UAVs over the target environment, detecting the targets with onboard cameras. Though a UAV can get local information through the perceive module, it cannot remove targets repeated across the group. Each UAV uploads its perceived in-FoV information to the clouds through the vision sharing module. The vision sharing module preprocesses the detected environment information of the respective UAVs, and the separated FoVs are then combined into a full and detailed environment in a single map. Next, the entire map is transferred to the cognize module to distinguish whether the targets are true or false. The valuable true targets need to be attacked, while the disguised false targets do not. Finally, the information of the true targets is delivered to the next module, which is responsible for task assignment and path planning.
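A minimal sketch of the module flow described above, with illustrative function names (`perceive`, `share_vision`, `cognize`, `assign`) and toy data standing in for real imagery; none of these names come from the paper:

```python
def perceive(uav_id, field_of_view):
    # Each UAV reports only the targets inside its own FoV.
    return {"uav": uav_id, "targets": field_of_view}

def share_vision(reports):
    # The cloud merges the separate FoVs into one map, so a target
    # seen by several UAVs appears only once.
    merged = {}
    for report in reports:
        for name, is_true in report["targets"].items():
            merged[name] = is_true
    return merged

def cognize(world_map):
    # Keep only the targets classified as true.
    return [name for name, is_true in world_map.items() if is_true]

def assign(true_targets, uav_ids):
    # One true target per UAV, in order (a toy stand-in for the
    # Bayesian-network assignment derived later in this section).
    return dict(zip(uav_ids, sorted(true_targets)))

reports = [
    perceive("u1", {"T1": True, "F1": False}),
    perceive("u2", {"T2": True, "F1": False}),   # F1 is seen twice
    perceive("u3", {"T3": True, "F2": False}),
]
plan = assign(cognize(share_vision(reports)), ["u1", "u2", "u3"])
```

The duplicate sighting of F1 is removed in `share_vision`, mimicking how the single merged map eliminates repeated targets across FoVs.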
To achieve such complicated processes in the PMVSC architecture, a number of components need to be explicated; they are described in the following sections together with the derivation of the mathematical algorithms.

3.2. The Components and Algorithms of PMVSC. Suppose there are N UAVs performing the attack task. For the kth UAV in the group, the image perceived by the camera is I_k(x, y), where (x, y) is the position along the x- and y-axes in the perceived image. In the perceive module, the color image should be preprocessed to make it more convenient for subsequent processing.
The original image I_k from the camera is in the red, green, and blue model (RGB-model); each color appears in the primary spectral components of red, green, and blue. The model is based on the Cartesian coordinate system. The RGB-model has advantages in observation and application. However, as pointed out by Ali et al. [18], the RGB-model has two disadvantages compared to the hue, saturation, and illumination model (HSI-model): (a) the three components describe the image together, resulting in a lot of redundant information among the components, which increases the calculation; (b) the change of Euclidean distance between points in RGB space is not proportional to the change of actual color. When color separation is carried out, it is easy to make a false separation, omit useful information, or mix useless information with useful information.
Figure 3 shows the HSI cylindrical color space model, where f_h, f_s, and f_i represent the values of hue, saturation, and illumination, respectively. With f_r, f_g, and f_b the normalized values of red, green, and blue in the image, the standard conversion is

f_i = (f_r + f_g + f_b)/3,
f_s = 1 − 3 min(f_r, f_g, f_b)/(f_r + f_g + f_b),  (3)
f_h = θ if f_b ≤ f_g, and 2π − θ otherwise, with
θ = arccos{ [(f_r − f_g) + (f_r − f_b)]/2 / [(f_r − f_g)² + (f_r − f_b)(f_g − f_b)]^{1/2} }.

The perceive module converts RGB to HSI. In the HSI-model, the image features are obvious in its space. After converting RGB space to HSI space, the connection of each information structure is more compact, the components are more independent of each other, and the loss of color information is smaller, which lays a good foundation for segmentation and target recognition.
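As a sketch, the conversion for a single normalized pixel can be written as follows; these are the standard geometric RGB-to-HSI formulas, and `rgb_to_hsi` is an illustrative helper rather than code from the paper:

```python
import math

def rgb_to_hsi(r, g, b, eps=1e-10):
    """Convert one normalized RGB pixel (values in [0, 1]) to (h, s, i).

    Standard geometric conversion: intensity is the channel mean,
    saturation measures distance from grey, and hue is the angle
    around the color circle. `eps` guards divisions by zero.
    """
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    # Clamp before acos to absorb floating-point round-off.
    theta = math.acos(max(-1.0, min(1.0, num / den)))
    h = theta if b <= g else 2.0 * math.pi - theta
    return h, s, i
```

For a pure red pixel this yields hue 0, full saturation, and intensity 1/3, matching the cylindrical model of Figure 3.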
After converting RGB to HSI, the information of I_k is uploaded to the vision sharing module, whose purposes are information normalization and image invariance, as shown in Figure 4.
In image processing, moment invariant features can reflect the shape information of the image, with translation and scale invariance [19]. For an obtained image I_k(x, y), define its (p + q)-order origin moment as

m^k_pq = Σ_{x=1}^{M_k} Σ_{y=1}^{N_k} x^p y^q I_k(x, y),  (4)

where M_k and N_k represent the maximum row and column scales of the image I_k(x, y), (x, y) is the position along the x- and y-axes in I_k(x, y), and p, q ∈ {0, 1, 2, ...}. However, the origin moment m^k_pq responds to changes in I_k(x, y). To achieve invariance to translation and scale, m^k_pq is improved to the (p + q)-order central moment

μ^k_pq = Σ_{x=1}^{M_k} Σ_{y=1}^{N_k} (x − x̄)^p (y − ȳ)^q I_k(x, y),  (5)

where x̄ and ȳ represent the centroid position of the image, calculated as

x̄ = m^k_10 / m^k_00,  ȳ = m^k_01 / m^k_00.  (6)

Because μ^k_pq only keeps translation invariance, the normalized central moment η^k_pq is defined to obtain scale invariance:

η^k_pq = μ^k_pq / (μ^k_00)^{(p+q)/2 + 1}.  (7)

In the cognize module, as shown in Figure 5, the main functions are rotation invariance, image mosaic, and target classification.
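The moment definitions above can be checked with a small sketch; here an image is represented as a sparse dict mapping pixel coordinates to intensities, an illustrative representation rather than the authors' code:

```python
def raw_moment(img, p, q):
    # m_pq = sum over pixels of x^p * y^q * I(x, y)
    return sum(x**p * y**q * v for (x, y), v in img.items())

def central_moment(img, p, q):
    # Shift coordinates to the intensity centroid, which makes the
    # moment invariant to translation of the image.
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00   # centroid x
    yc = raw_moment(img, 0, 1) / m00   # centroid y
    return sum((x - xc)**p * (y - yc)**q * v for (x, y), v in img.items())

def normalized_moment(img, p, q):
    # eta_pq = mu_pq / mu_00^((p+q)/2 + 1): normalizing by the zeroth
    # central moment adds scale invariance for continuous images.
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** ((p + q) / 2 + 1)
```

Translating every pixel of a small test image leaves both the central and the normalized moments unchanged, as equations (5)-(7) predict.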
From [20], we can infer that rotation invariance can be obtained by the set of equations (8) based on the normalized central moments. For the images I_k(x, y) and I_l(x, y), as shown in Figure 6, the crucial procedure of image mosaic is to find the most similar region in both and to montage the two images based on the common region. Supposing the test region is a square with side length L_tr, the similarity between two images is defined as sim[(I_k, I_l), L_tr]. Then, the image mosaic can be fulfilled by calculating the minimum value of sim[(I_k, I_l), L_tr] over candidate common regions (equation (9)). Once the image mosaic is ready, all the detected targets are combined in a whole image WI(x, y). Then, the targets should be classified to find out the true targets to attack. For the accurate recognition of multitargets, feature extraction and feature classification are the key issues. True and false targets are very similar, and even the distortion of real targets in the recognition process can lead to recognition errors. A cognition-based intelligent recognition method is used in this paper to classify target features under similarity constraints to achieve high recognition accuracy.
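The mosaic procedure just described — find the most similar common region, then montage on it — can be sketched as follows. `overlap_cost` and `mosaic` are illustrative stand-ins for sim[(I_k, I_l), L_tr], using a sum of squared differences over a simple column overlap rather than the paper's exact similarity measure:

```python
def overlap_cost(left, right, w):
    # Sum of squared differences between the last w columns of `left`
    # and the first w columns of `right` (the candidate common region).
    cost = 0.0
    for row_l, row_r in zip(left, right):
        for j in range(w):
            cost += (row_l[len(row_l) - w + j] - row_r[j]) ** 2
    return cost

def mosaic(left, right, max_w):
    # Choose the overlap width with minimum cost, then montage the
    # two images on that common region (keeping the left image's copy).
    best_w = min(range(1, max_w + 1), key=lambda w: overlap_cost(left, right, w))
    merged = [row_l + row_r[best_w:] for row_l, row_r in zip(left, right)]
    return merged, best_w
```

On two toy images whose last and first two columns coincide, the cost is zero at overlap width 2, so the montage keeps the shared region once.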
Assume there are N targets in WI(x, y). For the ith and jth targets TG_i(x_i, y_i) and TG_j(x_j, y_j), a feature matrix space S_V ∈ R^{N×N} can be introduced to express the similarity between TG_i and TG_j (equation (10)). Then, the problem of feature classification and recognition for true and false targets can be transformed into a problem of similarity constraints on the feature vector TG = [TG_1, TG_2, ..., TG_N] ⊂ WI. To classify the targets with similar features is to minimize the similarity of the same kind of targets, as equation (11) shows:

min_TG ‖S_V − TG^T TG‖_F.  (11)

Since equation (11) is an optimization problem over matrices, it is necessary to transform it into a singular value decomposition in order to obtain the optimal solution. Let the singular value decomposition of the matrix S_V be

S_V = P Σ P^T,  (12)

where P is the transformation matrix and Σ is the N × N singular value matrix.
Assuming that Σ_k is a diagonal matrix composed of the first k singular values of S_V and P_{·k} collects the corresponding left singular vectors, there is a definite solution of min_TG ‖S_V − TG^T TG‖_F:

TG = U_k = Σ_k^{1/2} P_{·k}^T.  (13)

For any orthogonal matrix T, TG = U_k · T is still a solution of the min_TG ‖S_V − TG^T TG‖_F problem. Therefore, the original objective function can be rewritten as equation (14). If U_k is used as the input layer and TG as the output layer of the network, the problem can be solved with a deep belief network model, since it is similar to the energy function of deep belief networks [21]. Taking each u_i ∈ U_k of the network input layer as a visible variable and TG_i ∈ TG as a hidden variable, the energy function can be defined using the Gaussian-constrained Boltzmann machine model as equation (15) in order to classify the feature data reasonably:

E(u, TG; θ) = Σ_{i=1}^{D} (u_i − d_i)²/(2σ_i²) − Σ_{j=1}^{M} c_j TG_j − Σ_{i=1}^{D} Σ_{j=1}^{M} (u_i/σ_i) T_ij TG_j,  (15)

where θ = {T, d, c, σ} are the model parameters and D and M represent the numbers of visible and hidden units in the network, respectively.
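Under the assumption that the energy function takes the standard Gaussian-Bernoulli form with θ = {T, d, c, σ} (the paper's exact equation (15) was lost in extraction), it can be evaluated as below; `grbm_energy` is an illustrative helper:

```python
def grbm_energy(u, tg, T, d, c, sigma):
    """Energy of a Gaussian-constrained Boltzmann machine.

    u:     visible units (length D), real-valued features
    tg:    hidden units (length M), typically binary
    T:     D x M weight matrix;  d, c: visible/hidden biases
    sigma: per-visible-unit standard deviations

    Assumed standard form: quadratic visible term minus hidden-bias
    term minus the weighted visible-hidden interaction.
    """
    D, M = len(u), len(tg)
    vis = sum((u[i] - d[i]) ** 2 / (2.0 * sigma[i] ** 2) for i in range(D))
    hid = sum(c[j] * tg[j] for j in range(M))
    inter = sum(u[i] / sigma[i] * T[i][j] * tg[j]
                for i in range(D) for j in range(M))
    return vis - hid - inter
```

Lower energy corresponds to more probable visible/hidden configurations, which is what lets the constrained model group similar target features together.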
By defining the range of values for the model similarity constraint parameter σ, the similarity of the data eigenvalue classification can be changed. That is to say, it achieves the cognitive recognition characteristics of similar targets and can finally distinguish true targets TG_T and false targets TG_F in WI(x, y), as shown in Figure 7. For a target TG_j, the feature point is S^j_v. Suppose the feature points of the samples are S_F and S_T. From equation (12), if Q S^j_v Q^T = P S_F P^T, TG_j belongs to the false targets; otherwise, it is a true target.
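The truncated factorization of the similarity matrix S_V used above can be sketched with plain power iteration; `leading_factor` recovers the leading eigenpair (the rank-1 case of the truncated P Σ P^T) and is an illustrative stand-in for a full singular value decomposition:

```python
import math

def leading_factor(S, iters=200):
    """Power iteration on a symmetric similarity matrix S.

    Returns (lam, v): the leading eigenvalue and unit eigenvector,
    giving the best rank-1 approximation S ~= lam * v v^T. A sketch of
    the truncated decomposition, not the authors' exact solver.
    """
    n = len(S)
    v = [1.0 / math.sqrt(n)] * n          # uniform starting vector
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]         # renormalize each step
    lam = sum(v[i] * S[i][j] * v[j] for i in range(n) for j in range(n))
    return lam, v
```

For the symmetric matrix [[2, 1], [1, 2]] this converges to eigenvalue 3 with an eigenvector of equal components, matching the analytic result.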
In the last process, the assignment of true targets to Co-UAVs is studied. Many intelligent methods have been used for multiagent cooperative problems [22-26]; in target assignment, however, the unpredictable result of each UAV's behavior affects the implementation of all strategies of subsequent UAVs, and these methods are too subjective and highly coupled with the real-time task allocation process. It is necessary to introduce a more objective and dynamic method for target assignment. In this paper, the Bayesian network is introduced into the UAV target assignment task model to achieve dynamic adjustment and real-time strategy in target assignment.
The Bayesian network is a directed acyclic graph with probability annotations, which can reveal learning and statistical inference functions for prediction, causal analysis, etc. For the multi-UAV target assignment task in this paper, the Bayesian network can be expressed as

BN = ⟨G, P⟩,  (16)

where G = ⟨U, S, A⟩ is a directed acyclic graph, U = {u_1, u_2, u_3, ..., u_N} is the set of Co-UAV members participating in the mission, S is the set of arcs of graph G, and P is the probability annotation of graph G, as shown in Figure 8.
For any UAV member u_k, each element in P represents the conditional probability density of the target node. The rule of probability density is the chain decomposition

P(s) = P(S_T1, S_T2, ..., S_Tm) = Π_{i=1}^{m} P(S_Ti | S_T1, ..., S_T(i−1)),  (17)

where the calculation of the probability P(s) needs 2^{m−1} probabilistic values, a very large amount of calculation. Therefore, introducing the variable independence hypothesis of Bayesian networks can greatly reduce the prior probabilities needed to define the network. For the probability density rule constructed in this paper, we can find, for any target task node S_Tm in the network structure, a minimum subset S_u ⊆ {S_T1, S_T2, S_T3, ..., S_T(m−1)} such that S_Tm is conditionally dependent only on S_u:

P(S_Tm | S_T1, ..., S_T(m−1)) = P(S_Tm | S_u),  (18)

where S_u is the parent node set of S_Tm in the graph G = ⟨U, S, A⟩.
In this way, the probability distribution of the mission node S_Tm allocated to UAV u_k can be determined uniquely (equation (19)). Finally, the remaining true targets can be assigned to the other Co-UAVs, such as UAV member u_l (equation (20)).
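The maximal-probability assignment described above — pick the largest P(S_T | u), then assign the remaining targets to the other UAVs — can be sketched greedily. The probability table is illustrative only, echoing the 0.83/0.82/0.86 values reported in the experiments:

```python
def assign_targets(prob):
    """Greedy maximal-probability assignment.

    `prob` maps (uav, target) -> probability. Each round, pick the
    pair with the highest probability, fix that assignment, and drop
    both the UAV and the target from further consideration.
    """
    prob = dict(prob)
    plan = {}
    while prob:
        (u, t), _ = max(prob.items(), key=lambda kv: kv[1])
        plan[u] = t
        prob = {k: v for k, v in prob.items() if k[0] != u and k[1] != t}
    return plan

probs = {
    ("u1", "T1"): 0.83, ("u1", "T2"): 0.40, ("u1", "T3"): 0.30,
    ("u2", "T1"): 0.50, ("u2", "T2"): 0.82, ("u2", "T3"): 0.35,
    ("u3", "T1"): 0.20, ("u3", "T2"): 0.45, ("u3", "T3"): 0.86,
}
plan = assign_targets(probs)
```

With these values, u3 takes T3 first (0.86), then u1 takes T1 (0.83), leaving T2 for u2 — each UAV ends up with its maximal-probability target.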

The Experimental Platform of UAVs.
The experiments have been conducted on Co-UAVs equipped with an adaptive camera, a flight controller, an algorithmic solver, and a data transmitter. Figure 9 shows a single UAV platform, which perceives the outside environment through its onboard adaptive-resolution camera embedded on a 3-DOF pan-tilt platform and perceives its real-time flight status from inner sensors. Then, the information about the outside environment and internal status is transmitted to the airborne Intel computer stick, which performs algorithm computing, target recognition, and instruction generation. Finally, the generated instructions are converted to motor commands via the PIX-4 controller.
The Co-UAVs' experimental platform is shown in Figure 10.
The mobile screen can read all data from the onboard Intel computer stick and change the algorithm parameters.
The ground station obtains real-time information from the flying UAVs, including image features, flight status, and cooperative information. After calculation, the ground station sends control commands to each UAV. The result of WI is shown in Figure 12.
In Figure 12, the four images are montaged together, and the information of the whole environment can be obtained through the image mosaic. The result shows the proposed method can find similar regions between images and montage the four images based on the common region, indicating the superiority and feasibility of PMVSC. In order to describe the target recognition process of this algorithm, a real target (Figure 14(a)) and a false target (Figure 14(f)) are selected for elaboration, and the pictures in the course of processing are presented in Figures 15 and 16, respectively. The airborne computer first transforms the acquired image into HSI color space, which can be recognized by the machine. After eliminating the useless information, it extracts the eigenvalues of the transformed image.
However, in the original eigenvalue space, the eigenvalues almost fill the whole space, so it is impossible to classify the features to distinguish the target type. Therefore, according to the algorithm constructed in this paper, the feature space is transformed. In the transformed feature space, the eigenvalues have obvious distribution characteristics and can be directly classified, as shown in Figures 15 and 16. Figure 17 is a picture of the cooperative attack of multiple UAVs over the targets. Three UAVs attack their corresponding targets, and their attack probabilities for the respective targets are 0.83, 0.82, and 0.86. All P(S_Ti) are labelled in Figure 11, and the optimal task allocation decision among all UAVs can be obtained by choosing the maximal probability value. Based on the UAV experimental platform, the relevant target assignment algorithms in this paper are tested. Not limited to theoretical simulation, this paper applies the algorithm in practice and fully demonstrates its feasibility on the actual Co-UAVs' platform.

Experimental Results on Co-UAVs.
In order to verify the effectiveness and feasibility of the proposed mechanism, PMVSC is tested in a real environment. In the experiment, 3 Co-UAVs were used to cluster, search, identify, and locate the true and false targets (circular targets, diameter 7 m, target recognition area 2 m) in the target area and then attack the targets. The area is about 1000 m × 250 m, and the flight area includes the take-off and landing area (a rectangle of 100 m × 50 m) and the target area (a rectangle of 200 m × 300 m). Six targets were set in the target area. During each attack task, three targets were randomly selected and marked with a white "T" sign in the target center to represent the true targets. Similarly, the other three targets used an "F" sign to represent the false targets.
The schematic illustration of the actual experimental environment is shown in Figure 18 (the experimental area is the irregular area shown in the figure due to the limitation of the actual environment), which contains hidden targets (grey), real targets, and false targets (red).
Figure 19 shows the practical area of three Co-UAVs in the air above the targets' environment. There are multiple targets needing recognition. Each UAV perceives the outside world through its onboard camera, and the perceived information is transferred to the clouds (shown in Figure 18, the green area) to merge the independent partial images into a whole image and to distinguish the true targets. Finally, the true targets are assigned to the respective UAVs to attack, as shown in Figures 20(a) and 20(b).

Still, there are several issues in need of further study. (a) Cooperation among dozens of UAVs: though the cooperation and formation of UAVs have been studied, the proposed method is applied to only three UAVs; how to generalize it and implement it on more UAVs is important future work. (b) Moving-target attack: in this paper, the targets are placed on the ground and are therefore static.
Compared with static targets, moving targets are much harder to attack. Research on dynamic targets needs further study.

Figure 2: Outline of parallel multiview splicing on clouds.

Figure 1: A typical scenario of humans' cooperation.

Figure 4: Schematic of the vision sharing module.

Figure 8: Schematic of targets assignment based on the Bayesian network.

4.2. Image Mosaic. Figures 11(a)-11(d) show 4 images captured by the cameras onboard the Co-UAVs, which are transferred to the clouds (ground station). The combined image of the whole environment can be obtained by applying the image mosaic algorithm proposed in this paper. The test region is defined as a square with side length L_tr = 80 pixels, and the threshold value of the image mosaic similarity sim[(I_k, I_l), L_tr] is 0.85.

4.3. Targets' Recognition. Setting S = S_T ∪ S_F = {S_T1, S_T2, S_T3, S_F1, S_F2, S_F3} and U = {u_1, u_2, u_3}, there are three true targets and three false targets in the targets' area, and three UAVs are involved in the search and attack mission. Set the model parameter T = I_{N×N} as the unit matrix and d = 0.2, c = 0.4, and σ = 0.15 as the related constraints for the feature constraints of the targets. The standard true and false targets used for training are shown in Figure 13, and the test results of each UAV in actual flight are shown in Figure 14. Even if a target has a large distortion (such as dust cover, edge deformation, or random orientation of the true or false identifier), the proposed method can extract feature points to calculate similarity, classify the targets, and recognize them accurately.

Figure 12: Image mosaic of four images from Co-UAVs.

In Figure 20(a), the armed UAV (which carries a white sandbag as ammunition) receives the attack command and then flies to the assigned target. Figure 20(b) shows the result after the attack, from which we can see the target is attacked precisely, indicating the feasibility and validity of the proposed method based on Co-UAVs.

Figure 17: Cooperative attack of multiple UAVs over the targets.