Analysis for Mutual Impedance of Pistons by Neural Network and Its Extension of Derivative

This study is basically a mathematical problem in sonar engineering. The sonar plays a very important role in underwater communication, detection, and remote sensing. Pistons are key sensors in a sonar system. The mutual coupling is a challenging problem in designing a sonar array. The mutual impedance of pistons is required in analyzing the mutual coupling of a sonar array. In this paper, a mathematical model consisting of a neural network and its extension of derivative is given and then utilized to analyze the mutual impedance of pistons. Initially, the mutual impedance of pistons is modelled and predicted by a neural network. By suitably extending the neural network, the derivative, i.e., slope information, of the neural-network output is obtained easily. Therefore, the mutual impedance and its slope information are obtained simultaneously, almost in real time, once the neural network has been trained in advance. Numerical examples show that the neural network can accurately predict the mutual impedance and that its extension of derivative gives the slope information of the mutual impedance simultaneously. It should be emphasized that the training work of the neural network is performed only once, i.e., only the training for mapping the mutual impedance is required. No additional training is required to obtain the slope information.


Introduction
In the underwater environment, the acoustic-wave-based sonar [1,2] is often utilized for communication, detection, and remote sensing because electromagnetic waves attenuate rapidly in water. The piston is one type of acoustic sensor and thus plays a very important role in an underwater sonar system. When an array of pistons is used in a sonar system, the mutual coupling between different pistons affects the array performance. Therefore, understanding and interpreting the mutual coupling [3,4] within the pistons of a sonar array is important and required. According to [5], the mutual coupling effect within a communication array can be modelled and analyzed from the mutual impedance between any two array elements. In other words, the mutual impedance of pistons is required in designing an underwater sonar array that includes mutual coupling effects. However, the computation of the mutual impedance of pistons often involves multidimensional integrals [6] and is therefore difficult and time consuming. This motivates us to utilize the mathematical techniques of the neural network and its extension to analyze the mutual impedance of pistons. The neural network (NN) [7] is an important mathematical technique. It belongs to machine learning and has many applications in engineering. These applications are basically mathematical problems in engineering [8][9][10]. The NN serves as a black box of nonlinear mapping that accepts certain inputs and produces certain outputs. The term "black box" means the relation between the input and output is very complicated and difficult to characterize. In general, two situations call for the aid of neural-network modelling. One is that the theoretical computations are difficult or time consuming. The other is that the practical experiments are difficult to implement. The neural-network machine-learning black box is expected to replace the difficult theoretical computations or practical experiments.
There have been widespread applications of the neural network to different engineering problems, e.g., [7][8][9][10][11][12][13][14][15]. In reference [7], several types of neural networks are utilized to model the relation between the input and output of different fundamental electromagnetic problems. Since electromagnetic problems involve difficult wave theories and are usually difficult to treat experimentally, neural networks are good candidates for solving such problems. In reference [11], the neural network is adapted to achieve optimization of antenna arrays. The neural network can also be extended to calculate multidimensional integrals of engineering problems, e.g., [12]. In reference [13], the neural network is utilized to model antennas attached to nonlinear electronic components, including mutual coupling effects. In addition to forward problems, the neural network can also be utilized to treat inverse problems, e.g., microwave imaging [14]. In reference [15], the neural network is utilized to model the mutual coupling effects within an antenna array.
However, most applications of neural networks to engineering problems are for nonlinear mapping only, e.g., [7][8][9][10][11][12][13][14][15], as introduced above. In some applications, knowledge of the slope information of the system output is necessary. This motivates us to develop a model that consists of both the neural network and its extension of derivative to help one design such a system.
In this paper, a mathematical model consisting of a neural network and its extension of derivative is given and then utilized to analyze the mutual impedance of pistons. Initially, the mutual impedance between two pistons within a sonar array is modelled and predicted by an RBF-NN (radial basis function neural network) [7]. By suitably extending this neural network, the derivative, i.e., slope information, for the output of the RBF-NN is obtained easily.
The RBF-NN is trained to serve as the nonlinear mapping for the mutual impedance of pistons. There is one node in the input layer of the RBF-NN to represent the geometry of the two array elements. There is also one node in the output layer of the RBF-NN to represent the mutual-impedance magnitude between the two pistons. There are some nodes and transfer functions in the hidden layer for nonlinear mapping. The RBF-NN is trained by some existing data sets. After the RBF-NN is well trained, all the weights connecting different nodes within the neural network are determined. The output can then be predicted by simple algebraic computations from the weights and transfer functions within the neural network. The derivative of the RBF-NN output can be easily obtained by suitably extending the RBF-NN. Since the output of an RBF-NN is a linear combination of node weights and nonlinear transfer functions, the derivative of the output can be replaced by differential operations upon this linear combination. The node weights are constant after the neural network is well trained. The nonlinear transfer functions are known and, in general, differentiable. Therefore, the derivative of the RBF-NN output can be easily transformed into derivatives of these nonlinear transfer functions. This transformation makes the computation of the derivative very simple and straightforward. An extension of the RBF-NN is then developed to represent the derivative. It should be noted that no additional training work is required in predicting the derivative of the neural-network output. In other words, the training work is performed only once.
In Section 2, the neural network and its extension of derivative are given. The application to the mutual impedance of pistons is given in Section 3. Numerical examples are given in Section 4. Finally, the conclusion is given in Section 5.

Neural Network and Its Extension of Derivative
In this section, a model consisting of a neural network and its extension of derivative is given. The neural network used in this study is the RBF-NN [7]. As shown in Figure 1 (the part below the horizontal dotted line), the RBF-NN has three layers: the input layer, the hidden layer, and the output layer. There is one node, i.e., y, in the output layer to represent the output. There is one node, i.e., x, in the input layer to represent the piston parameter. There are J nodes in the hidden layer for nonlinear mapping. In fact, there is no limitation on the number of nodes in the input or output layer. However, we choose one node in both the input and output layers to make the illustration clear. The output of the RBF-NN in Figure 1 can be expressed as follows [7]:

y = w_0 + Σ_{j=1}^{J} w_j g_j(x).    (1)

In equation (1), g_j(x) represents the nonlinear transformation function of the jth node in the hidden layer and is given as

g_j(x) = exp[ −(x − v_j)^2 / (2σ^2) ],    (2)

where v_j is the mean value corresponding to the jth hidden node and σ^2 is the autocovariance of the Gaussian function. The abovementioned neural network is trained by some existing data sets. The training processes are given in detail in [7]. After the neural network is well trained, all the weights w_j, j = 0, 1, . . ., J, are determined, and the nonlinear mapping x → y can be predicted by equation (1).
To obtain the nonlinear mapping x → dy/dx, the differential operation is executed on equation (1). We have

dy/dx = Σ_{j=1}^{J} w_j dg_j(x)/dx = Σ_{j=1}^{J} w_j [ −(x − v_j)/σ^2 ] g_j(x).    (3)

Since all the weights w_j, j = 0, 1, . . ., J, in equation (3) have been determined in the training process for the mapping x → y, we can determine the mapping x → dy/dx from equation (3) straightforwardly.
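The computation of equations (1)-(3) can be sketched as follows, assuming the standard Gaussian form of the RBF transfer function described above. The weights w_j, centres v_j, and autocovariance below are illustrative placeholders, not trained values; a trained network would supply them instead.

```python
import numpy as np

def rbf_forward(x, w0, w, v, sigma2):
    """Equation (1): y = w0 + sum_j w_j * g_j(x), with Gaussian bases of equation (2)."""
    g = np.exp(-(x - v) ** 2 / (2.0 * sigma2))  # g_j(x), one value per hidden node
    return w0 + np.dot(w, g)

def rbf_derivative(x, w, v, sigma2):
    """Equation (3): dy/dx = sum_j w_j * [-(x - v_j)/sigma2] * g_j(x).
    Reuses the weights trained for x -> y; no additional training is needed."""
    g = np.exp(-(x - v) ** 2 / (2.0 * sigma2))
    return np.dot(w, -(x - v) / sigma2 * g)

# Placeholder network with J = 3 hidden nodes (illustrative values only)
v = np.array([0.2, 0.5, 0.8])    # centres v_j
w = np.array([1.0, -0.5, 2.0])   # weights w_j
w0, sigma2 = 0.1, 0.5

x = 0.4
y = rbf_forward(x, w0, w, v, sigma2)
slope_nn = rbf_derivative(x, w, v, sigma2)

# Sanity check: the analytic slope agrees with a centre-difference estimate
h = 1e-5
slope_fd = (rbf_forward(x + h, w0, w, v, sigma2)
            - rbf_forward(x - h, w0, w, v, sigma2)) / (2 * h)
```

Because the derivative is exact rather than a finite difference, no step size has to be chosen and no second network evaluation is required.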
The RBF-NN is then extended to model or predict dy/dx based on equation (3), as shown in Figure 1 (the part above the horizontal dotted line). The flow chart for using the RBF-NN and its extension of derivative is illustrated in Figure 2. It should be noted that the training work is performed only once (during the mapping x → y). The derivative dy/dx gives the slope information at some x. In designing a system, x is a controlled factor and y is the corresponding response. When the derivative or slope information is known, one can predict whether the next y, at (x + Δx), will increase or decrease. This can reduce the chance of error in designing a system. After the mapping x → dy/dx, i.e., the slope information of y, is determined, one gains insight into the characteristics of y, such as increasing, decreasing, opening upward, and opening downward. These properties are helpful in many engineering design problems, such as a sonar array including mutual coupling effects.

Application to Mutual Impedance of Pistons
Prior to the mutual impedance analyses, we first introduce the role of mutual impedance in mutual coupling and communication-array design. For a communication array, the far-field pattern function is described by the product of the element factor and the array factor. The array factor is especially important. Consider an antenna array consisting of N elements. The array factor is the weighted sum of N terms, each term being e (≈2.71828...) raised to an imaginary power. The weight of each term is the radiating current. The feeding voltage and radiating current of the array elements can be characterized by V = ZI, where V and I are both N-dimensional column vectors whose components represent the feeding voltages and radiating currents of the array elements. The N × N matrix Z contains elements Z_ij representing the self-impedance (i = j) or mutual impedance (i ≠ j) between the ith and jth elements. For isotropic elements, i.e., no mutual coupling, we have Z_ij = 0 for i ≠ j, and V = ZI reduces to a simple scalar multiplication. When mutual coupling exists, we have Z_ij ≠ 0 and V = ZI becomes a full matrix multiplication. For an acoustic sonar array, the abovementioned voltage is equivalent to the mechanical reaction force and the abovementioned current is equivalent to the vibration velocity.
Their roles in mutual coupling, far-field pattern, and array design are similar to those of antennas.
These are the reasons why the mutual impedance of pistons is analyzed in this study.
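The relation V = ZI can be illustrated with a small hypothetical array; the impedance values below are placeholders for illustration, not computed piston or antenna data.

```python
import numpy as np

# Hypothetical 3-element array: self-impedances on the diagonal of Z,
# mutual impedances off the diagonal (illustrative values only).
Z = np.array([[50.0 + 2.0j,  5.0 - 1.0j,  1.0 + 0.5j],
              [ 5.0 - 1.0j, 50.0 + 2.0j,  5.0 - 1.0j],
              [ 1.0 + 0.5j,  5.0 - 1.0j, 50.0 + 2.0j]])

I = np.array([1.0, 1.0, 1.0])   # desired radiating currents (uniform excitation)
V = Z @ I                       # feeding voltages required once coupling is included

# With no mutual coupling (off-diagonal terms zero), V = ZI reduces to
# element-wise scaling by the self-impedances.
V_uncoupled = np.diag(Z) * I
coupling_contribution = V - V_uncoupled   # extra voltage caused by mutual coupling
```

The off-diagonal terms of Z are exactly what the neural network of this study is trained to supply for a piston array.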
Consider two rectangular pistons within a sonar array, as shown in Figure 3. Our goal is to calculate and then predict the mutual impedance between these two pistons. According to [6], the pressure p at a point P_2(ξ, η) on piston-2 due to a point P_1(x, y) on piston-1, which vibrates with angular frequency ω and maximum velocity v_0, is given by

dp = (jkρc v_0 / 2π) (e^{−jkr} / r) dx dy,    (4)

where j = √−1, c is the velocity of sound in the medium, ρ is the density of the medium, k is the wave number, and r is the distance between P_1 and P_2. The mutual impedance Z_m of the two pistons may be found as the force on piston-2 due to the motion of piston-1, divided by the velocity of piston-1. The formulation can be given as follows [6]:

Z_m = (jkρc / 2π) ∫∫_{S_2} ∫∫_{S_1} (e^{−jkr} / r) dx dy dξ dη.    (5)

Note that equations (4) and (5) are solutions to the Helmholtz equation of acoustic wave theory. The abovementioned equation involves four-dimensional integrals and is time consuming to compute. Therefore, modelling or prediction is necessary in practical applications. In this study, a neural network is utilized to model the nonlinear mapping for the mutual impedance between any two pistons. In addition, the slope information of the mutual-impedance characteristic can be easily obtained by suitably extending the neural network, as in equation (3).
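To illustrate why equation (5) is costly, the four-dimensional integral can be approximated by a brute-force midpoint rule. The geometry assumed below (two coplanar square pistons of side a, centres separated by d along the x-axis, i.e., orientation angle ϕ = 0°) and the prefactor follow the reconstructed form of equation (5); this is a sketch, not the authors' Fortran-90 implementation.

```python
import numpy as np

def mutual_impedance(a, d, k, rho_c=1.0, n=16):
    """Midpoint-rule evaluation of the 4D integral in equation (5):
    Z_m = (j*rho*c*k / 2*pi) * integral of exp(-j*k*r)/r over both pistons.
    Assumed geometry: square pistons of side a in the same plane, piston-2
    shifted by d along the x-axis (phi = 0)."""
    t = (np.arange(n) + 0.5) * a / n                  # midpoints of n subintervals
    # (x, y) on piston-1, (xi, eta) on piston-2; piston-2 offset by d in x
    x, y, xi, eta = np.meshgrid(t, t, t + d, t, indexing="ij")
    r = np.sqrt((xi - x) ** 2 + (eta - y) ** 2)       # distance between P1 and P2
    dA = (a / n) ** 4                                 # 4D volume element
    integral = np.sum(np.exp(-1j * k * r) / r) * dA
    return 1j * rho_c * k / (2.0 * np.pi) * integral

# Normalized magnitude |Z_m| / (rho*c*a^2) for ka = 1 and d/a = 2
a, d, k = 1.0, 2.0, 1.0
zm_normalized = abs(mutual_impedance(a, d, k)) / a ** 2
```

Even this modest grid evaluates n^4 = 65,536 kernel terms; refining the grid or sweeping ϕ and ka multiplies the cost, which is what motivates replacing the integral with a trained neural network.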

Numerical Simulation Results
In this section, two numerical examples are given to illustrate the abovementioned theory. For simplicity, two square pistons are considered in this section, i.e., a = b in Figure 3.

Figure 1: The architecture of an RBF neural network and its extension of derivative.

Figure 2 (flow chart for calculating y and dy/dx):
Step 1: Train the original RBF-NN using existing data sets of x → y.
Step 2: The weights of the original RBF-NN are determined after Step 1.
Step 3: dy/dx is obtained from equation (3), and the extension of the RBF-NN is developed using the final weights from Step 2.
Step 4: The mapping x → y (original RBF-NN) and the mapping x → dy/dx (extension of the RBF-NN) are obtained simultaneously.
The training work is performed only once (during the mapping x → y).

The hidden layer of the RBF-NN in Figure 1 contains ten nodes, i.e., J = 10. The learning rate in the training procedure is 0.1. The autocovariance σ^2 of the Gaussian function is 0.5. The selection of J (the number of hidden-layer nodes) and the learning rate is based on experience. Small values of J make the nonlinear transformation inadequate, whereas large values of J increase the neural-network size and make the training work difficult. The learning rate is generally selected within the range [0, 1]. Our past studies of references [11][12][13][14][15] have utilized J = 10 and a learning rate of 0.1 as the RBF-NN parameters to model electromagnetic problems, and the results are very good. Since the mathematical formulas of acoustic wave theory in this study are very similar to those of electromagnetic wave theory in references [11][12][13][14][15], we expect that these values of the RBF-NN parameters are also suitable in this study. In the training and predicting processes of the neural network, all values of the input variable are linearly normalized into 0 ≤ input ≤ 1.
The maximum number of training loops of the neural network is set to 40000.
In the first example, the mutual impedance is calculated with respect to the orientation angle ϕ under the assumptions ka = 1 and d/a = 2. There are 20 data sets randomly selected from the interval 0° < ϕ < 90° to train the neural network.
There are also 20 data sets (different from the training data sets) randomly selected from the interval 0° < ϕ < 90° to verify the prediction accuracy of the neural network. All the training and verification data sets are calculated from equation (5). Figure 4 shows the normalized (divided by ρca^2) mutual-impedance magnitude with respect to the orientation angle ϕ obtained by the RBF-NN prediction of Section 2, i.e., the part below the horizontal dotted line in Figure 1. For comparison, the results calculated by the theoretical formula of equation (5) are also given. They are in good agreement. The discrepancy is defined as

discrepancy = |(y_p − y_T) / y_T| × 100%,    (6)

where y_p and y_T denote the results by prediction and by theory, respectively. The mean square error of the discrepancy for all data of Figure 4 is about 0.009%. This implies that the prediction is very accurate. The consistency of the two curves in Figure 4 means that the trained RBF-NN can accurately predict the mutual-impedance magnitude between two pistons. With the use of the RBF-NN, the complex numerical calculation of the multidimensional integrals in equation (5) can be replaced by the very simple algebraic calculations of the neural network. This makes the analysis of the mutual coupling effects of sonar arrays very efficient. The slope information of Figure 4 can be obtained straightforwardly from equation (3), i.e., the extension of the RBF-NN in Figure 1 (the part above the horizontal dotted line). Following the procedures of the flow chart in Figure 2, the derivative or slope information for the curve of Figure 4 is obtained easily. Figure 5 shows the slope of the normalized mutual-impedance magnitude with respect to the orientation angle ϕ by equation (3), i.e., the extension of the RBF-NN in Figure 1. For comparison, the values calculated by center-difference differentiation of equation (5) are also shown. They are very consistent. The mean square error of the discrepancy for all data of Figure 5 is about 1.74%. This implies that the prediction is very accurate.
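The discrepancy of equation (6) and its mean square error can be computed as follows, assuming the reconstructed percentage form of equation (6); the predicted and theoretical values here are illustrative numbers, not data from the figures.

```python
import numpy as np

def discrepancy(y_pred, y_theory):
    """Equation (6): percentage discrepancy between prediction and theory."""
    return np.abs((y_pred - y_theory) / y_theory) * 100.0

# Illustrative values only (hypothetical prediction vs. theory samples)
y_p = np.array([1.01, 0.98, 1.002])
y_t = np.array([1.00, 1.00, 1.00])

d = discrepancy(y_p, y_t)        # per-point discrepancy in percent
mse = np.mean(d ** 2)            # "mean square error of discrepancy" reported in the text
```

This is the figure of merit quoted for Figures 4-7 (0.009%, 1.74%, 1.22%, and 2.82%).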
In Figure 5, much information about the distribution of the mutual-impedance-magnitude curve in Figure 4 can be found. The slope is zero and increasing at ϕ = 45°. This means that the curve of mutual-impedance magnitude in Figure 4 has a minimum value at ϕ = 45°. The slope is negative from ϕ = 0° to ϕ = 45°, so the curve in Figure 4 is decreasing in this interval. The slope is positive from ϕ = 45° to ϕ = 90°, so the curve in Figure 4 is increasing in this interval. The slope is decreasing from ϕ = 0° to ϕ = 20° and from ϕ = 70° to ϕ = 90°.
This means that the curve in Figure 4 opens downward in these intervals. Similarly, the slope is increasing from ϕ = 20° to ϕ = 70°. This means that the curve in Figure 4 opens upward in this interval. The information shown in Figure 5 is consistent with the actual curve distribution in Figure 4.
In the second example, the mutual impedance with respect to ka is studied. The distance and size of the pistons are chosen such that d/a = 2. Initially, the orientation angle of the two pistons is assumed to be ϕ = 0°, i.e., the two square pistons in Figure 3 are aligned on the x-axis. There are 40 data sets randomly selected from the interval 0 < ka < 20 to train the neural network. There are also 40 data sets (different from the training data sets) randomly selected from the interval 0 < ka < 20 to verify the prediction accuracy of the neural network. All the training and verification data sets are calculated from equation (5). Figure 6 (angle = 0°) shows the normalized mutual-impedance magnitude (divided by ρca^2) with respect to ka obtained by the RBF-NN prediction of Section 2, i.e., the part below the horizontal dotted line in Figure 1. For comparison, the values calculated from the theory of equation (5) are also illustrated in Figure 6 (angle = 0°). They are in very good agreement. The mean square error of the discrepancy for all data of Figure 6 is about 1.22%.
This implies that the prediction is very accurate. Figure 7 (angle = 0°) shows the slope of the normalized mutual-impedance magnitude with respect to ka predicted by equation (3), i.e., the extension of the RBF-NN in Figure 1. For comparison, the values calculated by center-difference differentiation of equation (5) are also shown in Figure 7 (angle = 0°). They are in good agreement. The mean square error of the discrepancy for all data of Figure 7 is about 2.82%. This implies that the prediction is very accurate.

Figure 5: The slope of the normalized mutual-impedance magnitude with respect to the orientation angle ϕ by the extension of the neural network and by center-difference differentiation of the theoretical formula of equation (5).

Similar to the previous example, the slope information in Figure 7 shows that the local maxima of the curve in Figure 6 should occur at ka = 2.5, 9.5, and 15.6, since the slope values are zero and decreasing at these points. Similarly, it is found from Figure 7 that the local minima of the curve in Figure 6 should occur at ka = 7.0, 12.5, and 19.1, since the slope values are zero and increasing at these points. The information shown in Figure 7 is consistent with the actual curve distribution in Figure 6. Next, the orientation angle of the two pistons is changed from ϕ = 0° to ϕ = 15°. The other procedures are the same as those of the case ϕ = 0°. Figure 6 (angle = 15°) shows the normalized mutual-impedance magnitude (divided by ρca^2) with respect to ka by the RBF-NN prediction and the theoretical computation, respectively. They are in very good agreement. Figure 7 (angle = 15°) shows the slope of the normalized mutual-impedance magnitude with respect to ka predicted by the RBF-NN extension and by center-difference differentiation of equation (5), respectively. They are also in good agreement.
The meanings of the curves are not repeated here because they are similar to those of the case ϕ = 0°.
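The rule used above, that a slope zero-crossing with decreasing slope marks a local maximum and one with increasing slope marks a local minimum, can be sketched as follows. Here cos(x) and its exact slope −sin(x) stand in for the mutual-impedance curve and its neural-network-predicted derivative.

```python
import numpy as np

def classify_extrema(x, slope):
    """Find sign changes of the slope samples: a +to- crossing marks a local
    maximum of the underlying curve, a -to+ crossing a local minimum.
    Each crossing is located at the midpoint of the bracketing samples."""
    maxima, minima = [], []
    for i in range(len(slope) - 1):
        if slope[i] > 0 and slope[i + 1] < 0:    # slope zero and decreasing
            maxima.append(0.5 * (x[i] + x[i + 1]))
        elif slope[i] < 0 and slope[i + 1] > 0:  # slope zero and increasing
            minima.append(0.5 * (x[i] + x[i + 1]))
    return maxima, minima

# Stand-in curve: cos(x) has a minimum at x = pi and a maximum at x = 2*pi
x = np.linspace(0.1, 7.0, 200)
maxima, minima = classify_extrema(x, -np.sin(x))
```

Applied to the slope curves of Figures 5 and 7, this is exactly how the extrema at ϕ = 45° and at ka = 2.5, 7.0, 9.5, 12.5, 15.6, and 19.1 are identified.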
From the abovementioned numerical examples, it can be observed that the curves predicted by the neural network or its extension of derivative are not as smooth as those calculated from theory. Since the neural network is inherently a "black box" for nonlinear mapping, the slight roughness of the curves predicted by the neural networks in Figures 4∼7 is reasonable. In some practical applications, the training data sets are obtained by measurement. Such measured data contain not only clean signals but also random noise. Therefore, measured curves may be rough in most cases. Due to the inherent black-box property of neural networks, the proposed methods can deal with nonlinear mapping for rough measured curves. The black-box nonlinear mapping of this study is achieved through the Gaussian bases in equation (2). In this paper, the main purpose of the neural-network black box is to replace the four-dimensional integral computation in equation (5), which is very complicated. In both examples, the relation between the input (controlled) variable and the output response is nonlinear, as shown in Figures 4∼7. What we want to emphasize is that the neural-network black box has successfully replaced the numerical calculation of multidimensional integrals. Moreover, the black-box property implies that the relation between the input and output may be very complicated. Therefore, the neural-network black box can also be applied to different forms of sonar arrays, such as rectangular, cylindrical, or spherical structures. Since a real radiator is made of a real material, the significant factor of water fluid loading can also be included in the black box, i.e., the neural-network-based machine learning. These features make sonar-array design very convenient and efficient. For simplicity without loss of generality, the input is either ϕ or ka, and the output is the mutual-impedance magnitude in the neural network of this study.
In fact, all controllable factors in equation (5) can serve as inputs of the neural network. The input and output of the neural network may have multiple nodes to represent multiple controllable factors and multiple responses, respectively. This is the difference between our neural-network machine learning and a look-up table. Furthermore, our neural-network machine learning can also treat arrays with multiple transducers if one increases the number of input and output nodes. Of course, this also increases the training work. The abovementioned numerical simulations are performed on a personal computer with an Intel Core i7-4790 3.6 GHz CPU. All the programs are coded in Fortran-90 with the Absoft Pro Fortran 6.2 compiler. Using numerical computation of equation (5), the computing time of each data point in Figures 4 and 6 is about 5 seconds. Our computed results are consistent with those of reference [16]. Note that the computation is almost in real time when the trained neural network of this study is used. This is because a neural network involves only very simple algebraic calculations. Although the training work of a neural network is somewhat time consuming, it can be finished before one uses the neural network and its extension of derivative.

Conclusion
In this study, a mathematical model consisting of an RBF-NN and its extension of derivative is successfully applied to the nonlinear mapping for the mutual impedance of pistons and its slope information. The training work is performed only once, i.e., during the mapping for the mutual impedance of pistons. According to reference [17], the RBF-NN model utilized in this study is inherently one type of general regression, and it can predict new results nonlinearly from some training data sets. In addition, the studies of references [11][12][13][14][15] have successfully utilized the RBF-NN to model different electromagnetic problems, and the results are very good. Since the equations of acoustic waves are very similar to those of electromagnetic waves in references [11][12][13][14][15], it is reasonable that the RBF-NN is also a good mathematical model in this study. Although reference [15] has successfully utilized the RBF-NN to deal with the mutual coupling of antenna arrays, the modelling there is only the conventional neural network itself. It should be emphasized that this study not only utilizes the RBF-NN to model the mutual coupling of acoustic sensors but also extends the RBF-NN to obtain the output derivative information. As shown in equation (3), the output derivative is obtained by differential operations on the nonlinear transformation functions in the hidden layer, e.g., equation (2). In fact, as long as the nonlinear transformation functions in the hidden layer are differentiable, any type of multilayer neural network can be utilized. Therefore, the modelling flow chart of this paper can also be applied to other black-box machine-learning techniques. Note that we obtain the gradient information from equation (3), which is essentially the exact derivative of the differentiable function of equation (2).
In this way, one does not need to calculate y prior to dy/dx, since the right-hand side of equation (3) involves only the weights within the neural network together with simple mathematical functions. There are two advantages to obtaining the gradient in this way. First, the computation is efficient since one does not need to calculate y prior to dy/dx. Second, one reduces the chance of propagating error from the neural-network output y. Compared with the conventional center-difference gradient, our neural-network extension of derivative is not only efficient but also accurate. The concept of mutual impedance can also be applied to calculate the sound radiation impedance, e.g., in the discrete calculation method (DCM) originally designed by Norihisa Hashimoto [18]. In that method, the vibrating object is divided virtually into small elements. Each individual element is treated as a circular vibrating piston with an area equal to that of the corresponding element. The sound power of each individual element is related to the mutual impedance between different individual elements. The total radiation power of a vibrating object can be obtained by calculating and summing the sound power of all the individual elements. Note that our neural-network-based analysis in this paper belongs to supervised machine learning. Prior to predicting, known examples with answers are required to train the neural network. Until the supervised learning procedures are finished, our neural-network-based technique cannot work. In fact, such a learning process is a common requirement of all supervised machine-learning techniques, not a particular property of this study. The major difference between our neural-network-based technique and Hashimoto's method is the supervised learning phase. Our neural-network-based technique requires known samples with answers for learning in advance.
After the supervised learning procedures are finished in advance, the model can generalize to predict unseen data fast and accurately, whereas Hashimoto's method [18] does not require learning procedures; it is basically an improved calculation technique based on physics and mathematics.
The numerical examples of Figures 4∼7 have verified that the proposed model is accurate and efficient. Compared with conventional mathematical models (e.g., function interpolation or regression), the neural network is inherently a black box and can provide mappings with strong nonlinearity. With the use of the neural network and its extension of derivative, one can quickly obtain the output gradient information without knowledge of the overall output in advance. Although the training work of a neural network is usually time consuming, it can be completed in advance. The proposed model can be applied to many other mathematical problems in engineering.

Conflicts of Interest
The authors declare that they have no conflicts of interest.