A Nonpenalty Neurodynamic Model for Complex-Variable Optimization

In this paper, a complex-variable neural network model described by a differential inclusion is proposed for solving complex-variable optimization problems. Based on the nonpenalty idea, the constructed algorithm does not require penalty parameters to be designed, which makes it easier to apply in practice. Several theorems establishing the convergence of the proposed model are given under suitable conditions. Finally, two numerical examples illustrate the correctness and effectiveness of the proposed optimization model.


Introduction
In practical applications, many engineering problems arising in machine learning, signal processing, and resource allocation [1][2][3][4] can be transformed into optimization problems, and many algorithms have been developed in recent years [5][6][7]. Since many engineering systems are described by complex signals, and complex-variable optimization problems arise widely in medical imaging, adaptive filtering, and remote sensing [8][9][10], the study of complex-variable optimization has attracted more and more attention.
In this paper, we consider the following complex-variable optimization problem:

$$\min\; g(\omega) \quad \text{subject to} \quad H(\omega) \le 0,\; B\omega = b, \tag{1}$$

where ω ∈ C^n is the decision variable, g: C^n → R is a convex objective function, H(ω) = (h_1(ω), h_2(ω), ..., h_l(ω))^T : C^n → R^l with each h_i(·) convex, h(ω) = max{h_i(ω) : i = 1, ..., l}, B ∈ C^{m×n} is a full row-rank matrix, and b ∈ C^m. Let Ω_1 = {ω ∈ C^n : h(ω) ≤ 0}, Ω_2 = {ω ∈ C^n : Bω = b}, and Ω = Ω_1 ∩ Ω_2 be the feasible region of problem (1), and let Ω* denote its optimal solution set. Ω^0, Ω̄, ∂Ω, and Ω′ represent the interior, the closure, the boundary, and the complement of the set Ω, respectively.
The method of neurodynamic optimization is to establish a correspondence between the solution of the optimization problem and the state of a neurodynamic system, and to obtain the optimal solution by tracking the state output of that system. Because of its parallel operation and fast convergence, the neurodynamic optimization method is superior to traditional optimization algorithms for many problems. Therefore, various neural networks have been proposed for solving optimization problems [11][12][13][14][15][16][17][18]. In [11], the authors proposed a Hopfield-type neural network for linear programming. In [12], Kennedy and Chua designed a gradient-based neural network model to solve constrained nonlinear programming problems with equality constraints. In [13], Liu et al. proposed a recurrent neural network model with finite-time convergence for solving linear programming problems. In [14], Forti et al. formulated the optimization problem as a neural network described by a differential inclusion for solving nonsmooth optimization. Furthermore, in [15,16], Bian et al. extended the problem to one with both equality and inequality constraints. In [17,18], based on the nonpenalty idea, Hosseini proposed neural network models built on differential inclusions for solving nonsmooth optimization problems. However, all of these works focused on real-variable optimization problems.
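Before turning to the complex-variable case, the neurodynamic principle above can be made concrete with a minimal sketch. The quadratic objective, the data A and c, and the Euler step below are our illustrative assumptions, not taken from any cited model; the sketch only shows how the minimizer is recovered by tracking the state of a gradient flow.

```python
# Minimal sketch of the neurodynamic idea: the minimizer of a smooth convex g
# is recovered by tracking the state of the flow  d(omega)/dt = -grad g(omega).
import numpy as np

A = np.array([[2.0 + 0.0j, 0.5 - 0.5j],
              [0.5 + 0.5j, 3.0 + 0.0j]])   # Hermitian positive definite (made up)
c = np.array([1.0 + 1.0j, -1.0 + 0.5j])    # made-up data

def grad_g(w):
    # complex gradient of g(w) = w^H A w - 2 Re(c^H w), i.e., 2(Aw - c)
    return 2.0 * (A @ w - c)

w = np.array([5.0 + 5.0j, -5.0 - 5.0j])    # arbitrary initial state
dt = 1e-3
for _ in range(20000):                     # forward-Euler integration of the flow
    w -= dt * grad_g(w)

print(np.allclose(A @ w, c))               # True: state settled at the minimizer
```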
Because the properties of complex-variable and real-variable functions differ, complex-variable optimization problems cannot be handled directly by traditional real-variable methods. Most scholars therefore decompose complex-variable optimizations into real-variable problems. The main drawback of this approach is that it breaks the two-dimensional structure of the data. In order to overcome this difficulty, more and more scholars have begun to pay attention to neurodynamic optimization methods for complex-variable optimization.
With the development of complex-variable dynamic system theory, including asymptotic stability [19,20], exponential stability [21,22], robust stability [23,24], multistability [25,26], finite-time stability [27], periodicity [28], synchronization [29], and dissipativity [30], complex-variable optimization problems have also been studied gradually. Zhang et al. [31] proposed a complex-variable model to solve convex problems with equality constraints. In [32], the authors designed a three-layer projection neural network model to solve complex-variable convex problems. In [33], a complex-variable projection model was proposed to solve an optimization problem with equality and inequality constraints. In fact, the semidefiniteness condition for global convergence of the neural networks in [31,32] is not valid for nonsmooth problems, and it is difficult to choose the penalty parameters of the neural networks designed based on the algorithm in [33].
In order to overcome the above difficulties, this paper designs a complex-variable neural network model described by a differential inclusion for solving the optimization problem. Compared with the models in the existing literature [31][32][33], the model proposed in this paper has the following characteristics. Firstly, the nonsmooth optimization problem can be solved because the model adopts a differential-inclusion description. Moreover, the model does not need penalty parameters to be selected in practical applications, because it is based on the nonpenalty design idea and does not contain any penalty parameters. Finally, the model structure we design is simpler and more convenient to apply to practical problems. The sections of this article are organized as follows. In Section 2, we present some preliminaries and lemmas. In Section 3, a model is proposed for solving complex-variable convex problems, and the state of the model is proved to converge under some conditions. In Section 4, some typical examples are given to illustrate the effectiveness of the proposed model. Finally, a brief conclusion is given.

Preliminaries
In this paper, the notation I means the identity matrix, and i = √(−1) is the imaginary unit. The superscripts (·)^T and (·)^H represent the transpose and the conjugate transpose, and an overbar denotes the complex conjugate. | · | means the absolute value of a real number or the modulus of a complex number. ‖ · ‖_1 and ‖ · ‖ denote the 1-norm and the 2-norm, respectively. The real and complex fields are represented by R and C, and the real and imaginary parts of a complex matrix or vector by Re(·) and Im(·).
Definition 2 (see [35]). Let g be Lipschitz near a point z in a normed space X. The directional derivative of g at z in the direction h in the Clarke sense is the limit

$$g^{0}(z, h) = \limsup_{y \to z,\; t \downarrow 0} \frac{g(y + t h) - g(y)}{t}. \tag{2}$$

The set

$$\partial g(z) = \{\xi \in X^{*} : g^{0}(z, h) \ge \langle \xi, h \rangle \ \text{for all } h \in X\}$$

is the subdifferential of g at the point z in the Clarke sense, and g is regular in the Clarke sense at z if g^0(z, h) = g′(z, h) for all h, where g′(z, h) is the usual one-sided directional derivative.

Lemma 1 (see [36]). Suppose that v(t) is absolutely continuous on each compact interval and g(v) is regular in the Clarke sense; then, for almost all t,

$$\frac{d}{dt}\, g(v(t)) = \mathrm{Re}\left(\xi^{H} \dot{v}(t)\right), \quad \forall \xi \in \partial g(v(t)).$$

Definition 3 (see [35]). Let S ⊂ C^n be a convex set. g(z) is a strictly convex function on S if, for any two points z_1 ≠ z_2 ∈ S and any t ∈ (0, 1),

$$g(t z_{1} + (1 - t) z_{2}) < t\, g(z_{1}) + (1 - t)\, g(z_{2}).$$

For the remaining calculations, the concept of relaxed CR calculus [37,38] will be introduced. Write ω = μ + iν with μ, ν ∈ R^n, and suppose that ∂g/∂μ and ∂g/∂ν exist. The R-derivative and conjugate R-derivative of g(ω, ω̄) are

$$\frac{\partial g}{\partial \omega} = \frac{1}{2}\left(\frac{\partial g}{\partial \mu} - i\,\frac{\partial g}{\partial \nu}\right), \qquad \frac{\partial g}{\partial \bar{\omega}} = \frac{1}{2}\left(\frac{\partial g}{\partial \mu} + i\,\frac{\partial g}{\partial \nu}\right).$$

Definition 4 (see [37,38]). If ∂g/∂μ and ∂g/∂ν exist, the complex gradient of g(ω) is expressed as

$$\nabla_{\omega}\, g(\omega) = 2\,\frac{\partial g}{\partial \bar{\omega}} = \frac{\partial g}{\partial \mu} + i\,\frac{\partial g}{\partial \nu}.$$

If the complex gradient of the convex function g(ω) does not exist, the generalized complex gradient or subdifferential of g(ω) is defined as follows.
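As a standard worked example of these derivatives (our illustration, not from the cited references), take g(ω) = |ω|² = ωω̄ with ω = μ + iν, so that ∂g/∂μ = 2μ and ∂g/∂ν = 2ν; then

$$\frac{\partial g}{\partial \omega} = \bar{\omega}, \qquad \frac{\partial g}{\partial \bar{\omega}} = \omega, \qquad \nabla_{\omega}\, g(\omega) = 2\omega,$$

consistent with the real gradient (2μ, 2ν) of μ² + ν².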

Definition 5. Write ω = μ + iν and G(μ, ν) = g(μ + iν). The generalized subdifferential of g(ω) at a point ω is expressed as

$$\partial g(\omega) = \{\eta = \eta_{\mu} + i\,\eta_{\nu} \in \mathbb{C}^{n} : (\eta_{\mu}^{T}, \eta_{\nu}^{T})^{T} \in \partial G(\mu, \nu)\},$$

where ∂G(μ, ν) is the Clarke subdifferential of G as a function on R^{2n}.
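As a standard worked instance of Definition 5 (our illustration), take the scalar function g(ω) = |ω| on C; identifying G(μ, ν) = √(μ² + ν²) gives

$$\partial g(\omega) = \begin{cases} \left\{\, \omega / |\omega| \,\right\}, & \omega \ne 0, \\[1ex] \{\eta \in \mathbb{C} : |\eta| \le 1\}, & \omega = 0, \end{cases}$$

which is the complex analogue of the subdifferential of the absolute value.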

Neural Network Model
In this section, the following neural network model (19), described by a differential inclusion, is proposed to solve complex-variable problem (1), in which P = B^H(BB^H)^{−1}B (the orthogonal projection onto the range of B^H) and I is the identity matrix.
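For intuition about the nonpenalty, projection-based structure, the sketch below simulates a flow of this general family. The right-hand side used here, a null-space-projected subgradient of an ℓ1 objective plus an equality-correction term, is our illustrative assumption and not necessarily the exact form of model (19).

```python
# Sketch (our assumption, not necessarily the exact form of model (19)):
# a nonpenalty, projection-based subgradient flow for  min g(w)  s.t.  Bw = b.
import numpy as np

B = np.array([[1.0 + 1.0j, 2.0 - 1.0j]])            # full row rank (made up)
b = np.array([1.0 + 0.0j])
P = B.conj().T @ np.linalg.inv(B @ B.conj().T) @ B  # P = B^H (B B^H)^{-1} B
I2 = np.eye(2)

def subgrad_g(w):
    # a subgradient of g(w) = sum_k |w_k| (the complex "sign" of each entry);
    # at w_k = 0 any eta with |eta| <= 1 is valid, and 0 is chosen here
    return np.where(np.abs(w) > 1e-12, w / np.maximum(np.abs(w), 1e-12), 0.0)

w = np.array([3.0 - 2.0j, -1.0 + 4.0j])             # arbitrary initial state
dt = 1e-3
for _ in range(40000):
    corr = B.conj().T @ np.linalg.solve(B @ B.conj().T, B @ w - b)  # drives Bw -> b
    w = w - dt * ((I2 - P) @ subgrad_g(w) + corr)

print(np.linalg.norm(B @ w - b) < 1e-6)             # True: state reaches Omega_2
```

Note that the projected subgradient term lies in the null space of B, so it never disturbs the equality residual Bω − b, which is handled entirely by the correction term; this separation is what removes the need for a penalty parameter trading off the two goals.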
Remark 1. In [32], using common projection methods, the authors proposed a three-layer model with n + m + p neurons. In [33], the authors proposed a single-layer model which needs penalty parameters to be selected. In this paper, based on the nonpenalty idea, we propose a single-layer model with n neurons which avoids the selection of penalty parameters. Thus, the proposed model is of lower dimension and easier to apply to practical problems.

Convergence Analysis
The convergence and stability of the neural network (19) are analyzed and proved in this section. Firstly, it is proved that the state of the neural network (19) reaches Ω_2 = {ω ∈ C^n : Bω = b} in finite time even if the initial point is chosen outside of the feasible region Ω. Secondly, we prove that the equilibrium point of neural network (19) converges to the optimal solution of complex-variable optimization problem (1).

Theorem 1. For neural network (19), given an arbitrary initial point ω_0 ∈ C^n, there exists T > 0 such that the state ω(t) ∈ Ω_2 for all t ≥ T.
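Finite-time reaching results of this kind typically rest on an estimate of the following form. The normalized correction term below is our illustrative assumption about the equality-handling part of the dynamics, not a quotation of (31): let e(t) = Bω(t) − b and suppose the dynamics act on the residual as ė = −e/‖e‖ whenever e ≠ 0; then

$$\frac{d}{dt}\,\|e(t)\| = \frac{\mathrm{Re}\left(e(t)^{H} \dot{e}(t)\right)}{\|e(t)\|} = -1,$$

so ‖e(t)‖ = ‖e(0)‖ − t, and the state reaches Ω_2 no later than T = ‖e(0)‖.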
Next, it will be proved that the state ω(t) stays in Ω_2 once it reaches Ω_2. Otherwise, there exists an interval (t_1, t_2) such that ω(t) ∉ Ω_2 for any t ∈ (t_1, t_2). Note that (31) still holds for t ∈ (t_1, t_2), which leads to a contradiction. So the assumption does not hold.
There exist ε > 0 and T > 0 such that the corresponding estimate holds for any t > T.

Simulations
In this section, we illustrate the effectiveness of the neural network (19) in solving complex-valued nonlinear and nonsmooth convex optimization problems by two examples.

Example 1. Consider the following problem. Here, we choose ten arbitrary initial states for our model (19). Figures 1 and 2 show that the state of the neural network (19) converges to the optimal solution ω* = (1.5042 + 0.4916i, −0.5022 − 0.4936i)^T, and the minimum is g(ω*) = 1.1909.

Example 2. Consider the quadratic optimization with ten arbitrary initial states. From Figure 3, we can observe the convergent behavior of the state variables ω(t) with the arbitrary initial states for Example 2.
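As a toy illustration of the simulation setup (with made-up B, b, and a smooth objective, not the data of Examples 1 or 2), the following run integrates the sketched flow from ten arbitrary initial states and checks that they converge to a common limit, in the spirit of Figures 1-3.

```python
# Toy run (made-up data, not the paper's Examples 1 or 2): integrate the flow
# from ten arbitrary initial states and check they reach a common limit.
import numpy as np

rng = np.random.default_rng(0)
B = np.array([[1.0 - 1.0j, 0.5 + 0.5j]])
b = np.array([2.0 + 1.0j])
P = B.conj().T @ np.linalg.inv(B @ B.conj().T) @ B
I2 = np.eye(2)

def flow(w, steps=30000, dt=1e-3):
    for _ in range(steps):
        grad = 2.0 * w                                # gradient of g(w) = ||w||^2
        corr = B.conj().T @ np.linalg.solve(B @ B.conj().T, B @ w - b)
        w = w - dt * ((I2 - P) @ grad + corr)
    return w

starts = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(10)]
finals = [flow(w0) for w0 in starts]
print(max(np.linalg.norm(f - finals[0]) for f in finals) < 1e-6)  # True
```

For this choice of objective the common limit is the minimum-norm feasible point B^H(BB^H)^{-1}b, so the agreement of all ten trajectories mirrors the convergence behavior reported in the figures.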

Conclusion
This paper aims to solve a complex-variable nonlinear nonsmooth convex optimization problem. We present a nonpenalty neurodynamic model described by a differential inclusion to obtain the optimal solution of the problem. Different from existing algorithms, the proposed neurodynamic model does not need penalty parameters to be chosen, which makes it easier to design in practical applications. Several theorems are given to ensure convergence. Finally, numerical examples illustrate the effectiveness of the proposed neural network. In further studies, we will seek a new neurodynamic model with a simpler structure to solve complex-variable nonlinear nonsmooth convex optimization based on the nonpenalty approach.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.