Mathematical Problems in Engineering, Hindawi Publishing Corporation, Volume 2014, Article ID 107620, doi:10.1155/2014/107620

Research Article

Neural Network for Sparse Reconstruction

Qingfa Li,¹ Yaqiu Liu,¹ and Liangkuan Zhu²

¹ School of Information and Computer Engineering, Northeast Forestry University, No. 26 Hexing Street, Harbin 150040, China
² College of Electromechanical Engineering, Northeast Forestry University, No. 26 Hexing Street, Harbin 150040, China

Academic Editor: Huaiqin Wu

Received 24 December 2013; Accepted 4 March 2014; Published 31 March 2014

Copyright © 2014 Qingfa Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We construct a neural network, based on smoothing approximation techniques and the projected gradient method, to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and are an important tool for solving optimization problems, especially large scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome both the difficulty of choosing the step size in discrete algorithms and the difficulty of choosing the element of the set-valued map when the network is modeled by a differential inclusion. We prove that the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.

1. Introduction

Sparse reconstruction is the term used to describe the process of extracting underlying original source signals from a number of observed mixture signals, where the mixing model is either unknown or only partially known. The problem of recovering a sparse signal from noisy linear observations arises in many real world sensing applications. Mathematically, a signal recovery problem can be formulated as estimating the original signal from noisy linear observations:

(1) b = Ax + η,

where A ∈ ℝ^{m×n} is the mixing matrix, x ∈ ℝⁿ is the original signal, b ∈ ℝᵐ is the observed signal, and η ∈ ℝᵐ is the noise. In many cases, A is a matrix of block Toeplitz with Toeplitz blocks (BTTB) when zero boundary conditions are applied and of block Toeplitz-plus-Hankel with Toeplitz-plus-Hankel blocks (BTHTHB) when Neumann boundary conditions are used. The problem can then be viewed as a linear inverse problem. A standard approach to linear inverse problems is to define a suitable objective function and minimize it. Solving this problem is often divided into two steps: estimation of the mixing matrix A and recovery of the original signal x. In this paper, we focus on the second step, assuming that the mixing matrix A is known.

Generally, finding a solution with few nonzero entries of an underdetermined linear system with noise is modeled as the regularization problem

(2) min ‖Ax − b‖² + λ‖x‖₀,

where λ > 0 and ‖x‖₀ is the number of nonzero entries of x. However, the l₂-l₀ regularized problem (2) is difficult to handle because of the discrete structure of the l₀ norm, which has driven researchers to turn to the continuous l₂-l₁ minimization problem

(3) min ‖Ax − b‖² + λ‖x‖₁.

The first term in (3) is often called the data-fitting term, which forces the solutions of (3) to be close to the data; the second term is often called the regularization or potential term, which pushes the solutions to exhibit some expected prior features. Under certain conditions, the l₂-l₁ problem and the l₂-l₀ problem have the same solution sets. The l₂-l₁ problem is a continuous convex optimization problem and can be solved efficiently; it is known as the Lasso.

A class of signal recovery problems can be formulated as

(4) min ‖Ax − b‖² + λ‖Dx‖₁  s.t. x ∈ Ω,

where D is a linear operator, λ > 0 is the regularization parameter that controls the trade-off between the regularization term and the data-fitting term, and the constraint set Ω is a closed convex subset of ℝⁿ.

Optimization problems arise in a variety of scientific and engineering applications, and many of them call for real time solutions. Since the computing time greatly depends on the dimension and the structure of the optimization problem, numerical algorithms are usually less effective for large scale or real time problems. In many applications, such as on-board signal processing and robot motion planning and control, real time optimal solutions are imperative. One promising approach to such problems is to employ artificial neural networks. During recent decades, neural dynamical methods for solving optimization problems have been a major area of neural network research based on circuit implementation. First, the structure of a neural network can be implemented physically by designated hardware, such as application-specific integrated circuits, where the computational procedure is distributed and parallel. This lets the neural network approach solve optimization problems orders of magnitude faster than conventional optimization algorithms executed on general-purpose digital computers. Second, neural networks can solve many optimization problems with time-varying parameters. Third, dynamical systems and ODE techniques can be applied to the analysis of continuous-time neural networks. Moreover, recent reports have shown that global convergence can be obtained by the neural network approach under rather weak conditions.

Since the neural network was first proposed for solving linear [13, 14] and nonlinear programming problems, many researchers have been inspired to develop neural networks for optimization. Many types of neural networks have been proposed to solve various optimization problems, for example, the recurrent neural network, the Lagrangian network, the deterministic annealing network, the projection-type neural network, and the generalized neural network. Chong et al. proposed a neural network with finite time convergence for linear programming problems. A generalized neural network was presented for solving a class of nonsmooth convex optimization problems. A neural network defined via the penalty function method and a differential inclusion was also proposed for a class of nonsmooth convex optimization problems. In fact, in many important applications a neural network built from a differential inclusion is an important tool for solving nonsmooth optimization problems. One has to mention that the optimization problems arising in many important applications are not differentiable. Moreover, neural networks designed for smooth optimization problems require the gradients of the objective and constraint functions, so these networks cannot solve nonsmooth optimization problems. Using smoothing techniques in neural networks is an effective method for solving nonsmooth optimization problems [19, 20]. The main feature of the smoothing method is to approximate the nonsmooth functions by parameterized smooth functions [21, 22]. By smoothing approximations, we can construct a class of smooth functions which converge to the original nonsmooth function and whose gradients converge to the subgradient of the nonsmooth function. For constrained optimization problems, projection is a simple and effective method for handling the constraints. In [23, 24], projection was used in neural networks for solving certain classes of constrained optimization problems.

Based on the advantages of neural networks, in this paper we propose a neural network, combined with some mathematical techniques, to solve the optimization problem (4). Problem (4) is nonsmooth. Many neural networks are modeled by differential inclusions, which raise the difficulty of choosing the right element of the set-valued map. In this paper, we introduce a smoothing function to overcome this problem. Bringing smoothing techniques into neural networks is an interesting and promising method for solving (4).

Notation. Throughout this paper, ‖·‖ denotes the l₂ norm, ‖·‖₁ denotes the l₁ norm, and ⟨·,·⟩ denotes the inner product.

2. Preliminary Results

In this section, we will introduce several basic definitions and lemmas, which are used in the development.

Definition 1.

Suppose that f is Lipschitz near x; the generalized directional derivative of f at x in the direction v ∈ ℝⁿ is given by

(5) f⁰(x; v) = limsup_{y→x, r→0⁺} [f(y + rv) − f(y)] / r.

Furthermore, the Clarke generalized gradient of f at x is defined as

(6) ∂f(x) = {ξ ∈ ℝⁿ : f⁰(x; v) ≥ ⟨v, ξ⟩, ∀v ∈ ℝⁿ}.

Moreover, if f : ℝⁿ → ℝ is a convex function, then it also has the following property.

Proposition 2.

If f : ℝⁿ → ℝ is a convex function, the following property holds:

(7) f(x) − f(y) ≤ ⟨p, x − y⟩, ∀x, y ∈ ℝⁿ, ∀p ∈ ∂f(x).

Since the constraint set of (4) is a closed convex subset of ℝⁿ, we use the projection operator to handle the constraint. The projection of x onto the closed convex subset Ω is defined by

(8) P_Ω(x) = argmin_{u∈Ω} ‖u − x‖.

The projection operator has the following properties.

Proposition 3.

Consider the following:

(9) ⟨v − P_Ω(v), P_Ω(v) − u⟩ ≥ 0, ∀v ∈ ℝⁿ, ∀u ∈ Ω,
    ‖P_Ω(u) − P_Ω(v)‖ ≤ ‖u − v‖, ∀u, v ∈ ℝⁿ.
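For a box constraint, the projection reduces to componentwise clipping, and both properties in (9) can be checked numerically. The following Python sketch does so for Ω = [0, 1]⁶; the box bounds and test points are illustrative assumptions.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # Projection onto the box Omega = {x : lo <= x_i <= hi}, a closed convex set.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
v = 3.0 * rng.standard_normal(6)   # arbitrary point of R^6
w = 3.0 * rng.standard_normal(6)   # a second arbitrary point
u = rng.uniform(0.0, 1.0, 6)       # a point inside Omega
Pv = proj_box(v)

# First inequality of (9): <v - P(v), P(v) - u> >= 0 for every u in Omega.
assert np.dot(v - Pv, Pv - u) >= -1e-12
# Nonexpansiveness: ||P(u) - P(v)|| <= ||u - v||.
assert np.linalg.norm(proj_box(w) - Pv) <= np.linalg.norm(w - v) + 1e-12
```

The nonexpansiveness property is what makes projection-based dynamics like (14) below numerically stable.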

Definition 4.

Let h : ℝⁿ → ℝ be a locally Lipschitz function. We call h̃ : ℝⁿ × [0, +∞) → ℝ a smoothing function of h if h̃ satisfies the following conditions.

For any fixed μ > 0, h̃(·, μ) is continuously differentiable on ℝⁿ, and for any fixed x ∈ ℝⁿ, h̃(x, ·) is differentiable on [0, +∞).

For any fixed x ∈ ℝⁿ, lim_{μ→0} h̃(x, μ) = h(x).

There is a positive constant κ_h̃ > 0 such that

(10) |∇_μ h̃(x, μ)| ≤ κ_h̃, ∀μ ∈ [0, +∞), ∀x ∈ ℝⁿ.

{lim_{z→x, μ→0} ∇_z h̃(z, μ)} ⊆ ∂h(x).

From the above definition, we find that, for any fixed x ∈ ℝⁿ,

(11) lim_{z→x, μ→0} h̃(z, μ) = h(x), |h̃(x, μ) − h(x)| ≤ κ_h̃ μ, ∀μ ∈ [0, +∞), ∀x ∈ ℝⁿ.

Next, we present a smoothing function of the absolute value function, defined by

(12) φ(y, μ) = { |y|                 if |y| ≥ μ,
              { y²/(2μ) + μ/2       if |y| < μ.

Proposition 5 (see [21]).

Consider the following:

φ(y, μ) is continuously differentiable with respect to y for any fixed μ > 0 and differentiable with respect to μ for any fixed y;

0 ≤ ∇_μ φ(y, μ) ≤ 1, for all y ∈ ℝ, for all μ ∈ (0, 1];

0 ≤ φ(y, μ) − |y| ≤ μ/2, for all y ∈ ℝ, for all μ ∈ (0, 1];

φ(y, μ) is convex with respect to y for any fixed μ, and {lim_{z→y, μ→0} ∇_z φ(z, μ)} ⊆ ∂|y|.
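The bounds in Proposition 5 are easy to verify numerically. The following Python sketch implements the smoothing function (12) and checks the third property, 0 ≤ φ(y, μ) − |y| ≤ μ/2, on a grid; the grid and the value of μ are illustrative choices.

```python
import numpy as np

def phi(y, mu):
    # Smoothing approximation (12) of |y|: equals |y| outside (-mu, mu)
    # and a quadratic (Huber-like) cap inside.
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) >= mu, np.abs(y), y ** 2 / (2.0 * mu) + mu / 2.0)

mu = 0.1
y = np.linspace(-1.0, 1.0, 2001)
gap = phi(y, mu) - np.abs(y)
# Proposition 5: 0 <= phi(y, mu) - |y| <= mu / 2 (the maximum gap occurs at y = 0).
assert gap.min() >= 0.0 and gap.max() <= mu / 2.0 + 1e-12
```

At y = 0 the gap attains its maximum μ/2, and at |y| = μ the two branches meet with matching values and derivatives, which is why φ(·, μ) is continuously differentiable.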

3. Theoretical Results

In (4), D ∈ ℝ^{m×n} can be written as D = (d₁, d₂, …, d_m)ᵀ, where d_i (i = 1, 2, …, m) is an n-dimensional column vector. Then (4) can be rewritten as

(13) min ‖Ax − b‖² + λ ∑_{i=1}^{m} |d_iᵀx|  s.t. x ∈ Ω.

In the following, we use ∑_{i=1}^{m} φ(d_iᵀx, μ) to approximate ∑_{i=1}^{m} |d_iᵀx|. Following the idea of the projected gradient method, we construct our neural network as

(14) ẋ(t) = −x(t) + P_Ω[x(t) − 2Aᵀ(Ax(t) − b) − λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t))],

where x(0) = x₀, μ(t) = e^{−t}, and P_Ω is the projection operator onto Ω.
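For simulation on a digital computer, the dynamics (14) can be approximated by a forward-Euler discretization. The following Python sketch does this for a small box-constrained instance; the discretization, the toy data (A, b, D = I, λ), and the step size are our own illustrative assumptions, since the paper analyzes only the continuous-time system.

```python
import numpy as np

def grad_phi(y, mu):
    # Gradient of the smoothing function phi(y, mu) with respect to y:
    # sign(y) outside (-mu, mu), linear y/mu inside.
    return np.where(np.abs(y) >= mu, np.sign(y), y / mu)

def run_network(A, b, D, lam, x0, proj, dt=1e-3, T=20.0):
    # Forward-Euler integration of network (14) with mu(t) = exp(-t).
    # Note: sum_i grad_x phi(d_i^T x, mu) equals D^T grad_phi(D x, mu).
    x, t = x0.astype(float).copy(), 0.0
    while t < T:
        mu = np.exp(-t)
        g = 2.0 * A.T @ (A @ x - b) + lam * D.T @ grad_phi(D @ x, mu)
        x = x + dt * (-x + proj(x - g))
        t += dt
    return x

# Illustrative data (not from the paper): Omega is the box [0, 1]^3, D = I.
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
b = np.array([0.8, 0.2])
D = np.eye(3)
x_final = run_network(A, b, D, lam=0.05, x0=np.zeros(3),
                      proj=lambda z: np.clip(z, 0.0, 1.0))
```

Because each Euler step is a convex combination of the current state and a projected point, the iterates remain in Ω whenever x₀ ∈ Ω, mirroring the invariance property proved in Theorem 6.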

Next, we will give some analysis on the proposed neural network (14).

Theorem 6.

For any initial point x₀ ∈ Ω, there is a global and uniformly bounded solution of (14).

Proof.

The right hand side of (14) is continuous in x and t, so there is a local solution of (14) with x(0) = x₀ ∈ Ω. Assume that [0, T) is the maximal existence interval of this solution. First, we prove that x(t) ∈ Ω for all t ∈ [0, T). Obviously, (14) can be rewritten as

(15) ẋ(t) + x(t) = P_Ω[x(t) − 2Aᵀ(Ax(t) − b) − λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t))].

Integrating the above differential equation, we have

(16) x(t) = e^{−t} x₀ + (1 − e^{−t}) ∫₀ᵗ [e^s / (e^t − 1)] k(s) ds,

where k(t) = P_Ω[x(t) − 2Aᵀ(Ax(t) − b) − λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t))].

Since ∫₀ᵗ [e^s / (e^t − 1)] ds = 1, x₀ ∈ Ω, and Ω is a closed convex subset, we confirm that

(17) x(t) ∈ Ω, ∀t ∈ [0, T).

Differentiating ‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t)) along the solution of (14), we obtain

(18) d/dt [‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t))]
    = ⟨2Aᵀ(Ax(t) − b), ẋ(t)⟩ + λ ∑_{i=1}^{m} [⟨∇_x φ(d_iᵀx(t), μ(t)), ẋ(t)⟩ + ∇_μ φ(d_iᵀx(t), μ(t)) μ̇(t)]
    ≤ ⟨2Aᵀ(Ax(t) − b) + λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t)), ẋ(t)⟩,

where the inequality follows from ∇_μ φ ≥ 0 (Proposition 5) and μ̇(t) = −e^{−t} < 0.

Applying the projection inequality of Proposition 3 to (14), we obtain

(19) ⟨2Aᵀ(Ax(t) − b) + λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t)), ẋ(t)⟩ ≤ −‖ẋ(t)‖².

Thus,

(20) d/dt [‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t))] ≤ −‖ẋ(t)‖²,

which implies that ‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t)) is nonincreasing along the solution of (14). On the other hand, by Proposition 5, we know that

(21) ‖Ax(0) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(0), μ(0)) ≥ ‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t)) ≥ ‖Ax(t) − b‖² + λ‖Dx(t)‖₁.

Thus, x(t) is bounded on [0, T). By the extension theorem, the solution of (14) exists globally and is uniformly bounded.

Theorem 7.

For any initial point x₀ ∈ Ω, the solution of (14) is unique and satisfies the following:

‖ẋ(t)‖ is nonincreasing on [0, +∞) and lim_{t→+∞} ‖ẋ(t)‖ = 0;

the solution of (14) is convergent to the optimal solution set of (4).

Proof.

Suppose that there exist two solutions x : [0, ∞) → ℝⁿ and y : [0, ∞) → ℝⁿ of (14) with initial points x₀ = y₀, which means that

(22) ẋ(t) = −x(t) + P_Ω[x(t) − 2Aᵀ(Ax(t) − b) − λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t))],
    ẏ(t) = −y(t) + P_Ω[y(t) − 2Aᵀ(Ay(t) − b) − λ ∑_{i=1}^{m} ∇_y φ(d_iᵀy(t), μ(t))].

Thus,

(23) d/dt (1/2)‖x(t) − y(t)‖² = ⟨x(t) − y(t), ẋ(t) − ẏ(t)⟩ = −‖x(t) − y(t)‖² + ⟨x(t) − y(t), P_Ω[ξ₁(t)] − P_Ω[ξ₂(t)]⟩,

where ξ₁(t) = x(t) − 2Aᵀ(Ax(t) − b) − λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t), μ(t)) and ξ₂(t) = y(t) − 2Aᵀ(Ay(t) − b) − λ ∑_{i=1}^{m} ∇_y φ(d_iᵀy(t), μ(t)).

From the expression of P_Ω and x(t), y(t) ∈ Ω for all t ≥ 0, we get

(24) ⟨ξ₁(t) − P_Ω[ξ₁(t)], x(t) − y(t)⟩ = 0, ∀t ≥ 0,
    ⟨ξ₂(t) − P_Ω[ξ₂(t)], x(t) − y(t)⟩ = 0, ∀t ≥ 0.

Thus, we have

(25) ⟨x(t) − y(t), P_Ω[ξ₁(t)] − P_Ω[ξ₂(t)]⟩ = ⟨x(t) − y(t), ξ₁(t) − ξ₂(t)⟩
    = ‖x(t) − y(t)‖² − 2‖A(x(t) − y(t))‖² − λ ∑_{i=1}^{m} ⟨x(t) − y(t), ∇_x φ(d_iᵀx(t), μ(t)) − ∇_y φ(d_iᵀy(t), μ(t))⟩.

Since φ(d_iᵀy, μ) (i = 1, 2, …, m) is convex with respect to y for any fixed μ, we have

(26) ⟨x − y, ∇_x φ(d_iᵀx, μ) − ∇_y φ(d_iᵀy, μ)⟩ ≥ 0, ∀x, y ∈ ℝⁿ.

Thus, for all t ≥ 0,

(27) ∑_{i=1}^{m} ⟨x(t) − y(t), ∇_x φ(d_iᵀx(t), μ(t)) − ∇_y φ(d_iᵀy(t), μ(t))⟩ ≥ 0.

Substituting (25) and (27) into (23), we have

(28) d/dt (1/2)‖x(t) − y(t)‖² = −2‖A(x(t) − y(t))‖² − λ ∑_{i=1}^{m} ⟨x(t) − y(t), ∇_x φ(d_iᵀx(t), μ(t)) − ∇_y φ(d_iᵀy(t), μ(t))⟩ ≤ 0, ∀t ∈ [0, +∞).

Integrating (28) from 0 to t, we derive that

(29) sup_{t≥0} ‖x(t) − y(t)‖ ≤ ‖x₀ − y₀‖.

Therefore, x(t) = y(t) for all t ≥ 0 when x₀ = y₀, which proves the uniqueness of the solution of (14).

Let y(t) = x(t + h), where h > 0; then (29) implies that

(30) ‖x(t + h) − x(t)‖ ≤ ‖x(h) − x(0)‖, ∀t ≥ 0.

Dividing by h and letting h → 0⁺, we find that t ↦ ‖ẋ(t)‖ is nonincreasing.

From (20), we obtain that ‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t)) is nonincreasing and bounded from below on [0, +∞); therefore,

(31) lim_{t→+∞} [‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t))] exists.

Applying Proposition 5 to the above result, we obtain that

(32) lim_{t→+∞} [‖Ax(t) − b‖² + λ‖Dx(t)‖₁] exists.

Moreover, we have that

(33) lim_{t→+∞} d/dt [‖Ax(t) − b‖² + λ ∑_{i=1}^{m} φ(d_iᵀx(t), μ(t))] = 0.

Combining (20) and (33), we confirm that

(34) lim_{t→+∞} ‖ẋ(t)‖ = 0.

Since x(t) is uniformly bounded on the global interval, it has a cluster point, denoted x*, so there exists an increasing sequence {t_n} such that

(35) lim_{n→+∞} t_n = +∞, lim_{n→+∞} x(t_n) = x*.

Using the expression of (14) and lim_{t→+∞} ‖ẋ(t)‖ = 0, we have

(36) x* = P_Ω[x* − lim_{n→+∞} (2Aᵀ(Ax(t_n) − b) + λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t_n), μ(t_n)))].

Applying Proposition 3 to the above equation, we have

(37) ⟨lim_{n→+∞} (2Aᵀ(Ax(t_n) − b) + λ ∑_{i=1}^{m} ∇_x φ(d_iᵀx(t_n), μ(t_n))), v − x*⟩ ≥ 0, ∀v ∈ Ω.

From Proposition 5, there exists ξ ∈ ∂[‖Ax* − b‖² + λ‖Dx*‖₁] such that

(38) ⟨ξ, v − x*⟩ ≥ 0, ∀v ∈ Ω.

Therefore, x* is a Clarke stationary point of (4). Since (4) is a convex program, x* is an optimal solution of (4). Owing to the arbitrariness of the cluster point, any cluster point of x(t) is an optimal solution of (4), which means that the solution of (14) converges to the optimal solution set of (4).

4. Numerical Experiments

In this section, we give two numerical experiments to validate the theoretical results obtained in this paper and to demonstrate the good performance of the proposed neural network in solving sparse reconstruction problems.

Example 1.

In this experiment, we test signal recovery in the presence of noise. Each column of the original signal is sparse: only two of the five sources are active at each time point. We use the following MATLAB code to generate an original signal s ∈ ℝ^{5×100} of length 100, a mixing matrix A ∈ ℝ^{3×5}, a noise term n ∈ ℝ^{3×100}, and an observed signal b ∈ ℝ^{3×100}:

s = zeros(5,100);
for l = 1:100
    q = randperm(5);
    s(q(1:2),l) = 2*randn(2,1);
end
A = randn(3,5);
n = 0.05*randn(3,100);
b = A*s - n;

We denote by s* the recovered signal obtained with our method. Figures 1(a)-2(a) show the original, observed, and recovered signals using (14). Figure 2(b) presents the convergence of the signal-to-noise ratio (SNR) along the solution of the proposed neural network. From these figures, we see that our method recovers the random original signals effectively. The SNR of the recovered signal is 22.15 dB, where

(39) SNR = (1/L) ∑_{l=1}^{L} 20 lg (‖s(l)‖₂ / ‖s*(l) − s(l)‖₂),

with s(l) and s*(l) the l-th columns of s and s* and L the number of time points.
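The column-averaged SNR of (39) can be computed directly. The following Python sketch is a transcription of that formula (as reconstructed above); the function name and the toy test data are our own.

```python
import numpy as np

def snr_db(s, s_rec):
    # Average SNR in dB over the L time points (columns), following (39).
    # Assumes no column of the error s_rec - s is exactly zero.
    L = s.shape[1]
    vals = [20.0 * np.log10(np.linalg.norm(s[:, l]) /
                            np.linalg.norm(s_rec[:, l] - s[:, l]))
            for l in range(L)]
    return float(np.mean(vals))
```

Given the matrices s and s* from the experiment above, snr_db(s, s_star) would produce the figure of merit reported for this run (22.15 dB).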

(a) Original signals; (b) observed signals.

(a) Recovered signals; (b) the convergence of SNR ( x ( t ) ) .

Example 2.

In this experiment, we apply the proposed network (14) to the restoration of a 20 × 20 circle image. The observed image is distorted from the unknown true image mainly by two factors: blurring and random noise. The blurring is a 2D Gaussian function,

(40) h(i, j) = e^{−2(i/3)² − 2(j/3)²},

truncated such that the function has a support of 7 × 7. Gaussian noise with zero mean and standard deviation 0.05 is added to the blurred image. Figures 3(a) and 3(b) present the original and the observed images, respectively. The peak signal-to-noise ratio (PSNR) of the observed image is 16.87 dB. Denote by x_o and x_b the original and the corresponding observed images, and use the PSNR to evaluate the quality of the restored image; that is,

(41) PSNR(x) = −10 log₁₀ (‖x − x_o‖² / (20 × 20)).

(a) Original image; (b) observed image; (c) recovered image.

We use problem (13) to recover this image, where we let λ = 0.017, Ω = {x : 0 ≤ x ≤ e} with e the vector of all ones, and

(42) D = ( L₁ ⊗ I )
         ( I ⊗ L₁ ),

where I is the 20 × 20 identity matrix and L₁ is the first-order difference matrix with 1 on the diagonal and −1 on the superdiagonal.
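One way to assemble the difference operator of (42) is via Kronecker products acting on the vectorized image, as the following Python sketch shows. The Kronecker-product reading of (42) and the exact boundary handling of L₁ are assumptions of this sketch.

```python
import numpy as np

n = 20
# First-order difference matrix L1: 1 on the diagonal, -1 on the superdiagonal.
L1 = np.eye(n) - np.eye(n, k=1)
I = np.eye(n)
# Vertical and horizontal differences of the vectorized n x n image,
# stacked as in (42): D = [L1 kron I ; I kron L1].
D = np.vstack([np.kron(L1, I), np.kron(I, L1)])
```

Each row of D then involves at most two pixels, so λ‖Dx‖₁ penalizes the total absolute difference between neighboring pixels, a standard edge-preserving regularizer for image restoration.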

Choose x₀ = P_Ω(x_b). The image recovered by (14) from x₀ is shown in Figure 3(c), with PSNR = 19.65 dB. The convergence of the objective value and of PSNR(x(t)) along the solution of (14) with initial point x₀ is presented in Figures 4(a) and 4(b). From these figures, we find that the objective value is monotonically decreasing and the PSNR is monotonically increasing along the solution of (14).

(a) Convergence of the objective value; (b) convergence of PSNR ( x ( t ) ) .

5. Conclusion

Based on the smoothing approximation technique and the projected gradient method, we construct a neural network modeled by a differential equation to solve a class of constrained nonsmooth convex optimization problems, which have wide applications in sparse reconstruction. The proposed network has a unique and bounded solution for any initial point in the feasible region. Moreover, the solution of the proposed network converges to the optimal solution set of the optimization problem. Simulation results on numerical examples substantiate the effectiveness and performance of the proposed neural network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the Editor-in-Chief, Professor Huaiqin Wu, and the three anonymous reviewers for their insightful and constructive comments, which helped to enrich the content and improve the presentation of the results in this paper. This work is supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and the National Natural Science Foundation of China (31370565).

References

[1] Y. C. Eldar and M. Mishali, "Robust recovery of signals from a structured union of subspaces," IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
[2] R. Saab, Ö. Yilmaz, M. J. McKeown, and R. Abugharbieh, "Underdetermined anechoic blind source separation via lq-basis-pursuit with q < 1," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4004–4017, 2007.
[3] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] L. B. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2494–2502, 2009.
[5] Y. Xiang, S. K. Ng, and V. K. Nguyen, "Blind separation of mutually correlated sources using precoders," IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 82–90, 2010.
[6] M. K. Ng, R. H. Chan, and W.-C. Tang, "A fast algorithm for deblurring models with Neumann boundary conditions," SIAM Journal on Scientific Computing, vol. 21, no. 3, pp. 851–866, 1999.
[7] D. L. Donoho, "Neighborly polytopes and sparse solutions of underdetermined linear equations," Tech. Rep., Department of Statistics, Stanford University, Stanford, Calif, USA, 2005.
[8] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society B, vol. 58, pp. 267–288, 1996.
[9] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, London, UK, 1993.
[10] W. Bian and X. Xue, "Subgradient-based neural networks for nonsmooth nonconvex optimization problems," IEEE Transactions on Neural Networks, vol. 20, no. 6, pp. 1024–1038, 2009.
[11] Y. Xia and J. Wang, "A recurrent neural network for solving nonlinear convex programs subject to linear constraints," IEEE Transactions on Neural Networks, vol. 16, no. 2, pp. 379–386, 2005.
[12] X. B. Gao and L. Z. Liao, "A new one-layer recurrent neural network for nonsmooth convex optimization subject to linear equality constraints," IEEE Transactions on Neural Networks, vol. 21, pp. 918–929, 2010.
[13] J. J. Hopfield and D. W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol. 52, no. 3, pp. 141–152, 1985.
[14] D. W. Tank and J. J. Hopfield, "Simple neural optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, vol. 33, no. 5, pp. 533–541, 1986.
[15] M. P. Kennedy and L. O. Chua, "Neural networks for nonlinear programming," IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
[16] E. K. P. Chong, S. Hui, and S. H. Żak, "An analysis of a class of neural networks for solving linear programming problems," IEEE Transactions on Automatic Control, vol. 44, no. 11, pp. 1995–2006, 1999.
[17] M. Forti, P. Nistri, and M. Quincampoix, "Generalized neural network for nonsmooth nonlinear programming problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 9, pp. 1741–1754, 2004.
[18] X. Xue and W. Bian, "Subgradient-based neural networks for nonsmooth convex optimization problems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 55, no. 8, pp. 2378–2391, 2008.
[19] X. Chen, M. K. Ng, and C. Zhang, "Nonconvex lp-regularization and box constrained model for image restoration," IEEE Transactions on Image Processing, vol. 21, pp. 4709–4721, 2012.
[20] W. Bian and X. Chen, "Neural network for nonsmooth, nonconvex constrained minimization via smooth approximation," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, pp. 545–556, 2014.
[21] X. Chen, "Smoothing methods for complementarity problems and their applications: a survey," Journal of the Operations Research Society of Japan, vol. 43, no. 1, pp. 32–47, 2000.
[22] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, Berlin, Germany, 1998.
[23] X. Xue and W. Bian, "A project neural network for solving degenerate convex quadratic program," Neurocomputing, vol. 70, no. 13–15, pp. 2449–2459, 2007.
[24] Q. Liu and J. Cao, "A recurrent neural network based on projection operator for extended general variational inequalities," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 40, no. 3, pp. 928–938, 2010.