
We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and are an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome two difficulties: the choice of the step size in discrete algorithms and the selection of an element from the set-valued map in differential inclusions. In theory, the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.

Sparse reconstruction refers to the process of extracting underlying original source signals from a number of observed mixture signals, where the mixing model is either unknown or only limited knowledge about the mixing process is available. The problem of recovering a sparse signal from noisy linear observations arises in many real-world sensing applications [

Generally, finding a solution with few nonzero entries for an underdetermined linear system with noise is often modeled as the regularization problem:
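A standard instance of such a regularization model from the sparse-recovery literature (stated here as an assumption for illustration, not necessarily the paper's exact formulation) is the l1-regularized least-squares problem

\[
\min_{x \in \mathbb{R}^{n}} \; \tfrac{1}{2}\|Ax - b\|_{2}^{2} + \lambda \|x\|_{1},
\]

where \(A \in \mathbb{R}^{m \times n}\) with \(m < n\) is the measurement matrix, \(b \in \mathbb{R}^{m}\) is the noisy observation vector, and \(\lambda > 0\) balances data fidelity against sparsity.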

A class of signal recovery problems can be formulated as

Optimization problems arise in a variety of scientific and engineering applications, and many of them require real-time solutions. Since the computing time depends heavily on the dimension and structure of the optimization problem, numerical algorithms are often ineffective for large-scale or real-time problems. In many applications, such as on-board signal processing and robot motion planning and control, real-time optimal solutions are imperative. One promising approach to handling these problems is to employ artificial neural networks. In recent decades, neural dynamical methods for solving optimization problems, based on circuit implementation, have been a major area of neural network research [

Since the neural network was first proposed for solving linear [

Based on the advantages of neural networks, in this paper we propose a neural network and use some mathematical techniques to solve optimization problem (

In this section, we introduce several basic definitions and lemmas that will be used in the subsequent development.

Suppose that

Moreover, if

If

Since the constraint set of (

The projection operator has the following properties.

Consider the following:
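To make the projection properties concrete, the following Python sketch checks the two standard properties of the Euclidean projection onto a closed convex set, namely nonexpansiveness and the variational inequality. The box constraint set used here is an assumption for illustration; the paper's feasible region is only assumed closed and convex.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (an illustrative
    closed convex set; the paper's constraint set may differ)."""
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
px, py = project_box(x, -1.0, 1.0), project_box(y, -1.0, 1.0)

# Nonexpansiveness: ||P(x) - P(y)|| <= ||x - y||
assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12

# Variational inequality: <x - P(x), z - P(x)> <= 0 for every feasible z
z = project_box(rng.normal(size=5), -1.0, 1.0)
assert np.dot(x - px, z - px) <= 1e-12
```

Both inequalities hold for the projection onto any closed convex set; the box case makes them cheap to verify numerically.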

Let

For any fixed

For any fixed

There is a positive constant

From the above definition, it follows that for any fixed

Next, we present a smoothing function of the absolute value function, which is defined by

Consider the following:

In (
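A widely used smoothing of the absolute value function (assumed here for illustration; the specific form may differ from the definition in the text) is theta(x, mu) = sqrt(x^2 + mu^2), which is continuously differentiable for mu > 0 and satisfies 0 <= theta(x, mu) - |x| <= mu, so it converges uniformly to |x| as mu tends to 0. A short Python sketch:

```python
import numpy as np

def smooth_abs(x, mu):
    """An illustrative smoothing of |x|: theta(x, mu) = sqrt(x^2 + mu^2)."""
    return np.sqrt(x ** 2 + mu ** 2)

x = np.linspace(-2.0, 2.0, 5)
for mu in (1.0, 0.1, 0.001):
    # The uniform gap 0 <= theta - |x| <= mu shrinks with mu
    err = np.max(smooth_abs(x, mu) - np.abs(x))
    print(f"mu={mu:g}: max gap = {err:.4g}")
```

The uniform bound is what lets a smoothing method drive mu to zero while tracking the original nonsmooth objective.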

In the following, we use

Next, we will give some analysis on the proposed neural network (

For any initial point

The right-hand side of (

Integrating the above differential equation, we have

Since

Differentiating

Applying the projection operator inequality to (

Thus,

Thus,

For any initial point

the solution of (

Suppose that there exist two solutions

From the expression of

Since

By integrating (

Let

From (

Moreover, we have that

Combining (

Since

Using the expression of (

Using Proposition

From Proposition

Therefore,

In this section, we present two numerical experiments to validate the theoretical results of this paper and to demonstrate the good performance of the proposed neural network in solving sparse reconstruction problems.

In this experiment, we test signal recovery in the presence of noise. Each original signal has sparsity 1, which means there is only one sound at each time point. We use the following MATLAB code to generate an original signal of length 100
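As an illustration of such a generator, the following Python sketch builds length-100 signals with sparsity 1; the number of signals and the amplitude distribution are assumptions, and Python stands in for the paper's MATLAB environment.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100, 4          # signal length from the text; number of signals is assumed
S = np.zeros((n, T))
for t in range(T):
    # Sparsity 1: exactly one nonzero entry ("one sound") per time point
    S[rng.integers(n), t] = rng.normal()
```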

We denote

(a) Original signals; (b) observed signals.

(a) Recovered signals; (b) the convergence of

In this experiment, we apply the proposed network (

(a) Original image; (b) observed image; (c) recovered image.

We use problem (

Choose

(a) Convergence of the objective value; (b) convergence of

Based on the smoothing approximation technique and the projected gradient method, we have constructed a neural network, modeled by a differential equation, to solve a class of constrained nonsmooth convex optimization problems with wide applications in sparse reconstruction. The proposed network has a unique bounded solution for any initial point in the feasible region. Moreover, the solution of the proposed network converges to the solution set of the optimization problem. Simulation results on numerical examples substantiate the effectiveness and performance of the neural network.
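The two ingredients named above can be combined in a compact numerical sketch: minimize a smoothed l1-regularized least-squares objective over a box by forward-Euler integration of a projected-gradient flow. All problem sizes, parameters, and the box feasible region below are assumptions for illustration (and Python stands in for the paper's MATLAB experiments); this is a generic sketch of the technique, not the paper's exact network.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 60                # assumed problem sizes
lam, mu = 0.05, 0.01         # assumed regularization and smoothing parameters

A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = rng.normal(size=4)  # 4-sparse ground truth
b = A @ x_true + 0.001 * rng.normal(size=m)

def grad(x):
    # Gradient of the smoothed objective 0.5*||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2)
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x ** 2 + mu ** 2)

def project(x):
    # Projection onto an assumed box feasible region [-10, 10]^n
    return np.clip(x, -10.0, 10.0)

# Forward-Euler integration of the projected-gradient flow
#   dx/dt = project(x - alpha * grad(x)) - x
x, alpha, h = np.zeros(n), 0.05, 0.8
for _ in range(5000):
    x = x + h * (project(x - alpha * grad(x)) - x)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The flow's equilibria coincide with fixed points of the projected-gradient map, i.e., minimizers of the smoothed problem over the box; the Euler step size h and the gradient step alpha play the roles that a discrete algorithm would have to tune by hand.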

The authors declare that there is no conflict of interest regarding the publication of this paper.

The authors would like to thank the Editor-in-Chief Professor Huaiqin Wu and the three anonymous reviewers for their insightful and constructive comments, which helped to enrich the content and improve the presentation of the results in this paper. This work was supported by the Fundamental Research Funds for the Central Universities (DL12EB04) and the National Natural Science Foundation of China (31370565).
