The nonnegative sparsity-constrained optimization problem arises in many fields, for example in linear compressed sensing and in regularized logistic regression. In this paper, we introduce a new stepsize rule and establish a gradient projection algorithm. We also obtain convergence results under milder conditions.

In this paper, we are mainly concerned with the nonnegative sparsity-constrained optimization problem (NN-SCO):
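The feasible set of (NN-SCO) is the intersection of the nonnegative orthant with the set of sparse vectors. As a purely illustrative sketch (not taken from the paper; the function name and NumPy usage are our own), the Euclidean projection onto this set clips negative entries to zero and then keeps only the s largest remaining entries:

```python
import numpy as np

def project_nn_sparse(x, s):
    """Euclidean projection onto {z : z >= 0, ||z||_0 <= s}:
    clip negative entries to zero, then keep only the s largest
    of the remaining entries (function name is ours)."""
    z = np.maximum(x, 0.0)                  # enforce nonnegativity
    if np.count_nonzero(z) > s:
        keep = np.argpartition(z, -s)[-s:]  # indices of the s largest entries
        out = np.zeros_like(z)
        out[keep] = z[keep]
        return out
    return z

x = np.array([3.0, -1.0, 0.5, 2.0, 0.1])
print(project_nn_sparse(x, 2))  # [3. 0. 0. 2. 0.]
```

Note that negative entries are discarded before the sparsification step, so an entry that is large in magnitude but negative never displaces a smaller positive one.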

Recently, a great deal of work has been devoted to algorithms for the sparsity-constrained optimization problem. Beck and Eldar [

Inspired by the above studies, in this paper we establish a gradient projection algorithm with a new stepsize. The new algorithm removes the restricted strong smoothness condition on the objective function, which makes it more widely applicable. Meanwhile, we prove the convergence of the algorithm.
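Since the paper's new stepsize rule is not reproduced in this excerpt, the following Python sketch uses a standard backtracking (Armijo-type) stepsize along the projection arc as a stand-in; the projection routine, the test problem, and all names are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project_nn_sparse(x, s):
    """Euclidean projection onto {z : z >= 0, ||z||_0 <= s}."""
    z = np.maximum(x, 0.0)
    keep = np.argpartition(z, -s)[-s:]
    out = np.zeros_like(z)
    out[keep] = z[keep]
    return out

def gradient_projection(f, grad, x0, s, alpha0=1.0, beta=0.5,
                        sigma=1e-4, max_iter=500, tol=1e-8):
    """Projected-gradient iteration x_{k+1} = P_C(x_k - a_k * grad f(x_k)),
    where C = {x >= 0, ||x||_0 <= s}.  A backtracking (Armijo-type)
    stepsize stands in for the paper's new rule."""
    x = project_nn_sparse(np.asarray(x0, dtype=float), s)
    for _ in range(max_iter):
        g = grad(x)
        a = alpha0
        x_new = project_nn_sparse(x - a * g, s)
        # shrink the stepsize until a sufficient-decrease condition holds
        while (f(x_new) > f(x) - (sigma / a) * np.linalg.norm(x_new - x) ** 2
               and a > 1e-12):
            a *= beta
            x_new = project_nn_sparse(x - a * g, s)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy compressed-sensing instance: recover a nonnegative 2-sparse signal
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, 1.0]
b = A @ x_true
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad = lambda x: A.T @ (A @ x - b)
x_hat = gradient_projection(f, grad, np.zeros(8), s=2)
```

Each accepted step strictly decreases the objective, so the iterates remain in the feasible set while the residual shrinks; the stopping test on the step length plays the role of the stationarity conditions analyzed below.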

The rest of this paper is organized as follows. In Section

To make the paper easier to read, we list the notation used below:

Let

A function

A function

If and only if for any

In particular, in (

The projected gradient

For

In particular, when

When

For any index, the corresponding unit vector is the vector whose component at that index is one and whose other components are zero.

In this section, we establish a new algorithm which improves the IIHT algorithm for (

Next, let us list the following assumptions for convenience:

For any

Let the sequence

Let

Then,

Thus,

Then, (

We suppose

Since

Moreover,

Because

Therefore,

By Lemma

Hence,

Let

Let the sequence

for any

Since

Setting

We can easily get that

Let

Summing over both sides of this inequality, we get

Since

This can easily be obtained from (

Let the sequence

Because the sequence

Let the sequence

Any accumulation point of the sequence

If

Suppose that

Because

we get

Moreover,

We consider the next two cases:

For

By

Since

without loss of generality, we can suppose

i.e.,

For

When

Due to the property of the projections

Thus,

Taking limits on both sides, we obtain

When

For all sufficiently large

Since

which contradicts

Summarizing the two cases, we obtain

Thus,

Set

By Definition

Moreover, the maximum value is taken at

Because

Thus, for any

Taking

By the Cauchy–Schwarz inequality, we get

By Lemma

Taking limits on both sides and using Lemma

By (

Let the sequence

If

If

For

Because

Thus,

If

In fact, for all sufficiently large

For any

Thus,

By

For any

Thus,

Let the sequence

By Theorem

Set

For all sufficiently large

Because

Since

Setting

Thus,

By

From

Thus,

In this paper, we studied the nonnegative sparsity-constrained optimization problem. We introduced a new stepsize rule and proposed a new gradient projection algorithm to solve this problem. The new algorithm removes the restricted strong smoothness condition on the objective function, which makes it more widely applicable. We also proved the convergence of the algorithm.

No data were used to support this study.

The authors declare that they have no conflicts of interest.

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

This project was supported by the Natural Science Foundation of Shandong Province (no. ZR2018MA019).