When the Hessian matrix is not positive definite, the Newton direction may fail to be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.

The Newton algorithm (also known as Newton's method) is commonly used in numerical analysis, especially in nonlinear optimization [

There are several improved forms of the classical Newton algorithm addressing this problem: (1) combining the Newton direction with the steepest descent direction [

We discuss extremum problems of multivariate functions. Suppose the objective function

If the Hessian matrix of

As we know,

Equation (

Equation (

Next, we must choose a proper step length to ensure that the function value decreases. A well-known approach is the damped Newton algorithm, which obtains the step length by solving the following optimization problem:
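In practice, this exact one-dimensional minimization is often approximated by a backtracking line search with a sufficient-decrease (Armijo) test. The helper below is an illustrative sketch; the function name and the constants `rho` and `c` are our own choices, not the paper's:

```python
import numpy as np

def backtracking_step(f, x, d, g, rho=0.5, c=1e-4, alpha=1.0):
    """Approximate the damped-Newton step length argmin_a f(x + a*d)
    by backtracking with the Armijo sufficient-decrease condition.

    g is the gradient of f at x; d must be a descent direction (g @ d < 0).
    """
    fx = f(x)
    slope = g @ d                          # directional derivative, < 0
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho                       # shrink until sufficient decrease
    return alpha
```

For example, on f(x) = ||x||² from x = (1, 1) along d = −∇f(x), the search halves the unit step once and returns α = 0.5, which lands exactly on the minimizer.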

First, take eigenvalue decomposition on the Hessian matrix
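The construction described here can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code; the function name is ours, and we assume the Hessian has no zero eigenvalues so the reconstructed matrix is invertible:

```python
import numpy as np

def modified_newton_direction(H, g):
    """Modified Newton direction via eigenvalue decomposition.

    H : symmetric Hessian matrix (may be indefinite)
    g : gradient vector
    Returns d = -Q |Lambda|^{-1} Q^T g, where H = Q Lambda Q^T and
    |Lambda| replaces every eigenvalue by its absolute value.
    Assumes no eigenvalue is exactly zero.
    """
    eigvals, Q = np.linalg.eigh(H)    # H = Q diag(eigvals) Q^T
    abs_vals = np.abs(eigvals)        # flip the negative eigenvalues
    # Solve (Q |Lambda| Q^T) d = -g without forming the matrix explicitly
    d = -Q @ ((Q.T @ g) / abs_vals)
    return d
```

For instance, with the indefinite Hessian diag(2, −1) and gradient (1, 1), the returned direction is (−0.5, −1.0), whose inner product with the gradient is negative, i.e. a descent direction.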

The inner product of the negative gradient direction with
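Concretely (in our notation, since the paper's symbols are not shown here): writing $H_k = Q\Lambda Q^{\mathsf T}$, $\bar H_k = Q|\Lambda|Q^{\mathsf T}$, and $d_k = -\bar H_k^{-1} g_k$ with $q_i$ the columns of $Q$,

```latex
-g_k^{\mathsf T} d_k
  = g_k^{\mathsf T} Q\,|\Lambda|^{-1} Q^{\mathsf T} g_k
  = \sum_{i=1}^{n} \frac{(q_i^{\mathsf T} g_k)^2}{|\lambda_i|} > 0
  \qquad (g_k \neq 0),
```

so the modified direction always makes an acute angle with the negative gradient and is therefore a descent direction.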

We now prove that the modified Newton algorithm based on eigenvalue decomposition is convergent. First, we recall the Wolfe-Powell condition [
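For reference, the Wolfe-Powell (weak Wolfe) conditions are commonly stated as follows: with constants $0 < \rho < \sigma < 1$, the step length $\alpha_k$ along a descent direction $d_k$ must satisfy

```latex
f(x_k + \alpha_k d_k) \le f(x_k) + \rho\,\alpha_k \nabla f(x_k)^{\mathsf T} d_k,
\qquad
\nabla f(x_k + \alpha_k d_k)^{\mathsf T} d_k \ge \sigma\,\nabla f(x_k)^{\mathsf T} d_k.
```

This is the standard textbook form; the paper's own equation numbering and constants are not visible in this extraction.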

If step length

Equation (

Let

Let

Finally, we can get the Wolfe-Powell condition.

Let

We prove the theorem by contradiction. Suppose

Equation (

Because of

As the new searching direction

Thus, if the Hessian matrix is positive definite at every point of the definition domain, the convergence rate of the modified algorithm equals that of the classical Newton algorithm. On the other hand, if there are points whose Hessian matrix has negative eigenvalues, the convergence rate is at most that of the classical Newton algorithm.

In addition, in order to verify that the improved algorithm has a wider convergence domain than the classical Newton algorithm, we employ a non-convex function:

Results of the numerical experiment: (a) objective function; (b) contour map; (c) convergence domain of the classical Newton algorithm; (d) convergence domain of the new algorithm.

In Figure
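The paper's test function is not reproduced in this extraction. As an illustrative sketch of the same comparison, consider the non-convex function f(x, y) = x⁴/4 − x²/2 + y²/2 (our choice), whose Hessian is indefinite near the saddle point at the origin. Classical Newton started there is drawn to the saddle, while the eigenvalue-modified iteration escapes toward a minimizer:

```python
import numpy as np

def grad(p):
    x, y = p
    return np.array([x**3 - x, y])

def hess(p):
    x, _ = p
    return np.diag([3.0 * x**2 - 1.0, 1.0])   # indefinite for |x| < 1/sqrt(3)

def newton(p, modify, iters=50):
    """Full-step Newton iteration; with modify=True, negative
    eigenvalues of the Hessian are replaced by their absolute values."""
    for _ in range(iters):
        vals, Q = np.linalg.eigh(hess(p))
        if modify:
            vals = np.abs(vals)               # eigenvalue modification
        p = p + Q @ (-(Q.T @ grad(p)) / vals)
    return p

start = np.array([0.1, 0.2])
print(newton(start, modify=False))  # classical: stalls at the saddle (0, 0)
print(newton(start, modify=True))   # modified: reaches the minimum (1, 0)
```

This mirrors the qualitative behavior in the figure: the modified direction remains a descent direction where the Hessian is indefinite, enlarging the set of starting points from which the iteration reaches a minimizer.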

A new algorithm based on eigenvalue decomposition and the classical Newton algorithm is put forward. The eigenvalue decomposition-based algorithm has a wider convergence domain than the classical Newton algorithm.

This work is supported by the Natural Science Foundation of China (60974124), the Program for New Century Excellent Talents in University, the Project sponsored by SRF for ROCS, SEM in China, and the open foundation of the key lab for Space Flight Dynamics technique (SFDLXZ-2010-004).