Eigenvalue Decomposition-Based Modified Newton Algorithm

When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.


Introduction
The Newton algorithm (also known as Newton's method) is commonly used in numerical analysis, especially in nonlinear optimization [1]. However, on the domain of the objective function, the method requires that (1) the objective function f(x) is twice differentiable, (2) the Hessian matrix is positive definite, and (3) the initial solution x_0 lies near the extreme point x* [2]. If there is a point in the domain whose Hessian matrix is not positive definite, then the Newton direction at this point is not a descent direction, so the classical algorithm may fail to converge on the domain.
There are several improved forms of the classical Newton algorithm addressing the above problem: (1) combining the Newton direction with the steepest descent direction [3]; (2) combining the Newton direction with a negative curvature direction [4]; (3) adding a diagonal matrix νI to the Hessian matrix so that the sum is positive definite, where ν is a positive number larger than the absolute value of the smallest eigenvalue [5]. This paper presents a new algorithm to handle this "nonpositive-definite problem."

Newton Algorithm
We discuss the extreme value problem of a multivariate function. Suppose the objective function f(x) is twice differentiable; the extreme value problem of f(x) is

min_x f(x). (1)

If the Hessian matrix of f(x) is positive definite, the second-order Taylor expansion of f(x) about the current point x_k is

f(x) ≈ f(x_k) + ∇f(x_k)^T (x − x_k) + (1/2)(x − x_k)^T ∇²f(x_k)(x − x_k). (2)

As we know, ∇f(x) = 0 is a necessary condition for a minimum, so differentiating (2) and setting the derivative to zero gives

∇²f(x_k)(x − x_k) = −∇f(x_k). (3)

Equation (3) is usually called Newton's equation. When the matrix ∇²f(x_k) is positive definite, its inverse exists and we obtain the iterative equation

x_{k+1} = x_k − [∇²f(x_k)]^{-1} ∇f(x_k), (4)

which is usually called the Newton iteration, where −∇f(x_k) is the negative gradient direction and d_k = −[∇²f(x_k)]^{-1} ∇f(x_k) is the Newton direction. Next, we must choose a proper step length to ensure that the function value decreases. A well-known variant is the "damped Newton algorithm," which obtains the step length by solving the optimization problem

α_k = argmin_{α ≥ 0} f(x_k + α d_k). (5)
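The damped Newton iteration can be sketched as follows. This is a minimal illustration, not the paper's code: a backtracking (Armijo) line search stands in for the exact step-length minimization, and the quadratic test function is chosen here for illustration only.

```python
import numpy as np

def damped_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Damped Newton iteration: x_{k+1} = x_k + alpha_k * d_k, where
    d_k solves Newton's equation hess(x_k) d = -grad(x_k) and alpha_k
    is chosen by backtracking line search (a stand-in for the exact
    step-length minimization)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)  # Newton's equation
        alpha = 1.0
        # Backtracking until the Armijo sufficient-decrease condition holds
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
    return x

# Example: minimize f(x, y) = (x - 1)^2 + 2(y + 2)^2, whose Hessian
# diag(2, 4) is positive definite everywhere, so Newton converges.
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 2)**2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
hess = lambda x: np.diag([2.0, 4.0])
x_star = damped_newton(f, grad, hess, [5.0, 5.0])  # -> близко to (1, -2)
```

For a quadratic objective the full Newton step already reaches the minimizer, so the backtracking loop accepts alpha = 1 immediately.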

Modified Newton Algorithm Based on Eigenvalue Decomposition
3.1. Modified Algorithm. First, take the eigenvalue decomposition of the Hessian matrix:

∇²f(x_k) = U Σ U^T, (6)

where U is a unitary matrix and Σ is a diagonal matrix. Because the matrix is not positive definite, there are some negative eigenvalues in Σ. We replace the negative eigenvalues with their absolute values, obtaining |Σ|, so the modified search direction is

d_k = −(U |Σ| U^T)^{-1} ∇f(x_k). (7)

The inner product of the negative gradient direction with d_k is always positive, because the right-hand side of the following equation is a positive definite quadratic form [6]:

⟨−∇f(x_k), d_k⟩ = ∇f(x_k)^T (U |Σ| U^T)^{-1} ∇f(x_k) > 0. (8)

Equation (8) shows that the angle between the negative gradient direction and d_k is less than 90∘, so the new direction d_k is always a descent direction. We then obtain a similar iterative equation by replacing the Newton direction with d_k:

x_{k+1} = x_k + α_k d_k, (9)

where α_k is the step length obtained by solving optimization problem (5).
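The modified direction can be sketched as follows (an illustrative implementation, not the paper's code; the indefinite test matrix and gradient are chosen here for illustration):

```python
import numpy as np

def modified_newton_direction(H, g):
    """Eigenvalue-decomposition-based direction: replace the negative
    eigenvalues of the symmetric Hessian H by their absolute values,
    then solve for the search direction d = -(U |Sigma| U^T)^{-1} g."""
    w, U = np.linalg.eigh(H)      # H = U diag(w) U^T for symmetric H
    w_abs = np.abs(w)             # |Sigma|: flip the negative eigenvalues
    # Solve via the factorization instead of forming the inverse explicitly
    return -U @ ((U.T @ g) / w_abs)

# An indefinite Hessian where the raw Newton direction -H^{-1}g is an
# ascent direction (here g^T H^{-1} g = 0.01 - 0.5 < 0):
H = np.array([[1.0,  0.0],
              [0.0, -2.0]])
g = np.array([0.1, 1.0])          # gradient at the current point
d = modified_newton_direction(H, g)
# <-g, d> = g^T (U|Sigma|U^T)^{-1} g is a positive quadratic form,
# so the modified direction is always a descent direction:
assert -g @ d > 0
```

Note that when H is already positive definite, |Σ| = Σ and this reduces to the classical Newton direction.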

3.2. Convergence Conclusion.
We prove that the modified Newton algorithm based on eigenvalue decomposition is convergent. First, we recall the Wolfe-Powell conditions [7]: the step length α_k is required to satisfy both f(x_k + α_k d_k) ≤ f(x_k) + ρ α_k ∇f(x_k)^T d_k and ∇f(x_k + α_k d_k)^T d_k ≥ σ ∇f(x_k)^T d_k, where 0 < ρ < σ < 1.
Theorem 2. Let f : R^n → R be a continuously differentiable function that is bounded below on R^n, and let ∇f be uniformly continuous on the level set L_0 := {x ∈ R^n | f(x) ≤ f_0}, where f_0 = f(x_0) and x_0 ∈ R^n. If the directions d_k satisfy θ_k ≤ π/2 − μ, where θ_k is the angle between −∇f(x_k) and d_k and μ ∈ (0, π/2) is a constant, then under the Wolfe-Powell conditions either ∇f(x_k) = 0 for some k or ∇f(x_k) → 0.
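The angle hypothesis of Theorem 2 can be checked numerically for the modified direction. This is an illustrative sketch (not from the paper): it samples random symmetric, generally indefinite Hessians and verifies that the angle θ_k stays strictly below π/2, as guaranteed by the positive quadratic form in the descent-direction argument.

```python
import numpy as np

def angle_to_negative_gradient(H, g):
    """Angle theta_k between -g and the modified direction
    d = -(U |Sigma| U^T)^{-1} g."""
    w, U = np.linalg.eigh(H)
    d = -U @ ((U.T @ g) / np.abs(w))
    cos_theta = (-g @ d) / (np.linalg.norm(g) * np.linalg.norm(d))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Random symmetric Hessians: the angle is strictly less than pi/2 because
# g^T (U|Sigma|U^T)^{-1} g is a positive quadratic form.
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3))
    H = (A + A.T) / 2                 # symmetric, usually indefinite
    g = rng.standard_normal(3)
    assert angle_to_negative_gradient(H, g) < np.pi / 2
```

A uniform bound θ_k ≤ π/2 − μ additionally requires the modified Hessians to have bounded condition number, which is why the theorem states the angle condition as a hypothesis.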

Simulations on Convergence Properties
The new search direction satisfies d_k = −(U |Σ| U^T)^{-1} ∇f(x_k); when the Hessian matrix is positive definite, |Σ| = Σ and d_k coincides with the classical Newton direction. Therefore, if the Hessian matrix at every point of the domain is positive definite, the convergence rate of the modified algorithm equals that of the classical Newton algorithm. On the other hand, if there are points whose Hessian matrix is not positive definite, the convergence rate is at most that of the classical Newton algorithm.
In addition, in order to verify that the improved algorithm has a wider convergence domain than the classical Newton algorithm, we employ a convex function as the objective function. Let the domain be (x, y) ∈ [−3, 1] × [−3, 1]; we choose grid points at intervals of 0.1 and use both the classical Newton algorithm and the eigenvalue decomposition-based algorithm to find the minimum value. The results are shown in Figure 1.
In Figures 1(c) and 1(d), a green point means that the algorithm found the minimum value using this point as the initial solution, and a red point means that the algorithm did not converge or did not find the minimum value. Clearly, the convergence domain of the eigenvalue decomposition-based algorithm is larger than that of the classical Newton algorithm.
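A grid experiment of this kind can be sketched as follows. This is a hypothetical reconstruction, not the paper's code: the paper's objective function is not reproduced here, so the Rosenbrock function (whose Hessian is indefinite over part of this domain, with minimizer at (1, 1)) stands in for it, and a backtracking Armijo line search stands in for the exact step-length minimization.

```python
import numpy as np

# Hypothetical stand-in objective: the Rosenbrock function.
def f(p):
    x, y = p
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad(p):
    x, y = p
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

def hess(p):
    x, y = p
    return np.array([[2 - 400 * (y - 3 * x**2), -400 * x],
                     [-400 * x, 200.0]])

def converges(x0, modify, tol=1e-6, max_iter=200):
    """Run damped Newton from x0.  With modify=True, negative eigenvalues
    of the Hessian are replaced by their absolute values before solving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return np.allclose(x, [1.0, 1.0], atol=1e-3)  # true minimizer
        w, U = np.linalg.eigh(hess(x))
        if modify:
            w = np.abs(w)                # eigenvalue-decomposition fix
        if np.min(np.abs(w)) < 1e-12:
            return False                 # (modified) Hessian is singular
        d = -U @ ((U.T @ g) / w)
        alpha = 1.0                      # backtracking Armijo line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
    return False

# Count converging starting points over a coarse grid on [-3, 1] x [-3, 1]
grid = [(-3 + 0.5 * i, -3 + 0.5 * j) for i in range(9) for j in range(9)]
classical = sum(converges(p, modify=False) for p in grid)
modified = sum(converges(p, modify=True) for p in grid)
```

Coloring each grid point by the boolean returned from `converges` reproduces the style of convergence-domain plot described for Figure 1.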

Conclusion
A new algorithm is put forward based on eigenvalue decomposition and the classical Newton algorithm. The eigenvalue decomposition-based algorithm has a wider convergence domain than the classical Newton algorithm.

Figure 1 :
Figure 1: Results of the numerical experiment: (a) objective function; (b) contour map; (c) convergence domain of Newton's algorithm; (d) convergence domain of the new algorithm.