Newton-Krylov Type Algorithm for Solving Nonlinear Least Squares Problems

The minimization of a quadratic function within an ellipsoidal trust region is an important subproblem in many nonlinear programming algorithms. When the number of variables is large, one of the most widely used strategies is to project the original problem onto a low-dimensional subspace. In this paper, we introduce an algorithm for solving nonlinear least squares problems. The algorithm constructs a basis for a Krylov subspace and uses a model trust region technique to choose the step; the computational step on the low-dimensional subspace lies inside the trust region. The construction of the Krylov subspace is stopped by a termination condition that ensures the gradient can be decreased on it. A convergence theory for this algorithm is presented, and the algorithm is shown to be globally convergent.


Introduction
Nonlinear least squares (NLS) problems are unconstrained optimization problems with special structure. These problems arise in many settings, such as the solution of overdetermined systems of nonlinear equations, scientific experiments, pattern recognition, and maximum likelihood estimation; for more details about these problems, see [1]. The general formulation of the NLS problem is to determine the solution $x \in \mathbb{R}^n$ that minimizes the function
$$f(x) = \tfrac{1}{2}\|R(x)\|^2, \qquad (1.1)$$
where $R(x) = (R_1(x), R_2(x), \ldots, R_m(x))^t$, $R_i : \mathbb{R}^n \to \mathbb{R}$, $1 \le i \le m$, and $m \ge n$. There are two general types of algorithms for solving the NLS problem (1.1). The first type is most closely related to solving systems of nonlinear equations [2], and it leads to Gauss-Newton and Levenberg-Marquardt methods [3]. The second type is closely related to unconstrained minimization techniques [4]. Of course, the two types of algorithms are closely related to each other [5].

The presented algorithm is a Newton-Krylov type algorithm. It requires fixed-size limited storage proportional to the size of the problem and relies only upon matrix-vector products. It is based on the implicitly restarted Arnoldi method (IRAM) to construct a basis for the Krylov subspace and to reduce the Jacobian to a Hessenberg matrix, in conjunction with a trust region strategy to control the step on that subspace [6].
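As a concrete illustration of this setup (a hypothetical exponential-fit residual, not an example from the paper), the objective $f(x) = \tfrac{1}{2}\|R(x)\|^2$, the residual, its Jacobian, and the gradient $J^tR$ can be sketched as follows.

```python
# A minimal sketch of the NLS structure; the residual model and data are hypothetical.
import numpy as np

T = np.array([1.0, 2.0, 3.0])          # sample points (assumed)
DATA = np.array([2.7, 7.4, 20.1])      # synthetic observations (assumed)

def residual(x):
    # Overdetermined residual R : R^2 -> R^3 for the model x[0] * exp(x[1] * t).
    return x[0] * np.exp(x[1] * T) - DATA

def jacobian(x):
    # m x n Jacobian of R at x (m = 3, n = 2).
    return np.column_stack((np.exp(x[1] * T),
                            x[0] * T * np.exp(x[1] * T)))

def f(x):
    R = residual(x)
    return 0.5 * R @ R                 # f(x) = (1/2) ||R(x)||^2

def grad(x):
    return jacobian(x).T @ residual(x) # gradient J^t R
```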
Trust region methods for unconstrained minimization enjoy both strong theoretical convergence properties and accurate results in practice. The trial computational step in these methods is to find an approximate minimizer of some model of the true objective function within a trust region, for which a suitable norm of the correction lies inside a given bound. This restriction is known as the trust region constraint, and the bound on the norm is its radius. The radius is adjusted so that successive model problems adequately predict the behavior of the true objective function within the trust region [7].
The trust region subproblem is the problem of finding $s(\Delta)$ so that
$$\min\{\varphi(s) : \|s\| \le \Delta\}, \qquad (1.2)$$
where $\Delta$ is some positive constant, $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$, and
$$\varphi(s) = g^t s + \tfrac{1}{2} s^t H s, \qquad (1.3)$$
where $g \in \mathbb{R}^n$ and $H \in \mathbb{R}^{n \times n}$ are, respectively, the gradient vector and the Hessian matrix (or their approximations).
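As an illustration of subproblem (1.2)-(1.3), the following sketch (ours, not the paper's method; the so-called hard case is ignored for brevity) computes the step from the first-order conditions $(H + \mu I)s = -g$, $\mu(\|s\| - \Delta) = 0$, by bisection on $\mu$.

```python
# Sketch: solve min g^t s + (1/2) s^t H s subject to ||s|| <= delta.
import numpy as np

def trust_region_step(g, H, delta):
    """Locate mu >= 0 with (H + mu I) s = -g and ||s(mu)|| <= delta.
    Illustrative only; the 'hard case' is not handled."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    if lam_min > 0:
        s = np.linalg.solve(H, -g)
        if np.linalg.norm(s) <= delta:
            return s, 0.0                      # interior Newton step, mu = 0
    lo = max(0.0, -lam_min) + 1e-12            # make H + mu I positive definite
    hi = lo + 1.0
    while np.linalg.norm(np.linalg.solve(H + hi * np.eye(n), -g)) > delta:
        hi *= 2.0                              # bracket: ||s(mu)|| decreases in mu
    for _ in range(100):                       # bisection on the secular equation
        mu = 0.5 * (lo + hi)
        s = np.linalg.solve(H + mu * np.eye(n), -g)
        if np.linalg.norm(s) > delta:
            lo = mu
        else:
            hi = mu
    return s, mu
```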
There are two different approaches to solving (1.2). These approaches are based either on solving several linear systems [8] or on approximating the curve $s(\Delta)$ by a piecewise linear approximation ("dogleg strategies") [9]. In large-scale optimization, solving linear systems is computationally expensive. Moreover, it is not clear how to define the dogleg curves when the matrix $H$ is singular [10].
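For contrast with the linear-system approach, here is a minimal sketch (our illustration, assuming $H$ is positive definite) of the classical single-dogleg step for (1.2): follow the steepest-descent minimizer, then bend toward the Newton point.

```python
# Sketch of a single-dogleg step, assuming H is positive definite.
import numpy as np

def dogleg_step(g, H, delta):
    s_newton = np.linalg.solve(H, -g)
    if np.linalg.norm(s_newton) <= delta:
        return s_newton                          # full Newton step fits
    s_cauchy = -(g @ g) / (g @ H @ g) * g        # minimizer along -g
    if np.linalg.norm(s_cauchy) >= delta:
        return -delta * g / np.linalg.norm(g)    # truncated steepest descent
    # Intersect the segment s_cauchy -> s_newton with the boundary ||s|| = delta.
    d = s_newton - s_cauchy
    a, b, c = d @ d, 2 * (s_cauchy @ d), s_cauchy @ s_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s_cauchy + tau * d
```

Note that `s_cauchy` is undefined when $H$ is singular along $g$, which is exactly the difficulty with dogleg curves mentioned above.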
Several authors have studied inexact Newton methods for solving NLS problems [11]. Xiaofang et al. introduced stable factorized quasi-Newton methods for solving large-scale NLS problems [12]. Dennis et al. proposed a convergence theory for the structured BFGS secant method with an application to NLS [13]. The Newton-Krylov method is an example of an inexact Newton method. Krylov techniques inside a Newton iteration in the context of systems of equations were proposed in [14]. The recent work of Sorensen provides an algorithm based on recasting the trust region subproblem into a parameterized eigenvalue problem. This algorithm provides a superlinearly convergent scheme to adjust the parameter and find the optimal solution from the eigenvector of the parameterized problem, as long as the hard case does not occur [15]. This contribution is organized as follows. In Section 2, we introduce the structure of the NLS problem and give a general view of the theory of trust region strategies and their convergence. The statement of the algorithm and its properties are developed in Section 3. Section 4 states the assumptions and presents the role of the restarting mechanism in controlling the dimension of the Krylov subspace. The global convergence analysis is presented in Section 5. Concluding remarks and future ideas are given in the last section.

Structure of the Problem
The subspace technique plays an important role in solving the NLS problem (1.1). Assume that the current iterate is $x_k$ and that a Krylov subspace $S_k$ is available. Denote the dimension of $S_k$ by $k_j$ and let $\{v_1^k, v_2^k, \ldots, v_{k_j}^k\}$ be a set of linearly independent vectors in $S_k$. The next iterate $x_{k+1}$ is chosen so that the increment $x_{k+1} - x_k \in S_k$. Thus, we require $R_j(x_k + V_k y) = 0$ for $j = 1, 2, \ldots, m$ and $y \in \mathbb{R}^{k_j}$, where $V_k = [v_1^k, v_2^k, \ldots, v_{k_j}^k]$. If we consider $s \in S_k$, we obtain the quadratic model
$$\varphi(s) = (J^t R)^t s + \tfrac{1}{2}\, s^t (J^t J + Q)\, s, \qquad (2.1)$$
where $J$ is the Jacobian of $R$ and $Q$ approximates the second-order part of the Hessian of $f$. The first-order necessary condition for (2.1) shows that a solution of the quadratic model is a solution to an equation of the form $(J^t J + Q)s = -J^t R$. The model trust region algorithm generates a sequence of points $x_k$, and at the $k$th stage of the iteration the quadratic model of problem (1.2) has the form
$$\min\{\varphi_k(s) : \|s\| \le \Delta_k\}. \qquad (2.2)$$
At this stage, an initial value for the trust region radius $\Delta_k$ is also available. An inner iteration is then performed, which consists of using the current trust region radius $\Delta_k$ and the information contained in the quadratic model to compute a step $s(\Delta_k)$. Then the actual reduction of the objective function is compared with the reduction predicted by the quadratic model. If there is a satisfactory reduction, the step can be taken, or a possibly larger trust region used. If not, the trust region is reduced and the inner iteration is repeated.
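In code, projecting the full model onto the subspace amounts to a pair of small dense products; since $V_k$ has orthonormal columns, $\|V_k y\| = \|y\|$, so the trust region constraint carries over unchanged. A sketch under these assumptions (names `g`, `H`, `V` are ours):

```python
# Sketch: reduce the quadratic model to the subspace spanned by the
# orthonormal columns of V; for s = V y, phi(V y) = (V^t g)^t y + (1/2) y^t (V^t H V) y.
import numpy as np

def reduced_model(g, H, V):
    h = V.T @ g            # reduced gradient, dimension k_j
    H_red = V.T @ H @ V    # reduced Hessian, k_j x k_j
    # minimize psi(y) = h^t y + (1/2) y^t H_red y subject to ||y|| <= Delta
    return h, H_red
```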
For now, we leave unspecified which algorithm is used to form the step $s(\Delta)$ and how the trust region radius $\Delta$ is changed. We also leave unspecified the selection of $J_k^t J_k + Q_k$, except to restrict it to be symmetric. Details on these points will be addressed in our forthcoming paper.
The solution $s$ of the quadratic model (2.2) is a solution to an equation of the form
$$(J^t J + Q + \mu I)s = -J^t R, \qquad (2.8)$$
with $\mu \ge 0$, $\mu(s^t s - \Delta^2) = 0$, and $J^t J + Q + \mu I$ positive semidefinite. This represents the first-order necessary conditions on the pair $\mu$ and $s$, where $s$ is a solution to model (2.2) and $\mu$ is the Lagrange multiplier associated with the constraint $s^t s \le \Delta^2$. The sufficient conditions that ensure $s$ to be a solution to (2.2) are established in the following lemma, which is proved in [16].

Lemma 2.1 (see [16]). Let $\mu \in \mathbb{R}$ and $s \in \mathbb{R}^n$ satisfy
$$(J^t J + Q + \mu I)s = -J^t R \quad \text{with } J^t J + Q + \mu I \text{ positive semidefinite}. \qquad (2.9)$$
If $\mu = 0$ and $\|s\| \le \Delta$, or if $\mu \ge 0$ and $\|s\| = \Delta$, then $s$ is a solution to (2.2).

Of course, if $J^t J + Q + \mu I$ is positive definite, then $s$ is the unique solution.
The main result used to prove that the sequence of gradients tends to zero for modified Newton methods is
$$f - \varphi(s) \ge \tfrac{1}{2}\,\|J^t R\|\,\min\!\left(\Delta,\ \frac{\|J^t R\|}{\|J^t J + Q\|}\right), \qquad (2.10)$$
where $s$ is a solution to (2.2). The geometric interpretation of this inequality is that, for a quadratic function, any solution $s$ to (2.2) produces a decrease $f - \varphi(s)$ that is at least as large as the decrease along the steepest descent direction $-J^t R$. A proof of this result may be found in [17].
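For intuition, here is a standard derivation of this bound (our sketch; see [17] for the full argument), obtained by minimizing $\varphi$ along the steepest descent direction. Write $g = J^tR$ and $H = J^tJ + Q$.

```latex
% Along s(t) = -t g with t >= 0 and ||s(t)|| <= Delta:
%   phi(-t g) = -t ||g||^2 + (t^2/2) g^t H g <= -t ||g||^2 + (t^2/2) ||H|| ||g||^2.
% Minimizing the right-hand side over 0 <= t <= Delta/||g|| yields
\[
  f - \varphi(s) \;\ge\; \tfrac{1}{2}\,\|g\|\,
  \min\!\left(\Delta,\ \frac{\|g\|}{\|H\|}\right),
\]
% and any exact solution of (2.2) does at least as well as this Cauchy point.
```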

Algorithmic Framework
The algorithm we discuss here requires that $f$ be twice differentiable at any point $x$ in the domain of $f$. The algorithm involves two levels. In the first level, we use the IRA method to reduce the Hessian to a tridiagonal matrix and to construct an orthonormal basis for the invariant subspace of the Hessian matrix. The second level is used to compute the step and update the trust region radius for the reduced local model (a model defined on the subspace).
The desired properties of this algorithm are as follows.

(1) The algorithm should be well defined for a sufficiently general class of functions, and it should be globally convergent.

(2) The algorithm should be invariant under affine scalings of the variables; that is, if we replace $f(x)$ by $\hat f(y) = f(s + Vy)$, where $V \in \mathbb{R}^{n \times k}$ is an orthonormal matrix, $x \in \mathbb{R}^n$, and $y \in \mathbb{R}^k$, then applying the iteration to $\hat f$ with initial guess $y_0$ satisfying $x_0 = s + V y_0$ should produce a sequence $\{y_l\}$ related to the sequence $\{x_l\}$ by $x_l = V y_l + s$, where $x_l$ is produced by applying the algorithm to $f$ with initial guess $x_0$.

(3) The algorithm should provide a decrease that is at least as large as a given multiple of the minimum decrease that would be provided by a quadratic search along the steepest descent direction.

(4) The algorithm should give as good a decrease of the quadratic model as a step along the negative gradient when the Hessian $J^t J + Q$ is indefinite, and should force the step to equal the Newton step when $J^t J + Q$ is symmetric positive definite.
The following describes a full iteration of a truncated Newton type method. Some of the preceding characteristics will be obvious, and the others will be proved in the next section.
Algorithm 3.1 (one outer iteration).

Step 1 (construct a basis for the Krylov subspace).
(a) Choose $\epsilon \in (0, 1)$ and an initial guess $s_0$ (which can be chosen to be zero), and form $v_1 = -J^t R / \|J^t R\|$.
(i) Form $(J^t J + Q)v_j$ and orthogonalize it against the previous vectors $v_1, v_2, \ldots, v_j$.
(ii) Compute the residual norm $\beta_j = \|J^t R + (J^t J + Q)s_j\|$ of the solution $s_j$ that would be obtained if we stopped.
(iii) If $\beta_j \le \epsilon_j \|J^t R\|$, where $\epsilon_j \le 1/D_j \in (0, \epsilon)$ and $D_j$ is an approximation to the condition number of $(J^t J + Q)_j$, then set $k_j = j$ and go to Step 2; else continue.

Step 2 (compute the trial step).
(i) Construct the local model $\psi_k(y)$.
(ii) Compute the solution $y$ of the problem $\min\{\psi_k(y) : \|y\| \le \Delta_k\}$ and set $s_k = V_k y$.

Step 3 (test the step and update $\Delta_k$).
(i) Evaluate the actual reduction $ared_k(\Delta_k) = f(x_k) - f(x_k + s_k)$ and the predicted reduction $pred_k(\Delta_k) = \psi_k(0) - \psi_k(y)$.
(ii) If their ratio is satisfactory, accept the step and possibly enlarge $\Delta_k$; otherwise, reduce $\Delta_k$ and repeat the inner iteration.
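To make the flow of Algorithm 3.1 concrete, here is a condensed Python sketch of one outer iteration under simplifying assumptions: dense linear algebra, plain Arnoldi with full re-orthogonalization rather than IRAM, and a hypothetical `hess_vec(x, v)` supplying the product $(J^tJ + Q)v$. The names `tr_iteration`, `eps`, and `eta` are ours; `trust_region_step` is the sketch from Section 1.

```python
import numpy as np

def arnoldi_basis(x, g, hess_vec, eps, max_dim):
    """Grow an orthonormal basis for the Krylov subspace generated by
    (J^tJ+Q) and -J^tR, stopping when the projected Newton step s_j
    passes the residual test ||g + (J^tJ+Q)s_j|| <= eps ||g|| (Step 1)."""
    V = [-g / np.linalg.norm(g)]               # v_1: normalized steepest descent
    for _ in range(max_dim):
        Vm = np.column_stack(V)
        H_red = Vm.T @ np.column_stack([hess_vec(x, v) for v in V])
        y = np.linalg.lstsq(H_red, -(Vm.T @ g), rcond=None)[0]
        if np.linalg.norm(g + hess_vec(x, Vm @ y)) <= eps * np.linalg.norm(g):
            break                              # termination condition met
        w = hess_vec(x, V[-1])
        for v in V:                            # Gram-Schmidt orthogonalization
            w -= (w @ v) * v
        if np.linalg.norm(w) < 1e-12:          # soft breakdown: invariant subspace
            break
        V.append(w / np.linalg.norm(w))
    return np.column_stack(V)

def tr_iteration(x, f, grad, hess_vec, delta, eps=0.1, eta=1e-4):
    """One outer iteration: reduced trust-region step plus ratio test (Steps 2-3)."""
    g = grad(x)
    V = arnoldi_basis(x, g, hess_vec, eps, max_dim=min(20, len(x)))
    h = V.T @ g                                # reduced gradient h_k
    H_red = V.T @ np.column_stack([hess_vec(x, v) for v in V.T])
    H_red = 0.5 * (H_red + H_red.T)            # symmetrize against round-off
    y, _ = trust_region_step(h, H_red, delta)  # reduced subproblem (Step 2)
    s = V @ y
    pred = -(h @ y + 0.5 * y @ H_red @ y)      # predicted reduction psi(0) - psi(y)
    ared = f(x) - f(x + s)                     # actual reduction (Step 3)
    if ared >= eta * pred:                     # satisfactory: accept, maybe enlarge
        return x + s, max(delta, 2.0 * np.linalg.norm(y))
    return x, 0.5 * delta                      # otherwise shrink the trust region
```

The design point worth noting is that the loop touches the Hessian only through matrix-vector products, which is what makes the fixed, limited storage claim of the introduction possible.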

The Restarting Mechanism
In this section, we discuss the important role of the restarting mechanism in guarding against singularity of the Hessian matrix, and we introduce the assumptions under which we prove global convergence.
Let $\{x_k\}$ be the sequence of iterates generated by Algorithm 3.1; for such a sequence we make the following global assumptions: (G1) for all $k$, $x_k \in \Omega$, where $\Omega$ is a compact convex subset of $\mathbb{R}^n$; (G2) $f$ is twice continuously differentiable on $\Omega$; (G3) $(J^t J + Q)(x)$ is nonsingular for all $x \in \Omega$; (G4) the termination condition of Step 1(iii) is satisfied at every iteration. An immediate consequence of the global assumptions is that the Hessian matrix $(J^t J + Q)(x)$ is bounded; that is, there exists a constant $\beta_1 > 0$ such that $\|(J^t J + Q)(x)\| < \beta_1$. Therefore, the condition numbers of the Hessian, $D_k = \mathrm{cond}_2((J^t J + Q)_k)$, are bounded.

Assumption G3, that $J^t J + Q$ is nonsingular, is necessary since the IRA iteration is only guaranteed to converge for nonsingular systems. We first discuss the two ways in which the IRA method can break down. This can happen either if $v_{k+1} = 0$, so that $v_{k+1}$ cannot be formed, or if $H_{k_{max}}$ is singular, which means that the maximum number of IRA steps has been taken but the final iterate cannot be formed. The first situation has often been referred to as a soft breakdown, since $v_{k+1} = 0$ implies that $(J^t J + Q)_k$ is nonsingular and $x_k$ is the solution [18]. The second case is more serious in that it causes a convergence failure of the Arnoldi process. One possible recourse is to hope that $(J^t J + Q)_k$ is nonsingular for some $k$ among $1, 2, \ldots, k_{max} - 1$. If such a $k$ exists, we can compute $x_k$ and then restart the algorithm using $x_k$ as the new initial guess $x_0$. It may not be possible to do this.
To handle the problem of a singular $H_k$, we can use a technique similar to the one used in the full-dimensional space. If $H_k$ is singular, or its condition number is greater than $1/\sqrt{\nu}$, where $\nu$ is the machine epsilon, then we perturb the quadratic model $\psi(y)$ (see the sketch after this discussion). We have discussed the possibility of stagnation in the linear IRA method, which results in a breakdown of the nonlinear iteration. Sufficient conditions under which stagnation of the linear iteration never occurs are as follows: (1) we have to ensure that the steepest descent direction belongs to the subspace $S_k$.
Indeed, in Algorithm 3.1 the Krylov subspace will contain the steepest descent direction, because $v_1 = -J^t R / \|J^t R\|$ and the Hessian is nonsingular; (2) there is no difficulty if one requires the Hessian matrix $(J^t J + Q)(x)$ to be positive definite for all $x \in \Omega$: then the termination condition can be satisfied, and convergence of the sequence of iterates follows. This will be the case for some problems, but requiring $(J^t J + Q)(x)$ to be positive definite everywhere is very restrictive.
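A minimal sketch of one possible form of the safeguard mentioned at the start of this discussion; the paper leaves the perturbed model unspecified here, so the shift size $\sqrt{\nu}\,\sigma_{\max}$ is our assumption.

```python
# Sketch: if the reduced Hessian is singular or its condition number exceeds
# 1/sqrt(nu) (nu = machine epsilon), shift it by a multiple of the identity
# before solving the reduced trust-region subproblem.
import numpy as np

def safeguarded_hessian(H_red):
    nu = np.finfo(float).eps
    sigma = np.linalg.svd(H_red, compute_uv=False)   # singular values, descending
    if sigma[-1] == 0.0 or sigma[0] / sigma[-1] > 1.0 / np.sqrt(nu):
        # Shift magnitude sqrt(nu) * sigma_max is an assumption, not from the paper.
        return H_red + np.sqrt(nu) * sigma[0] * np.eye(H_red.shape[0])
    return H_red
```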
One of the main restrictions of most Newton-Krylov schemes is that the subspace onto which a given Newton step is projected must solve the Newton equations with a certain accuracy, which is monitored by the termination condition (assumption G4). This condition is enough to essentially guarantee convergence of the trust region algorithm. In practice, the main difficulty is that one does not know in advance whether the subspace chosen for projection will be good enough to guarantee this condition. Thus, $k_j$ can be chosen large enough that the termination condition is eventually satisfied; but when $k_j$ is too large, the computational cost and the storage become too high. An alternative is to restart the algorithm, keeping $J^t J + Q$ nonsingular and $k_j < n$. Moreover, preconditioning and scaling are essential for the successful application of these schemes [19].

Global Convergence Analysis
In this section, we establish some convergence properties of Algorithm 3.1. The major difference between the proposed results and the preexisting ones arises from the fact that a lower-dimensional quadratic model is used, rather than the full-dimensional model used in the preexisting results [10].

Lemma 5.1. Let the global assumptions hold. Then for any $s \in \Omega$, one has (5.1).

Proof. Let $r_k$ be the residual associated with $s$, so that (5.2) and (5.3) hold.

5.4
Substituting (5.4) into (5.3), we obtain the following:

5.5
Inequality (5.1) can be written as

5.6
Condition (5.6) tells us that, at each iteration of Algorithm 3.1, we require the termination condition to hold. Since the condition numbers $D_k$ are bounded from above, condition (5.6) gives (5.7), where $\theta$ is the angle between $(J^t R)_k$ and $s$, and $D$ is the upper bound on the condition numbers $D_k$. Inequality (5.7) shows that the acute angle $\theta$ is bounded away from $\pi/2$.
The following lemma shows that the termination condition (assumption G4) implies that the cosine of the angle between the gradient and the Krylov subspace is bounded below.

Lemma 5.2. Let the global assumptions hold. Then the cosine of the angle between $(J^t R)_k$ and any $s \in S_k$ satisfying the termination condition is bounded below as in (5.8).
Proof. Let $s = V_k y$ be a vector of $S_k$ satisfying the termination condition $\|(J^t R)_k + (J^t J + Q)_k s\| \le \epsilon_k \|(J^t R)_k\|$, $\epsilon_k \in (0, 1)$. From Lemma 5.1 and the fact that $\|s\| = \|V y\| = \|y\|$, we obtain (5.9). From the Cauchy-Schwarz inequality, we obtain

5.10
Combining (5.9) and (5.10), we obtain the result.
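To see how the termination test and the bound on the condition numbers interact, the following is a sketch of the standard argument (our reconstruction, assuming $(J^tJ+Q)_k$ is symmetric positive definite; the paper's exact constants may differ).

```latex
% Write g = (J^tR)_k, H = (J^tJ+Q)_k, r = g + Hs with ||r|| <= eps ||g||.
% Then s = H^{-1}(r - g), and
\[
  -g^t s \;=\; g^t H^{-1} g - g^t H^{-1} r
         \;\ge\; \Bigl(\tfrac{1}{\|H\|} - \epsilon\,\|H^{-1}\|\Bigr)\|g\|^2,
  \qquad
  \|s\| \;\le\; (1+\epsilon)\,\|H^{-1}\|\,\|g\|,
\]
% so that, with kappa = cond(H) <= D,
\[
  \cos\theta \;=\; \frac{-g^t s}{\|g\|\,\|s\|}
  \;\ge\; \frac{1 - \epsilon\,\kappa}{(1+\epsilon)\,\kappa},
\]
% which is positive whenever eps < 1/kappa, matching the choice eps_j <= 1/D_j in Step 1.
```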
The following lemma establishes that Algorithm 3.1 achieves a decrease in the quadratic model on the lower-dimensional space at least equal to the decrease in the quadratic model on the full-dimensional space.

Lemma 5.3. Let the global assumptions hold, and let $y$ be a solution of the minimization problem $\min_{\|y\| \le \Delta} \psi(y)$. Then the reduction satisfies a bound of the form (5.11),
where $h_k = V_k^t (J^t R)_k$ and $H_k = V_k^t (J^t J + Q)_k V_k$.

Proof. From Step 3 of Algorithm 3.1, we obtain (5.12). We then convert the lower bound of this inequality to one involving $\|(J^t R)_k\|$ rather than $\|h_k\|$, which completes the proof.
The following two facts will be used in the remainder of the analysis.

Fact 1.
By Taylor's theorem for any k and Δ > 0, we have

5.16
where the remainder is controlled by the Lipschitz continuity of $(J^t J + Q)(x)$.

Fact 2. For any sequence $\{x_k\}$ generated by an algorithm satisfying the global assumptions, the related sequence $\{f_k\}$ is monotonically decreasing and bounded from below.

The next result establishes that every limit point of the sequence $\{x_k\}$ satisfies the first-order necessary conditions for a minimum.

Lemma 5.4. Let the global assumptions hold, and let Algorithm 3.1 be applied to $f(x)$, generating a sequence $\{x_k\}$, $x_k \in \Omega$, $k = 1, 2, \ldots$. Then $h_k \to 0$.
Proof. Since $\epsilon_k \le \epsilon < 1$ for all $k$, there are two cases: either $x_l \in B_\rho$ for all $l \ge k$, or eventually $\{x_l\}$ leaves the ball $B_\rho$. Suppose $x_l \in B_\rho$ for all $l \ge k$; then $\|g_l\| \ge \|g_k\|/2 = \sigma$ for all $l > k$. Thus, by Lemma 5.3, we have

5.20
This gives, for $\Delta_k$ sufficiently small and $k \ge l$, that

International Journal of Mathematics and Mathematical Sciences
In addition, we have (5.22). Inequalities (5.21) and (5.22) tell us that, for $\Delta_k$ sufficiently small, we cannot get a decrease in $\Delta_k$ in Step 3(ii) of Algorithm 3.1. It follows that $\Delta_k$ is bounded away from 0; but then, by (5.23) and since $f$ is bounded from below, we must have $\Delta_k \to 0$, which is a contradiction. Therefore, $\{x_k\}$ must be outside $B_\rho$ for some $k > l$.
Let $x_{j+1}$ be the first term after $x_l$ that does not belong to $B_\rho$. From inequality (5.23), we get

5.25
If there exists at least one $k$ with $\Delta_k > \sigma/\beta_1$, we get

5.26
In either case, we have

5.27
Since $f$ is bounded below and $\{f_k\}$ is a monotonically decreasing sequence, $f_k \to f^*$. Hence (5.28) holds; that is, $\|(J^t R)_k\| \to 0$ as $k \to \infty$. The proof is completed by using Lemma 5.2.
The following lemma proves that, under the global assumptions, if each member of the sequence of iterates generated by Algorithm 3.1 satisfies the termination condition of the algorithm, then this sequence converges to one of its limit points.

Lemma 5.5. Let the global assumptions hold, let $H_k = V_k^t (J^t J + Q)(x_k) V_k$ for all $k$, let $(J^t J + Q)(x)$ be Lipschitz continuous with constant $L$, and let $x^*$ be a limit point of the sequence $\{x_k\}$ with $(J^t J + Q)(x^*)$ positive definite. Then $\{x_k\}$ converges q-superlinearly to $x^*$.
Proof. Let $\{x_{k_m}\}$ be a subsequence of the sequence $\{x_k\}$ that converges to $x^*$. From Lemma 5.4, $J^t R(x^*) = 0$. Since $(J^t J + Q)(x^*)$ is positive definite and $(J^t J + Q)(x)$ is continuous, there exists $\delta_1 > 0$ such that if $\|x - x^*\| < \delta_1$, then $(J^t J + Q)(x)$ is positive definite, and if $x \ne x^*$, then $J^t R(x) \ne 0$. Let $B_1 = \{x : \|x - x^*\| < \delta_1\}$.
Since $J^t R(x^*) = 0$, we can find $\delta_2 > 0$ with $\delta_2 < \delta_1/4$ such that $(J^t J + Q)$ remains positive definite on $B_2 = \{x : \|x - x^*\| < \delta_2\}$. We claim that $x_{i+1} \in B_2$, which implies that the entire sequence beyond $x_{k_{l_0}}$ remains in $B_2$. Suppose $x_{i+1}$ does not belong to $B_2$. Since $f_{i+1} < f_{k_{l_0}}$, $x_{i+1}$ does not belong to $B_1$ either. So

5.29
So, since the truncated-Newton step is within the trust region, we obtain

5.30
Since $\|y_i(\Delta)\| < \delta_1/2$, it follows that $x_{i+1} \in B_1$, which is a contradiction. Hence, for all $k \ge k_{l_0}$, $x_k \in B_2$; and since $\{f(x_k)\}$ is a strictly decreasing sequence and $x^*$ is the unique minimizer of $f$ in $B_2$, we have $\{x_k\} \to x^*$.
The following lemma establishes that the rate of convergence of the sequence of iterates generated by Algorithm 3.1 is q-superlinear.

Lemma 5.6. Let the assumptions of Lemma 5.5 hold. Then the rate of convergence of the sequence $\{x_k\}$ is q-superlinear.
Proof. To show that the convergence rate is superlinear, we will show that $\|H_k^{-1} h_k\|$ is eventually always less than $\Delta_k$, and hence the truncated-Newton step will always be taken. Since $(J^t J + Q)(x^*)$ is positive definite, it follows that the convergence rate of $\{x_k\}$ to $x^*$ is superlinear.
To show that the truncated-Newton step is eventually always shorter than the trust region radius, we need a particular lower bound on $pred_k(\Delta)$. By assumption, for all $k$ large enough, $H_k$ is positive definite. Hence, either the truncated-Newton step is longer than the trust region radius, or $y_k(\Delta)$ is the truncated-Newton step. In either case, (5.32) holds. By Lemma 5.3, for all $k$ large enough, we have

5.33
So, by the continuity of $(J^t J + Q)(x)$, for all $k$ large enough, we obtain (5.34). Finally, by the argument leading up to Fact 1 and the Lipschitz continuity assumption, we have

5.35
Thus, for any $\Delta_k > 0$ and $k$ large enough, we obtain

5.36
Thus, by Step 3(ii) of Algorithm 3.1, there is a $\bar{\Delta}$ such that if $\Delta_{k-1} < \bar{\Delta}$, then $\Delta_k$ will be less than $\bar{\Delta}$ as well. It follows from the superlinear convergence of the truncated-Newton method that, for $x_{k-1}$ close enough to $x^*$ and $k$ large enough,

5.37
Now, if $\Delta_k$ is bounded away from 0 for all large $k$, then we are done. Otherwise, if for arbitrarily large $k$, $\Delta_k$ is reduced, that is, $\Delta_k < \Delta_{k-1}$, then we have (5.38), and so the full truncated-Newton step is taken. Inductively, this occurs for all subsequent iterates, and superlinear convergence follows.
The next result shows that, under the global assumptions, every limit point of the sequence $\{x_k\}$ satisfies the second-order necessary conditions for a minimum.

Lemma 5.7. Let the global assumptions hold, and assume $pred_k(\Delta) \ge \alpha_2(-\lambda_1(H))\Delta^2$ for every symmetric matrix $H \in \mathbb{R}^{k_j \times k_j}$, $h \in \mathbb{R}^{k_j}$, $\Delta > 0$, and some $\alpha_2 > 0$, where $\lambda_1(H)$ denotes the smallest eigenvalue of the matrix $H$. If $\{x_k\} \to x^*$, then $H(x^*)$ is positive semidefinite.
Proof. Suppose to the contrary that $\lambda_1(H(x^*)) < 0$. Then there exists $K$ such that if $k \ge K$, then $\lambda_1(H_k) < \tfrac{1}{2}\lambda_1(H(x^*))$. For all $k \ge K$ and all $\Delta_k > 0$, we obtain

Since the last quantity goes to zero as $\Delta_k \to 0$, and since a Newton step is never taken for $k > K$, it follows from Step 3(ii) of Algorithm 3.1 that, for $k \ge K$ and $\Delta_k$ sufficiently small, $\Delta_k$ cannot be decreased further. Thus, $\Delta_k$ is bounded below, but

5.41
Since $f$ is bounded below, $ared_k(\Delta) \to 0$, so $\Delta_k \to 0$, which is a contradiction. Hence $\lambda_1(H(x^*)) \ge 0$. This completes the proof of the lemma.

Concluding Remarks
In this paper, we have shown that the implicitly restarted Arnoldi method can be combined with the Newton iteration and trust region strategy to obtain a globally convergent algorithm for solving large-scale NLS problems.
The main restriction of this scheme is that the subspace onto which a given Newton step is projected must solve the Newton equations with a certain accuracy which is monitored by the termination condition.This is enough to guarantee convergence.
This theory is sufficiently general that it holds for any algorithm that projects the problem onto a lower-dimensional subspace. The convergence results indicate promise for this line of research on large-scale NLS problems. Our next step is to investigate the performance of this algorithm on some NLS problems; the results will be reported in our forthcoming paper.
