A Low Memory Solver for Integral Equations of Chandrasekhar Type in the Radiative Transfer Problems

Radiative transfer problems give rise to interesting integral equations that must be tackled with efficient numerical solvers. Very often these integral equations are discretized into large-scale systems of nonlinear equations and solved by Newton-like methods. Generally, such methods require the computation and storage of the Jacobian matrix or an approximation to it. In this paper, we present a new approach based on approximating the Jacobian inverse by a diagonal matrix by means of a variational technique. Numerical results on well-known benchmark integral equations involved in radiative transfer confirm the reliability and efficiency of the approach. The fact that the proposed method can solve the integral equations without derivative computation or matrix storage is a clear advantage over some other variants of Newton's method.


Introduction
The study of Chandrasekhar's integral equation involved in the radiative transfer problem has been a foremost subject of investigation since it was first formulated by Chandrasekhar [1] in 1960. It arose originally in connection with scattering in a homogeneous semi-infinite plane atmosphere, and it has since been used to model diverse forms of scattering via the H-function of Chandrasekhar [2], defined by

H(μ) = 1 + μ H(μ) ∫_0^1 (Ψ(ν)/(μ + ν)) H(ν) dν, (1.1)

where Ψ is the characteristic function. The Chandrasekhar H-function plays a crucial role in radiative transfer and transport theory [3, 4]. Since then, there have been diverse solvers of (1.1). It is well known that the numerical solution of the Chandrasekhar integral equation is difficult to obtain [5], and thus it is convenient to have a reliable and efficient solver. The problem of finding approximate solutions of such integral equations is still popular today, and various methods for solving them have been established [5-7]. The common approach for the approximate solution of (1.1) is to first discretize (1.1) by a vector x ∈ R^n, replacing the integrals by quadrature sums and the derivatives by difference quotients involving only the components of x ∈ R^n (see [8], e.g.).
By doing so, (1.1) becomes the problem of finding the solution of a system of n nonlinear equations in n unknowns,

F(x) = 0, (1.2)

where F : R^n → R^n is a nonlinear mapping, often assumed to be continuously differentiable in a neighborhood of a solution x*. The most famous iterative method for solving (1.2) is the classical Newton's method, whose iteration is given by

x_{k+1} = x_k − F'(x_k)^{-1} F(x_k), k = 0, 1, 2, …. (1.3)

The convergence rate of Newton's method is quadratic from any initial point x_0 in a neighborhood of x* [9]. However, an iteration of (1.3) tends to be expensive, because it requires computing and storing the Jacobian matrix, as well as solving the Newton system, a linear system, at each iteration. The major difficulty of Newton-type methods is the matrix storage requirement, especially when handling large systems of nonlinear equations [5, 6, 9]. Quite a number of revised Newton-type methods have been introduced, including the fixed Newton and quasi-Newton methods, to diminish these weaknesses of (1.3). The fixed Newton method [10] for finding the solution x* is given by

x_{k+1} = x_k − F'(x_0)^{-1} F(x_k), k = 0, 1, 2, …. (1.4)

The method avoids computing and storing the Jacobian at every iteration except at k = 0. However, it still requires solving a system of n linear equations and may consume more CPU time as the system's dimension increases [10].
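To illustrate the trade-off discussed above, the following sketch contrasts the classical Newton iteration (1.3) with the fixed Newton iteration (1.4) on a small hypothetical 2-by-2 system (an illustrative example of ours, not one of the paper's benchmarks):

```python
import numpy as np

def F(x):
    # A small illustrative nonlinear system F(x) = 0 (not the H-equation).
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] * x[1] - 1.0])

def J(x):
    # Analytic Jacobian of F.
    return np.array([[2 * x[0], 2 * x[1]],
                     [x[1],     x[0]]])

def newton(x0, tol=1e-10, maxit=50):
    # Classical Newton (1.3): a fresh Jacobian solve at every iteration.
    x = x0.copy()
    for k in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        x = x - np.linalg.solve(J(x), Fx)
    return x, maxit

def fixed_newton(x0, tol=1e-10, maxit=200):
    # Fixed Newton (1.4): the Jacobian is computed only at k = 0, but a
    # linear system of size n must still be solved at every iteration.
    x = x0.copy()
    J0 = J(x0)
    for k in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            return x, k
        x = x - np.linalg.solve(J0, Fx)
    return x, maxit

x_newton, it_newton = newton(np.array([2.0, 1.0]))
x_fixed, it_fixed = fixed_newton(np.array([2.0, 1.0]))
```

As expected, the fixed Newton variant trades Jacobian evaluations for a slower (linear) convergence rate, so it typically needs more iterations than classical Newton from the same starting point.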
A quasi-Newton method is another variant of Newton-type methods; it replaces the Jacobian or its inverse with an approximation that can be updated at each iteration [11], and its updating scheme is given by

x_{k+1} = x_k − B_k^{-1} F(x_k), k = 0, 1, 2, …, (1.5)

where the matrix B_k is an approximation of the Jacobian at x_k. The main idea behind quasi-Newton methods is to eliminate the cost of evaluating the Jacobian matrix; when function evaluations are very expensive, the cost of finding a solution by quasi-Newton methods can be much smaller than for some other Newton-like methods [7, 12, 13]. Various Jacobian approximation matrices, such as that of Broyden's method [11, 14], have been proposed. However, the critical drawback of such solvers is that they need to store the full matrix of the approximate Jacobian, which becomes a very expensive task as the dimension of the system increases [15]. In this paper, we propose an alternative approximation of the Jacobian inverse by a diagonal matrix by means of variational techniques. It is worth mentioning that the suggested method can be applied to solve (1.2) without the cost of computing or storing the true Jacobian. Hence, it reduces the computational cost, the storage requirement, and the processing (CPU) time, and it eliminates the need to solve n linear equations at each iteration. The proposed method works efficiently, and the results so far are very encouraging. This paper is arranged as follows: we present the Chandrasekhar H-equation in Section 2 and derive our proposed method in Section 3; convergence results are given in Section 4; numerical results are reported in Section 5; finally, a conclusion is given in Section 6.
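Broyden's rank-one update mentioned above can be sketched in a few lines; the example below uses hypothetical data for the step and yield vectors, and it illustrates why full-matrix quasi-Newton methods need O(n^2) storage, since the entire n-by-n matrix B_k must be kept:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = np.eye(n)                  # current Jacobian approximation B_k
s = rng.standard_normal(n)     # step s_k = x_{k+1} - x_k (hypothetical)
y = rng.standard_normal(n)     # yield y_k = F(x_{k+1}) - F(x_k) (hypothetical)

# Broyden's rank-one update: the smallest change to B_k (in the Frobenius
# norm) that makes the new matrix satisfy the secant equation B_{k+1} s = y.
B_next = B + np.outer(y - B @ s, s) / (s @ s)
```

The update is cheap per iteration, but `B_next` is a dense n-by-n array, which is precisely the storage burden the diagonal approach of this paper avoids.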

Chandrasekhar H-Equation
In this section, we present the detailed process of discretizing the Chandrasekhar-type integral equation in the radiative transfer problem. Chandrasekhar and Breen [16] compute the H-equation as the solution of the nonlinear integral equation

H(x) = 1 + (c/2) H(x) ∫_0^1 (x/(x + t)) H(t) dt, (2.1)

where c ∈ (0, 1) and H : [0, 1] → R is an unknown continuous function. After performing some algebra on (2.1), we arrive at (2.4), which is known as the Chandrasekhar H-equation [15]:

F(H)(x) = H(x) − (1 − (c/2) ∫_0^1 (x/(x + t)) H(t) dt)^{-1} = 0. (2.4)

If (2.4) is discretized by using the midpoint quadrature formula with nodes t_j = (j − 0.5)h, j = 1, 2, …, n, h = 1/n, and c ∈ (0, 1), then we have the following:

F_i(x) = x_i − (1 − (c/2n) Σ_{j=1}^n (t_i x_j)/(t_i + t_j))^{-1} = 0, i = 1, 2, …, n. (2.6)

System (2.6) is called the discretized Chandrasekhar H-equation, which can be solved by some iterative methods. Nevertheless, the main difficulty in solving (2.6) grows dramatically as c approaches 1, since the Jacobian is singular at c = 1. Due to this disadvantage, we aim to derive a method that hopefully will not be affected by this difficulty.
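For concreteness, the discretized residual (2.6) can be sketched as follows; this is a minimal sketch assuming the midpoint-rule form with nodes t_i = (i − 0.5)/n, and the function name is ours:

```python
import numpy as np

def h_equation_residual(x, c):
    # Discretized Chandrasekhar H-equation (2.6), midpoint rule:
    # F_i(x) = x_i - (1 - (c / 2n) * sum_j t_i x_j / (t_i + t_j))^(-1),
    # with nodes t_i = (i - 0.5) / n.
    n = x.size
    t = (np.arange(1, n + 1) - 0.5) / n
    # K[i, j] = t_i / (t_i + t_j)
    K = t[:, None] / (t[:, None] + t[None, :])
    return x - 1.0 / (1.0 - (c / (2 * n)) * (K @ x))

x0 = np.ones(8)
r = h_equation_residual(x0, c=0.9)
```

A quick sanity check: for c = 0 the quadrature sum vanishes, so the constant vector of ones solves the system exactly and the residual is zero.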

Derivation of the Method (LMSI)
Firstly, note that by the mean value theorem we have

F(x_{k+1}) − F(x_k) = (∫_0^1 F'(x_k + τ(x_{k+1} − x_k)) dτ)(x_{k+1} − x_k), (3.1)

which leads to the approximation

F'(x_{k+1}) Δx_k ≈ ΔF_k, (3.2)

where Δx_k = x_{k+1} − x_k and ΔF_k = F(x_{k+1}) − F(x_k). Equation (3.2) is usually regarded as the secant equation. Alternatively, we can rearrange (3.2) to obtain

Δx_k ≈ (F'(x_{k+1}))^{-1} ΔF_k. (3.3)

Here, we propose to use a diagonal matrix, say D, to approximate (F'(x_{k+1}))^{-1}. Let us consider an updating scheme for D in which we update D by adding a correction M_k, which is also a diagonal matrix, at every iteration:

D_{k+1} = D_k + M_k. (3.5)

In order to incorporate correct information about the Jacobian inverse into the updated matrix D_{k+1}, we require that D_{k+1} satisfy the secant equation (3.2) in its inverse form, that is,

D_{k+1} ΔF_k = Δx_k. (3.6)

However, since it is in general impossible for a diagonal matrix to satisfy the secant equation, in particular because Jacobian approximations are not usually done elementwise, we consider the weak secant condition [17] instead:

ΔF_k^T D_{k+1} ΔF_k = ΔF_k^T Δx_k. (3.7)

To encourage a good condition number as well as numerical stability in the approximation, we attempt to control the growth of the error in the correction by minimizing its magnitude under some norm (here, we consider the Frobenius norm), such that (3.7) holds. To this end, we consider the following problem:

min (1/2)‖M_k‖_F^2 subject to ΔF_k^T (D_k + M_k) ΔF_k = ΔF_k^T Δx_k. (3.8)

Writing M_k = diag(β_1, β_2, …, β_n) and denoting by ΔF_k^{(i)} the ith component of ΔF_k, the above problem can be expressed as follows:

min (1/2) Σ_{i=1}^n β_i^2 subject to Σ_{i=1}^n β_i (ΔF_k^{(i)})^2 = ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k. (3.9)

Since the objective function is strictly convex and the constraint is linear, problem (3.9) has a unique solution. The solution can be obtained by considering the Lagrangian function of problem (3.9),

L(β, λ) = (1/2) Σ_{i=1}^n β_i^2 + λ(ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k − Σ_{i=1}^n β_i (ΔF_k^{(i)})^2), (3.10)

where λ is the corresponding Lagrange multiplier.
Taking the partial derivatives of (3.10) with respect to each β_i and λ, respectively, and setting them equal to zero, we have

β_i = λ(ΔF_k^{(i)})^2, i = 1, 2, …, n. (3.14)
Invoking the constraint in (3.9), we have

Σ_{i=1}^n β_i (ΔF_k^{(i)})^2 = ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k. (3.15)

Equating (3.14) with (3.15) gives

λ = (ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k) / (Σ_{i=1}^n (ΔF_k^{(i)})^4). (3.17)
Denoting G_k = diag((ΔF_k^{(1)})^2, (ΔF_k^{(2)})^2, …, (ΔF_k^{(n)})^2), so that Σ_{i=1}^n (ΔF_k^{(i)})^4 = Tr(G_k^2), where Tr is the trace operation, we obtain, therefore,

β_i = ((ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k) / Tr(G_k^2)) (ΔF_k^{(i)})^2. (3.18)
Finally, the proposed updating formula for the approximation of the Jacobian inverse is given as follows:

D_{k+1} = D_k + ((ΔF_k^T Δx_k − ΔF_k^T D_k ΔF_k) / Tr(G_k^2)) G_k. (3.19)
To safeguard against possibly very small ‖ΔF_k‖ and Tr(G_k^2), we require that ‖ΔF_k‖ ≥ ε_1 for some chosen small ε_1 > 0; otherwise, we skip the update by setting D_{k+1} = D_k. Now, we can describe the algorithm for our proposed method, LMSI, as follows.
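A quick numeric check (with hypothetical data) confirms that the update (3.19) indeed enforces the weak secant condition (3.7):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
D = np.ones(n)                 # diagonal of the current approximation D_k
dx = rng.standard_normal(n)    # Delta x_k (hypothetical step)
dF = rng.standard_normal(n)    # Delta F_k (hypothetical function change)

# Update (3.19): D_{k+1} = D_k + ((dF.dx - dF^T D_k dF) / Tr(G_k^2)) G_k,
# where G_k = diag(dF_1^2, ..., dF_n^2), stored here as a vector.
G = dF**2
D_next = D + (dF @ dx - dF @ (D * dF)) / np.sum(G**2) * G

# Weak secant condition (3.7): dF^T D_{k+1} dF should equal dF^T dx.
weak_secant_lhs = dF @ (D_next * dF)
weak_secant_rhs = dF @ dx
```

The condition holds exactly by construction, since the correction is chosen so that the constraint in (3.9) is satisfied.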

Algorithm LMSI
Steps are the following.
Step 1. Choose an initial point x_0 and D_0 = I; set k := 0.
Step 2. Compute F(x_k) and x_{k+1} = x_k − D_k F(x_k).
Step 3. If the stopping criterion is satisfied, stop. Otherwise, if ‖ΔF_k‖ ≥ ε_1, compute D_{k+1} via (3.19); else set D_{k+1} = D_k.
Step 4. Set k := k + 1 and go to Step 2.
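The steps above can be sketched end to end as follows. This is a minimal sketch, assuming the common midpoint-rule form of the discretized H-equation (2.6); the tolerance and safeguard values are illustrative choices of ours, not the paper's settings:

```python
import numpy as np

def h_equation_residual(x, c):
    # Discretized Chandrasekhar H-equation (2.6) with midpoint nodes.
    n = x.size
    t = (np.arange(1, n + 1) - 0.5) / n
    K = t[:, None] / (t[:, None] + t[None, :])
    return x - 1.0 / (1.0 - (c / (2 * n)) * (K @ x))

def lmsi(F, x0, tol=1e-8, eps1=1e-8, maxit=500):
    # LMSI: only the diagonal D of the Jacobian-inverse approximation is
    # stored (as a vector), so memory grows as O(n) rather than O(n^2).
    x = x0.copy()
    D = np.ones_like(x)                      # Step 1: D_0 = I
    Fx = F(x)
    for k in range(maxit):
        if np.linalg.norm(Fx) <= tol:        # Step 3: stopping criterion
            return x, k
        x_new = x - D * Fx                   # Step 2: x_{k+1} = x_k - D_k F(x_k)
        F_new = F(x_new)
        dx, dF = x_new - x, F_new - Fx
        # Update D via (3.19); skip the update when Delta F_k is too small.
        if np.linalg.norm(dF) >= eps1:
            G = dF**2
            D = D + (dF @ dx - dF @ (D * dF)) / np.sum(G**2) * G
        x, Fx = x_new, F_new
    return x, maxit

c = 0.9
sol, iters = lmsi(lambda x: h_equation_residual(x, c), np.ones(100))
```

No Jacobian is ever formed: each iteration costs one function evaluation plus O(n) vector arithmetic for the diagonal update.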

Local Convergence Results
In this section, we give some convergence properties of the LMSI method. Before we proceed further, we make the following standard assumptions on the nonlinear system F.
Assumption 4.1. We have the following.
(i) F is continuously differentiable in an open convex set E ⊂ R^n.
(ii) There exists x* ∈ E such that F(x*) = 0, and F'(x) is continuous for all x ∈ E.
(iii) F'(x) satisfies a Lipschitz condition of order one; that is, there exists a positive constant μ such that ‖F'(x) − F'(y)‖ ≤ μ‖x − y‖ for all x, y ∈ E.
(iv) There exist constants c_1 ≤ c_2 such that c_1‖ω‖^2 ≤ ω^T F'(x) ω ≤ c_2‖ω‖^2 for all x ∈ E and ω ∈ R^n.
We will also need the following result, which is a special case of a more general theorem of [15].
Theorem 4.2. Assume that Assumption 4.1 holds. If there exist K_B > 0, δ > 0, and δ_1 > 0 such that, for x_0 ∈ B_δ(x*), the matrix-valued function B(x) satisfies ‖B(x)‖ ≤ K_B and ‖I − B(x) F'(x*)‖ = ρ(x) < δ_1 < 1 for all x ∈ B_δ(x*), then the iteration x_{k+1} = x_k − B(x_k) F(x_k) is well defined and converges linearly to x*. For the proof of Theorem 4.2, see [15].
Using Assumption 4.1 and Theorem 4.2, one has the following result.
For k = 0 and assuming D_0 = I, we consider ‖I − D_0 F'(x*)‖_F. From Assumption 4.1(iv) and D_0 = I, this quantity can be bounded. Hence, setting α = n^{3/2}|c − 1| and using the fact that ‖D_0‖_F = √n, it follows that ‖I − D_0 F'(x*)‖_F ≤ β, where β = √n α > 0. Therefore, if we assume that ‖I − D_0 F'(x*)‖_F < δ_1, then the condition of Theorem 4.2 is satisfied, and the LMSI iteration converges linearly to x*.

Numerical Results
In this section, we compare the performance of the LMSI method with that of Newton's method (NM), the fixed Newton method (FN), and Broyden's method (BM). We apply the algorithms to well-known benchmark integral equations involved in radiative transfer. The comparison is based upon the following criteria: number of iterations, CPU time in seconds, and storage requirement. The computations are done in MATLAB 7.0 in double precision. The stopping criterion used is

‖F(x_k)‖ ≤ ε (5.1)

for a small tolerance ε > 0. The starting point x_0 is given by (1, 1, …, 1)^T. The symbol "−" is used to indicate a failure due to one of the following: (1) the number of iterations reaches 200 without a point x_k satisfying (5.1); (2) the CPU time reaches 200 seconds; (3) there is insufficient memory to initiate the run.
The numerical results of the methods when solving the Chandrasekhar H-equation for different values of the parameter c are reported in Table 1. The first column of the table contains the parameter of the problem. Generally, with our choice of c, the corresponding Jacobian is not diagonally dominant; moreover, as c → 1 the Jacobian becomes nearly singular. Table 1 shows that only LMSI is able to solve the problems with n > 2000. This is due to the fact that LMSI has a very low storage requirement for building the approximation of the Jacobian inverse: the size of the updating matrix increases as O(n) with the dimension of the system, as opposed to the NM, FN, and BM methods, whose storage grows as O(n^2).
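The O(n) versus O(n^2) storage gap can be made concrete with a back-of-the-envelope check in double precision (n = 2000 chosen to match the table's threshold):

```python
import numpy as np

n = 2000
full_jacobian_bytes = np.zeros((n, n)).nbytes  # what NM, FN, and BM must store
diagonal_bytes = np.zeros(n).nbytes            # what LMSI stores

# In double precision the full Jacobian needs 8 * n^2 bytes (32 MB at
# n = 2000), while the diagonal approximation needs only 8 * n bytes (16 kB).
```

The ratio between the two footprints is exactly n, so every doubling of the problem size doubles the advantage of the diagonal scheme.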
Moreover, we observe that the LMSI method has a 100% success rate of convergence to the solution, compared with 57% for the NM method, 39% for the FN method, and 71% for BM. In addition, it is worth mentioning that the result of LMSI in solving problem 1 with c = 0.9999 shows that the method can be a good solver even when the Jacobian is nearly singular. Figures 1, 2, and 3 reveal that the CPU time of the LMSI method increases roughly linearly with the dimension of the system, whereas for NM, FN, and BM the growth is much faster. This also suggests that our solver is a good alternative when the dimension of the problem is very high.

Conclusion
In this paper, we present a low memory solver for integral equations of Chandrasekhar type in radiative transfer problems. Our approach is based on approximating the Jacobian inverse by a diagonal matrix. The fact that the LMSI method can solve the discretized integral equations without computing and storing the Jacobian is a clear advantage over the NM and FN methods. It is also worth mentioning that the method significantly reduces the execution (CPU) time compared to the NM, FN, and BM methods, while maintaining good accuracy of the numerical solution to some extent. Another fact that makes the LMSI method appealing is that, throughout the numerical experiments, it never failed to converge. Finally, we conclude that our LMSI method is a good alternative to Newton-type methods for solving large-scale nonlinear equations with a nearly singular Jacobian.

Figure 1: Comparison of the NM, FN, BM, and LMSI methods when c = 0.9, in terms of CPU time.

Figure 2: Comparison of the NM, FN, BM, and LMSI methods when c = 0.99, in terms of CPU time.

Figure 3: Comparison of the NM, FN, BM, and LMSI methods when c = 0.9999, in terms of CPU time.

Table 1: Results for the Chandrasekhar H-equation (number of iterations/CPU time).