A modified classical method for preliminary orbit determination is presented. In our proposal, the admissible spread of the observations is considerably wider than in the original method, and the order of convergence of the iterative scheme involved is higher. The numerical approach is based on matricial weight functions, which lead us to a class of iterative methods with sixth local order of convergence. This is a technique widely used in the design of iterative methods for solving nonlinear scalar equations, but rarely employed in the vectorial case. The numerical tests confirm the theoretical results, and the analysis of the dynamics of the problem shows the stability of the proposed schemes.
1. Introduction
The analysis of linear systems has a well-developed mathematical and computational theory. However, many applied problems in science and engineering are nonlinear. The nonlinear case is considerably harder than the linear one, and the estimation of its solutions requires a numerical treatment.
While computational engineering has achieved significant maturity, computational costs can be extremely large when high accuracy simulations are required. The development of a practical high-order solution method could diminish this problem by significantly decreasing the computational time required to achieve an acceptable error level (see, for instance, [1]).
The existence of an extensive literature on higher-order methods reveals that they are limited only by the nature of the problem to be solved: in particular, the numerical solution of nonlinear equations and systems is needed in the study of dynamical models of chemical reactors [2] or in radiative transfer [3]. Moreover, many numerical applications require high precision in their computations; in [4], high-precision calculations are used to solve interpolation problems in astronomy, and in [5], the authors describe the use of arbitrary-precision computations to improve the results obtained in climate simulations. The results of these numerical experiments show that high-order methods combined with multiprecision floating-point arithmetic are very useful, since they yield a clear reduction in the number of iterations. A motivation for arbitrary precision in interval methods can be found in [6], in particular for the calculation of zeros of nonlinear functions.
In recent decades, many researchers have proposed different iterative methods to improve Newton's, which is still the most widely used scheme in practice. These variants of Newton's method have been designed by means of different techniques, providing in most cases multistep schemes. Some of them are extensions of one-dimensional schemes (see, e.g., [7, 8]), and others come from Adomian decomposition (see, e.g., [9, 10]); among the latter are the methods proposed by Darvishi and Barati in [11, 12], with supercubic convergence, and the schemes proposed by Cordero et al. in [13], with orders of convergence 4 and 5. Another procedure to develop iterative methods for nonlinear systems is the replacement of the second derivative in Chebyshev-type methods by some approximation. In [14], Traub presented a family of multipoint methods based on approximating the second derivative that appears in the iterative formula of Chebyshev's scheme, and Babajee et al. in [15] designed two Chebyshev-like methods free from second derivatives.
A common way to generate new schemes is the direct composition of known methods with a later treatment to reduce the number of functional evaluations (see, e.g., [16–19]). A variant of this technique is the so-called pseudocomposition, introduced in [20, 21]. Let us note that if the initial approximation or any of the successive estimations makes the Jacobian matrix almost singular, convergence is not guaranteed. In some of these cases, the problem can be avoided by using some kind of pseudoinverse to solve the linear system involved at each step of the iterative process (see, for instance, [22, 23]).
Recently, for n = 1, the weight-function procedure has been used to increase the order of convergence of known methods [7]. This technique can also be used, with some restrictions, in the development of high-order iterative methods for systems: see, for example, the papers of Sharma et al. [24, 25] and Abad et al. [26], where the authors apply the designed method to the improvement of Global Positioning System software.
1.1. Preliminary Orbit Determination
A classical reference in preliminary orbit determination is C. F. Gauss (1777–1855), who deduced the orbit of the minor planet Ceres, discovered in 1801 and lost shortly afterwards. The successful recovery of its trajectory by means of the procedure he designed brought Gauss and his work international recognition.
The first step in orbit determination methods is to obtain preliminary orbits, since the motion analyzed is under the premises of the two-body problem. It is possible to set a two-dimensional coordinate system (see Figure 1), where the X axis points to the perigee of the orbit, the closest point of the elliptical orbit to the focus and center of the system, the Earth. In this picture, the true anomaly ν and the eccentric anomaly E can be observed. In order to place this orbit in the celestial sphere and completely determine the position of a body in the orbit, some elements (called orbital or Keplerian elements) must be determined. These orbital elements are as follows.
Ω (right ascension of the ascending node): defined as the equatorial angle between the Vernal point γ and the ascending node N; it orients the orbit in the equatorial plane.
ω (argument of the perigee): defined as the angle of the orbital plane, centered at the focus, between the ascending node N and the perigee of the orbit; it orients the orbit in its plane.
i (inclination): dihedral angle between the equatorial and the orbital planes.
a (semimajor axis): which sets the size of the orbit.
e (eccentricity): which gives the shape of the ellipse.
T0 (perigee epoch): the time of passage of the object over the perigee, which fixes a reference origin in time. It can be denoted by an exact date, in Julian days, or by the elapsed time since the object last passed over the perigee.
Figure 1. Size, shape, and anomalies in the orbital-plane two-dimensional coordinate system.
The so-called Gauss method is based on the ratio y between the areas of the triangle and the ellipse sector defined by two position vectors, r→1 and r→2, obtained from astronomical observations. This proportion is related to the geometry of the orbit and the observed positions by
(1) y = 1 + X(l + x),
where
(2) l = (r1 + r2) / (4√(r1 r2) cos((ν2 − ν1)/2)) − 1/2,
    x = sin²((E2 − E1)/4),
    X = (E2 − E1 − sin(E2 − E1)) / sin³((E2 − E1)/2).
The angles Ei, νi, i = 1, 2, are the eccentric and true anomalies, respectively, associated with the observed positions r→1 and r→2 (we denote by ri the modulus of vector r→i, i = 1, 2).
Equation (1) is, actually, the composition of the first Gauss equation
(3) y² = m / (l + x)
and the second Gauss equation
(4) y²(y − 1) = mX,
where m = μτ² / [2√(r1 r2) cos((ν2 − ν1)/2)]³, μ is the gravitational parameter of the motion, and τ is a modified time variable.
The original scheme used by Gauss (see [27]) was based on applying the fixed-point method to the unified equation (1). Using the initial estimation y0 = 1, the classical Gauss procedure manages to compute the orbital elements only if the spread of the true anomalies corresponding to the observed positions is lower than π/4. Our aim is to widen the admissible range of true anomalies up to π by solving the same problem as the nonlinear system formed by (3) and (4), whose unknowns are y and E2 − E1.
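As an illustration of this reformulation (a sketch of ours, not the implementation used in the paper), the system formed by (3) and (4) can be solved for the unknowns y and ΔE = E2 − E1 with a plain Newton iteration and a forward-difference Jacobian; the geometric constants l and m are assumed to be precomputed from the observations:

```python
import math

def gauss_system(v, l, m):
    # Residuals of the first and second Gauss equations:
    #   y^2 (l + x) - m = 0  and  y^2 (y - 1) - m X = 0,
    # with x = sin^2(dE/4), X = (dE - sin dE)/sin^3(dE/2), dE = E2 - E1.
    y, dE = v
    x = math.sin(dE / 4.0) ** 2
    X = (dE - math.sin(dE)) / math.sin(dE / 2.0) ** 3
    return [y * y * (l + x) - m, y * y * (y - 1.0) - m * X]

def newton2d(F, v0, args=(), tol=1e-12, itmax=50):
    # Newton's method in R^2 with a forward-difference Jacobian,
    # solving the 2x2 linear system by Cramer's rule.
    v = list(v0)
    h = 1e-8
    for _ in range(itmax):
        f = F(v, *args)
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            vp = list(v)
            vp[j] += h
            fp = F(vp, *args)
            J[0][j] = (fp[0] - f[0]) / h
            J[1][j] = (fp[1] - f[1]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dv0 = (-f[0] * J[1][1] + f[1] * J[0][1]) / det
        dv1 = (-f[1] * J[0][0] + f[0] * J[1][0]) / det
        v = [v[0] + dv0, v[1] + dv1]
        if abs(dv0) + abs(dv1) < tol:
            break
    return v
```

For instance, choosing l and m consistently with a prescribed pair (y, ΔE), the iteration recovers that pair from the initial estimation (1, 0.1).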
In the second section of this paper, we present a class of sixth-order Newton-type methods obtained by using the composition technique and matricial weight functions. The convergence results for this family have been obtained by means of the n-dimensional Taylor expansion of the functions involved, using the notation introduced in [28]. Section 3 is devoted to analyzing the efficiency of the proposed methods applied to preliminary orbit determination and other nonlinear problems, compared with the classical Newton, Traub [14], and Jarratt [29] methods. In Section 4, the preliminary orbit determination problem is revisited in order to analyze the stability of the mentioned schemes by means of dynamical planes, comparing the sets of starting points that lead each of the methods to converge to the solution. We finish the paper with some conclusions and the references used in it.
2. The New Family and Its Convergence
In this section, we present our three-step iterative methods, designed from Newton's scheme by composing it with itself twice, once with a "frozen" function and once with a "frozen" Jacobian matrix. By using matricial weight functions in the second and third steps, we prove that the methods of the proposed family have order of convergence six, under certain conditions on the function F and on the weight functions.
Let us consider the following three-step iterative method which makes use of the weight functions:
(5) y(k) = x(k) − [F′(x(k))]⁻¹F(x(k)),
    z(k) = y(k) − H(μ(k))[F′(y(k))]⁻¹F(x(k)),
    x(k+1) = z(k) − G(μ(k))[F′(y(k))]⁻¹F(z(k)),
where μ(k)=[F′(y(k))]-1F′(x(k)) and H,G:ℝn×n→ℝn×n are matricial functions.
Theorem 1.
Let α ∈ D be a zero of a sufficiently differentiable function F: D ⊆ ℝn → ℝn in a convex set D, with nonsingular Jacobian at α. Let H and G be matricial functions satisfying H(I) = 0, H′(I) = (1/2)I, H′′(I) = 0 and G(I) = I, G′(I) = 0, G′′(I) = I, I being the identity matrix. Then, the scheme defined in (5) has sixth order of convergence, and its error equation is given by
(6) e(k+1) = [−(3/2)C3C2C3 + (1/4)C2C3² + 6C3C2³ − C2C3C2² + C2³C3 − 4C2⁵]e(k)⁶ + O(e(k)⁷),
where Ck=(1/k!)[F′(α)]-1F(k)(α), k=2,3,4,…, and e(k)=x(k)-α.
Proof.
If we use Taylor expansion of F(x(k)) and F′(x(k)) around α, we obtain
(7) F(x(k)) = F′(α){e(k) + C2e(k)² + C3e(k)³ + C4e(k)⁴ + C5e(k)⁵ + C6e(k)⁶} + O(e(k)⁷),
    F′(x(k)) = F′(α){I + 2C2e(k) + 3C3e(k)² + 4C4e(k)³ + 5C5e(k)⁴ + 6C6e(k)⁵} + O(e(k)⁶).
From (7), we calculate the expression of the inverse
(8) [F′(x(k))]⁻¹ = {I + X2e(k) + X3e(k)² + X4e(k)³ + X5e(k)⁴ + X6e(k)⁵}[F′(α)]⁻¹ + O(e(k)⁶),
where X2 = −2C2, X3 = 4C2² − 3C3, X4 = 6C2C3 + 6C3C2 − 8C2³ − 4C4, X5 = −5C5 + 8C2C4 + 8C4C2 − 12C2²C3 − 12C3C2² − 12C2C3C2 + 9C3² + 16C2⁴, and X6 = −6C6 + 10C2C5 + 10C5C2 − 16C2²C4 − 16C4C2² − 16C2C4C2 + 12C3C4 + 12C4C3 − 18C3C2C3 − 18C2C3² − 18C3²C2 + 24C2³C3 + 24C2²C3C2 + 24C2C3C2² + 24C3C2³ − 32C2⁵.
These values have been obtained by imposing the conditions
(9) [F′(x(k))]⁻¹F′(x(k)) = F′(x(k))[F′(x(k))]⁻¹ = I.
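A quick scalar sanity check of the first coefficients in (8): for f(x) = eˣ − 1 and α = 0 we have Ck = 1/k! and [f′(x)]⁻¹ = e⁻ˣ, whose first Taylor coefficients are 1, −1, 1/2, −1/6, matching X2, X3, X4 (the Ck commute in the scalar case):

```python
from math import factorial

# Scalar-case check of X2, X3, X4 in (8) with f(x) = exp(x) - 1, alpha = 0,
# so C_k = 1/k! and 1/f'(x) = exp(-x) = 1 - x + x^2/2 - x^3/6 + ...
C = {k: 1.0 / factorial(k) for k in (2, 3, 4)}
X2 = -2 * C[2]
X3 = 4 * C[2] ** 2 - 3 * C[3]
X4 = 6 * C[2] * C[3] + 6 * C[3] * C[2] - 8 * C[2] ** 3 - 4 * C[4]
```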
Then, the error expression in the first step of the method is
(10) y(k) − α = C2e(k)² + (−2C2² + 2C3)e(k)³ + (3C4 − 4C2C3 − 3C3C2 + 4C2³)e(k)⁴ + (4C5 − 6C2C4 − 4C4C2 + 8C2²C3 + 6C3C2² + 6C2C3C2 − 6C3² − 8C2⁴)e(k)⁵ + O(e(k)⁶).
Furthermore, we know that
(11) F(y(k)) = F′(α){(y(k) − α) + C2(y(k) − α)² + C3(y(k) − α)³ + C4(y(k) − α)⁴} + O((y(k) − α)⁵)
and if we replace in (11) the powers of (y(k)-α), we obtain after some operations
(12) F(y(k)) = F′(α){C2e(k)² + 2(−C2² + C3)e(k)³ + (3C4 − 4C2C3 − 3C3C2 + 5C2³)e(k)⁴ + (4C5 − 6C2C4 − 4C4C2 + 8C2²C3 + 6C3C2² + 6C2C3C2 + 2C2²C3 + 2C2C3C2 − 6C3² − 12C2⁴)e(k)⁵} + O(e(k)⁶)
and also
(13) F′(y(k)) = F′(α){I + 2C2²e(k)² + 4(C2C3 − C2³)e(k)³ + (6C2C4 − 8C2²C3 − 6C2C3C2 + 8C2⁴ + 3C3C2²)e(k)⁴ + (8C2C5 − 12C2²C4 − 8C2C4C2 + 16C2³C3 + 12C2C3C2² + 12C2²C3C2 − 12C3C2³ − 12C2C3² + 6C3C2C3 + 6C3²C2 − 16C2⁵)e(k)⁵} + O(e(k)⁶).
In a similar way as in (8),
(14) [F′(y(k))]⁻¹ = {I + Y2e(k) + Y3e(k)² + Y4e(k)³ + Y5e(k)⁴ + Y6e(k)⁵}[F′(α)]⁻¹ + O(e(k)⁶),
where Y2 = 0, Y3 = −2C2², Y4 = 4(C2³ − C2C3), Y5 = −6C2C4 + 8C2²C3 + 6C2C3C2 − 4C2⁴ − 3C3C2², and Y6 = −8C2C5 + 12C2²C4 + 8C2C4C2 − 16C2³C3 − 12C2C3C2² − 12C2²C3C2 + 12C3C2³ + 8C2³C3 + 8C2C3C2² + 12C2C3² − 6C3C2C3 − 6C3²C2.
So,
(15) [F′(y(k))]⁻¹F(x(k)) = e(k) + C2e(k)² + (−2C2² + C3)e(k)³ + (C4 + 2C2³ − 4C2C3)e(k)⁴ + (C5 + 6C2²C3 + 2C2C3C2 − 3C3C2² − 6C2C4)e(k)⁵ + O(e(k)⁶).
On the other hand, we get the Taylor expansion of μ(k) by using (7) and (15):
(16) μ(k) = [F′(y(k))]⁻¹F′(x(k)) = I + 2C2e(k) + (−2C2² + 3C3)e(k)² + 4(C4 − C2C3)e(k)³ + (5C5 + 2C2²C3 − 2C2C3C2 − 3C3C2² + 4C2⁴ − 6C2C4)e(k)⁴ + (6C6 + 4C2²C4 − 4C2C4C2 + 4C2³C3 + 4C2²C3C2 + 8C2C3C2² + 6C3C2³ − 6C3C2C3 − 6C3²C2 − 8C2⁵ − 8C2C5)e(k)⁵ + O(e(k)⁶).
Let us note that μ(k) tends to the identity matrix when x(k) and y(k) tend to α; therefore, the second-order polynomial approximation of the weight function H(μ(k)) is
(17) H(μ(k)) ≈ H(I) + H′(I)(μ(k) − I) + (1/2)H′′(I)(μ(k) − I)².
Let us consider now the following notation:
(18) H(I) = H0, H′(I) = H1, (1/2)H′′(I) = H2.
So,
(19) z(k) − α = y(k) − α − H(μ(k))[F′(y(k))]⁻¹F(x(k))
= −H0e(k) + (I − H0 − 2H1)C2e(k)² + [(−2I + 2H0 − 4H2)C2² + (2I − H0 − 3H1)C3]e(k)³ + [(3I − H0 − 4H1)C4 + (4I − 2H0 + 6H1 + 4H2)C2³ + (−4I + 4H0 + 2H1 − 6H2)C2C3 + (−3I − 3H1 − 6H2)C3C2]e(k)⁴ + [(4I − H0 − 5H1)C5 + (−6I + 6H0 + 4H1 − 8H2)C2C4 + (−4I − 4H1 − 8H2)C4C2 + (8I − 6H0 + 8H1 + 10H2)C2²C3 + (6I + 3H0 + 9H1)C3C2² + (6I − 2H0 + 6H1 + 2H2)C2C3C2 + (−6I − 3H1 − 9H2)C3² + (−8I − 12H1 + 12H2)C2⁴]e(k)⁵ + O(e(k)⁶).
Let us consider the truncated Taylor expansion of order two of the weight function G(μ(k)),
(21) G(μ(k)) ≈ G(I) + G′(I)(μ(k) − I) + (1/2)G′′(I)(μ(k) − I)²,
and let us denote by
(22) G(I) = G0, G′(I) = G1, (1/2)G′′(I) = G2.
Finally, the error equation is expressed as
(23) e(k+1) = H0(I − G0)e(k) + [(I − H0 − G0 + G0H0 + 2G1H0 − G0H0² − 2H1 + 2G0H1)C2]e(k)² + [(−2I + 2G0 − 2G1 + 2H0 − 2G0H0 + 4G2H0 − 2G0H0² − 2G1H0² + 4G1H1 − 4G0H0H1 − 4H2 + 4G0H2)C2² + (2I − H0 − 3H1 − 2G0 + G0H0 + 3G0H1 + G0H0³ + 3G1H0)C3]e(k)³ + O(e(k)⁴).
If we choose H0=0,H1=(1/2)I, and H2=0, we obtain
(24) e(k+1) = [(−2I + 2G0)C2² + ((1/2)I − (1/2)G0)C3]e(k)³ + O(e(k)⁴).
So, to increase the order of convergence up to four, the value of G0 must be G0=I, and then,
(25) e(k+1) = G1[−C3 + 4C2²]e(k)⁴ + O(e(k)⁵).
Moreover, in order to reach fifth order of convergence, G1 must be null. Therefore, the error equation is
(26) e(k+1) = [(I − 2G2)C2²C3 + (−4I + 8G2)C2⁴]e(k)⁵ + O(e(k)⁶).
Finally, if G2=(1/2)I, we have
(27) e(k+1) = [−(3/2)C3C2C3 + (1/4)C2C3² + 6C3C2³ − C2C3C2² + C2³C3 − 4C2⁵]e(k)⁶ + O(e(k)⁷),
and the theorem is proved.
From the previous theorem, (5) defines a family of sixth-order methods. We can find different weight functions satisfying the conditions of the theorem. Specifically, we propose the following examples for the next sections.
Example 2.
An element of our sixth-order family is given by the weight functions
(28) H(t) = (1/2)(t − I), G(t) = [I + t]⁻¹(2I − t + t²),
which is called NAJC1.
Example 3.
Another combination of weight functions that can be used is
(29) H(t) = (1/2)(t − I), G(t) = I + (1/2)(t − I)².
We will refer to this element of the class as NAJC2.
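For later experiments, one step of scheme (5) with the NAJC2 weights of (29) can be sketched in Python with NumPy (an illustrative implementation of ours, not the authors' code); F and J denote user-supplied callables returning F(x) and the Jacobian F′(x):

```python
import numpy as np

def najc2_step(F, J, x):
    # One iteration of scheme (5) with the NAJC2 weight functions (29):
    #   H(t) = (1/2)(t - I),  G(t) = I + (1/2)(t - I)^2.
    Fx, Jx = F(x), J(x)
    y = x - np.linalg.solve(Jx, Fx)              # Newton predictor
    Jy = J(y)
    mu = np.linalg.solve(Jy, Jx)                 # mu = [F'(y)]^{-1} F'(x)
    I = np.eye(x.size)
    H = 0.5 * (mu - I)
    G = I + 0.5 * (mu - I) @ (mu - I)
    z = y - H @ np.linalg.solve(Jy, Fx)          # second step
    return z - G @ np.linalg.solve(Jy, F(z))     # third step
```

Note that the "frozen" Jacobian F′(y(k)) is factorized once per iteration via `np.linalg.solve`, and reused in the second and third steps.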
3. Numerical Results
In this section, we analyze the computational efficiency of our methods and compare them with classical ones on the problem of preliminary orbit determination as well as on academic examples. The classical methods used are Newton's, Traub's, and Jarratt's, of convergence orders 2, 3, and 4, respectively, whose iterative expressions are
(30) x(k+1) = x(k) − [F′(x(k))]⁻¹F(x(k)),
(31) y(k) = x(k) − [F′(x(k))]⁻¹F(x(k)),
     x(k+1) = y(k) − [F′(x(k))]⁻¹F(y(k)),
(32) y(k) = x(k) − (2/3)[F′(x(k))]⁻¹F(x(k)),
     x(k+1) = x(k) − (1/2)[3F′(y(k)) − F′(x(k))]⁻¹[3F′(y(k)) + F′(x(k))][F′(x(k))]⁻¹F(x(k)).
In our tests, we have used the following numerical settings: variable-precision arithmetic with 250 digits in Mathematica 8.0; moreover, in each iterative method, we have used the stopping criterion ∥F(x(k+1))∥ + ∥x(k+1) − x(k)∥ < 10⁻¹⁰⁰, and the approximated computational order of convergence ρ (see [30]) has been calculated by using the formula
(33) p ≈ ρ = ln(∥x(k+1) − x(k)∥ / ∥x(k) − x(k−1)∥) / ln(∥x(k) − x(k−1)∥ / ∥x(k−1) − x(k−2)∥).
From this value, we have designed two practical indices to measure the computational efficiency: the approximated efficiency index
(34) Ĩ = ρ^(1/d),
and the approximated computational index
(35) Ĩc = ρ^(1/op),
where d and op are the number of functional evaluations and the number of operations (products and quotients) per iteration, respectively.
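As an illustrative sketch (not the paper's code), ρ in (33) can be computed from the last four iterates of a run:

```python
import math

def acoc(xs):
    # Approximated computational order of convergence, formula (33),
    # from the last four (here scalar) iterates of the sequence xs.
    d = [abs(b - a) for a, b in zip(xs[-4:-1], xs[-3:])]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```

Applied, for instance, to the Newton iterates for x² − 2 = 0, it returns a value close to the theoretical order 2.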
In Tables 3, 4, 5, 6, and 7, we show the number of iterations, the previously defined indices, and the absolute errors between the theoretical and computed values, which we denote by ε.
Two reference orbits have been used in the tests for preliminary orbit determination. The first can be found in [27], and the second one is a real commercial orbit called Tundra. As the orbital elements of each one are known, the vector positions (measured in Earth radii) at the instants t1 and t2 have been recalculated with 500 exact digits. These vector positions are
(36) r→1 ≈ [2.46080928705339, 2.04052290636432, 0.14381905768815],
     r→2 ≈ [1.98804155574820, 2.50333354505224, 0.31455350605251],
for Reference Orbit I and
(37) r→1 ≈ [−2.02862564034533, −0.74638890547506, −4.322222156844465],
     r→2 ≈ [4.24372000256074, −1.689387746496, 6.79724893784587],
for the Tundra orbit. Our aim is then to recover from these positions the orbital elements shown in Tables 1 and 2 with a precision as high as possible by means of the proposed iterative schemes.
Table 1. Orbital elements and temporal distance between r→1 and r→2. Reference Orbit I.
a = 4.0 e.r.
e = 0.2
i = 15°
Ω = 30°
ω = 10°
T = January 1, 1964, 0 h 0 min 0 s
Δt = 0.01044412 Julian days
Δν = 12.232°
Table 2. Orbital elements and temporal distance between r→1 and r→2. Tundra Orbit.
a = 6.62 e.r.
e = 0.27
i = 63.43°
Ω = 290.2°
ω = 270°
M = 144°
Δt = 0.399753 Julian days
Δν = 171°
Table 3. Results of Reference Orbit I.
Method | Iter | ρ      | Ĩ      | Ĩc     | εa          | εe          | εi          | εω          | εΩ | Time (s)
N      | 7    | 1.9999 | 1.1112 | 1.1112 | 3.2757e-109 | 4.8982e-110 | 7.3653e-109 | 2.6237e-108 | 0  | 0.005049
T      | 5    | 2.9995 | 1.1396 | 1.1102 | 2.0466e-120 | 3.0603e-121 | 4.6017e-120 | 1.6393e-119 | 0  | 0.005463
J      | 4    | 4.0000 | 1.1264 | 1.0771 | 4.8431e-200 | 8.8034e-201 | 3.9324e-200 | 1.4008e-199 | 0  | 0.005906
NAJC1  | 3    | 5.7569 | 1.1347 | 1.0485 | 3.6731e-148 | 4.3049e-148 | 1.5905e-146 | 5.6659e-146 | 0  | 0.009218
NAJC2  | 3    | 5.7821 | 1.1352 | 1.0487 | 3.9057e-149 | 4.6174e-149 | 1.7089e-147 | 6.0879e-147 | 0  | 0.007306
Table 4. Results of Tundra Orbit.
Method | Iter | ρ      | Ĩ      | Ĩc     | εa         | εe         | εi         | εω         | εΩ | Time (s)
N      | 6    | 2.0121 | 1.1041 | 1.1041 | 3.1284e-16 | 1.6038e-17 | 2.4321e-16 | 6.5148e-15 | 0  | 0.005069
T      | 5    | 2.9852 | 1.1331 | 1.1051 | 3.1284e-16 | 1.6038e-17 | 2.4321e-16 | 6.5148e-15 | 0  | 0.005493
J      | 3    | 3.9931 | 1.1409 | 1.0859 | 3.1284e-16 | 1.6038e-17 | 2.4321e-16 | 6.5148e-15 | 0  | 0.004457
NAJC1  | 3    | 4.9478 | 1.1336 | 1.0482 | 3.1284e-16 | 1.6038e-17 | 2.4321e-16 | 6.5148e-15 | 0  | 0.009004
NAJC2  | 3    | 5.2465 | 1.1337 | 1.0482 | 3.1284e-16 | 1.6038e-17 | 2.4321e-16 | 6.5148e-15 | 0  | 0.007704
Table 5. Results of system (a), x(0) = (4, −3)^T.
Method | Iter | ρ      | Ĩ      | Ĩc     | εα1         | εα2 | Time (s)
N      | 8    | 1.9999 | 1.1734 | 1.1078 | 7.5993e-174 | 0   | 0.004658
T      | 6    | 3.0000 | 1.3841 | 1.1251 | 5.8838e-206 | 0   | 0.006098
J      | 4    | 3.9887 | 1.1174 | 1.0715 | 2.0217e-113 | 0   | 0.004620
NAJC1  | 4    | 6.0051 | 1.1259 | 1.0451 | 0           | 0   | 0.009178
NAJC2  | 4    | 6.0028 | 1.1289 | 1.0462 | 0           | 0   | 0.008030
Table 6. Results of system (b), x(0) = (12, −2, −1)^T.
Method | Iter | ρ      | Ĩ      | Ĩc     | εα1         | εα2         | εα3         | Time (s)
N      | 13   | 1.9948 | 1.0281 | 1.0201 | 3.4121e-125 | 2.7759e-125 | 1.0794e-125 | 0.007766
T      | —    | —      | —      | —      | —           | —           | —           | —
J      | 8    | 3.9940 | 1.0301 | 1.0158 | 0           | 0           | 0           | 0.009805
NAJC1  | 5    | 4.9496 | 1.0646 | 1.0171 | 0           | 0           | 0           | 0.013722
NAJC2  | 6    | 4.9329 | 1.0716 | 1.0190 | 0           | 0           | 0           | 0.013029
Table 7. Results of system (c), x(0) = (5, 5, 5, −1)^T.
Method | Iter | ρ      | Ĩ      | Ĩc     | εα1         | εα2         | εα3         | εα4         | Time (s)
N      | 10   | 2.0244 | 1.0249 | 1.0249 | 6.5830e-102 | 6.5830e-102 | 6.5830e-102 | 2.7466e-103 | 0.007272
T      | 7    | 3.0909 | 1.0356 | 1.0162 | 2.9862e-145 | 2.9862e-145 | 2.9862e-145 | 7.8319e-147 | 0.009802
J      | 5    | 4.1871 | 1.0349 | 1.0141 | 6.5830e-102 | 6.5830e-102 | 6.5830e-102 | 2.7466e-103 | 0.007952
NAJC1  | 5    | 6.4193 | 1.0493 | 1.0104 | 0           | 0           | 0           | 0           | 0.017668
NAJC2  | 5    | 6.1729 | 1.0520 | 1.0110 | 0           | 0           | 0           | 0           | 0.014849
Looking at the numerical results of Table 3, our methods NAJC1 and NAJC2 need the lowest number of iterations and show the highest order of convergence together with the highest efficiency indices. For this case, the absolute error committed by Jarratt's method is lower than that of our procedures, which is due to the use of an initial estimation very close to the solution and a very small temporal distance.
Observing the numerical results for the Tundra orbit (Table 4), we note that the absolute error stabilizes, and the good results of Reference Orbit I are maintained. We can conclude that, by working with the two equations provided by Gauss as a system, we improve the original procedure, which has the restriction that the difference of true anomalies cannot be greater than π/4: we are able to increase the admissible range of the difference of true anomalies associated with the observations up to values close to π.
3.1. Other Nonlinear Problems
In order to continue checking the computational efficiency of the proposed schemes NAJC1 and NAJC2, we use, in the following, some academic examples.
F1(x) = (f1(x), f2(x))^T, where x = (x1, x2)^T, fi: ℝ² → ℝ, i = 1, 2, and α ≈ (3.47063096, −2.47063096)^T, with
(38) f1(x) = e^(x1) e^(x2) + x1 cos x2, f2(x) = x1 + x2 − 1.
F2(x) = (f1(x), f2(x), f3(x))^T, where x = (x1, x2, x3)^T, fi: ℝ³ → ℝ, i = 1, 2, 3, and α ≈ (2.14025, −2.09029, −0.223525)^T, with
(39) f1(x) = x1² + x2² + x3² − 9, f2(x) = x1x2x3 − 1, f3(x) = x1 + x2 − x3².
F3(x) = (f1(x), f2(x), f3(x), f4(x))^T, where x = (x1, x2, x3, x4)^T, fi: ℝ⁴ → ℝ, i = 1, …, 4, and α = (±1/√3, ±1/√3, ±1/√3, ∓1/(2√3))^T, with
(40) f1(x) = x2x3 + x4(x2 + x3), f2(x) = x1x3 + x4(x1 + x3), f3(x) = x1x2 + x4(x1 + x2), f4(x) = x1x2 + x1x3 + x2x3 − 1.
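As a quick consistency check (illustrative only), the quoted approximate roots of systems (a) and (b) can be substituted back into their equations; the residuals are small, limited only by the number of digits quoted:

```python
import math

# Residuals of systems (a) and (b) at the approximate roots quoted above.
a = (3.47063096, -2.47063096)
res_a = (math.exp(a[0]) * math.exp(a[1]) + a[0] * math.cos(a[1]),
         a[0] + a[1] - 1.0)

b = (2.14025, -2.09029, -0.223525)
res_b = (b[0] ** 2 + b[1] ** 2 + b[2] ** 2 - 9.0,
         b[0] * b[1] * b[2] - 1.0,
         b[0] + b[1] - b[2] ** 2)
```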
The results are shown in Tables 5, 6, and 7, where εαi denotes |fi(α)|. From these tables, the best schemes in terms of precision are NAJC1 and NAJC2, even when the initial estimation is far from the solution. In system (b), Traub's method does not converge within five hundred iterations from this initial estimation.
4. Dynamical Planes
From the numerical results presented in Section 3, our proposed methods have shown themselves to be competitive with respect to existing ones. Moreover, from the dynamical point of view, we have checked that they can have better behavior in terms of stability and width of the region of convergence.
For the representation of the convergence basins of our procedures and the classical methods, we have used the software described in [31]. We draw a mesh with two thousand points per axis; each point of the mesh is a different initial estimation that is introduced in each procedure. If the method reaches the final solution in fewer than five hundred iterations, the point is drawn in orange; the color is more intense when the number of iterations is lower. Otherwise, if the method reaches the maximum number of iterations, the point is drawn in black. Each axis represents one of the variables of the problem: the sector-to-triangle ratio y on the abscissa and the difference of eccentric anomalies E2 − E1 on the ordinate. In addition, we use Reference Orbit I, defined in Table 1; the solution of the nonlinear system is in this case around (1, 0.1). For this reason, we choose [0, 3] × [−1, 1] as the region of representation.
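The procedure just described can be sketched as follows (an illustration of ours, not the software of [31]): for each mesh point, the chosen iteration step is applied until the increment is small or the maximum number of iterations is reached, and the iteration counts are stored for plotting (e.g., with matplotlib's `imshow` and an orange colormap):

```python
import numpy as np

def iteration_map(step, x_rng, y_rng, n=2000, itmax=500, tol=1e-8):
    # Iteration-count map over a mesh of initial estimations, in the
    # spirit of the dynamical planes of this section: entries smaller
    # than itmax correspond to converging (orange) points; entries equal
    # to itmax mark non-converging (black) points.
    counts = np.full((n, n), itmax, dtype=int)
    for i, a in enumerate(np.linspace(x_rng[0], x_rng[1], n)):
        for j, b in enumerate(np.linspace(y_rng[0], y_rng[1], n)):
            v = np.array([a, b], dtype=float)
            for k in range(itmax):
                w = step(v)
                if not np.all(np.isfinite(w)):
                    break
                if np.linalg.norm(w - v) < tol:
                    counts[j, i] = k + 1
                    break
                v = w
    return counts
```

Here `step` is a callable performing one iteration of the method under study on the two unknowns of the problem.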
In Figure 2, we show the dynamical planes of the classical methods. It can be observed that, in general, higher order means lower stability. If we focus our attention on the attraction basins of each plane, the method with the greatest stability is Newton's, and the procedure with the lowest number of iterations is Jarratt's.
Figure 2. Dynamical planes of the classical methods (Newton, Traub, and Jarratt) for Reference Orbit I.
As can be seen in Figure 3, both NAJC1 and NAJC2 have large areas of stability, similar to Newton's, but with order of convergence six. From the intensity of the orange in the attraction basins, the two schemes also need the lowest number of iterations. Moreover, comparing both procedures, the attraction basins of NAJC1 are more dispersed than the convergence basins of NAJC2, which makes the first one less stable than the second.
Figure 3. Dynamical planes of the new methods (NAJC1 and NAJC2) for Reference Orbit I.
5. Conclusions
The classical Gauss method for preliminary orbit determination has been improved by reformulating it as a nonlinear system. This increases the admissible spread of the observations (in order to ensure convergence) from π/4 to π and reduces the number of iterations of the process.
The new sixth-order methods NAJC1 and NAJC2, belonging to the class of methods designed by using matricial weight functions, have good global convergence and stability properties, even for initial estimations far from the solution.
It is well known that the size of the convergence region usually decreases as the order of convergence grows. However, our methods hold a basin of attraction comparable with Newton's in spite of their sixth order of convergence.
Acknowledgments
The authors thank the anonymous referees for their valuable comments and suggestions. This research was supported by Ministerio de Ciencia y Tecnología grant MTM2011-28636-C02-02.
References
[1] Fidkowski, K. J., Oliver, T. A., Lu, J., Darmofal, D. L., "p-Multigrid solution of high-order discontinuous Galerkin discretizations of the compressible Navier-Stokes equations."
[2] Bruns, D. D., Bailey, J. E., "Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state."
[3] Ezquerro, J. A., Gutiérrez, J. M., Hernández, M. A., Salanova, M. A., "Chebyshev-like methods and quadratic equations."
[4] Zhang, Y., Huang, P., "High-precision time-interval measurement techniques and methods."
[5] He, Y., Ding, C. H. Q., "Using accurate arithmetics to improve numerical reproducibility and stability in parallel applications."
[6] Revol, N., Rouillier, F., "Motivations for an arbitrary precision interval arithmetic and the MPFI library."
[7] Petković, M., Neta, B., Petković, L., Džunić, J.
[8] Darvishi, M. T., "Some three-step iterative methods free from second order derivative for finding solutions of systems of nonlinear equations."
[9] Adomian, G.
[10] Babajee, D. K. R., Dauhoo, M. Z., Darvishi, M. T., Barati, A., "A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule."
[11] Darvishi, M. T., Barati, A., "A third-order Newton-type method to solve systems of nonlinear equations."
[12] Darvishi, M. T., Barati, A., "Super cubic iterative methods to solve systems of nonlinear equations."
[13] Cordero, A., Martínez, E., Torregrosa, J. R., "Iterative methods of order four and five for systems of nonlinear equations."
[14] Traub, J. F.
[15] Babajee, D. K. R., Dauhoo, M. Z., Darvishi, M. T., Karami, A., Barati, A., "Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations."
[16] Soleymani, F., Lotfi, T., Bakhtiari, P., "A multi-step class of iterative methods for nonlinear systems."
[17] Darvishi, M. T., Barati, A., "A third-order Newton-type method to solve systems of nonlinear equations."
[18] Awawdeh, F., "On new iterative method for solving systems of nonlinear equations."
[19] Babajee, D. K. R., Cordero, A., Soleymani, F., Torregrosa, J. R., "On a novel fourth-order algorithm for solving systems of nonlinear equations."
[20] Cordero, A., Torregrosa, J. R., Vassileva, M. P., "Pseudocomposition: a technique to design predictor-corrector methods for systems of nonlinear equations."
[21] Cordero, A., Torregrosa, J. R., Vassileva, M. P., "Increasing the order of convergence of iterative schemes for solving nonlinear systems."
[22] Soleymani, F., Stanimirović, P. S., "A higher order iterative method for computing the Drazin inverse."
[23] Soleymani, F., Stanimirović, P. S., Ullah, M. Z., "An accelerated iterative method for computing weighted Moore–Penrose inverse."
[24] Sharma, J. R., Guha, R. K., Sharma, R., "An efficient fourth order weighted-Newton method for systems of nonlinear equations."
[25] Sharma, J. R., Arora, H., "On efficient weighted-Newton methods for solving systems of nonlinear equations."
[26] Abad, M. F., Cordero, A., Torregrosa, J. R., "Fourth- and fifth-order methods for solving nonlinear systems of equations: an application to the global positioning system."
[27] Escobal, P. R.
[28] Cordero, A., Hueso, J. L., Martínez, E., Torregrosa, J. R., "A modified Newton-Jarratt's composition."
[29] Jarratt, P., "Some fourth order multipoint iterative methods for solving equations."
[30] Cordero, A., Torregrosa, J. R., "Variants of Newton's method using fifth-order quadrature formulas."
[31] Chicharro, F., Cordero, A., Torregrosa, J. R., "Drawing dynamical and parameter planes of iterative families and methods."