Abstract and Applied Analysis, Volume 2013, Article ID 960582, doi:10.1155/2013/960582. Hindawi Publishing Corporation.

Research Article

Preliminary Orbit Determination of Artificial Satellites: A Vectorial Sixth-Order Approach

Carlos Andreu, Noelia Cambil, Alicia Cordero, Juan R. Torregrosa, and Rafael Jacinto Villanueva

Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain

Received 2 October 2013; Accepted 13 November 2013

Copyright © 2013 Carlos Andreu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A modified classical method for preliminary orbit determination is presented. In our proposal, the admissible spread of the observations is considerably wider than in the original method, as is the order of convergence of the iterative scheme involved. The numerical approach uses matricial weight functions, which lead us to a class of iterative methods with sixth local order of convergence. This is a procedure widely used in the design of iterative methods for solving nonlinear scalar equations, but rarely employed in vectorial cases. The numerical tests confirm the theoretical results, and the analysis of the dynamics of the problem shows the stability of the proposed schemes.

1. Introduction

The analysis of linear systems has a well-developed mathematical and computational theory. However, many applied problems in science and engineering are nonlinear. This situation is more complicated than the linear one, and the estimation of their solutions requires a numerical treatment.

While computational engineering has achieved significant maturity, computational costs can be extremely large when high accuracy simulations are required. The development of a practical high-order solution method could diminish this problem by significantly decreasing the computational time required to achieve an acceptable error level (see, for instance, ).

The existence of an extensive literature on higher order methods reveals that they are only limited by the nature of the problem to be solved: in particular, the numerical solution of nonlinear equations and systems is needed in the study of dynamical models of chemical reactors  or in radiative transfer . Moreover, many numerical applications require high precision in their computations; in , high-precision calculations are used to solve interpolation problems in astronomy; in , the authors describe the use of arbitrary precision computations to improve the results obtained in climate simulations. The results of these numerical experiments show that high order methods associated with multiprecision floating point arithmetic are very useful, since they yield a clear reduction in the number of iterations. A motivation for arbitrary precision in interval methods can be found in , in particular for the calculation of zeros of nonlinear functions.

In the last decades, many researchers have proposed different iterative methods to improve Newton's method, which is still the most widely used scheme in practice. These variants of Newton's method have been designed by means of different techniques, providing in most cases multistep schemes. Some of them are extensions of one-dimensional schemes (see, e.g., [7, 8]), and others come from Adomian decomposition (see, e.g., [9, 10]), specifically the methods proposed by Darvishi and Barati in [11, 12] with supercubic convergence and the schemes proposed by Cordero et al. in  with orders of convergence 4 and 5. Another procedure to develop iterative methods for nonlinear systems is the replacement of the second derivative in Chebyshev-type methods by some approximation. In , Traub presented a family of multipoint methods based on approximating the second derivative that appears in the iterative formula of Chebyshev's scheme, and Babajee et al. in  designed two Chebyshev-like methods free from second derivatives.

A common way to generate new schemes is the direct composition of known methods with a later treatment to reduce the number of functional evaluations (see, e.g., ). A variant of this technique is the so-called pseudocomposition, introduced in [20, 21]. Let us note that if the initial approximation or any of the successive estimations makes the Jacobian matrix almost singular, convergence is not guaranteed. In some of these cases, the problem can be avoided by using some kind of pseudoinverse to solve the linear system involved in each step of the iterative process (see, for instance, [22, 23]).

Recently, for $n = 1$, the weight-function procedure has been used to increase the order of convergence of known methods . This technique can also be used, with some restrictions, in the development of high order iterative methods for systems: see, for example, the papers of Sharma et al. [24, 25] and Abad et al. , where the authors apply the designed methods to the software improvement of the Global Positioning System.

1.1. Preliminary Orbit Determination

A classical reference in preliminary orbit determination is C. F. Gauss (1777–1855), who deduced the orbit of the minor planet Ceres, discovered in 1801 and afterwards lost. The calculation of its trajectory by means of the procedure he designed earned Gauss and his work international recognition.

The first step in orbit determination methods is to obtain preliminary orbits, since the motion analyzed is under the premises of the two-body problem. It is possible to set a two-dimensional coordinate system (see Figure 1), where the X axis points to the perigee of the orbit, the closest point of the elliptical orbit to its focus and the center of the system, the Earth. In this picture, the true anomaly ν and the eccentric anomaly E can be observed. In order to place this orbit in the celestial sphere and determine completely the position of a body in the orbit, some elements (called orbital or Keplerian elements) must be determined. These orbital elements are as follows.

Ω (right ascension of the ascending node): defined as the equatorial angle between the Vernal point γ and the ascending node N; it orients the orbit in the equatorial plane.

ω (argument of the perigee): defined as the angle of the orbital plane, centered at the focus, between the ascending node N and the perigee of the orbit; it orients the orbit in its plane.

i (inclination): dihedral angle between the equatorial and the orbital planes.

a (semimajor axis): which sets the size of the orbit.

e (eccentricity): which gives the shape of the ellipse.

T0 (perigee epoch): time of the passing of the object over the perigee, used to fix a reference origin in time. It can be denoted by an exact date, in Julian days, or by the amount of time elapsed since the object passed over the perigee.

Figure 1: Size, shape, and anomalies in the orbital-plane two-dimensional coordinate system.

The so-called Gauss' method is based on the ratio $y$ between the areas of the triangle and the ellipse sector defined by two position vectors, $\vec{r}_1$ and $\vec{r}_2$, obtained from astronomical observations. This proportion is related to the geometry of the orbit and the observed positions by
$$y = 1 + X(l + x), \quad (1)$$
where
$$l = \frac{r_1 + r_2}{4\sqrt{r_1 r_2}\cos\left((\nu_2 - \nu_1)/2\right)} - \frac{1}{2}, \qquad x = \sin^2\left(\frac{E_2 - E_1}{4}\right), \qquad X = \frac{E_2 - E_1 - \sin(E_2 - E_1)}{\sin^3\left((E_2 - E_1)/2\right)}. \quad (2)$$

The angles $E_i$, $\nu_i$, $i = 1, 2$, are the eccentric and true anomalies, respectively, associated with the observed positions $\vec{r}_1$ and $\vec{r}_2$ (we denote by $r_i$ the modulus of vector $\vec{r}_i$, $i = 1, 2$).

Equation (1) is, actually, the composition of the first Gauss equation
$$y^2 = \frac{m}{l + x} \quad (3)$$
and the second Gauss equation
$$y^2(y - 1) = mX, \quad (4)$$
where $m = \mu\tau^2 / \left[2\sqrt{r_1 r_2}\cos\left((\nu_2 - \nu_1)/2\right)\right]^3$, $\mu$ is the gravitational parameter of the motion, and $\tau$ is a modified time variable.

The original scheme used by Gauss (see ) was based on applying the fixed point method to the unified equation (1). Using the initial estimation $y_0 = 1$, the classical Gauss procedure is able to compute the orbital elements only if the spread of the true anomalies corresponding to the observed positions is lower than π/4. Our aim is to widen the admissible range of true anomalies up to π by solving the same problem, but using the nonlinear system formed by (3) and (4), whose unknowns are $y$ and $E_2 - E_1$.
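As a minimal sketch of this reformulation, the system (3)-(4) can be solved for $(y, E_2 - E_1)$ with a plain Newton iteration. The values of $l$ and $m$ below are hypothetical (in practice they come from the observations as described above), chosen so that a solution lies near the classical starting guess $y_0 = 1$:

```python
import numpy as np

def gauss_system(v, l, m):
    """Residuals of the first (3) and second (4) Gauss equations."""
    y, dE = v                       # dE = E2 - E1
    x = np.sin(dE / 4.0) ** 2
    X = (dE - np.sin(dE)) / np.sin(dE / 2.0) ** 3
    return np.array([y**2 * (l + x) - m,          # (3): y^2 = m/(l + x)
                     y**2 * (y - 1.0) - m * X])   # (4): y^2 (y - 1) = m X

def num_jacobian(v, l, m, h=1e-8):
    """Forward-difference Jacobian, enough for a sketch."""
    f0 = gauss_system(v, l, m)
    J = np.empty((2, 2))
    for j in range(2):
        vp = v.copy()
        vp[j] += h
        J[:, j] = (gauss_system(vp, l, m) - f0) / h
    return J

# Hypothetical geometry: these l, m place a root near y ~ 1.01, dE ~ 0.2
l, m = 0.0049794, 0.0076276
v = np.array([1.0, 0.1])            # y0 = 1 is the classical initial estimation
for _ in range(50):
    r = gauss_system(v, l, m)
    if np.linalg.norm(r) < 1e-13:
        break
    v = v - np.linalg.solve(num_jacobian(v, l, m), r)

print(v)  # converged (y, E2 - E1)
```

Any of the iterative schemes discussed in the paper can replace the Newton step above; only the update rule changes.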

In the second section of this paper, we present a class of sixth-order Newton-type methods designed by means of the composition technique and matricial weight functions. The convergence results for this family have been obtained by means of the n-dimensional Taylor expansion of the involved functions, using the notation introduced in . Section 3 is devoted to analyzing the efficiency of the proposed methods applied to preliminary orbit determination and other nonlinear problems, compared with the classical Newton's, Traub's, and Jarratt's methods. In Section 4, the preliminary orbit determination problem is revisited in order to analyze the stability of the mentioned schemes by means of dynamical planes, comparing the sets of starting points that lead each of the methods to converge to the solution. We finish the paper with some conclusions and the references used in it.

2. The New Family and Its Convergence

In this section, we present our three-step iterative methods, designed from Newton's method composed with itself twice, once with a "frozen" function and once with a "frozen" Jacobian matrix. By using matricial weight functions in the second and third steps, we prove that the methods of the proposed family have order of convergence six, under certain conditions on the function F and on the weight functions.

Let us consider the following three-step iterative method, which makes use of weight functions:
$$\begin{aligned}
y^{(k)} &= x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),\\
z^{(k)} &= y^{(k)} - H(\mu^{(k)})[F'(y^{(k)})]^{-1} F(x^{(k)}),\\
x^{(k+1)} &= z^{(k)} - G(\mu^{(k)})[F'(y^{(k)})]^{-1} F(z^{(k)}),
\end{aligned} \quad (5)$$
where $\mu^{(k)} = [F'(y^{(k)})]^{-1} F'(x^{(k)})$ and $H, G: \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ are matricial functions.

Theorem 1.

Let $\alpha \in D$ be a zero of a sufficiently differentiable function $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ in a convex set $D$, with nonsingular Jacobian at $\alpha$. Let $H$ and $G$ be any matricial functions satisfying $H(I) = 0$, $H'(I) = \frac{1}{2}I$, $H''(I) = 0$ and $G(I) = I$, $G'(I) = 0$, $G''(I) = \frac{1}{2}I$, $I$ being the identity matrix. Then, the scheme defined in (5) has sixth order of convergence, and its error equation is
$$e^{(k+1)} = \left[-\frac{3}{2}C_3C_2C_3 + \frac{1}{4}C_2C_3^2 + 6C_3C_2^3 - C_2C_3C_2^2 + C_2^3C_3 - 4C_2^5\right]e^{(k)^6} + O\big(e^{(k)^7}\big), \quad (6)$$
where $C_k = \frac{1}{k!}[F'(\alpha)]^{-1}F^{(k)}(\alpha)$, $k = 2, 3, 4, \ldots$, and $e^{(k)} = x^{(k)} - \alpha$.

Proof.

If we use the Taylor expansions of $F(x^{(k)})$ and $F'(x^{(k)})$ around $\alpha$, we obtain
$$\begin{aligned}
F(x^{(k)}) &= F'(\alpha)\big\{e^{(k)} + C_2 e^{(k)^2} + C_3 e^{(k)^3} + C_4 e^{(k)^4} + C_5 e^{(k)^5} + C_6 e^{(k)^6}\big\} + O\big(e^{(k)^7}\big),\\
F'(x^{(k)}) &= F'(\alpha)\big\{I + 2C_2 e^{(k)} + 3C_3 e^{(k)^2} + 4C_4 e^{(k)^3} + 5C_5 e^{(k)^4} + 6C_6 e^{(k)^5}\big\} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (7)$$

From (7), we calculate the expression of the inverse
$$[F'(x^{(k)})]^{-1} = \big\{I + X_2 e^{(k)} + X_3 e^{(k)^2} + X_4 e^{(k)^3} + X_5 e^{(k)^4} + X_6 e^{(k)^5}\big\}[F'(\alpha)]^{-1} + O\big(e^{(k)^6}\big), \quad (8)$$
where $X_2 = -2C_2$, $X_3 = 4C_2^2 - 3C_3$, $X_4 = 6C_3C_2 + 6C_2C_3 - 8C_2^3 - 4C_4$, $X_5 = -5C_5 + 8C_2C_4 + 8C_4C_2 - 12C_2^2C_3 - 12C_3C_2^2 - 12C_2C_3C_2 + 9C_3^2 + 16C_2^4$, and $X_6 = -6C_6 + 10C_2C_5 + 10C_5C_2 - 16C_2^2C_4 - 16C_4C_2^2 - 16C_2C_4C_2 + 12C_3C_4 + 12C_4C_3 - 18C_3C_2C_3 - 18C_2C_3^2 - 18C_3^2C_2 + 24C_2^3C_3 + 24C_2^2C_3C_2 + 24C_2C_3C_2^2 + 24C_3C_2^3 - 32C_2^5$.

These values have been obtained by imposing the conditions
$$[F'(x^{(k)})]^{-1}F'(x^{(k)}) = F'(x^{(k)})[F'(x^{(k)})]^{-1} = I. \quad (9)$$

Then, the error expression in the first step of the method is
$$\begin{aligned}
y^{(k)} - \alpha ={}& C_2e^{(k)^2} + (-2C_2^2 + 2C_3)e^{(k)^3} + (3C_4 - 4C_2C_3 - 3C_3C_2 + 4C_2^3)e^{(k)^4}\\
&+ (4C_5 - 6C_2C_4 - 4C_4C_2 + 8C_2^2C_3 + 6C_3C_2^2 + 6C_2C_3C_2 - 6C_3^2 - 8C_2^4)e^{(k)^5} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (10)$$

Furthermore, we know that
$$F(y^{(k)}) = F'(\alpha)\big\{(y^{(k)} - \alpha) + C_2(y^{(k)} - \alpha)^2 + C_3(y^{(k)} - \alpha)^3 + C_4(y^{(k)} - \alpha)^4\big\} + O\big((y^{(k)} - \alpha)^5\big), \quad (11)$$
and if we replace in (11) the powers of $(y^{(k)} - \alpha)$, we obtain after some operations
$$\begin{aligned}
F(y^{(k)}) = F'(\alpha)\big\{&C_2e^{(k)^2} + 2(-C_2^2 + C_3)e^{(k)^3} + (3C_4 - 4C_2C_3 - 3C_3C_2 + 5C_2^3)e^{(k)^4}\\
&+ (4C_5 - 6C_2C_4 - 4C_4C_2 + 8C_2^2C_3 + 6C_3C_2^2 + 6C_2C_3C_2 + 2C_2^2C_3 + 2C_2C_3C_2 - 6C_3^2 - 12C_2^4)e^{(k)^5}\big\} + O\big(e^{(k)^6}\big),
\end{aligned} \quad (12)$$
and also
$$\begin{aligned}
F'(y^{(k)}) = F'(\alpha)\big\{&I + 2C_2^2e^{(k)^2} + 4(C_2C_3 - C_2^3)e^{(k)^3} + (6C_2C_4 - 8C_2^2C_3 - 6C_2C_3C_2 + 8C_2^4 + 3C_3C_2^2)e^{(k)^4}\\
&+ (8C_2C_5 - 12C_2^2C_4 - 8C_2C_4C_2 + 16C_2^3C_3 + 12C_2C_3C_2^2 + 12C_2^2C_3C_2 - 12C_3C_2^3\\
&\quad - 12C_2C_3^2 + 6C_3C_2C_3 + 6C_3^2C_2 - 16C_2^5)e^{(k)^5}\big\} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (13)$$

In a similar way as in (8),
$$[F'(y^{(k)})]^{-1} = \big\{I + Y_2e^{(k)} + Y_3e^{(k)^2} + Y_4e^{(k)^3} + Y_5e^{(k)^4} + Y_6e^{(k)^5}\big\}[F'(\alpha)]^{-1} + O\big(e^{(k)^6}\big), \quad (14)$$
where $Y_2 = 0$, $Y_3 = -2C_2^2$, $Y_4 = 4(C_2^3 - C_2C_3)$, $Y_5 = -6C_2C_4 + 8C_2^2C_3 + 6C_2C_3C_2 - 4C_2^4 - 3C_3C_2^2$, and $Y_6 = -8C_2C_5 + 12C_2^2C_4 + 8C_2C_4C_2 - 16C_2^3C_3 - 12C_2C_3C_2^2 - 12C_2^2C_3C_2 + 12C_3C_2^3 + 8C_2^3C_3 + 8C_2C_3C_2^2 + 12C_2C_3^2 - 6C_3C_2C_3 - 6C_3^2C_2$.

So,
$$\begin{aligned}
[F'(y^{(k)})]^{-1}F(x^{(k)}) ={}& e^{(k)} + C_2e^{(k)^2} + (-2C_2^2 + C_3)e^{(k)^3} + (C_4 + 2C_2^3 - 4C_2C_3)e^{(k)^4}\\
&+ (C_5 + 6C_2^2C_3 + 2C_2C_3C_2 - 3C_3C_2^2 - 6C_2C_4)e^{(k)^5} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (15)$$

On the other hand, we get the Taylor expansion of $\mu^{(k)}$ by using (7) and (15):
$$\begin{aligned}
\mu^{(k)} = [F'(y^{(k)})]^{-1}F'(x^{(k)}) ={}& I + 2C_2e^{(k)} + (-2C_2^2 + 3C_3)e^{(k)^2} + 4(C_4 - C_2C_3)e^{(k)^3}\\
&+ (5C_5 + 2C_2^2C_3 - 2C_2C_3C_2 - 3C_3C_2^2 + 4C_2^4 - 6C_2C_4)e^{(k)^4}\\
&+ (6C_6 + 4C_2^2C_4 - 4C_2C_4C_2 + 4C_2^3C_3 + 4C_2^2C_3C_2 + 8C_2C_3C_2^2 + 6C_3C_2^3\\
&\quad - 6C_3C_2C_3 - 6C_3^2C_2 - 8C_2^5 - 8C_2C_5)e^{(k)^5} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (16)$$

Let us note that $\mu^{(k)}$ tends to the identity matrix when $x^{(k)}$ and $y^{(k)}$ tend to $\alpha$; thereby, the second order polynomial approximation of the weight function $H(\mu^{(k)})$ is
$$H(\mu^{(k)}) \approx H(I) + H'(I)(\mu^{(k)} - I) + \frac{H''(I)}{2}(\mu^{(k)} - I)^2. \quad (17)$$

Let us consider now the following notation:
$$H(I) = H_0, \qquad H'(I) = H_1, \qquad \frac{H''(I)}{2} = H_2. \quad (18)$$

So,
$$\begin{aligned}
z^{(k)} - \alpha ={}& y^{(k)} - \alpha - H(\mu^{(k)})[F'(y^{(k)})]^{-1}F(x^{(k)})\\
={}& -H_0 e^{(k)} + (I - H_0 - 2H_1)C_2 e^{(k)^2}\\
&+ \big[(-2I + 2H_0 - 4H_2)C_2^2 + (2I - H_0 - 3H_1)C_3\big]e^{(k)^3}\\
&+ \big[(3I - H_0 - 4H_1)C_4 + (4I - 2H_0 + 6H_1 + 4H_2)C_2^3\\
&\quad + (-4I + 4H_0 + 2H_1 - 6H_2)C_2C_3 + (-3I - 3H_1 - 6H_2)C_3C_2\big]e^{(k)^4}\\
&+ \big[(4I - H_0 - 5H_1)C_5 + (-6I - 8H_2 + 6H_0 + 4H_1)C_2C_4 + (-4I - 8H_2 - 4H_1)C_4C_2\\
&\quad + (8I + 10H_2 - 6H_0 + 8H_1)C_2^2C_3 + (6I + 3H_0 + 9H_1)C_3C_2^2 + (6I - 2H_0 + 2H_2 + 6H_1)C_2C_3C_2\\
&\quad + (-6I - 9H_2 - 3H_1)C_3^2 + (-8I + 12H_2 - 12H_1)C_2^4\big]e^{(k)^5} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (19)$$

Then,
$$\begin{aligned}
F(z^{(k)}) = F'(\alpha)\big\{&-H_0e^{(k)} + (I - H_0 - 2H_1 + H_0^2)C_2e^{(k)^2}\\
&+ \big[(-2I + 2H_0^2 - 4H_2 + 4H_0H_1)C_2^2 + (2I - H_0 - 3H_1 - H_0^3)C_3\big]e^{(k)^3}\\
&+ \big[(3I - H_0 - 4H_1 + H_0^4)C_4 + (5I + 2H_1 + 4H_2 - 3H_0^2 + 8H_0H_2 + 4H_1^2 + 4H_0H_1)C_2^3\\
&\quad + (-4I + 2H_1 - 6H_2 + 2H_0^2 + 6H_0H_1)C_2C_3 + (-3I - 3H_1 - 6H_2 + 3H_0^2 - 6H_0^2H_1 - 3H_0^3)C_3C_2\big]e^{(k)^4}\\
&+ \big[(4I - H_0 - H_0^5 - 5H_1)C_5 + (-6I + 4H_1 - 8H_2 + 2H_0^2 + 8H_0H_1)C_2C_4\\
&\quad + (-4I - 4H_1 - 8H_2 - 4H_0^3 + 4H_0^4 + 8H_0^3H_1)C_4C_2\\
&\quad + (10I - H_0 + H_1 + 10H_2 + H_0H_1 + 12H_0H_2 - 7H_0^2 + 6H_1^2)C_2^2C_3\\
&\quad + (6I + 9H_1 + 12H_0H_1 - 12H_0^2H_1 - 12H_0H_1^2 - 12H_0^2H_2 + 3H_0^3)C_3C_2^2\\
&\quad + (8I + H_0 - H_1 + 2H_2 + 11H_0H_1 + 12H_0H_2 + H_0^2 + 6H_1^2)C_2C_3C_2\\
&\quad + (-6I + 6H_0^2 - 3H_0^3 - 3H_1 - 9H_0^2H_1 - 9H_2)C_3^2\\
&\quad + (-12I - 4H_1 - 20H_0H_1 + 4H_2 + 16H_1H_2)C_2^4\big]e^{(k)^5}\big\} + O\big(e^{(k)^6}\big).
\end{aligned} \quad (20)$$

Let us consider the truncated Taylor expansion of order two of the weight function $G(\mu^{(k)})$,
$$G(\mu^{(k)}) \approx G(I) + G'(I)(\mu^{(k)} - I) + \frac{G''(I)}{2}(\mu^{(k)} - I)^2, \quad (21)$$
and let us denote
$$G(I) = G_0, \qquad G'(I) = G_1, \qquad \frac{G''(I)}{2} = G_2. \quad (22)$$

Finally, the error equation is expressed as
$$\begin{aligned}
e^{(k+1)} ={}& H_0(I - G_0)e^{(k)} + \big[(I - H_0 - G_0 + G_0H_0 + 2G_1H_0 - G_0H_0^2 - 2H_1 + 2G_0H_1)C_2\big]e^{(k)^2}\\
&+ \big[(-2I + 2G_0 - 2G_1 + 2H_0 - 2G_0H_0 + 4G_2H_0 - 2G_0H_0^2 - 2G_1H_0^2 + 4G_1H_1 - 4G_0H_0H_1 - 4H_2 + 4G_0H_2)C_2^2\\
&\quad + (2I - H_0 - 3H_1 - 2G_0 + G_0H_0 + 3G_0H_1 + G_0H_0^3 + 3G_1H_0)C_3\big]e^{(k)^3} + O\big(e^{(k)^4}\big).
\end{aligned} \quad (23)$$

If we choose $H_0 = 0$, $H_1 = \frac{1}{2}I$, and $H_2 = 0$, we obtain
$$e^{(k+1)} = \left[(-2I + 2G_0)C_2^2 + \frac{1}{2}(I - G_0)C_3\right]e^{(k)^3} + O\big(e^{(k)^4}\big). \quad (24)$$

So, to increase the order of convergence up to four, the value of $G_0$ must be $G_0 = I$, and then
$$e^{(k+1)} = G_1\big[-C_3 + 4C_2^2\big]e^{(k)^4} + O\big(e^{(k)^5}\big). \quad (25)$$

Moreover, in order to reach fifth order of convergence, $G_1$ must be null. Therefore, the error equation is
$$e^{(k+1)} = \big[(I - 2G_2)C_2^2C_3 + (-4I + 8G_2)C_2^4\big]e^{(k)^5} + O\big(e^{(k)^6}\big). \quad (26)$$

Finally, if $G_2 = \frac{1}{2}I$, we have
$$e^{(k+1)} = \left[-\frac{3}{2}C_3C_2C_3 + \frac{1}{4}C_2C_3^2 + 6C_3C_2^3 - C_2C_3C_2^2 + C_2^3C_3 - 4C_2^5\right]e^{(k)^6} + O\big(e^{(k)^7}\big), \quad (27)$$
and the theorem is proved.

From the previous theorem, (5) defines a family of sixth-order methods, since different weight functions satisfy its conditions. Specifically, we propose the following examples for the next sections.

Example 2.

An element of our family of sixth-order methods is given by the weight functions
$$H(t) = \frac{1}{2}(t - I), \qquad G(t) = (I + t)^{-1}(2I - t + t^2), \quad (28)$$
which is called NAJC1.

Example 3.

Another combination of weight functions that can be used is
$$H(t) = \frac{1}{2}(t - I), \qquad G(t) = I + \frac{1}{2}(t - I)^2. \quad (29)$$

We will refer to this element of the class as NAJC2.
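As a sketch (not the authors' code), one iteration of scheme (5) with the NAJC2 weights (29) can be written with NumPy; the toy test system and the starting point below are illustrative assumptions:

```python
import numpy as np

def F(x):
    # Illustrative nonlinear system with root at (0, 1)
    return np.array([np.exp(x[0]) - 1.0, x[0] + x[1] - 1.0])

def J(x):
    # Analytic Jacobian of F
    return np.array([[np.exp(x[0]), 0.0],
                     [1.0, 1.0]])

def najc2_step(x):
    """One iteration of (5) with H(t) = (t - I)/2 and G(t) = I + (t - I)^2/2."""
    I = np.eye(x.size)
    y = x - np.linalg.solve(J(x), F(x))        # first (Newton) step
    Jy = J(y)
    mu = np.linalg.solve(Jy, J(x))             # mu = [F'(y)]^{-1} F'(x)
    H = 0.5 * (mu - I)
    z = y - H @ np.linalg.solve(Jy, F(x))      # second step reuses F(x^{(k)})
    G = I + 0.5 * (mu - I) @ (mu - I)
    return z - G @ np.linalg.solve(Jy, F(z))   # third step with frozen Jacobian

x = np.array([0.5, 0.2])
for _ in range(8):
    x = najc2_step(x)
print(x, np.linalg.norm(F(x)))
```

Note that each iteration evaluates $F$ three times but only two Jacobians, and all linear systems share the factorizable matrices $F'(x^{(k)})$ and $F'(y^{(k)})$, which is what keeps the efficiency indices of the family competitive.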

3. Numerical Results

In this section, we analyze the computational efficiency of our methods and compare them with classical ones on the problem of preliminary orbit determination as well as on academic examples. The classical methods used are Newton's, Traub's, and Jarratt's, of convergence orders 2, 3, and 4, respectively, whose iterative expressions are as follows.

Newton (N):
$$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1}F(x^{(k)}). \quad (30)$$

Traub (T):
$$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = y^{(k)} - [F'(x^{(k)})]^{-1}F(y^{(k)}). \quad (31)$$

Jarratt (J):
$$\begin{aligned}
z^{(k)} &= x^{(k)} - \frac{2}{3}[F'(x^{(k)})]^{-1}F(x^{(k)}),\\
x^{(k+1)} &= x^{(k)} - \frac{1}{2}\big[3F'(z^{(k)}) - F'(x^{(k)})\big]^{-1}\big[3F'(z^{(k)}) + F'(x^{(k)})\big][F'(x^{(k)})]^{-1}F(x^{(k)}).
\end{aligned} \quad (32)$$
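For completeness, Jarratt's scheme (32) is straightforward to implement; the test system below (with a root near (1.9319, 0.5176)) and the starting point are illustrative choices, not taken from the paper:

```python
import numpy as np

def F(x):
    # Illustrative system: circle x0^2 + x1^2 = 4 and hyperbola x0*x1 = 1
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [x[1], x[0]]])

def jarratt_step(x):
    """One iteration of Jarratt's fourth-order method (32)."""
    Fx, Jx = F(x), J(x)
    d = np.linalg.solve(Jx, Fx)                    # [F'(x)]^{-1} F(x)
    z = x - (2.0 / 3.0) * d
    Jz = J(z)
    A = 3.0 * Jz - Jx
    B = 3.0 * Jz + Jx
    return x - 0.5 * np.linalg.solve(A, B @ d)

x = np.array([2.0, 0.5])
for _ in range(10):
    x = jarratt_step(x)
print(x, np.linalg.norm(F(x)))
```

Like the methods of the new family, Jarratt's scheme needs one function evaluation and two Jacobians per iteration, which explains its favorable efficiency index in the tables below.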

In our tests, we have used the following numerical settings: variable precision arithmetic of 250 digits in Mathematica 8.0; in each iterative method, the stopping criterion $\|F(x^{(k+1)})\| + \|x^{(k+1)} - x^{(k)}\| < 10^{-100}$; and the approximated computational order of convergence $\rho$ (see ), calculated by the formula
$$\rho = \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)}. \quad (33)$$
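The ACOC (33) is straightforward to evaluate from the iterate history. The sketch below estimates it for plain Newton's method on a scalar equation, where the expected value is 2; the equation and starting point are illustrative:

```python
import math

def acoc(xs):
    """Approximated computational order of convergence (33), last four iterates."""
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton's method on f(x) = x^2 - 2, root sqrt(2)
xs = [2.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

print(acoc(xs))  # close to 2
```

In double precision only a few iterations can be used before consecutive differences underflow to zero; this is why the paper's experiments rely on 250-digit arithmetic to observe orders up to six.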

From this value, we have designed two practical indices to measure the computational efficiency: the approximated efficiency index
$$\tilde{I} = \rho^{1/d}, \quad (34)$$
and the approximated computational index
$$\tilde{I}_c = \rho^{1/op}, \quad (35)$$
$d$ and $op$ being the number of functional evaluations and the number of operations (products and quotients) per iteration, respectively.

In Tables 3, 4, 5, 6, and 7, we show the number of iterations, the previously defined indices, and the absolute errors between theoretical and practical values, which we denote by ε.

Two reference orbits have been used in the tests for preliminary orbit determination. The first can be found in , and the second one is a commercial real orbit called Tundra. As the orbital elements of each one are known, the vector positions (measured in Earth radii) at the instants $t_1$ and $t_2$ have been recalculated with 500 exact digits. These vector positions are
$$\vec{r}_1 \approx [2.46080928705339, 2.04052290636432, 0.14381905768815], \qquad \vec{r}_2 \approx [1.98804155574820, 2.50333354505224, 0.31455350605251] \quad (36)$$
for Reference Orbit I and
$$\vec{r}_1 \approx [-2.02862564034533, -0.74638890547506, -4.322222156844465], \qquad \vec{r}_2 \approx [4.24372000256074, -1.689387746496, 6.79724893784587] \quad (37)$$
for the Tundra Orbit. Then, our aim is to recover from these positions the orbital elements shown in Tables 1 and 2 with a precision as high as possible by means of the proposed iterative schemes.
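The spread of true anomalies for Reference Orbit I can be checked directly from the position vectors (36), since $\cos\Delta\nu = \vec{r}_1\cdot\vec{r}_2/(r_1 r_2)$; this recovers the value Δν = 12.232° listed in Table 1:

```python
import math

# Position vectors of Reference Orbit I, in Earth radii (equation (36))
r1 = [2.46080928705339, 2.04052290636432, 0.14381905768815]
r2 = [1.98804155574820, 2.50333354505224, 0.31455350605251]

dot = sum(a * b for a, b in zip(r1, r2))
n1 = math.sqrt(sum(a * a for a in r1))
n2 = math.sqrt(sum(b * b for b in r2))

dnu = math.degrees(math.acos(dot / (n1 * n2)))
print(dnu)  # about 12.232 degrees
```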

Table 1: Orbital elements and temporal distance between r1 and r2. Reference Orbit I.

a = 4.0 e.r.    e = 0.2    i = 15°    Ω = 30°    ω = 10°
Δt = 0.01044412 D.J.    Δν = 12.232°    T = January 1, 1964, 0 h 0 min 0 s

Table 2: Orbital elements and temporal distance between r1 and r2. Tundra Orbit.

a = 6.62 e.r.    e = 0.27    i = 63.43°    Ω = 290.2°    ω = 270°    M = 144°
Δt = 0.399753 D.J.    Δν = 171°

Table 3: Results of Reference Orbit I.

Method  Iter  ρ  Ĩ  Ĩ_c  ε_a  ε_e  ε_i  ε_ω  ε_Ω  Time (s)
N 7 1.9999 1.1112 1.1112 3.2757 e - 109 4.8982 e - 110 7.3653 e - 109 2.6237 e - 108 0 0.005049
T 5 2.9995 1.1396 1.1102 2.0466 e - 120 3.0603 e - 121 4.6017 e - 120 1.6393 e - 119 0 0.005463
J 4 4.0000 1.1264 1.0771 4.8431 e - 200 8.8034 e - 201 3.9324 e - 200 1.4008 e - 199 0 0.005906
NAJC1 3 5.7569 1.1347 1.0485 3.6731 e - 148 4.3049 e - 148 1.5905 e - 146 5.6659 e - 146 0 0.009218
NAJC2 3 5.7821 1.1352 1.0487 3.9057 e - 149 4.6174 e - 149 1.7089 e - 147 6.0879 e - 147 0 0.007306

Table 4: Results of Tundra Orbit.

Method  Iter  ρ  Ĩ  Ĩ_c  ε_a  ε_e  ε_i  ε_ω  ε_Ω  Time (s)
N 6 2.0121 1.1041 1.1041 3.1284 e - 16 1.6038 e - 17 2.4321 e - 16 6.5148 e - 15 0 0.005069
T 5 2.9852 1.1331 1.1051 3.1284 e - 16 1.6038 e - 17 2.4321 e - 16 6.5148 e - 15 0 0.005493
J 3 3.9931 1.1409 1.0859 3.1284 e - 16 1.6038 e - 17 2.4321 e - 16 6.5148 e - 15 0 0.004457
NAJC1 3 4.9478 1.1336 1.0482 3.1284 e - 16 1.6038 e - 17 2.4321 e - 16 6.5148 e - 15 0 0.009004
NAJC2 3 5.2465 1.1337 1.0482 3.1284 e - 16 1.6038 e - 17 2.4321 e - 16 6.5148 e - 15 0 0.007704

Table 5: Results of system (a), x^{(0)} = (4, -3)^T.

Method  Iter  ρ  Ĩ  Ĩ_c  ε_α1  ε_α2  Time (s)
N 8 1.9999 1.1734 1.1078 7.5993 e - 174 0 0.004658
T 6 3.0000 1.3841 1.1251 5.8838 e - 206 0 0.006098
J 4 3.9887 1.1174 1.0715 2.0217 e - 113 0 0.004620
NAJC1 4 6.0051 1.1259 1.0451 0 0 0.009178
NAJC2 4 6.0028 1.1289 1.0462 0 0 0.008030

Table 6: Results of system (b), x^{(0)} = (12, -2, -1)^T.

Method  Iter  ρ  Ĩ  Ĩ_c  ε_α1  ε_α2  ε_α3  Time (s)
N 13 1.9948 1.0281 1.0201 3.4121 e - 125 2.7759 e - 125 1.0794 e - 125 0.007766
T (no convergence after five hundred iterations)
J 8 3.9940 1.0301 1.0158 0 0 0 0.009805
NAJC1 5 4.9496 1.0646 1.0171 0 0 0 0.013722
NAJC2 6 4.9329 1.0716 1.0190 0 0 0 0.013029

Table 7: Results of system (c), x^{(0)} = (5, 5, 5, -1)^T.

Method  Iter  ρ  Ĩ  Ĩ_c  ε_α1  ε_α2  ε_α3  ε_α4  Time (s)
N 10 2.0244 1.0249 1.0249 6.5830 e - 102 6.5830 e - 102 6.5830 e - 102 2.7466 e - 103 0.007272
T 7 3.0909 1.0356 1.0162 2.9862 e - 145 2.9862 e - 145 2.9862 e - 145 7.8319 e - 147 0.009802
J 5 4.1871 1.0349 1.0141 6.5830 e - 102 6.5830 e - 102 6.5830 e - 102 2.7466 e - 103 0.007952
NAJC1 5 6.4193 1.0493 1.0104 0 0 0 0 0.017668
NAJC2 5 6.1729 1.0520 1.0110 0 0 0 0 0.014849

If we look at the numerical results of Table 3, our methods NAJC1 and NAJC2 need the least number of iterations and have the highest order of convergence and the highest efficiency index. For this case, the absolute error committed by Jarratt's method is lower than in our procedures, due to the use of an initial estimation very close to the solution and a very small temporal distance.

If we observe the numerical results of the Tundra Orbit (Table 4), we note that the absolute error is stabilized, and we maintain the good results of Reference Orbit I. We can conclude that, by working with the two equations provided by Gauss as a system, we improve the original procedure, which is restricted to differences of true anomalies not greater than π/4: we are able to increase the admissible difference of true anomalies of the observations up to values close to π.

3.1. Other Nonlinear Problems

In order to continue checking the computational efficiency of the proposed schemes NAJC1 and NAJC2, we use in the following some academic examples.

$F_1(x) = (f_1(x), f_2(x))^T$, with $x = (x_1, x_2)^T$, $f_i: \mathbb{R}^2 \to \mathbb{R}$, $i = 1, 2$, and $\alpha \approx (3.47063096, -2.47063096)$:
$$f_1(x) = e^{x_1}e^{x_2} + x_1\cos x_2, \qquad f_2(x) = x_1 + x_2 - 1. \quad (38)$$

$F_2(x) = (f_1(x), f_2(x), f_3(x))^T$, with $x = (x_1, x_2, x_3)^T$, $f_i: \mathbb{R}^3 \to \mathbb{R}$, $i = 1, 2, 3$, and $\alpha \approx (2.14025, -2.09029, -0.223525)$:
$$f_1(x) = x_1^2 + x_2^2 + x_3^2 - 9, \qquad f_2(x) = x_1x_2x_3 - 1, \qquad f_3(x) = x_1 + x_2 - x_3^2. \quad (39)$$

$F_3(x) = (f_1(x), f_2(x), f_3(x), f_4(x))^T$, with $x = (x_1, x_2, x_3, x_4)^T$, $f_i: \mathbb{R}^4 \to \mathbb{R}$, $i = 1, \ldots, 4$, and $\alpha = \left(\pm\frac{1}{\sqrt{3}}, \pm\frac{1}{\sqrt{3}}, \pm\frac{1}{\sqrt{3}}, \mp\frac{1}{2\sqrt{3}}\right)$:
$$f_1(x) = x_2x_3 + x_4(x_2 + x_3), \qquad f_2(x) = x_1x_3 + x_4(x_1 + x_3), \qquad f_3(x) = x_1x_2 + x_4(x_1 + x_2), \qquad f_4(x) = x_1x_2 + x_1x_3 + x_2x_3 - 1. \quad (40)$$
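The quoted solutions can be checked by direct substitution; the tolerances below merely reflect the number of digits given for each approximation:

```python
import math

def F1(x1, x2):
    return (math.exp(x1) * math.exp(x2) + x1 * math.cos(x2), x1 + x2 - 1.0)

def F2(x1, x2, x3):
    return (x1**2 + x2**2 + x3**2 - 9.0, x1 * x2 * x3 - 1.0, x1 + x2 - x3**2)

def F3(x1, x2, x3, x4):
    return (x2 * x3 + x4 * (x2 + x3),
            x1 * x3 + x4 * (x1 + x3),
            x1 * x2 + x4 * (x1 + x2),
            x1 * x2 + x1 * x3 + x2 * x3 - 1.0)

print(max(abs(v) for v in F1(3.47063096, -2.47063096)))       # essentially zero
print(max(abs(v) for v in F2(2.14025, -2.09029, -0.223525)))  # essentially zero
t = 1.0 / math.sqrt(3.0)
print(max(abs(v) for v in F3(t, t, t, -t / 2.0)))             # essentially zero
```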

The results are shown in Tables 5, 6, and 7, where $\varepsilon_{\alpha_i}$ denotes $|f_i(\alpha)|$. From these tables, the best schemes in terms of precision are NAJC1 and NAJC2, even when the initial estimation is far from the solution. In system (b), Traub's method does not converge after five hundred iterations from this initial estimation.

4. Dynamical Planes

From the numerical results presented in Section 3, our proposed methods are competitive with respect to existing ones. Nevertheless, from the dynamical point of view, we have checked that they can also have better behavior in terms of stability and wideness of the region of convergence.

For the representation of the convergence basins of our procedures and the classical methods, we have used the software described in . We draw a mesh of two thousand points per axis; each point of the mesh is a different initial estimation introduced in each procedure. If the method reaches the final solution in fewer than five hundred iterations, the point is drawn in orange; the color is more intense when the number of iterations is lower. Otherwise, if the method reaches the maximum number of iterations, the point is drawn in black. Each axis represents one of the working variables: the sector-to-triangle ratio y on the abscissa and the difference of eccentric anomalies on the ordinate. In addition, we use Reference Orbit I, defined in Table 1; the solution of the nonlinear system is in this case around (1, 0.1), so we choose [0,3]×[-1,1] as the region of representation.
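A minimal stand-in for this experiment (the published planes were produced with dedicated software) can be sketched as follows: Newton's method is run from every point of a coarse mesh on [0,3]×[-1,1] for the Gauss system (3)-(4), storing the number of iterations needed. The values of l and m are hypothetical stand-ins for the Reference Orbit I data, and the mesh and iteration cap are reduced to keep the sketch fast:

```python
import numpy as np

def gauss_F(v, l, m):
    y, dE = v
    x = np.sin(dE / 4.0) ** 2
    X = (dE - np.sin(dE)) / np.sin(dE / 2.0) ** 3
    return np.array([y**2 * (l + x) - m, y**2 * (y - 1.0) - m * X])

def newton_iters(v0, l, m, tol=1e-10, kmax=100):
    """Iterations Newton needs from v0, or kmax if it fails to converge."""
    v = v0.copy()
    for k in range(kmax):
        r = gauss_F(v, l, m)
        if not np.all(np.isfinite(r)):
            return kmax
        if np.linalg.norm(r) < tol:
            return k
        J = np.empty((2, 2))            # forward-difference Jacobian
        for j in range(2):
            vp = v.copy()
            vp[j] += 1e-8
            J[:, j] = (gauss_F(vp, l, m) - r) / 1e-8
        try:
            v = v - np.linalg.solve(J, r)
        except np.linalg.LinAlgError:
            return kmax
    return kmax

l, m = 0.0049794, 0.0076276          # hypothetical data, root near (1.01, 0.2)
ys = np.linspace(0.05, 3.0, 30)      # coarse stand-in for the 2000-point mesh
dEs = np.linspace(-1.0, 1.0, 30)
grid = np.array([[newton_iters(np.array([y0, dE0]), l, m)
                  for y0 in ys] for dE0 in dEs])
print((grid < 100).mean())           # fraction of converging starting points
```

Coloring the grid by iteration count (orange shades for convergence, black for failure) reproduces the kind of dynamical plane shown in Figures 2 and 3.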

In Figure 2, we show the dynamical planes of the classical methods. It can be observed that, in general, higher order means lower stability. If we focus our attention on the attraction basins of each plane, the method with the greatest stability is Newton, and the procedure with the lowest number of iterations is Jarratt.

Figure 2: Dynamical planes from classical methods and Reference Orbit I.

Newton

Traub

Jarratt

As we can see in Figure 3, both NAJC1 and NAJC2 have large areas of stability, similar to Newton's, but with order of convergence six. From the intensity of the orange in the attraction basins, the two schemes need the lowest number of iterations. Moreover, comparing both procedures, the attraction basin of NAJC1 is more dispersed than that of NAJC2, which makes the former more unstable than the latter.

Figure 3: Dynamical planes from new methods and Reference Orbit I.

NAJC1

NAJC2

5. Conclusions

The classical Gauss' method for preliminary orbit determination has been improved by reformulating it as a nonlinear system. This fact increases the admissible spread of the observations (in order to ensure convergence) from π/4 to π and reduces the number of iterations of the process.

The new sixth-order methods NAJC1 and NAJC2 belonging to the class of methods designed by using matricial weight functions have good global properties of convergence and stability, even for initial estimations far from the solution.

It is a well-known fact that the size of the area of convergence usually shrinks as the order of convergence grows. However, our methods hold a basin of attraction comparable with Newton's one in spite of their sixth order of convergence.

Acknowledgments

The authors thank the anonymous referees for their valuable comments and suggestions. This research was supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02.

References

[1] K. J. Fidkowski, T. A. Oliver, J. Lu, and D. L. Darmofal, "p-Multigrid solution of high-order discontinuous Galerkin discretizations of the compressible Navier-Stokes equations," Journal of Computational Physics, vol. 207, no. 1, pp. 92–113, 2005.
[2] D. D. Bruns and J. E. Bailey, "Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state," Chemical Engineering Science, vol. 32, no. 3, pp. 257–264, 1977.
[3] J. A. Ezquerro, J. M. Gutiérrez, M. A. Hernández, and M. A. Salanova, "Chebyshev-like methods and quadratic equations," Revue d'Analyse Numérique et de Théorie de l'Approximation, vol. 28, no. 1, pp. 23–35, 1999.
[4] Y. Zhang and P. Huang, "High-precision time-interval measurement techniques and methods," Progress in Astronomy, vol. 24, no. 1, pp. 1–15, 2006.
[5] Y. He and C. H. Q. Ding, "Using accurate arithmetics to improve numerical reproducibility and stability in parallel applications," Journal of Supercomputing, vol. 18, no. 3, pp. 259–277, 2001.
[6] N. Revol and F. Rouillier, "Motivations for an arbitrary precision interval arithmetic and the MPFI library," Reliable Computing, vol. 11, no. 4, pp. 275–290, 2005.
[7] M. Petković, B. Neta, L. Petković, and J. Džunić, Multipoint Methods for Solving Nonlinear Equations, Elsevier, New York, NY, USA, 2013.
[8] M. T. Darvishi, "Some three-step iterative methods free from second order derivative for finding solutions of systems of nonlinear equations," International Journal of Pure and Applied Mathematics, vol. 57, no. 4, pp. 557–573, 2009.
[9] G. Adomian, Solving Frontier Problems of Physics: The Decomposition Method, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1994.
[10] D. K. R. Babajee, M. Z. Dauhoo, M. T. Darvishi, and A. Barati, "A note on the local convergence of iterative methods based on Adomian decomposition method and 3-node quadrature rule," Applied Mathematics and Computation, vol. 200, no. 1, pp. 452–458, 2008.
[11] M. T. Darvishi and A. Barati, "A third-order Newton-type method to solve systems of nonlinear equations," Applied Mathematics and Computation, vol. 187, no. 2, pp. 630–635, 2007.
[12] M. T. Darvishi and A. Barati, "Super cubic iterative methods to solve systems of nonlinear equations," Applied Mathematics and Computation, vol. 188, no. 2, pp. 1678–1685, 2007.
[13] A. Cordero, E. Martínez, and J. R. Torregrosa, "Iterative methods of order four and five for systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 231, no. 2, pp. 541–551, 2009.
[14] J. F. Traub, Iterative Methods for the Solution of Equations, Chelsea Publishing, New York, NY, USA, 1982.
[15] D. K. R. Babajee, M. Z. Dauhoo, M. T. Darvishi, A. Karami, and A. Barati, "Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations," Journal of Computational and Applied Mathematics, vol. 233, no. 8, pp. 2002–2012, 2010.
[16] F. Soleymani, T. Lotfi, and P. Bakhtiari, "A multi-step class of iterative methods for nonlinear systems," Optimization Letters, 2013.
[17] M. T. Darvishi and A. Barati, "A third-order Newton-type method to solve systems of nonlinear equations," Applied Mathematics and Computation, vol. 187, no. 2, pp. 630–635, 2007.
[18] F. Awawdeh, "On new iterative method for solving systems of nonlinear equations," Numerical Algorithms, vol. 54, no. 3, pp. 395–409, 2010.
[19] D. K. R. Babajee, A. Cordero, F. Soleymani, and J. R. Torregrosa, "On a novel fourth-order algorithm for solving systems of nonlinear equations," Journal of Applied Mathematics, vol. 2012, Article ID 165452, 12 pages, 2012.
[20] A. Cordero, J. R. Torregrosa, and M. P. Vassileva, "Pseudocomposition: a technique to design predictor-corrector methods for systems of nonlinear equations," Applied Mathematics and Computation, vol. 218, no. 23, pp. 11496–11504, 2012.
[21] A. Cordero, J. R. Torregrosa, and M. P. Vassileva, "Increasing the order of convergence of iterative schemes for solving nonlinear systems," Journal of Computational and Applied Mathematics, vol. 252, pp. 86–94, 2013.
[22] F. Soleymani and P. S. Stanimirović, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal, vol. 2013, Article ID 708647, 11 pages, 2013.
[23] F. Soleymani, P. S. Stanimirović, and M. Z. Ullah, "An accelerated iterative method for computing weighted Moore–Penrose inverse," Applied Mathematics and Computation, vol. 222, pp. 365–371, 2013.
[24] J. R. Sharma, R. K. Guha, and R. Sharma, "An efficient fourth order weighted-Newton method for systems of nonlinear equations," Numerical Algorithms, vol. 62, no. 2, pp. 307–323, 2013.
[25] J. R. Sharma and H. Arora, "On efficient weighted-Newton methods for solving systems of nonlinear equations," Applied Mathematics and Computation, vol. 222, pp. 497–506, 2013.
[26] M. F. Abad, A. Cordero, and J. R. Torregrosa, "Fourth- and fifth-order methods for solving nonlinear systems of equations: an application to the global positioning system," Abstract and Applied Analysis, vol. 2013, Article ID 586708, 10 pages, 2013.
[27] P. R. Escobal, Methods of Orbit Determination, Robert E. Krieger, 1965.
[28] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, "A modified Newton-Jarratt's composition," Numerical Algorithms, vol. 55, no. 1, pp. 87–99, 2010.
[29] P. Jarratt, "Some fourth order multipoint iterative methods for solving equations," Mathematics of Computation, vol. 20, pp. 434–437, 1966.
[30] A. Cordero and J. R. Torregrosa, "Variants of Newton's method using fifth-order quadrature formulas," Applied Mathematics and Computation, vol. 190, no. 1, pp. 686–698, 2007.
[31] F. Chicharro, A. Cordero, and J. R. Torregrosa, "Drawing dynamical and parameter planes of iterative families and methods," The Scientific World Journal, vol. 2013, Article ID 780153, 11 pages, 2013.