Optimization problems defined by objective functions whose derivatives are unavailable, or available only at great cost, are emerging throughout computational science. The main aim of this paper is therefore to attain the highest possible local convergence order for a fixed number of functional evaluations, and thus to obtain efficient solvers for one-variable nonlinear equations, by a procedure that is entirely derivative-free. To this end, we build on the fourth-order uniparametric family of Kung and Traub to propose and analyze two classes of three-step derivative-free methods that use only four pieces of information per full iteration to reach the optimal order eight and the optimal efficiency index 1.682. A large number of numerical tests confirm the applicability and efficiency of the methods produced from the new classes.
This paper focuses on finding approximate solutions to nonlinear scalar and sufficiently smooth equations by derivative-free methods. Techniques such as the false position method in the root-finding of a nonlinear equation
Convergence in open methods is not guaranteed, but when such a method does converge, it does so much faster than the bracketing methods [
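The false position step mentioned above keeps a sign-changing bracket while using the secant intercept instead of the midpoint. A minimal illustrative sketch (the test equation is our own choice, not one of the paper's examples):

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Bracketing regula falsi: [a, b] must satisfy f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root not bracketed")
    for _ in range(max_iter):
        # Secant intercept of the chord through (a, f(a)) and (b, f(b))
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Illustrative example: the real root of x^3 - x - 2 on [1, 2]
root = false_position(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Because one endpoint typically remains fixed, the convergence is only linear, which is exactly the drawback that motivates the higher-order open methods studied in this paper.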
Apart from involving the first derivative, Newton's iteration also requires the second derivative of the function when it is applied in
For the first time, Steffensen in [
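Steffensen's idea is to replace the derivative in Newton's iteration with the forward-difference quotient (f(x + f(x)) − f(x))/f(x), which preserves quadratic convergence while using only two function evaluations per step. A minimal sketch, with an illustrative test equation of our own:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Derivative-free Steffensen iteration for a simple root of f."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Forward-difference approximation of f'(x) with step h = f(x)
        denom = f(x + fx) - fx
        if denom == 0:
            raise ZeroDivisionError("flat secant; try another guess")
        x = x - fx * fx / denom
    return x

# Illustrative example: root of f(x) = x^2 - 2 near 1.5
root = steffensen(lambda x: x * x - 2, 1.5)
```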
In this work, we suggest novel classes of three-step four-point iterative methods that are without memory, derivative-free, and optimal, and are therefore well suited to hard problems. The contents of the paper unfold as follows. Section
In order to contribute a class of derivative-free methods with a high efficiency index, we take into consideration the optimal two-step fourth-order uniparametric family of Kung and Traub [
This scheme includes four evaluations of the function and one of its first-order derivative to reach order eight, giving 1.516 as its efficiency index. To improve the efficiency index, we first consider the same approximation as used in the second step of (
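These figures follow from the classical efficiency index E = p^(1/n), where p is the convergence order and n is the number of evaluations per full iteration; a quick check of the two values quoted above:

```python
def efficiency_index(p, n):
    """Efficiency index E = p**(1/n): order p, n evaluations per step."""
    return p ** (1.0 / n)

# Order eight with four f-evaluations plus one derivative (five evaluations)
e_with_derivative = efficiency_index(8, 5)   # about 1.516
# Optimal order eight with only four f-evaluations (derivative-free target)
e_derivative_free = efficiency_index(8, 4)   # about 1.682
```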
Let
We expand the terms of (
Now, any optimal three-step four-point derivative-free method without memory can be produced by using (
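Although the weight functions that make the class optimal are those derived above, the underlying three-step, four-point, derivative-free structure can be sketched schematically. The skeleton below replaces every derivative by a divided difference; without the optimal weights it does not reach order eight, so it is a structural illustration only (function and guess are our own):

```python
def three_step_df(f, x0, tol=1e-12, max_iter=100):
    """Structural skeleton of a three-step, four-point, derivative-free
    iteration: four f-evaluations per cycle, with every derivative
    replaced by a divided difference f[a, b] = (f(a) - f(b)) / (a - b)."""
    dd = lambda a, fa, b, fb: (fa - fb) / (a - b)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        w = x + fx                      # auxiliary fourth point
        fw = f(w)
        y = x - fx / dd(x, fx, w, fw)   # Steffensen-type first step
        fy = f(y)
        z = y - fy / dd(y, fy, w, fw)   # second step
        fz = f(z)
        x = z - fz / dd(z, fz, y, fy)   # third step
    return x

# Illustrative example: root of f(x) = x^2 - 2 near 1.5
root = three_step_df(lambda x: x * x - 2, 1.5)
```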
We should here recall that per computing step for any of the methods from the new class, the values of
Many nonlinear functions arise from complex environmental engineering problems in which the objective depends on the output of a numerical simulation of a physical process. Such simulators are expensive to evaluate because they numerically solve systems of partial differential equations governing the underlying physical phenomena. Function evaluation therefore remains the dominant expense in these optimization problems, since savings in simulation time are often offset by demands for increased simulation accuracy. For these reasons, algorithms like (
Before proceeding to the next sections, we must discuss another similar class of derivative-free methods that is attainable by choosing a different approximation in the first step of (
Let
The proof of this theorem is similar to the proof of Theorem
Now, by using (
In this section, we briefly present some well-known high-order derivative-free techniques for finding the simple zeros of nonlinear equations, for the sake of comparison. Kung and Traub in [
From a computational point of view, the efficiency index of our classes of derivative-free methods without memory (
We now check the effectiveness of the novel derivative-free classes of iterative methods. In order to do this, we choose (
We have compared (
The examples considered in this study.
| Test functions | Zeros |
|---|---|
Results of convergence for different derivative-free methods.
| Test function | Initial guesses | IT (five compared methods) | TNE (five compared methods) | Remarks |
|---|---|---|---|---|
| | 0.3; 0.2 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | −0.9; −0.4 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | 1.25; 1.6 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | −1.3; −1 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | 0; −0.7 | 10, 5, 3, 3, 3 | 20, 15, 12, 12, 12 | |
| | −2.2; −1.7 | 8, 4, 3, 3, 3 | 16, 12, 12, 12, 12 | |
| | 3; 0.6 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | 1.3; 0.3 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | Div. for one method from 1.3 |
| | 0.6; 0.4 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | Div. for one method from 0.4 |
| | 1.36; 1.32 | 8, 4, 3, 3, 3 | 16, 12, 12, 12, 12 | |
| | −1; 0.3 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | Div. for one method from −1 |
| | 1.4; 1.3 | 9, 4, 3, 5, 3 | 18, 12, 12, 20, 12 | Div. for two methods from 1.3 |
| | 0.3; 0.8 | 18, 4, 3, 3, 3 | 36, 12, 12, 12, 12 | |
| | 0.2; 0.5 | 8, 4, 3, 3, 3 | 16, 12, 12, 12, 12 | |
| | 0.3; 0.4 | 8, 4, 3, 3, 3 | 16, 12, 12, 12, 12 | |
| | 9; 9.2 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | 0.1; 0.8 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | Div. for one method from 0.1 |
| | 0.3; 0.2 | 9, 4, 3, 3, 3 | 18, 12, 12, 12, 12 | |
| | −0.7; −0.9 | 10, 4, 3, 3, 3 | 20, 12, 12, 12, 12 | Div. for three methods from −0.9 |
| | 0.9; 0.7 | 8, 4, 3, 3, 3 | 16, 12, 12, 12, 12 | |
It is important to review the proof of convergence for our proposed classes of methods (or the compared methods in Table
It can be observed from Table
We have completed Table
An important aspect of implementing high-order nonlinear solvers is finding very robust initial guesses to start the process when high-precision computing is needed. As discussed in Section
We thus have an efficient list of initial approximations to the zeros of a once-differentiable nonlinear function with finitely many zeros in an interval. The number of zeros and the graph of the function, including the positions of the zeros, can be obtained by the following commands (see Figure
The graph of the function
For this test, there are 33 zeros in the considered interval, which can easily be used as starting points for our proposed high-order derivative-free methods. Note that the output of the vector “
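Since the original commands are not reproduced here, a comparable scan for sign changes on a fine grid, refined by bisection, can be sketched as follows (the function and interval are illustrative, not the test above):

```python
import math

def zero_guesses(f, a, b, n=10000):
    """Scan [a, b] on a uniform grid and bisect each sign change
    to produce initial guesses for the zeros of f."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    guesses = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) == 0:
            guesses.append(x0)
        elif f(x0) * f(x1) < 0:
            lo, hi = x0, x1
            for _ in range(60):  # bisection refinement of the bracket
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            guesses.append(0.5 * (lo + hi))
    return guesses

# Illustrative example: sin(x) has zeros at k*pi; three lie in (0.5, 10)
roots = zero_guesses(math.sin, 0.5, 10)
```

Each returned guess is accurate enough to start a high-order derivative-free iteration; zeros of even multiplicity, where the sign does not change, would require a finer test.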
The importance and wide application of nonlinear solvers spurred the construction of new methods at the beginning of the new century. On the other hand, when derivative evaluation is expensive or in some cases unavailable, the need for higher-order methods with a high efficiency index that require no derivative evaluations per full cycle is increasingly felt in scientific computing.
Hence, this paper has recommended two wide classes of optimal eighth-order methods without memory for solving nonlinear scalar equations numerically. The merits of the methods produced from our classes are a high efficiency index, being entirely derivative-free, high accuracy in the numerical examples, and consistency with the conjecture of Kung and Traub. The analytical proof of the main contribution was given in Section
Developing iterations with memory based on the classes (