A Generalized Regula Falsi Method for Finding Zeros and Extrema of Real Functions

Many zero-finding numerical methods are based on the Intermediate Value Theorem, which states that a zero of a real function f : R → R is bracketed in a given interval [A, B] ⊂ R if f(A) and f(B) have opposite signs; that is, f(A) ⋅ f(B) < 0. However, some zeros cannot be bracketed this way because they do not satisfy the precondition f(A) ⋅ f(B) < 0; for example, local minima and maxima that annihilate f may not be bracketed by the Intermediate Value Theorem. In this case, we can always use a numerical method for bracketing extrema and then check whether the extremum found is a zero of f. Instead, this paper introduces a single numerical method, called the generalized regula falsi (GRF) method, to determine both zeros and extrema of a function. Consequently, it differs from the standard regula falsi method in that it is capable of finding any function zero in a given interval [A, B] ⊂ R even when the precondition of the Intermediate Value Theorem is not satisfied.

Although this class of methods satisfying the Intermediate Value Theorem succeeds in finding most zeros (i.e., crossing zeros), they fail to bracket zeros that are also extrema, here called extremal zeros. We say that an extremum is bracketed in the interval [A, B] if f(A) and f(B), but not their derivatives, have identical signs; that is, f(A) ⋅ f(B) > 0 and f′(A) ⋅ f′(B) < 0. If f(A) > 0 and f(B) > 0, "go downhill, taking steps of increasing size, until your function starts back uphill" [4]. At this point, we have only to check whether such a minimum is a zero. Analogously, if f(A) < 0 and f(B) < 0, basically, "go uphill, taking steps of increasing size, until your function starts back downhill" [4]. Then, we have only to check whether such a maximum is a zero.
Despite the existence of algorithms for finding extrema (maxima and minima) of functions in the numerical analysis literature, there is no single iterative formula to find both crossing and extremal zeros of a real function. Such a method is described in the next section and is called the generalized regula falsi method.

Generalized Regula Falsi Method
The generalized regula falsi (GRF) method is based on the ratio of similar triangles. Its main novelty is that it can be used to compute both zeros and extrema through a single interpolation formula.
Let us consider the interval [A, B], as illustrated in Figure 1(a). As is known, the vector equation of the line that passes through the points P_A = (A, f(A)) and P_B = (B, f(B)) is given by

P(t) = (1 − t) P_A + t P_B,   (1)

with P_A = P(0) and P_B = P(1).
Remarkably, as explained below, the iterative formula (1) applies to both zeros and extrema.

Zeros. The computation of a zero is illustrated in Figure 1. Setting the y-component of (1) to zero and solving, we obtain the next estimate

x = (A f(B) − B f(A)) / (f(B) − f(A)),   (4)

that is, the interpolation formula of the well-known regula falsi method [1]. It is used to determine crossing zeros.

Extrema. Changing the sign of f(A) in (4), we obtain

x = (A f(B) + B f(A)) / (f(B) + f(A)),   (5)

that is, the next estimate x of a local minimum. Thus, formula (5) works equally well for local minima and maxima, independently of whether f(A) and f(B) are both positive or both negative. In fact, looking at (4) and (5), we see that it is enough to change the sign of either f(A) or f(B) to determine the next estimate x of an extremum through the regula falsi technique.
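The sign-flip observation above fits in a single C helper. This is a sketch with our own naming, not the paper's Algorithm 1:

```c
#include <math.h>

/* Next regula falsi estimate in [a, b], where fa = f(a) and fb = f(b).
   If fa * fb < 0, this is the standard regula falsi formula for a
   crossing zero; otherwise the sign of fa is flipped, which yields the
   extremum variant discussed in the text. */
double grf_step(double a, double fa, double b, double fb) {
    if (fa * fb > 0.0)
        fa = -fa;                       /* extremum: flip one sign */
    return (a * fb - b * fa) / (fb - fa);
}
```

For a crossing zero bracketed by f(1) = −1 and f(2) = 2, the helper returns the usual secant intercept 4/3; for a minimum bracketed by f(0) = f(2) = 1, it returns the midpoint 1.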
However, the condition f(A) ⋅ f(B) > 0 only indicates that there may be an extremum between A and B. We have to evaluate the finite difference d(x) = (f(x + h) − f(x))/h, with an infinitesimal h, at A and B and test whether they have opposite signs, that is, d(A) ⋅ d(B) < 0, to guarantee that an extremum exists in [A, B]. We use the finite difference (the discrete counterpart of the derivative) of a function at a given point for two main reasons.
(i) Consistency: the regula falsi method does not use derivatives. So, for consistency, the generalized regula falsi method does not use them either.
(ii) Differentiability: there are zeros and extrema at which derivatives do not exist. For example, f6(x) = |x|^(1/3) in Figure 3(f) has a cusp at x = 0 that is also a minimum. Derivatives cannot be used in this case because the program would break down.
Therefore, we have to check whether the finite difference d(x) of the estimate x is very close to zero. If so, we have, approximately, an extremum at x. Besides, if f(x) ≈ 0, with an accuracy of 10^(−6), then x is also a zero.
In short, the literature offers separate numerical methods for calculating zeros and extrema. The main novelty of the method introduced in this paper is its ability to compute both zeros and extrema by means of a single interpolation formula (cf. formula (1)).

Convergence Analysis.
The convergence of the GRF method is guaranteed by the following theorem.
Theorem 1. Let x̄ be a zero or an extremum of f bracketed in the interval [A, B]. Then, the GRF method converges to x̄.

Proof. Without loss of generality, it is equivalent to prove the existence of a sequence {I_k}, k = 1, 2, ..., of intervals I_k = [x_k, x_{k+1}] converging to x̄, with x̄ ∈ I_k for all k.

Let L(I_k) be the length of the interval I_k. From the interpolation formula (1) it follows that L(I_k) < L(I_{k−1}) for all k. Thus, the sequence {L(I_k)} is monotone decreasing. In addition, it is bounded because 0 ≤ L(I_k) ≤ |B − A|. So, by the Monotone Convergence Theorem, we can conclude that {L(I_k)} is convergent. Taking into account that x̄ ∈ I_k for all k, we conclude that the GRF method converges to x̄, where x̄ is a zero or an extremum of f.
It is known that the zero-finding regula falsi method converges linearly [9].Intuitively, we expect that the GRF method also converges linearly because it uses the same interpolation formula and bracketing strategy for both zeros and extrema.

Theorem 2. The GRF method has linear convergence.
Proof. Let x_0, x_1, x_2, ..., x_k, ... be a sequence of approximations to either a zero or an extremum x̄ produced by the GRF method, where lim_{k→∞} x_k = x̄. Let e_k = x_k − x̄ be the error in the k-th iterate.

GRF Implementation.
The C function that implements the generalized regula falsi method returns either a zero or an extremum bracketed in a given interval [A, B] (see Algorithm 1). Note that, for the sake of convenience, we have used the setting h = 1.0e−7 for evaluating finite differences, but this is clearly prone to cancellation errors. A better choice for h would be √ε, where ε denotes the machine precision, which is of the order of 2.2 × 10^(−16) for double-precision floating-point numbers.
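A minimal C sketch of such a function, assuming the bracketing rules and sign-flip trick described in the previous section (the names `grf` and `fdiff` and the termination constants are ours, not necessarily those of Algorithm 1):

```c
#include <math.h>

#define H     1e-7   /* finite-difference step (sqrt of machine epsilon is safer) */
#define TOL   1e-6   /* accuracy used in the paper */
#define MAXIT 1000

/* Forward finite difference d(x) = (f(x + h) - f(x)) / h. */
static double fdiff(double (*f)(double), double x) {
    return (f(x + H) - f(x)) / H;
}

/* Returns an estimate of a zero or an extremum of f bracketed in [a, b].
   If f(a) * f(b) < 0, it behaves as the standard regula falsi method;
   otherwise it looks for an extremum by flipping the sign of f(a). */
double grf(double (*f)(double), double a, double b) {
    double fa = f(a), fb = f(b);
    int extremum = (fa * fb > 0.0);
    for (int k = 0; k < MAXIT; k++) {
        double ga = extremum ? -fa : fa;            /* sign flip for extrema */
        double x  = (a * fb - b * ga) / (fb - ga);  /* interpolation formula */
        double fx = f(x);
        /* Stop when f (zero) or the finite difference (extremum) vanishes. */
        if (fabs(extremum ? fdiff(f, x) : fx) < TOL)
            return x;
        /* Keep the sub-interval that still brackets the solution. */
        if (extremum ? (fdiff(f, a) * fdiff(f, x) < 0.0) : (fa * fx < 0.0)) {
            b = x; fb = fx;
        } else {
            a = x; fa = fx;
        }
    }
    return 0.5 * (a + b);   /* best guess if MAXIT is exhausted */
}

/* Example functions: a crossing zero and a zeroed minimum. */
static double f_zero(double x) { return x * x - 2.0; }           /* zero at sqrt(2) */
static double f_min(double x)  { return (x - 1.0) * (x - 1.0); } /* minimum at 1 */
```

Note that one endpoint may stay fixed for many iterations (e.g., for an extremum bracketed asymmetrically), which is precisely the slow-convergence behavior the moving GRF variant of the next section addresses.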

Moving GRF Method
As illustrated in Figure 2(a), the regula falsi method may converge slowly to a zero, spending many iterations pulling distant bounds closer. For example, the crossing zero of the function f4(x) = tan(x)^tan(x) − 10^3 on [1.3, 1.4] (Figure 3(d)) was found only after 222 iterations. Likewise, the crossing zero of the function f5 on [−10.0, 10.0] (Figure 3(e)) was only found after 999 iterations. The slow convergence to some zeros is due to the fact that the GRF method always retains a fixed endpoint of the interval [A, B].
The leading idea for speeding up the convergence of the GRF method is then to move the fixed point as well. To achieve this, we replace the fixed endpoint of the interval by the point C given by formula (15), where d(A) and d(B) are the finite differences at A and B, respectively. Note that the estimate C may be closer to the solution than the estimate produced by Newton's method, which results from the intersection between a single tangent and the x-axis (Algorithm 2). So, the C function that implements the moving generalized regula falsi (mGRF) method differs from GRF in that we have to include statements for calculating the moving point C, which results from intersecting the tangents at A and B, as well as statements to make sure that C remains bracketed in the interval.

Convergence of mGRF Method.
The convergence of the mGRF method is stated by the following theorem.

Theorem 3. Let x̄ be a zero or an extremum of f bracketed in the interval [A, B]. Then, the mGRF method converges to x̄.

Proof. The mGRF method uses two preliminary estimates A and B (i.e., the endpoints of the interval [A, B]) to generate two sequences of estimates, {x_k} and {c_k}, that supposedly march towards the solution x̄.
Without loss of generality, we can say that the GRF method generates the sequence {x_k} converging to x̄, as proved by Theorem 1.
It remains to prove that the sequence {c_k}, generated by the computation of the moving point (cf. formula (15)), also converges to x̄.

Mathematical Problems in Engineering
Let L(J_k) be the length of the interval determined by consecutive estimates c_k. But, as said before, the new estimate c_k is only taken into account in the convergence process if it lies in the open interval ]x_{k−2}, x_{k−1}[; otherwise, c_k is set equal to the estimate x_k ∈ ]x_{k−2}, x_{k−1}[ determined by the GRF method; that is, c_k = x_k. From (17) and (19), this implies that 0 < λ < 1 and L(J_k) < L(J_{k−1}) for all k. Thus, the sequence {L(J_k)} is monotone decreasing. In addition, it is bounded because 0 < L(J_k) ≤ |B − A|. So, by the Monotone Convergence Theorem, we can conclude that {L(J_k)} is convergent. Taking into account that x̄ ∈ J_k for all k, we can also conclude that the mGRF method converges, because the sequence {c_k} converges to x̄, where x̄ is a zero or an extremum of f.

Experimental Results
We have carried out a number of convergence tests in order to assess the convergence of the generalized regula falsi (GRF) method and of its superlinear variant (mGRF). For that purpose, we used a MacBook Pro laptop powered by a 2.33 GHz Intel Core 2 Duo processor, with 3 GB of 667 MHz DDR2 SDRAM, running the Mac OS X operating system (version 10.6.8).
4.1. Functions, Zeros, and Extrema. We have used a number of real functions to test the convergence of the method, including algebraic (i.e., polynomial) and transcendental functions, some of which are depicted in Figure 3; for example, f1 and f3 are algebraic functions, whereas f2 and f4 are transcendental. These functions were chosen because they cover most types of zeros and (local) extrema, as shown in what follows. For testing purposes, we carried out the convergence experiments for crossing zeros and extrema separately, as described below.

Convergence to Crossing Zeros. The experimental results
shown in Table 1 put in evidence how different the GRF and mGRF methods are in terms of convergence to crossing zeros. According to Theorem 2, GRF has linear convergence. But, following the results shown in Figure 4 (top), mGRF seems to have superlinear convergence.

As is known, GRF converges very slowly to a crossing zero when the slope of the function graph changes very quickly near such a zero, as is the case of those shown in Figures 3(d)-3(e). For example, in Figure 3(d), the crossing zero of f4 on the narrow interval ]1.3, 1.4[ was found after 222 iterations, while that of f5 in Figure 3(e) was found after 999 iterations, though in the much bigger interval [−10, 10] (cf. Table 1).

As explained above, mGRF is faster than GRF because both endpoints of the bracketing interval converge to the solution, while GRF keeps one of the endpoints fixed. The mGRF convergence rate seems to be at least quadratic because the estimates produced by (15) are usually closer to the solution than those produced by Newton's method. This is confirmed by the semilogarithmic graphs of error versus number of iterations shown in Figure 4 (top). The convergence curve in black concerns the crossing zero of f1 in the interval [3.0, 3.3], while the red, blue, and green curves represent the convergence to the crossing zeros of f3, f4, and f5 (cf. Table 1).

Convergence to Extrema.
Unlike the numerical methods found in the literature, the GRF and mGRF methods are capable of computing both zeros and extrema of a real function through a single interpolation formula. However, as shown in Figure 4 (bottom), the convergence rate for extrema is lower than for crossing zeros and tends to be linear. This is explained by the fact that some estimates jump from the sequence {c_k} to the sequence {x_k}, and vice versa.
The data shown in Table 2 concern some extrema of f1, f2, and f6, whose convergence curves are plotted in Figure 4 (bottom); the red curve concerns the minimum of f1 in the interval [2.0, 3.0], the magenta curve the convergence to the maximum of f2 in the interval [1, 1.8], and the cyan curve the zeroed minimum (cusp) of f6 in the interval [−0.5, 1.5]. This latter case shows that the method proposed in this paper is also capable of calculating a zero at which f is not differentiable. These semilogarithmic graphs suggest that mGRF has linear convergence for extrema. Nevertheless, the red convergence curve associated with the local maximum of f1 in the interval [−0.9, −0.4] may suggest that superlinear convergence for extrema is also attainable, but that would require redesigning the algorithm somehow.

Conclusions
The experimental results presented in the previous section show us the following.
(i) It is feasible to have a single numerical method to determine both zeros and extrema of a real function.
(ii) The (m)GRF algorithm works well even under conditions of lack of differentiability. So, singular zeros such as cusps and corners can also be computed.
(iii) It is also feasible to compute inflection points. For that, the algorithm must include the computation of second-order finite differences.
This algorithm has been designed and implemented to solve a difficult problem in computer graphics and geometric modeling, namely, rendering sign-invariant components of implicit curves. For example, the circle described by the function f(x, y) = (x² + y² − 1)² = 0 in R² cannot be sampled, and thus rendered, by algorithms that make use of the Intermediate Value Theorem, because the function is positive everywhere except at the curve points, on which it takes the value 0; that is, all the circle points are extrema.
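To make the point concrete, consider the restriction of f to the scanline y = 0. A small illustrative C check (the names `F0` and `fdiff` are ours) shows that the sign test of the Intermediate Value Theorem fails around the circle point x = 1, while the finite-difference test used by (m)GRF does bracket it:

```c
#define H 1e-7

/* Restriction of f(x, y) = (x^2 + y^2 - 1)^2 to the scanline y = 0. */
static double F0(double x) {
    double g = x * x - 1.0;
    return g * g;               /* nonnegative everywhere, zero at x = +/-1 */
}

/* Forward finite difference, as used by the (m)GRF method. */
static double fdiff(double (*f)(double), double x) {
    return (f(x + H) - f(x)) / H;
}
```

On [0.5, 1.5], F0(0.5) ⋅ F0(1.5) > 0, so no sign change brackets the circle point x = 1; however, the finite differences at the endpoints have opposite signs, so the extremum test d(A) ⋅ d(B) < 0 brackets it as a zeroed minimum.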

Figure 3: Graphs of six real functions: (a) f1 has 2 local maxima, 1 local minimum, 2 zeroed minima, and 2 crossing zeros; (b) f2 has an alternating sequence of local maxima and zeroed minima; (c) f3 has a single crossing zero; (d) f4 has a single crossing zero; (e) f5 also has a single crossing zero; (f) f6 has a zeroed minimum that is a cusp.

(i) Crossing Zeros (cZ): a crossing zero is a point at which the function graph crosses the x-axis. We find 5 crossing zeros in Figure 3. More specifically, f1 has 2 crossing zeros in the interval [1.5, 3.5], f3 has 1 crossing zero in the interval [−2.0, −1.0], f4 has 1 crossing zero in the interval [1.3, 1.4], and f5 has 1 crossing zero in the interval [1.5, 2.5].
(ii) Minima (m): we find 1 minimum in Figure 3; f1 has 1 local minimum in the interval [2.0, 3.0].
(iii) Maxima (M): we have 4 maxima in Figure 3; f1 has 2 local maxima in the interval [−1.0, 2.0], while f2 possesses 2 local maxima, the first in the interval [1.0, 2.0] and the second in the interval [4.0, 5.0].
(iv) Zeroed Minima (Zm): a local zeroed minimum is a point at which the function takes the value 0 and has a local minimum; that is, the function graph touches but does not cross the x-axis. We have 5 zeroed minima in Figure 3. In particular, f1 has 2 local zeroed minima in the interval [−1.0, 1.0], f2 also possesses 2 local zeroed minima, the first at x = 0 and the second in the interval [3.0, 4.0], whereas f6 has 1 local zeroed minimum at x = 0 that is a cusp.
(v) Zeroed Maxima (ZM): a local zeroed maximum is a point at which the function takes the value 0 and has a local maximum.

Figure 4: Semilogarithmic graphs relative to convergence to zeros and extrema.
To determine the speed of convergence, subtract x̄ from both sides of (15) and divide by e_k = x_k − x̄; formula (15) can then be rewritten in terms of the errors e_k.

Table 1 :
Experimental convergence results for zeros using GRF and mGRF methods.

Table 2 :
Experimental convergence results for extrema using GRF and mGRF methods.