Polynomiography Based on the Nonstandard Newton-Like Root Finding Methods

(viii) In 2012 Chugh et al. introduced the CR iteration in [22]:

x_{n+1} = (1 − α_n) y_n + α_n T(y_n),
y_n = (1 − β_n) T(x_n) + β_n T(z_n),
z_n = (1 − γ_n) x_n + γ_n T(x_n),   n = 0, 1, 2, …,   (10)

where α_n, β_n, γ_n ∈ [0, 1] for all n ∈ N and ∑_{n=0}^{∞} α_n = ∞.


Introduction
Polynomial root finding has played a key role in the history of mathematics. It is one of the oldest and most deeply studied mathematical problems. Around 2000 BC the Babylonians solved quadratic equations. Seventeen centuries later Euclid solved quadratics by geometrical construction. In 1539 Cardan gave the complete solution of cubics. In 1699 Newton introduced numerical iteration for root finding. About seventy years later Lagrange showed that polynomials of degree 5 or higher cannot be solved by the methods used for quadratics, cubics, and quartics. In 1799 Gauss proved the Fundamental Theorem of Algebra, and 27 years later Abel proved the impossibility of solving general equations of degree higher than 4; a general root finding method therefore has to be iterative and can only approximate the roots. In 1879 Cayley observed strange and unpredictable chaotic behaviour of the root approximation process while applying Newton's method to the equation z^3 − 1 = 0 in the complex plane. The solution of Cayley's problem was found in 1919 by Julia. Julia sets became an inspiration for the great discoveries of the 1970s: the Mandelbrot set and fractals [1]. The most recent interesting contribution to the history of polynomial root finding was made by Kalantari [2], who introduced polynomiography. It is the visualization of the process of approximating the roots of complex polynomials, using fractal and nonfractal images created via the mathematical convergence properties of iteration functions. An individual image is called a polynomiograph. Polynomiography combines aspects of both art and science. As a method that generates aesthetically appealing graphics, it was patented by Kalantari in the USA in 2005 [3].
It is known that, according to the Fundamental Theorem of Algebra, any complex polynomial p of degree n has exactly n roots and can be uniquely defined by its coefficients {a_n, a_{n−1}, …, a_1, a_0}:

p(z) = a_n z^n + a_{n−1} z^{n−1} + ⋯ + a_1 z + a_0,   (1)

or by its zeros (roots) {z_1, z_2, …, z_{n−1}, z_n}:

p(z) = a_n (z − z_1)(z − z_2) ⋯ (z − z_n).   (2)

The iterative root finding process can obviously be applied to both representations of p. Polynomiographs are generated as the result of the visualization of this process. The degree of the polynomial defines the number of basins of attraction (a root's basin of attraction is the area of the complex plane in which every point converges to that root under the given root finding method). The locations of the basins can be controlled by manually changing the positions of the roots in the complex plane.
Usually, polynomiographs are coloured based on the number of iterations needed to obtain the approximation of some polynomial root with a given accuracy and a chosen iteration method. The theoretical background and artistic applications of polynomiography are described in [2, 4].
Fractals and polynomiographs are both generated by iteration, but they differ. Fractals are self-similar, have a complicated, nonsmooth structure, and are independent of resolution. The shape of a polynomiograph, in contrast, can be controlled and designed in a much more predictable way. Generally, fractals and polynomiographs belong to different classes of graphical objects.
Summing up, polynomiography can be treated as a visualization tool based on the root finding process. It has many possible applications in education, mathematics, the sciences, art, and design [2].
In [5] the authors used the Mann and Ishikawa iterations instead of the standard Picard iteration to obtain a generalization of Kalantari's polynomiography and presented polynomiographs for the cubic equation z^3 − 1 = 0, permutation matrices, and doubly stochastic matrices. Latif et al. in [6], using the ideas from [5], applied the S-iteration in polynomiography. Earlier, other types of iterations had been used in [7] for superfractals and in [8] for fractals generated by IFSs. Julia and Mandelbrot sets [9] and antifractals [10] have also been investigated using the Noor iteration instead of the standard Picard iteration.
The paper is organised as follows. In Section 2 different kinds of iterations are presented. Section 3 recalls the known root finding methods, starting from Newton's method and continuing with its various generalizations. Section 4 treats the convergence tests used in iteration processes together with their modifications. In Section 5 the colouring methods for polynomiographs are introduced. Section 6 summarizes the theory of polynomiograph generation; as a result, the full algorithm of polynomiograph generation is given. Section 7 presents many polynomiographs obtained experimentally with the proposed algorithm. In Section 8 the time complexity of this algorithm is discussed. Section 9 concludes the paper and indicates future directions.

Iterations
Obviously, an equation of the form f(x) = 0 can be equivalently transformed into a fixed point problem x = T(x), where T is some operator [11]. Then, by applying an appropriate fixed point theorem, one can obtain information on the existence, or sometimes on both the existence and uniqueness, of the fixed point, that is, the solution of this equation.
Let (X, d) be a complete metric space and T : X → X a self-map on X. The set {x* ∈ X : T(x*) = x*} is the set of all fixed points of T. Many iterative processes for the approximation of fixed points have been described in the ample literature [12-19]. We recall below some iteration processes known from the literature. Assume that each iteration process starts from an arbitrary initial point x_0 ∈ X.
The standard Picard iteration is used in the Banach Fixed Point Theorem [12] to establish the existence of the fixed point x* of the operator T. The fixed point approximation is obtained under additional assumptions: the space X has to be a Banach space and the mapping T has to be contractive. The Mann [16], Ishikawa [13], and other iterations [12, 14, 15, 17-19] weaken the assumptions on the mapping T and still allow the approximation of fixed points. The dependencies among the presented types of iterations are shown in Figure 1. Our further considerations will be conducted in the space X = C, which is obviously a Banach space. We take x_0 ∈ C and constant parameter sequences α_n = α, β_n = β, γ_n = γ, a_n = a, and b_n = b for all n ∈ N, such that α ∈ (0, 1], β, γ, a, b ∈ [0, 1], and α + β ∈ (0, 1].
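As an illustration, the Picard, Mann, and Ishikawa schemes can be sketched in a few lines of code. This is a minimal sketch with our own function names and constant parameters alpha and beta, not the paper's implementation (which uses Processing); here the schemes are demonstrated on the Newton map of p(z) = z^3 − 1.

```python
def picard(T, z0, n):
    """Picard iteration: z_{n+1} = T(z_n)."""
    z = z0
    for _ in range(n):
        z = T(z)
    return z

def mann(T, z0, n, alpha):
    """Mann iteration: z_{n+1} = (1 - alpha) z_n + alpha T(z_n)."""
    z = z0
    for _ in range(n):
        z = (1 - alpha) * z + alpha * T(z)
    return z

def ishikawa(T, z0, n, alpha, beta):
    """Ishikawa iteration: an inner Mann-type step feeds the outer step."""
    z = z0
    for _ in range(n):
        y = (1 - beta) * z + beta * T(z)
        z = (1 - alpha) * z + alpha * T(y)
    return z

# Example operator: Newton map of p(z) = z^3 - 1 (illustrative choice)
newton_map = lambda z: z - (z**3 - 1) / (3 * z**2)
```

Note that Mann with alpha = 1 and Ishikawa with alpha = 1, beta = 0 both reduce to the Picard iteration, matching the dependencies in Figure 1.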

Newton's Root Finding Method and Its Generalizations
First, we recall the well-known Newton method for finding the roots of a complex polynomial. Then, following [2], some generalizations that use higher order iterations, described with the help of the Basic Family and the Euler-Schröder Family of Iterations, will be presented. At the end of this section a set of formulas for solving a polynomial equation with a complex variable will be given. In those formulas the standard Picard iteration will be replaced by the different types of nonstandard iterations defined in Section 2.

The Standard Newton's Method with Picard's Iteration.
Let p denote any complex polynomial. The standard Newton root finding procedure for p is given by the formula

z_{n+1} = N(z_n),   n = 0, 1, 2, …,

where N(z) = z − p(z)/p′(z) and z_0 ∈ C is a starting point.
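The procedure above can be sketched as follows (helper names are our own; the paper's software is written in Processing). The polynomial and its derivative are evaluated together by Horner's scheme, and the iteration stops with the standard test |z_{n+1} − z_n| < ε.

```python
def horner(coeffs, z):
    """Evaluate a polynomial (coeffs highest-degree first) and its
    derivative at z in a single Horner pass."""
    p, dp = 0j, 0j
    for c in coeffs:
        dp = dp * z + p   # derivative accumulates the previous p value
        p = p * z + c
    return p, dp

def newton(coeffs, z0, max_iter=100, eps=1e-12):
    """Standard Newton iteration z_{n+1} = z_n - p(z_n)/p'(z_n)."""
    z = z0
    for n in range(max_iter):
        p, dp = horner(coeffs, z)
        z_next = z - p / dp
        if abs(z_next - z) < eps:   # standard convergence test
            return z_next, n
        z = z_next
    return z, max_iter

# p(z) = z^3 - 1, starting point chosen near the root z = 1
root, iters = newton([1, 0, 0, -1], 0.8 + 0.3j)
```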

The Basic Family of Iterations.
The elements of the Basic Family of Iterations, denoted B_m for m = 2, 3, …, are defined with the help of certain determinants built from p and its derivatives (see [2]). The first elements of the family are

B_2(z) = z − p(z)/p′(z),
B_3(z) = z − 2 p(z) p′(z) / (2 p′(z)^2 − p(z) p″(z)).

One can easily see that B_2 is Newton's method, whereas B_3 is Halley's method. Using the functions B_m, Kalantari defined in [2] the Parametric Basic Family B_{m,λ}, where m = 2, 3, … and λ ∈ C. Let us note that for λ = 1 the Parametric Basic Family reduces to the Basic Family.
For the Euler-Schröder Family of Iterations E_m, m = 2, 3, …, one can easily see that E_2 is again Newton's method. The construction of the other elements of the family can be found in [2]. (i) The generalized Newton method with the Mann iteration (4):

z_{n+1} = (1 − α_n) z_n + α_n N(z_n),   n = 0, 1, 2, ….
All the iteration processes presented above converge to the roots of the polynomial p. Only the speed and the character of the convergence differ, and the basins of attraction of the roots of p look different for the different kinds of iterations used.
The application of nonstandard iterations perturbs the shape of the polynomial's basins and makes the polynomiographs look more "fractal". The aim of using more general iterations, instead of the Picard iteration, was not to improve the speed of convergence but to create images that are interesting from the aesthetic point of view.

Convergence Tests
In numerical algorithms based on iterative processes we need a stop criterion, that is, a test that tells us that the process has converged or is very near to the solution. This type of test is called a convergence test. Usually, in iterative processes that use feedback, like the root finding methods, the standard convergence test has the following form:

|z_{n+1} − z_n| < ε,   (31)

where z_{n+1}, z_n are two successive points in the iteration process and ε > 0 is a given accuracy. In 1988 Pickover [24] proposed a different convergence test for Halley's root finding method. By replacing the standard convergence test (31) with his own test (32), Pickover obtained new and diverse shapes of polynomiographs. Later, Gdawiec in [25] introduced methods of creating new convergence tests, which we briefly present in the rest of this section.
Looking at (31), we note that the calculation of the modulus is equivalent to the computation of the distance (in the complex plane) between the two elements. So one way of changing the test is to use different metrics in C. We know that the complex plane C is isometric with R^2, where the isometry φ : C → R^2 is given by [26]

φ(z) = (ℜ(z), ℑ(z)),   (33)

for every z ∈ C, where ℜ(z) and ℑ(z) denote the real and imaginary parts of z, respectively. Using the isometry we can define a metric d_C : C × C → [0, +∞) from a metric d : R^2 × R^2 → [0, +∞) in the following way [26]: d_C(z_1, z_2) = d(φ(z_1), φ(z_2)), where z_1, z_2 ∈ C. For instance, we can use some well-known metrics defined in R^2 [26]: (i) the taxicab metric, (ii) the supremum metric, and (iii) the l_p metrics, where 1 ≤ p ≤ +∞.
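For illustration, two of these metrics and the resulting convergence test can be sketched as follows (the function names are our own, not from [25] or [26]):

```python
def d_taxicab(z1, z2):
    """Taxicab (l_1) distance between two complex numbers via C ~ R^2."""
    return abs(z1.real - z2.real) + abs(z1.imag - z2.imag)

def d_sup(z1, z2):
    """Supremum (l_inf) distance between two complex numbers."""
    return max(abs(z1.real - z2.real), abs(z1.imag - z2.imag))

def converged(z_next, z, eps, metric):
    """Convergence test (31) with the modulus replaced by a chosen metric."""
    return metric(z_next, z) < eps
```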
Moreover, we can use different facts about metric spaces to create new metrics. For instance, let (X, d) be a metric space; then [26]: (i) if f : X → X is injective, then

d_f(x, y) = d(f(x), f(y))   (38)

is a metric on X; (ii) if d is a metric on X, then min{1, d} is a metric on X.
If we are interested in generating diverse patterns using polynomiography, we can also take a function that does not fulfil some of the metric axioms; for example, a power d^s of a metric with s > 1 does not, in general, fulfil the triangle inequality. We can also omit the assumption about the injectivity of f in (38). For instance, if we take the complex plane C with the modulus metric and f(z) = |z|^2, which of course is not injective, we obtain

| |z_{n+1}|^2 − |z_n|^2 | < ε.   (40)

Looking at (40), we see that this is the function used by Pickover in (32). Another way to modify the tests is to add weights to the metric function. The weights can cause the function to lose the properties of a metric. For instance, if we use (38) for the creation of the test, we can add weights w_1, w_2 ∈ R in the following way:

| w_1 |z_{n+1}|^2 − w_2 |z_n|^2 | < ε.

If w_1 ≠ w_2, then we lose the symmetry property of the metric.
All the tests discussed so far were based on a single metric function, but we can also create tests composed of several terms, each using a metric function or a modified metric function, combined with weights w_1, w_2 ∈ R and accuracies ε_1, ε_2 > 0.
For the Mandelbrot and Julia sets, escape criteria are used to stop the iterative process. In these criteria we check whether the computed value is greater than a given threshold value R > 0 [27]. Analogous tests can be built using arg(z), the argument of the complex number z, together with weights w_1, w_2 ∈ R.

Colouring Methods
After the convergence test is satisfied in the iteration process of the root finding method for a considered starting point, we need to determine the colour for that point. The method of colour determination for a given point is called the colouring method. In this section we briefly introduce three basic colouring methods [2].
In the first method we need to know all the roots {z_1, z_2, …, z_n} of the polynomial p. So, if we want to use this method, it is convenient to use the representation of the polynomial by its roots (2), because then we do not need to compute the roots. Each root z_j of the polynomial gets a distinct colour c_j. After the iteration process we take the obtained root approximation and find the closest root of p using the modulus metric. Having the closest root, we colour the starting point with the colour that corresponds to this root. In this way we obtain a visualization of the polynomial's basins of attraction introduced in Section 3.4.
Unlike the first method, the second colouring method does not need information about the roots, so we can use either of the two polynomial representations (coefficients or roots). In this method we use a colour map, that is, a table of different colours. After the iteration process we take the iteration number n at which the process stopped and map it to an index in the colour map. If the number of colours in the colour map equals the maximum number of iterations, then we have a one-to-one correspondence between iterations and colours. Otherwise we need some mapping; most often linear interpolation is used, that is, a mapping I : {0, 1, …, M} → {0, 1, …, k − 1}, where M is the maximum number of iterations and k is the number of colours in the colour map. Using this method we are able to visualize the speed of convergence of the root finding method. The use of specific colour maps often reveals a hidden, unrepeatable beauty of the root finding visualization process.
In [28] Pickover used this method to create contour lines that help to visually emphasize different regions of behaviour of the considered function (a root finding method for a given polynomial). For this purpose he used two colours (black, white) and a two-valued mapping I : {0, 1, …, M} → {0, 1}, for example, the parity of the iteration number. The last colouring method combines the features of the two previous ones. It visualizes, at the same time, the basins of attraction and the speed of convergence of the root finding method. In this method, as in the first one, we need to know the roots {z_1, z_2, …, z_n} of the polynomial p, and each root gets a distinct colour from {c_1, c_2, …, c_n}. After stopping the iteration process we take the obtained root approximation and find the closest root of p using the modulus metric. We then set the colour of the starting point to the colour of the closest root and use the iteration number n to set the shade of that colour. For ease of operating on the colours they should be represented in the HSB (Hue, Saturation, Brightness) colour space. In this space hue represents the colour's type (the root colour) and saturation represents the shade of the colour. Moreover, we can use linear interpolation to map the iteration number to the saturation, which would be difficult had we used the RGB (Red, Green, Blue) colour space. In this way the visualization shows the basins of attraction with distinct colours and the speed of convergence with the shade of the colour.
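The combined colouring can be sketched with Python's standard colorsys module. Here HSV stands in for HSB; hue encodes the closest root and, as one illustrative design choice, the value channel encodes the iteration count (the paper shades via saturation instead):

```python
import colorsys

def closest_root(z, roots):
    """Index of the root of p closest to z in the modulus metric."""
    return min(range(len(roots)), key=lambda i: abs(z - roots[i]))

def combined_colour(z, n, roots, max_iters):
    """Hue identifies the basin; darker shades mean slower convergence."""
    hue = closest_root(z, roots) / len(roots)   # distinct hue per root
    value = 1.0 - 0.8 * (n / max_iters)         # shade from iteration count
    return colorsys.hsv_to_rgb(hue, 1.0, value) # RGB triple in [0, 1]

roots = [1 + 0j, -0.5 + 0.866j, -0.5 - 0.866j]  # roots of z^3 - 1
rgb = combined_colour(0.9 + 0.1j, 3, roots, 15)
```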

Polynomiograph Generation
In the previous sections we introduced the components of the polynomiograph generation method. Putting them together, we obtain the algorithm presented as pseudocode in Algorithm 1.
The input of the algorithm consists of the polynomial p, given by equation (1) or (2), the area A of the complex plane for which the polynomiograph is generated, and the maximum number M of iterations performed for each point in A. The last input parameter is the iteration method I_v from Section 3.4 for a chosen root finding method. The index v is a vector of parameters of the iteration method, that is, v ∈ C^N, where N is the number of parameters of the iteration. For Picard's method the iteration I is used instead of I_v. Moreover, a convergence test and a colouring method have to be fixed.
In Section 3.4 the parameters used in the iterations were real numbers, but in Algorithm 1 they are complex. To the authors' knowledge, iterations with complex parameters have not been studied so far. The experiments carried out for this type of iterations show that the generated polynomiographs form interesting patterns, as will be shown in Section 7.2.
In this algorithm each point z_0 in the considered area A is iterated by the method I_v. If the convergence test is satisfied, it is assumed that the generated sequence converges to a root of p and the iteration is stopped. Otherwise the algorithm proceeds to the next iteration. If the maximum number of iterations M is reached, it is assumed that the generated sequence does not converge to any root of p. Finally, a colour is assigned to the considered point using the fixed colouring method.
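The whole generation loop can be sketched as follows. This is an illustrative condensation of Algorithm 1 with the Picard iteration and Newton's method as the root finding step; the grid size, polynomial, and parameters are our own choices, and the pixel colour would be derived from the recorded iteration count by one of the colouring methods of Section 5.

```python
def generate(p, dp, area, width, height, max_iters, eps):
    """For every grid point z0 in area A, iterate the Newton map until the
    standard convergence test passes or max_iters is reached; record the
    stopping iteration (max_iters means 'did not converge')."""
    (x_min, x_max), (y_min, y_max) = area
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            z = complex(x_min + (x_max - x_min) * i / (width - 1),
                        y_min + (y_max - y_min) * j / (height - 1))
            n = max_iters
            for it in range(max_iters):
                z_next = z - p(z) / dp(z)     # Picard + Newton step
                if abs(z_next - z) < eps:     # standard convergence test
                    n = it
                    break
                z = z_next
            row.append(n)                     # colour derived from n later
        image.append(row)
    return image

# p(z) = z^3 - 1 on A = [-1.5, 1.5]^2, coarse 8 x 8 grid for illustration
img = generate(lambda z: z**3 - 1, lambda z: 3 * z**2,
               ((-1.5, 1.5), (-1.5, 1.5)), 8, 8, 30, 1e-3)
```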

Examples of Polynomiographs
In this section some examples of polynomiographs obtained with the methods described in the previous sections are presented. First, we show the use of different iterations with both real and complex parameters. Next, the influence of the convergence tests on the polynomiograph's shape is presented. The last example shows the use of different colour maps. In all examples we use the same colouring method, namely the second method from Section 5, with different colour maps. An example of the parameter values used for Figure 2: (k) Picard-S: α = 0.9, β = 0.6.
In Figure 2 one can see that different iteration processes produce unique polynomiographs that differ both from the polynomiograph obtained with the standard Picard iteration and from each other. In each polynomiograph one can find seven main areas with different shapes and fractal boundaries. Moreover, looking at the colours and shapes of the polynomiographs, one can see that the use of different iteration processes changes the speed of convergence of the root finding method; for some points the convergence is faster and for others it is slower. The speed depends on the iteration and the parameters used. All the images, which are very decorative, have visible symmetries, a consequence of placing the roots symmetrically. The symmetry introduces an order which stresses the static appearance of the polynomiographs.
The second example (Figure 3) presents the polynomiographs generated for p_2. In these polynomiographs one can see that the different iteration processes produce eight main areas with very subtle fractal boundaries. Moreover, one obtains very diverse patterns in comparison with the polynomiograph for the standard Picard iteration. We can also observe that the use of different iterations has an impact on the speed of convergence. For instance, for the Karakaya or Picard-S iteration one can see that the red areas occurring for the Picard iteration have shrunk and the light blue areas have changed colour to navy blue. This means that in those areas the convergence is faster. The change of speed depends on the iteration used and the values of its parameters.

Polynomiographs with Complex-Valued Parameters of Iterations.
Figure 4 presents examples of polynomiographs with the same parameters as in Figure 2, that is, p_1(z) = z^7 + z^2 − 1, M = 15, the standard convergence test with ε = 0.001, A = [−1.5, 1.5]^2, and Newton's root finding method (B_2, E_2). The only change is the addition of an imaginary part to the values of the iteration parameters. The use of a nonzero imaginary part of the parameters generally adds a clockwise or anticlockwise rotation to the polynomiographs. The angle of rotation depends on the value and the sign of the imaginary part. In the case of multiparameter iterations, some of the parameters (their imaginary parts) have global and some only local influence on the shape of the polynomiograph. The effect is easily seen by comparing Figure 4 with Figure 2. The swirls and twists present in the polynomiographs of Figure 4 make those images look more dynamic and alive compared with those of Figure 2. The imaginary parts of the parameters also influence the speed of convergence, as can be seen, for instance, by comparing Figure 4(i) with Figure 2(j).
The next example (Figure 5) presents polynomiographs generated with the same parameters as in Figure 3, that is, p_2(z) = z^4 + 4, M = 40, the standard convergence test with ε = 0.001, A = [−2, 2]^2, and the B_3 root finding method. The only change is the addition of imaginary parts to the values of the iteration parameters, for example: (i) Karakaya: α = 0.9 + 0.75i, β = 0.01, γ = 0.01, a = 0.9 − 0.34i, b = 0.01 − 0.7i; (j) Picard-S: α = 0.75 − i, β = 0.99 + 0.73i. As in the previous example, the addition of imaginary parts to the iteration parameters causes swirls and twists to appear in the polynomiographs of Figure 5. This makes the polynomiographs more dynamic and vivid in comparison with those of Figure 3. Looking at the colours of the polynomiographs, one can observe that in some cases the speed of convergence has increased in some areas (e.g., Figure 5(b)) and in other cases it has decreased (e.g., Figure 5(g)) in comparison with the polynomiographs of Figure 3.

Polynomiographs with Different Convergence Tests.
In the next examples we present the use of different convergence tests. Five convergence tests, referred to as tests 1-5, were used; in all cases ε = 0.001.
Figure 6 presents examples of the use of different convergence tests in the polynomiographs for p_1(z) = z^7 + z^2 − 1. From the examples presented in Figures 6 and 7 one can see that the different convergence tests significantly change the shape of the regions of fast convergence. Moreover, one can observe a small change in the areas of slow convergence. Comparing the polynomiographs obtained with the standard modulus test with those obtained with tests 1-5, one observes that the areas of the original polynomiographs are very regular and circular, whereas in Figures 6 and 7 the areas have an irregular nature. The change of shape varies considerably among images obtained with different convergence tests.

Polynomiographs with Different Colour Maps.
Figure 8 presents polynomiographs generated with the same parameters as those used to obtain the polynomiograph of Figure 2(h), but with different colour maps. This example shows that polynomiographs depend strongly not only on the iterations but also on the colour maps used. The same graphical information contained in a polynomiograph may look drastically different under different colour maps. The explanation is the following. Colours made of red hues, such as red, magenta, and orange, are warm colours. They are vivid and energetic and tend to advance in space. Colours made of blue hues, such as blue, cyan, and green, are cold colours. They give an impression of calm and appear to recede from the viewer, so they are good for backgrounds. Complementary colours, which are opposite each other on the colour wheel (e.g., red and green), are used to obtain contrast in the image. Analogous colours, which are close to each other on the colour wheel (e.g., yellow and orange), create harmony and are pleasing to the eye. Adding white or black to any colour lightens or darkens it, producing a tint or a shade, respectively. Summing up, colours create the depth, movement, and mood of an image. These effects, produced by different colour maps, can be observed in the polynomiographs of Figure 8.

Time Tests of Polynomiographs Generation
It is a difficult task to estimate theoretically the complexity of the algorithms used for polynomiograph generation, because many factors must be taken into account, among them the degree of the polynomial, the root finding method, the computation accuracy, the type of iteration, the type of convergence test, the maximal number of iterations, and the polynomiograph's resolution, to mention a few. So, instead of theoretical complexity estimates, we performed time tests, which are presented in this section.
All the experiments were performed on a computer with the following specification: Intel Core i5-4570 processor, 16 GB RAM, and Windows 7 (64-bit). The software for polynomiograph generation was implemented in Processing, a programming language based on Java.
In our experiments we focused on the comparison of the different iteration processes. The iterations were compared in four groups depending on the number of their parameters: 1 parameter (Mann, Khan), 2 parameters (Ishikawa, S, Picard-S), 3 parameters (Noor, SP, CR), and 5 parameters (Suantai, Karakaya). The experiments were limited to parameters with real parts only and were performed for all possible combinations of discrete parameter values, changed by different steps within each test group. The step values and the total numbers of parameter value combinations are given in Table 1. From a huge number of experiments only the most representative results are presented in Tables 2-9. Tables 2 and 3 present slices of the results obtained for the iterations with one parameter for polynomials p_1 and p_2, respectively. From the results we see that for values of α close to 1 the Mann iteration is faster than the Khan iteration. Then, starting from α equal to about 0.77 for p_1 and 0.97 for p_2, the Khan iteration is faster than the Mann iteration, and this remains true down to the lowest values of α. Moreover, we can observe that as the value of α decreases, the time difference between the two iterations increases, and the time of polynomiograph generation also increases. Tables 4 and 5 present slices of the results obtained for the iterations with two parameters for polynomials p_1 and p_2, respectively. Looking at the times for the Ishikawa and S iterations, we observe that for a fixed value of β < 1 the time difference between these two iterations increases as α increases, and the S iteration is faster. We also observe that the lower the value of β, the greater the time difference. In the case of the Ishikawa and Picard-S iterations, for values of α close to 1 the Ishikawa iteration is faster. Then, starting from α equal to about 0.8 for p_1 and 0.7 for p_2, the Picard-S iteration is faster than the Ishikawa iteration and
the lower the value of α, the greater the time difference. Finally, for the S and Picard-S iterations we observe that for a fixed value of β the S iteration is faster and that the time difference increases as α increases; the difference is smallest for small values of α and grows with α. Moreover, we observe that the time of the Ishikawa iteration has a downward trend as α grows, whereas the times for the S and Picard-S iterations oscillate within some interval.
For the three- and five-parameter iterations the comparison among iteration types is more complex and difficult to make. Despite the difficulties, we made some observations concerning the times. Tables 6 and 7 present slices of the results obtained for the iterations with three parameters for the polynomials p_1 and p_2, respectively. Looking at the times for the Noor and SP iterations, we observe that the Noor iteration is slower and that for fixed values of β and γ the time difference increases with the growth of α. Moreover, as the value of β drops, the time differences get larger. In the case of the Noor and CR iterations we observe dependencies similar to the previous case (substituting the SP iteration with CR). Furthermore, for α = 1 the two iterations have comparable times. Finally, for the SP and CR iterations we observe that the SP iteration is faster and that for fixed values of β and γ the time difference increases with the growth of α. Furthermore, looking at the times of consecutive iterations, we see that the times have a downward trend with the growth of α for the Noor iteration, of α + β + γ for the SP iteration, and of β + γ for the CR iteration.
Finally, slices of the results for the last group of iterations (with five parameters) for polynomials p_1 and p_2 are presented in Tables 8 and 9, respectively. From these results we observe that if α + β = 1, then the times of the two iterations are close to each other. When α + β < 1, the time difference between the iterations is noticeable; the lower the sum, the greater the difference in favour of the Karakaya iteration. Looking at the formulas of the iterations, we see that the parameters a and b play a similar role to α and β, but in another step of the iteration process. This might suggest a similar dependency between the sum of those parameters and the time difference, but the obtained results show that there is no such dependency.
Additionally, the examples showed that polynomiographs change their shape smoothly as the iteration parameters vary within their admissible intervals.

Conclusions and Future Work
It is known that mathematics and art are closely connected. A good example of such a connection is delivered by polynomials and the polynomiographs related to them. In this paper, by using different iteration processes, colouring methods, convergence tests, and colour maps, we generalized Kalantari's polynomiography. The polynomiographs presented in this paper look nice, and many of them can be classified as aesthetic. The patterns of polynomiographs can be altered by changing the parameters of the iterations and convergence tests. Those parameters affect the complexity, the level of detail, and the more or less fractal appearance of the polynomiographs. The real parts of the parameters alter symmetry, whereas the imaginary ones cause asymmetric twisting of the polynomiographs and influence the static or dynamic character of the images. Colour maps with cold or warm colours drastically alter the visual expression of polynomiographs. This expression can be classified as nostalgic, sad, calm or cheerful, full of energy, or sometimes flat or spatial.
Polynomiographs can be helpful to those who are interested in generating nice looking images in an automatic way. Such images can inspire graphic designers in their work.
The results of this paper can be further extended by using multipoint methods [29-31]. Investigations similar to those presented in this paper can be carried out for complex fractals (Julia and Mandelbrot sets) and biomorphs; some aspects of such investigations have been reported in the literature [9, 32-34]. Another interesting direction relies on replacing complex numbers by more general dual and double numbers, used in [35] for defining the Q-Systems Fractals. Additionally, following Kalantari's paper [36], one can generalize polynomiography to analytic functions or even to quasipolynomials [37], which, in contrast to polynomials, have infinitely many roots in the complex plane. The problems mentioned above indicate possible future work on the generalization of polynomiography.
Methods with Nonstandard Iterations. Let us denote by T one of the operators: N, representing the standard Newton method; B_m for m = 2, 3, …; B_{m,λ} for m = 2, 3, …, λ ∈ C; or E_m for m = 2, 3, …, representing elements of the Basic, Parametric Basic, and Euler-Schröder Families of Iterations, respectively. Let us replace the standard Picard iteration by one of the nonstandard iterations described in Section 2. Then we get the following formulas for finding the roots of a complex polynomial p iteratively:

Figure 2: Examples of polynomiographs for p_1 with real-valued parameters of iterations.

Figure 3: Examples of polynomiographs for p_2 with real-valued parameters of iterations.
Figure 4: Examples of polynomiographs for p_1 with complex-valued parameters of iterations.

Figure 5: Examples of polynomiographs for p_2 with complex-valued parameters of iterations.

Figure 6: Examples of the application of different convergence tests for p_1; iterations with real-valued parameters (a, b, c) and complex-valued parameters (d, e, f).

Figures 6(a), 6(b), and 6(c) were obtained with the parameters used to generate Figures 2(g), 2(i), and 2(k), but with tests 1, 2, and 4, respectively. On the other hand, Figures 6(d), 6(e), and 6(f) were obtained with the parameters used to generate

Figure 7: Examples of the application of different convergence tests for p_2; iterations with real-valued parameters (a, b, c) and complex-valued parameters (d, e, f).

Figures 4(e), 4(g), and 4(d), but with tests 1, 2, and 4, respectively. Figure 7 presents examples of the use of different convergence tests in the polynomiographs for p_2(z) = z^4 + 4. Figures 7(a), 7(b), and 7(c) were obtained with the parameters used to generate Figures 3(d), 3(k), and 3(c), but with tests 1, 3, and 5, respectively. On the other hand, Figures 7(d), 7(e), and 7(f) were obtained with the parameters used to generate Figures 5(a), 5(g), and 5(d), but with tests 1, 3, and 5, respectively.

Figure 8: Examples of the application of different colour maps for a fixed polynomiograph.

Table 1: Step values and the total numbers of parameter value combinations used in the time tests.

Table 2: Times of polynomiograph generation for p_1 using one-parameter iterations.

Table 3: Times of polynomiograph generation for p_2 using one-parameter iterations.

Table 4: Times of polynomiograph generation for p_1 using two-parameter iterations.

Table 5: Times of polynomiograph generation for p_2 using two-parameter iterations.

Table 6: Times of polynomiograph generation for p_1 using three-parameter iterations.

Table 9: Times of polynomiograph generation for p_2 using five-parameter iterations.