
We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We also present several simple propositions relating the properties of the polynomial bases to the quality of the resulting implicit approximations.

Implicitization algorithms have been studied in both the CAGD and algebraic geometry communities for many years. Traditional approaches to implicitization have focused on exact methods such as Gröbner bases, resultants and moving curves and surfaces, or syzygies [

Implicitization is the conversion of parametrically defined curves and surfaces into curves and surfaces defined by the zero set of a single polynomial. Exact implicit representations of rational parametric manifolds often have very high polynomial degrees, which can cause numerical instabilities and slow floating-point calculations. In cases where the geometry of the manifold is not sufficiently complicated to justify this high degree, approximation is often desirable. Moreover, for CAGD systems based on floating-point arithmetic, exact implicitization methods are often infeasible due to performance issues. The methods we present attempt to find “best fit” implicit curves or surfaces of a given degree

For simplicity of notation, we proceed for the majority of the paper to describe the implicitization of curves. In Sections

A parametric curve in

All the methods to be described require a choice of degree

Since we are searching for implicit representations and want to avoid the trivial solution

The techniques in this paper focus on minimization of the objective function

In 1997, a class of techniques for approximate implicitization of rational parametric curves, surfaces, and hypersurfaces was introduced in [

Convergence rates for approximate implicitization of sufficiently smooth parametric curves in

Algebraic degree | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
Convergence rate | 2 | 5 | 9 | 14 | 20 | 27 | 35 | 44

Convergence rates for approximate implicitization of sufficiently smooth parametric surfaces in

Algebraic degree | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
Convergence rate | 2 | 3 | 5 | 7 | 10 | 12 | 14 | 17

The guiding principle behind these methods is to find a polynomial

We notice that the expression

In this paper, we use the following “normalization” to compare the approximation qualities of different polynomial bases:

Given a choice of basis functions

Input: a rational parametric curve

For each basis function

Construct a matrix

Perform an SVD

This algorithm is known as the original method in the
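As a concrete sketch of these steps (an illustration, not the paper's reference implementation), consider the rational quarter circle in homogeneous form p(t) = (1 − t², 2t, 1 + t²). Composing the six monomials of total degree 2 in (x, y, w) with p(t) and collecting their power-basis coefficients as the columns of a matrix D, the null space of D recovers the exact implicit equation x² + y² − w² = 0:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Homogeneous quarter circle: x(t) = 1 - t^2, y(t) = 2t, w(t) = 1 + t^2
comps = {
    'x': np.array([1.0, 0.0, -1.0]),   # power-basis coefficients of x(t)
    'y': np.array([0.0, 2.0]),
    'w': np.array([1.0, 0.0, 1.0]),
}

# Implicit basis: all monomials of total degree 2 in (x, y, w),
# in the order x^2, xy, xw, y^2, yw, w^2
monomials = [('x', 'x'), ('x', 'y'), ('x', 'w'),
             ('y', 'y'), ('y', 'w'), ('w', 'w')]

# Column j of D holds the power-basis coefficients of pi_j(p(t));
# the composed polynomials have degree at most 4, hence 5 coefficients
cols = []
for a, b_ in monomials:
    c = P.polymul(comps[a], comps[b_])
    cols.append(np.pad(c, (0, 5 - len(c))))
D = np.column_stack(cols)

# D has a one-dimensional null space here; the last right singular
# vector spans it and gives the implicit coefficients
U, S, Vt = np.linalg.svd(D)
b = Vt[-1]
print(np.linalg.norm(D @ b))   # ~ 0
print(b / b[0])                # x^2 + y^2 - w^2 = 0, up to scale
```

The monomial ordering and the use of the power basis for the composed functions are choices made for this sketch; the original method admits any polynomial basis for the composed expressions.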

Two approaches to approximate implicitization by continuous least squares minimization of the objective function were introduced simultaneously in 2001 in [

After choosing a basis for the implicit representation, we obtain a linear algebra problem as before:

This problem, unlike the minimax problem, can be solved directly if the parametric components are integrable. We simply take

We summarize this algorithm, for a given weight function

Input: a parametric curve

Construct a matrix

Compute the eigendecomposition

Select

Algorithms following the procedure above are known under different names in the literature; weak approximate implicitization in [

As previously mentioned, the methods of this section are suitable for either exact or approximate implicitization. They can be performed using either symbolic or numerical integration; however, the former is generally only required when performing exact implicitizations in exact precision. For applications where floating-point precision is sufficient, numerical quadrature rules provide a much faster alternative. The methods also have wide generality since they can be applied to any parametric forms with integrable components, not only rational parametric forms. There are, however, some significant disadvantages in choosing this method in practice. Firstly, due to the high degree of the integrand, the integrals can take a relatively long time to evaluate, even when numerical quadrature rules are used. Secondly, and more importantly, the condition numbers of the matrices
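A minimal numerical sketch of the weak approach, using Gauss–Legendre quadrature with unit weight on [0, 1] and, as an assumed example, the homogeneous quarter circle p(t) = (1 − t², 2t, 1 + t²). The Gram-type matrix M is assembled by quadrature and the eigenvector of its smallest eigenvalue yields the implicit coefficients:

```python
import numpy as np

# Composed implicit basis functions pi_j(p(t)),
# in the order x^2, xy, xw, y^2, yw, w^2
def v(t):
    x, y, w = 1 - t**2, 2 * t, 1 + t**2
    return np.array([x * x, x * y, x * w, y * y, y * w, w * w])

# Gauss-Legendre quadrature on [0, 1]; 10 nodes are exact for the
# degree-8 integrand v(t) v(t)^T
nodes, weights = np.polynomial.legendre.leggauss(10)
nodes = 0.5 * (nodes + 1.0)
weights = 0.5 * weights

M = sum(wq * np.outer(v(t), v(t)) for t, wq in zip(nodes, weights))

# Eigenvector of the smallest eigenvalue gives the implicit coefficients
evals, evecs = np.linalg.eigh(M)
b = evecs[:, 0]
print(evals[0])     # ~ 0 (an exact implicitization exists)
print(b / b[0])     # x^2 + y^2 - w^2 = 0, up to scale
```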

Since

To make the connection between the original method and the weak method in the previous section, we consider the factorization (

Let

By (

We notice that the right singular vectors of

We should note that the matrix
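The conditioning issue reflects a general linear-algebra fact rather than anything specific to implicitization: forming a Gram (normal-equations) matrix squares the 2-norm condition number, so an eigendecomposition of such a matrix loses roughly twice as many digits as an SVD of its factor. A quick numerical check with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 6))        # arbitrary tall factor

kappa_A = np.linalg.cond(A)             # sigma_max / sigma_min
kappa_AtA = np.linalg.cond(A.T @ A)     # Gram matrix condition number

# The Gram matrix has eigenvalues sigma_i^2, so its condition
# number is exactly the square of that of the factor
print(kappa_AtA / kappa_A**2)           # ~ 1
```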

In order to compare the numerical stability of the two approaches to least squares minimization of the objective function, we turn to a familiar example: exact implicitization of a rational parametric circular arc, which is defined in projective space by

The previous sections justified why the original approach is preferable to the weak approach in cases where the expression

The most commonly used orthogonal polynomial bases in approximation theory are the Legendre basis

Let

Since an orthogonal polynomial basis is degree ordered, one of the functions must be identically a nonzero constant, which, by the normalization condition, is equal to 1. Consider the vector

One property of Chebyshev expansions of a continuous function is that the error introduced by truncating the expansion is dominated by the first term after the truncation, if the coefficients decay quickly enough [

The Chebyshev basis is well known for giving good approximations to minimax problems in approximation theory (see [

Our experience with the choice between Legendre and Chebyshev polynomials is that the difference in approximation quality is minor. Chebyshev expansions are slightly quicker to compute and require less programming effort than their Legendre counterparts [

One of the simplest and fastest implementations of approximate implicitization is to perform discrete least squares approximation of points sampled on the parametric manifold, similar to the methods in [

The result of approximate implicitization in the Lagrange basis depends both on the number of points sampled and the density of the point distribution in the parameter domain. Since Lagrange polynomials are neither orthogonal nor degree ordered, they do not solve a least squares problem of type (
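A sketch of the discrete (Lagrange) approach, again using an assumed quarter-circle example p(t) = (1 − t², 2t, 1 + t²): the rows of D are the implicit basis functions evaluated at the sampled parameter values, and the right singular vector belonging to the smallest singular value gives the implicit coefficients.

```python
import numpy as np

# Implicit basis monomials composed with the parametrization,
# in the order x^2, xy, xw, y^2, yw, w^2
def pi_of_p(t):
    x, y, w = 1 - t**2, 2 * t, 1 + t**2
    return np.array([x * x, x * y, x * w, y * y, y * w, w * w])

# Sample more points than unknown coefficients; here uniformly
ts = np.linspace(0.0, 1.0, 20)
D = np.array([pi_of_p(t) for t in ts])      # 20 x 6

U, S, Vt = np.linalg.svd(D)
b = Vt[-1]
print(S[-1])        # ~ 0: the exact implicitization is in the null space
print(b / b[0])     # x^2 + y^2 - w^2 = 0, up to scale
```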

Let

Since each of the parametric components of the curve is bounded and piecewise continuous on the interval and

Sampling more points gives ever closer approximations of the true least squares approximation (for any given

Alternative choices of nodes are also interesting to investigate. Using the inequality (

A point distribution of particular interest is that of the Chebyshev points. On

Let

We use the change of variable
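A hedged sketch of the Chebyshev variant: sampling the composed polynomials at Chebyshev points of the first kind and converting the samples to Chebyshev coefficients (here with NumPy's `chebfit`, on the domain [−1, 1] for simplicity) yields the matrix whose smallest right singular vector the method extracts. The quarter-circle parametrization is again an assumed example:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Composed implicit basis functions, order x^2, xy, xw, y^2, yw, w^2
def pi_of_p(t):
    x, y, w = 1 - t**2, 2 * t, 1 + t**2
    return np.array([x * x, x * y, x * w, y * y, y * w, w * w])

# Chebyshev points of the first kind; the composed polynomials have
# degree at most 4, so 5 nodes determine them exactly
ts = C.chebpts1(5)
samples = np.array([pi_of_p(t) for t in ts])    # 5 x 6

# Column j holds the Chebyshev coefficients of pi_j(p(t))
D = np.column_stack([C.chebfit(ts, samples[:, j], 4) for j in range(6)])

# The last right singular vector spans the null space of D
b = np.linalg.svd(D)[2][-1]
print(b / b[0])     # x^2 + y^2 - w^2 = 0, up to scale
```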

The approach in [

The average uniform algebraic error,

The Bernstein method is closely related to both the Lagrange and Legendre methods seen previously. It is in fact easy to see that the

Let

It is well known that the Bernstein coefficients of a polynomial tend to the values of the polynomial as the degree is raised, as follows [
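This convergence of Bernstein coefficients to function values can be observed numerically. The sketch below uses the standard identity t² = Σᵢ [i(i−1)/(n(n−1))] Bᵢⁿ(t) and measures the maximal deviation of the degree-n coefficients from the sampled values (i/n)²:

```python
import numpy as np

# Bernstein coefficients of f(t) = t^2 in degree n, from the identity
# t^2 = sum_i binom(i,2)/binom(n,2) * B_i^n(t)
def bernstein_coeffs_t2(n):
    i = np.arange(n + 1)
    return i * (i - 1) / (n * (n - 1))

for n in (4, 16, 64):
    c = bernstein_coeffs_t2(n)
    nodes = np.arange(n + 1) / n
    # maximal deviation of coefficients from function values f(i/n);
    # for this f it equals 1 / (4(n - 1)), i.e. O(1/n)
    print(n, np.max(np.abs(c - nodes**2)))
```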

We can thus deduce the following convergence property of the Bernstein method as an immediate consequence of Propositions

Let

As mentioned previously, when the degree

When using the original method for approximate implicitization, we represent the error function

Suppose one is given a nondegenerate rational parametric planar curve

Since the implicitization is exact, we know that there exists a unique polynomial

To see that choosing fewer than

When searching for exact implicitizations, we generally want the implicit polynomial of

So far, we have presented several approaches to exact and approximate implicitization using linear algebra. The approaches exhibit different qualities in terms of approximation, conditioning, and computational complexity. The intention of this section is to provide a comparison of the algorithms.

Figure

For each degree up to the exact degree,

It should be noted that, for the Bernstein and Lagrange methods, the maximum of the algebraic error normally occurs at the end points of the interval and is normally much higher than the average error across the interval (see Figure

As discussed previously, minimizing the algebraic error does not necessarily minimize the geometric distance between the implicit and parametric curves. In order to visually compare how the methods perform in terms of geometric approximations, Figure

A parametric Bézier curve of degree seven, with control points

Implicit plots of the approximations of the degree seven Bézier curve pictured in Figure

Typical algebraic error distributions for the monomial, Bernstein, Lagrange, and Chebyshev methods.

We see that, for the quartic approximation, the Lagrange and Chebyshev methods are already performing fairly well with only some detail lost close to the double point singularities. Despite exhibiting several intersections with the parametric curve, the Bernstein method gives little reproduction of the shape. The monomial approximation bears almost no resemblance to the original curve. For the quintic approximation, the Chebyshev and Lagrange bases again perform very similarly, giving excellent approximations that replicate the singularities well. These approximations would be sufficient for many applications. The Bernstein method performs similarly to the Chebyshev and Lagrange approximations of degree four, with only some loss of detail at each of the double points. Again the monomial basis gives almost no replication of the curve. It is also interesting to note the presence of extraneous branches visible in the Bernstein, Lagrange, and Chebyshev approximations at degree five. This is a feature which may occur with any of the methods. At degree six, the Bernstein, Lagrange, and Chebyshev methods all give excellent results over the entire interval. The monomial method is beginning to show good approximation at the centre of the interval; however, this deteriorates towards the ends. At degree seven, we expect exact results, up to numerical stability, for all of the algorithms. Visually, the implicitizations in all of the bases agree very closely.

For degree seven, we can also perform the Lagrange method in exact precision as described in Section

It is also interesting to note that when attempting to use the weak method for approximate implicitization as an exact method here, we obtain a completely different solution, with relative error given approximately by

Typical algebraic error distributions obtained from the methods in this section are displayed in Figure

In the example of Section

A qualitative comparison of the algorithms. The least squares and uniform columns refer to how well the algorithms perform in terms of producing such approximations in the algebraic error function.

 | Least squares | Uniform | Stability | Generality
---|---|---|---|---
Lagrange | Good | Ok | Ok | Any
Legendre | Very good | Good | Ok | Rational
Chebyshev | Very good | Very good | Ok | Rational
Bernstein | Ok | Ok | Very good | Rational
Weak | Very good | Ok | Very bad | Integrable

One undesirable property of approximate implicitization is the possibility of introducing new singularities that are not present in the parametric curve. As the implicit polynomial representation is global, we cannot control what happens outside the interval of approximation. In particular, self-intersections of the curve could appear within the interval of interest. This is an artifact that can appear using any of the methods described in this paper. However, such problems can be avoided by adding constraints to the algebraic approximation [

The computation times for each of the methods vary. In all the current implementations of the methods, the matrix generation is the dominant part of the algorithm, and the SVD is generally fast. When constructing the matrices, the monomial and Bernstein methods suffer from computationally expensive expansions for high degrees, whereas the Chebyshev and Lagrange methods are based on point sampling and FFT, which can be implemented in parallel. Computational features of the methods will be the subject of further research, including exploiting the parallelism of GPUs to enhance the algorithms.

In this section, we will discuss how the methods presented for curves in the preceding sections generalize to surface implicitization. We will also provide a visual example of approximate implicitization of surfaces.

A parametric surface in

Although we have the option of using tensor-product polynomials for the implicit representation, here we choose polynomials of total degree
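The choice of total degree fixes the number of unknown implicit coefficients: a homogeneous polynomial of total degree m in the four variables (x, y, z, w) has C(m + 3, 3) coefficients, so the linear systems grow cubically with the degree. A small dimension count:

```python
from math import comb

# Coefficient count for an implicit surface of total degree m in
# homogeneous coordinates (x, y, z, w): binom(m + 3, 3)
for m in (1, 2, 3, 4, 5, 6):
    print(m, comb(m + 3, 3))
```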

When applying the original algorithm for approximate implicitization, we observe that the expression

The weak method presented in Section

For rational tensor-product surfaces of bidegree

The univariate bases

Let

Similar to Theorem

For rational surfaces of total degree

Surfaces on triangular domains may be considered a more fundamental generalization than tensor-product surfaces; however, they often exhibit several difficulties not present in the tensor-product case. For example, most practical applications of the weak method of Section

Certain methods for approximate implicitization are, however, easy to generalize. For example, the Bernstein basis has a natural representation on simplex domains using barycentric coordinates, and thus the use of approximate implicitization on triangular surfaces in this basis is simple [
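For reference, a direct evaluation of the triangular Bernstein basis in barycentric coordinates (a generic sketch, not tied to any particular patch). The basis functions sum to one at every barycentric point, since their sum is the multinomial expansion of (u + v + w)ⁿ = 1:

```python
import numpy as np
from math import factorial

# Triangular (barycentric) Bernstein basis of degree n:
# B_ijk(u, v, w) = n!/(i! j! k!) u^i v^j w^k, with i + j + k = n
# and u + v + w = 1
def tri_bernstein(n, u, v):
    w = 1.0 - u - v
    vals = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            c = factorial(n) / (factorial(i) * factorial(j) * factorial(k))
            vals.append(c * u**i * v**j * w**k)
    return np.array(vals)

# Partition of unity at an arbitrary barycentric point
print(tri_bernstein(3, 0.2, 0.5).sum())   # 1.0
```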

The Lagrange basis method from Section

Orthogonal polynomials on triangular domains also exist, and an extension of Theorem

As an example of approximate implicitization of tensor-product surfaces, we will consider the problem of approximating the well-known Newell's teapot model. It is stated in [

Each of the 32 bicubic parametric surfaces has been approximated using the tensor-product Chebyshev method and the degrees stated in Table

Exact implicit degrees of the 32 Newell's teapot patches and the degrees used for approximate implicitization in Section

 | Exact degree | Approximate degree
---|---|---
4 × rim | 9 | 4
4 × upper body | 9 | 3
4 × lower body | 9 | 3
2 × upper handle | 18 | 4
2 × lower handle | 18 | 4
2 × upper spout | 18 | 5
2 × lower spout | 18 | 6
4 × upper lid | 13 | 3
4 × lower lid | 9 | 4
4 × bottom | 15 | 3

Teapot defined by 32 approximately implicitized patches from Section

This example shows one potential application of approximate implicitization; however, there are several factors that should be noted. Firstly, a significant amount of user input was required to generate the approximations of the teapot patches. This involved choosing degrees that were suitable for each patch and also choosing approximations without extra branches in the region of interest. This was done by considering approximations corresponding to other singular values than the smallest. For example, for the upper handle patches we chose the approximation corresponding to the fourth smallest singular value. For each increased singular value, the convergence rate of the method is reduced by one [

Another feature of this example is that the continuity between the parametric patches has been approximated very well in the implicit model. This is mainly due to the high convergence rates, which give good approximations over the entire surface region. However, in this case, there is also symmetry in the model, meaning that the edge curves where the patches meet can be approximated in a symmetric way. To achieve this, we have used the monomial basis for the implicit representation since it is symmetric around the

We have presented and unified several new and existing methods for approximate implicitization of rational curves using linear algebra. Theoretical connections between the different methods have been made together with qualitative comparisons. Extensions of the methods to both tensor-product and triangular surfaces have been discussed. By considering various issues such as approximation quality and computational complexity, we regard the Chebyshev and Legendre methods as the algorithms of choice for approximation of most rational parametric curves. However, to obtain good numerical stability when using floating-point arithmetic for exact implicitization, the Bernstein basis is a more favourable choice. Future research could include how the methods can be improved, for example, by exploiting sparsity as in [

The research leading to these results has received funding from the (European Community's) Seventh Framework Programme (FP7/2007-2013) under Grant agreement no. (PITN-GA-2008-214584) and from the Research Council of Norway through the IS-TOPP program.