Journal of Applied Mathematics, Hindawi Publishing Corporation, Volume 2012, Article ID 312985, doi:10.1155/2012/312985

Research Article

Least Squares Problems with Absolute Quadratic Constraints

R. Schöne¹ and T. Hanning²

¹Institute for Software Systems in Technical Applications of Computer Science (FORWISS), University of Passau, Innstraße 43, 94032 Passau, Germany
²Department of Mathematics and Computer Science, University of Passau, Innstraße 43, 94032 Passau, Germany

Academic Editor: Juan Manuel Peña

Copyright © 2012 R. Schöne and T. Hanning. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper analyzes linear least squares problems with absolute quadratic constraints. We develop a generalized theory following Bookstein's conic fitting and Fitzgibbon's direct ellipse-specific fitting. Under simple preconditions, we show that a minimum always exists and can be determined by a generalized eigenvalue problem. This problem is numerically reduced to an ordinary eigenvalue problem by multiplication with Givens rotations. Finally, four applications of this approach are presented.

1. Introduction

Least squares methods cover a wide range of applications in signal processing and system identification [1–5]. Many technical applications need robust and fast algorithms for fitting ellipses to given points in the plane. Established, effective methods are Bookstein's conic fitting and Fitzgibbon's direct ellipse-specific fitting, where an algebraic distance with a quadratic constraint is minimized [6, 7]. In this paper, we develop an extended theory of least squares minimization with a quadratic constraint, based on the ideas of Bookstein and Fitzgibbon. Thereby, we show the existence of a minimal solution and characterize its uniqueness in terms of the smallest positive generalized eigenvalue. Thus, arbitrary conic fitting problems with quadratic constraints become tractable.

Let A ∈ ℝ^{n×m} be a matrix with n ≥ m ≥ 2, let C ∈ ℝ^{m×m} be a symmetric matrix, and let d be a real value. We consider the problem of finding a vector x ∈ ℝ^m which minimizes the function F : ℝ^m → ℝ defined by

F(x) := ||Ax||_2^2  subject to  x^T C x = d. (1.1)

The side condition x^T C x = d introduces an absolute quadratic constraint. Problem (1.1) is not a special case of Gander's optimization as presented in [8], because in our case C is an arbitrary real symmetric matrix, whereas Gander's side condition considers positive definite matrices of the form C^T C. For our considerations, we require the following three assumptions.
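Anticipating the result of Sections 2–4 (the minimizer is a generalized eigenvector to the smallest positive eigenvalue of (S - λC)x = 0, scaled to satisfy the constraint), the whole method fits in a few lines. The following is our own sketch, not code from the paper; the function name is illustrative and SciPy's generalized eigensolver is assumed:

```python
import numpy as np
from scipy.linalg import eig

def constrained_lsq(A, C, d):
    """Minimize ||A x||^2 subject to x^T C x = d (with d > 0).

    Solves the generalized eigenvalue problem (S - lambda*C) x = 0,
    S = A^T A, and returns the eigenvector of the smallest positive
    eigenvalue, scaled so that the quadratic constraint holds."""
    S = A.T @ A
    w, V = eig(S, C)                    # generalized eigenpairs of (S, C)
    w = np.real(w)
    pos = np.where(np.isfinite(w) & (w > 1e-12))[0]
    k = pos[np.argmin(w[pos])]          # smallest positive eigenvalue
    x0 = np.real(V[:, k])
    alpha = np.sqrt(d / (x0 @ C @ x0))  # x = alpha*x0 gives x^T C x = d
    return alpha * x0, w[k]
```

For the hyperbola of Example 3.14 below (A = I_2, C = diag(1, -1), d = 1), this returns the eigenvalue 1 and a solution (±1, 0)^T.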

Assumption 1.1.

By replacing C with (-C) if necessary, we may restrict ourselves to d ≥ 0. For d = 0, the trivial solution x = 0 ∈ ℝ^m fulfills (1.1). Therefore, we demand d > 0.

Assumption 1.2.

The set N := {x ∈ ℝ^m : x^T C x = d} is not empty; that is, the matrix C has at least one positive eigenvalue. If C had only nonpositive eigenvalues, it would be negative semidefinite and x^T C x ≤ 0 would hold for all x. With d > 0, the set N would then be empty.

Assumption 1.3.

In the following, we set S = A^T A and assume that S is regular. S is sometimes called the scatter matrix.

In the following two sections, we introduce the theoretical basis of this optimization. The main result is that the solution is obtained from a generalized eigenvalue problem. Afterwards, we reduce this system numerically to an ordinary eigenvalue problem. In Section 5, we present four typical applications for conic fitting problems with quadratic constraints: the ellipse fitting of Fitzgibbon, the hyperbola fitting of O'Leary, the conic fitting of Bookstein, and an optical application to shrinked aspheres [6, 7, 9, 10].

2. Existence of a Minimal Solution

Theorem 2.1.

If S ∈ ℝ^{m×m} is regular, then there exists a global minimum of problem (1.1).

Proof.

The real regular matrix S = A^T A is symmetric and positive definite. Therefore, a Cholesky decomposition S = R^T R exists with a regular upper triangular matrix R ∈ ℝ^{m×m}. In (1.1), we are looking for a solution x ∈ ℝ^m minimizing

F(x) = ||Ax||^2 = x^T S x = x^T R^T R x  subject to  x^T C x = d. (2.1)

Since R is regular, we substitute x by R^{-1}y for y ∈ ℝ^m. Thus, we obtain a problem equivalent to (2.1): find a vector y ∈ ℝ^m minimizing

F(y) = y^T y = ||y||^2  subject to  y^T (R^{-1})^T C R^{-1} y = d. (2.2)

Now, we define G : ℝ^m → ℝ with G(y) = y^T (R^{-1})^T C R^{-1} y - d and look for a solution y on the zero set N_G of G with minimal distance to the origin. Let y_0 ∈ N_G and let K_{r_0}(0) be the closed ball of ℝ^m around 0 with radius r_0 = ||y_0||. Because of y_0 ∈ N_G ∩ K_{r_0}(0) and G being continuous, the set N_G ∩ K_{r_0}(0) = {y ∈ K_{r_0}(0) : G(y) = 0} is nonempty, closed, and bounded. Therefore, the continuous function F(y) = ||y||^2 attains a minimum y_M on N_G ∩ K_{r_0}(0) with min F(y) ≤ r_0^2 over this set. For all y from N_G ∖ K_{r_0}(0), it is F(y) > r_0^2. So, y_M is a minimum of F on all of N_G. By the equivalence of (2.1) and (2.2), the assertion follows.

3. Generalized Eigenvalue Problem

The minimization problem (1.1) induces a generalized eigenvalue problem. The following theorem was already proven by Bookstein and Fitzgibbon for the special case of ellipse fitting [6, 7].

Theorem 3.1.

If x_s is an extremum of F(x) subject to x^T C x = d, then a positive λ_0 exists with

(S - λ_0 C) x_s = 0; (3.1)

that is, x_s is an eigenvector to the generalized eigenvalue λ_0, and F(x_s) = λ_0 d holds.

Proof.

Let G : ℝ^m → ℝ be defined as G(x) := d - x^T C x. From G(x) = 0 and d > 0 follows x ≠ 0. Further, G is continuously differentiable with dG/dx = -2Cx ≠ 0 for all x of the zero set of G. So, if x_s is a local extremum of F(x) subject to G(x) = 0, then rank((dG/dx)(x_s)) = 1. Since F is also continuously differentiable on ℝ^m with m > 1, it follows by using a Lagrange multiplier: if x_s is a local extremum of F(x) subject to G(x) = 0, then a λ_0 ∈ ℝ exists such that the Lagrange function φ : ℝ^{m+1} → ℝ given as

φ(x, λ) = F(x) + λG(x) = x^T A^T A x + λ(d - x^T C x)

has a critical point in (x_s, λ_0). Therefore, x_s necessarily fulfills the equations

grad_x φ(x, λ) = 2Sx - 2λCx = 0, (3.4)
dφ(x, λ)/dλ = d - x^T C x = 0. (3.5)

The first equation describes a generalized eigenvalue problem

(S - λC)x = 0. (3.6)

With d > 0, x_s ≠ 0, and x_s fulfilling (3.6), λ must be a generalized eigenvalue and x_s a corresponding eigenvector to λ of (3.6), so that (S - λC) is a singular matrix. If λ_0 is an eigenvalue and x_0 ≠ 0 a corresponding eigenvector to λ_0 of (3.6), then every vector αx_0 is also a solution of (3.6) for λ_0. Now, we are looking for α such that x_s = αx_0 satisfies (3.5). For λ_0 ≠ 0, it follows with (3.4) that

d = α^2 x_0^T C x_0 = α^2 (x_0^T S x_0) / λ_0.

Because the left side and the numerator are positive, the denominator must also be positive; that is, only positive eigenvalues solve (3.4) and (3.5). By multiplication with λ_0,

dλ_0 = α^2 x_0^T S x_0 = x_s^T S x_s = F(x_s)

follows, and x_s = αx_0 fulfills the constraint G(x_s) = 0.

Remark 3.2.

Let x_0 be a generalized eigenvector to a positive eigenvalue λ_0 of problem (3.6). Then x_{1,2} = ±sqrt(dλ_0 / (x_0^T S x_0)) · x_0 are the corresponding solutions fulfilling the constraint x^T C x = d.

Lemma 3.3.

If S is regular and C is symmetric, then all eigenvalues of (3.1) are real and different from zero.

Proof.

With det(S) ≠ 0, λ_0 ≠ 0 in (3.1). The Cholesky decomposition S = R^T R with a regular upper triangular matrix R turns (3.1) into

(R^T R - λ_0 C) x_s = R^T (I - λ_0 (R^T)^{-1} C R^{-1}) R x_s = 0.

With R invertible and the substitution μ_0 = λ_0^{-1}, y_s = R x_s, we obtain an eigenvalue problem for the matrix (R^T)^{-1} C R^{-1}:

(μ_0 I - (R^T)^{-1} C R^{-1}) y_s = 0. (3.11)

Furthermore, we have

((R^T)^{-1} C R^{-1})^T = (R^{-1})^T C^T ((R^T)^{-1})^T = (R^T)^{-1} C^T R^{-1} = (R^T)^{-1} C R^{-1}.

Therefore, the matrix (R^T)^{-1} C R^{-1} is symmetric and all eigenvalues μ_0 are real. With λ_0 = μ_0^{-1} for μ_0 ≠ 0, the proposition follows.
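Lemma 3.3 is easy to check numerically. The following toy computation (matrices of our own choosing, not from the paper) verifies that the symmetric matrix (R^T)^{-1} C R^{-1} has a real spectrum whose nonzero reciprocals are exactly the generalized eigenvalues of (S, C):

```python
import numpy as np

# Toy data: S = A^T A regular, C symmetric and indefinite.
A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
S = A.T @ A
C = np.array([[1.0, 2.0], [2.0, -1.0]])

R = np.linalg.cholesky(S).T         # upper triangular with S = R^T R
Rinv = np.linalg.inv(R)
M = Rinv.T @ C @ Rinv               # symmetric => real eigenvalues
mu = np.linalg.eigvalsh(M)          # the mu_0 of (3.11)
lam = np.sort(1.0 / mu)             # lambda_0 = 1/mu_0

# Reference: generalized eigenvalues of (S, C) via C^{-1} S
# (C happens to be invertible in this example).
ref = np.sort(np.real(np.linalg.eigvals(np.linalg.inv(C) @ S)))
```

Here `lam` and `ref` agree to machine precision, illustrating both the realness and the reciprocal relation.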

Remark 3.4.

Because S is regular and λ ≠ 0 in (S - λ_0 C)x_s = 0, we can consider the equivalent problem with μ_0 = λ_0^{-1} instead of (3.11):

(μ_0 I - S^{-1} C) x_s = 0. (3.14)

This system is called the inverse eigenvalue problem to (3.1). Here, the eigenspaces to the generalized eigenvalue λ_0 in (3.1) and to the eigenvalue 1/λ_0 in (3.14) are identical. Moreover, the generalized eigenvectors of (3.1) to distinct eigenvalues are perpendicular with respect to the inner product induced by S, since the transformed vectors R x_s are eigenvectors of the symmetric matrix (R^T)^{-1} C R^{-1} and hence orthogonal.

Definition 3.5.

The set of all eigenvalues of a matrix C is called the spectrum σ(C). We also call the set of all eigenvalues of the generalized eigenvalue problem (3.1) the spectrum and denote it by σ(S, C). σ+(S, C) is defined as the set of all positive values in σ(S, C).

Remark 3.6.

In the case rg(C) < m = rg(S), the inverse problem (3.14) has the eigenvalue 0 with multiplicity rg(S) - rg(C). Otherwise, for μ ≠ 0 with μ ∈ σ(S^{-1}C), it follows that 1/μ ∈ σ(S, C). Analogously, for μ ∈ σ((R^T)^{-1} C R^{-1}) with μ ≠ 0, we have 1/μ ∈ σ(S, C).

The following lemma is a modified result of Fitzgibbon [7].

Lemma 3.7.

The signs of the generalized eigenvalues of (3.1) are the same as the signs of the eigenvalues of C.

Proof.

With S being nonsingular, every generalized eigenvalue λ_0 of (3.1) is nonzero. Therefore, for the equivalent problem (3.11), μ_0 = λ_0^{-1} is an eigenvalue of (R^{-1})^T C R^{-1} with the same sign, where R is the upper triangular matrix of the Cholesky decomposition of S. By Sylvester's law of inertia, the signs of the eigenvalues of (R^{-1})^T C R^{-1} are the same as those of C.

For the following proofs, we need the lemma of Lagrange (see, e.g., [12]).

Lemma 3.8 (Lemma of Lagrange).

For M ⊆ ℝ^n, f : M → ℝ, g = (g_1, …, g_k) : M → ℝ^k, and N_g = {x ∈ M : g(x) = 0 ∈ ℝ^k}, let λ ∈ ℝ^k be such that x_s ∈ N_g is a minimum of the function Φ_λ : M → ℝ with

Φ_λ(x) = f(x) + Σ_{i=1}^k λ_i g_i(x).

Then x_s is a minimal solution of f in N_g.

Definition 3.9.

Let λ* be the smallest positive value of σ+(S, C), and let x_s* be a corresponding generalized eigenvector to λ*, scaled to the constraint (x_s*)^T C x_s* = d.

Lemma 3.10.

Let S - λC be a positive semidefinite matrix for some λ ∈ σ+(S, C). Then a generalized eigenvector x_s corresponding to λ, scaled so that x_s^T C x_s = d, is a local minimum of (1.1).

Proof.

We consider Φ : ℝ^m → ℝ with Φ(x) = x^T S x + λ(d - x^T C x). With grad_x Φ(x) = 2(Sx - λCx), we have grad_x Φ(x_s) = 0, and since the Hessian matrix 2(S - λC) of Φ is positive semidefinite, the vector x_s is a minimal solution of Φ. Then x_s also minimizes F(x) subject to x^T C x = d by Lemma 3.8.

Remark 3.11.

In fact, in Lemma 3.10 we can only require a positive semidefinite matrix (not a definite one), because by (3.1) the eigenvector x_s ≠ 0 itself fulfills x_s^T (S - λC) x_s = 0.

Lemma 3.12.

The matrix (S-λ*C) is positive semidefinite.

Proof.

Let μ be an arbitrary eigenvalue of ((λ*)^{-1} I - (R^T)^{-1} C R^{-1}), where R is the upper triangular matrix of the Cholesky decomposition of S. With

det((λ*)^{-1} I - (R^T)^{-1} C R^{-1} - μI) = det(((λ*)^{-1} - μ) I - (R^T)^{-1} C R^{-1}) = 0,

it follows that ((λ*)^{-1} - μ) is an eigenvalue of (R^T)^{-1} C R^{-1}. By (3.11), this value corresponds to an inverse eigenvalue of problem (3.1). Furthermore, it yields

(λ*)^{-1} - μ ≤ max{1/λ : λ ∈ σ+(S, C)} = 1 / min{λ : λ ∈ σ+(S, C)} = 1/λ*.

So μ ≥ 0 follows; that is, ((λ*)^{-1} I - (R^T)^{-1} C R^{-1}) is positive semidefinite, and for y ∈ ℝ^m we obtain y^T ((λ*)^{-1} I - (R^T)^{-1} C R^{-1}) y ≥ 0. By setting y = Rx with regular R, we get

0 ≤ x^T ((λ*)^{-1} R^T R - C) x = x^T ((λ*)^{-1} S - C) x.

With λ* > 0, x^T (S - λ*C) x ≥ 0 for all x ∈ ℝ^m; that is, S - λ*C is positive semidefinite.

Theorem 3.13.

For the smallest value λ* in σ+(S, C), there exists a corresponding generalized eigenvector x_s* which minimizes F(x) subject to x^T C x = d.

Proof.

The matrix (S - λ*C) is positive semidefinite by Lemma 3.12, and with Lemma 3.10 it follows that x_s* is a local minimum of problem (1.1). Furthermore, we know by Theorem 3.1 that if x_s is a local extremum of F(x) subject to x^T C x = d, then a positive value λ_s exists with (S - λ_s C)x_s = 0 and F(x_s) = λ_s d. Because of the existence of a minimum x_E by Theorem 2.1, a value λ_E ∈ σ+(S, C) exists corresponding to x_E. On the other hand, for an arbitrary local minimum x_s,

F(x_s*) = λ*d = min{λd : λ ∈ σ+(S, C)} ≤ λ_s d = F(x_s).

So λ* = λ_E follows, and x_s* is a minimum of F(x) subject to x^T C x = d.

Example 3.14.

We minimize F : ℝ^2 → ℝ with F(ξ_1, ξ_2) = ξ_1^2 + ξ_2^2 subject to ξ_1^2 - ξ_2^2 = 1. So we have d = 1, S the identity matrix I_2 ∈ ℝ^{2×2}, and C ∈ ℝ^{2×2} the diagonal matrix with entries 1 and -1. Then we get the generalized eigenvalue problem

det(I_2 - λC) = (1 - λ)(-1 - λ) = 0

with eigenvalues 1 and -1. Because of Theorem 3.1, we only consider λ = 1 with generalized eigenvectors (α, 0)^T, α ∈ ℝ∖{0}. Then (1, 0)^T and (-1, 0)^T are the solutions subject to ξ_1^2 - ξ_2^2 = 1. This result conforms to the geometric interpretation, since we are looking for the points x = (ξ_1, ξ_2)^T on the hyperbola ξ_1^2 - ξ_2^2 = 1 with minimal distance to the origin.

4. Reduction to an Eigenvalue Problem of Dimension rg(C)

In numerical applications, a generalized eigenvalue problem is usually reduced to an eigenvalue problem, for example by multiplication with S^{-1}; thus we obtain the inverse problem (3.14) from (3.1) (see, e.g., [13]). But S may be ill-conditioned, so that a solution of (3.14) may be numerically unstable. Therefore, we present another reduction of (3.1).

Often, C is a sparse matrix with r := rank(C) ≤ rank(S). The symmetric matrix C is diagonalizable as C = P^T D P with P orthogonal and D diagonal. Further, we assume that the first r diagonal entries of D are different from 0. For the characteristic polynomial of (3.1), it follows that

p(λ) = det(S - λC) = det(PSP^T - λD) = 0. (4.1)

The degree of p is r. We decompose these matrices into blocks

PSP^T = [S_1, S_2; S_2^T, S_3],  D = [D_1, 0; 0, 0], (4.2)

with S_1, D_1 ∈ ℝ^{r×r}, S_2 ∈ ℝ^{r×(m-r)}, and S_3 ∈ ℝ^{(m-r)×(m-r)}. Now we eliminate the block S_2 in PSP^T by multiplication with Givens rotations G_k ∈ ℝ^{m×m}, k = 1, …, l, so that

G_l ⋯ G_2 G_1 PSP^T = [Σ_1, 0; Σ_2, Σ_3],  G_l ⋯ G_2 G_1 D = [Δ_1, 0; Δ_2, 0], (4.3)

with Σ_1, Δ_1 ∈ ℝ^{r×r}, Σ_2, Δ_2 ∈ ℝ^{(m-r)×r}, and Σ_3 ∈ ℝ^{(m-r)×(m-r)}. In (4.1), we achieve with the orthogonal G_k, k = 1, …, l,

p(λ) = det(PSP^T - λD) = det(G_l ⋯ G_2 G_1 (PSP^T - λD)) = det([Σ_1 - λΔ_1, 0; Σ_2 - λΔ_2, Σ_3]) = det(Σ_3) det(Σ_1 - λΔ_1) = 0. (4.4)

Because of p(0) = det(PSP^T) = det(S) ≠ 0 and p(0) = det(Σ_3) det(Σ_1), the submatrices Σ_1, Σ_3 are regular and the generalized eigenvalues of det(Σ_1 - λΔ_1) = 0 are different from zero. So, with y ∈ ℝ^r,

(Σ_1 - λΔ_1) y = 0 (4.5)

can be transformed into the equivalent eigenvalue problem

(Δ_1^{-1} Σ_1 - λI) y = 0. (4.6)

This system can be solved by finding the matrix X with Δ_1 X = Σ_1 using Gaussian elimination and determining the eigenvalues of X with the QR algorithm [13]. Because all steps are equivalences, we have σ(Δ_1^{-1} Σ_1) = σ(S, C); that is, the eigenvalues of (3.1) and (4.6) are the same.

By Theorem 3.13, we are looking for the smallest value λ* ∈ σ+(S, C) and a corresponding generalized eigenvector x_s* to minimize problem (1.1). So,

0 = (S - λ*C) x_s* = (SP^T - λ*CP^T) P x_s* (4.7)

holds. By substituting y_s* for P x_s*, we obtain

0 = G_l ⋯ G_2 G_1 (PSP^T - λ*D) y_s* = [Σ_1 - λ*Δ_1, 0; Σ_2 - λ*Δ_2, Σ_3] y_s*. (4.8)

We decompose y_s* into the subvectors y_{s,r}* ∈ ℝ^r and y_{s,m-r}* ∈ ℝ^{m-r} with (y_s*)^T = (y_{s,r}* | y_{s,m-r}*)^T. Then y_{s,r}* is a generalized eigenvector for λ* of problems (4.5) and (4.6).

Let y_{s,r}* be an eigenvector to the smallest positive eigenvalue λ* of (4.6). Since Σ_3 is regular, it follows from (4.8) that

y_{s,m-r}* = -Σ_3^{-1} (Σ_2 - λ*Δ_2) y_{s,r}*, (4.9)

and a generalized eigenvector x_s* for λ* in (3.1) is given as

x_s* = P^T ( y_{s,r}* ; -Σ_3^{-1} (Σ_2 - λ*Δ_2) y_{s,r}* ).
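The reduction above can be prototyped compactly. The following sketch is ours, and it replaces the explicit Givens rotations by an equivalent block (Schur-complement) elimination, which yields the same reduced r×r spectrum; all names are illustrative:

```python
import numpy as np

def reduced_solve(S, C, d, tol=1e-10):
    """Solve min ||x^T S x|| s.t. x^T C x = d via the rank(C)-dimensional
    reduced problem. Block elimination stands in for the paper's Givens
    rotations; the reduced spectrum equals sigma(S, C)."""
    dvals, V = np.linalg.eigh(C)            # C = V diag(dvals) V^T
    order = np.argsort(-np.abs(dvals))      # nonzero eigenvalues first
    dvals, V = dvals[order], V[:, order]
    r = int(np.sum(np.abs(dvals) > tol))    # r = rank(C)
    T = V.T @ S @ V                         # plays the role of P S P^T
    S1, S2, S3 = T[:r, :r], T[:r, r:], T[r:, r:]
    D1 = np.diag(dvals[:r])
    # Schur complement eliminates the bottom block (S3 is regular):
    Schur = S1 - S2 @ np.linalg.solve(S3, S2.T)
    lams, Y = np.linalg.eig(np.linalg.solve(D1, Schur))
    lams = np.real(lams)
    pos = np.where(lams > tol)[0]
    k = pos[np.argmin(lams[pos])]           # smallest positive eigenvalue
    yr = np.real(Y[:, k])
    ymr = -np.linalg.solve(S3, S2.T @ yr)   # back-substitute bottom part
    x = V @ np.concatenate([yr, ymr])       # eigenvector of (S - lam*C)
    x *= np.sqrt(d / (x @ C @ x))           # enforce x^T C x = d
    return lams[k], x
```

On a small test pair with rank(C) = 2 < m = 3, the returned pair satisfies both (S - λ*C)x = 0 and the constraint.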

5. Applications in Conic Fitting

5.1. Fitzgibbon's Ellipse Fitting

First, we would like to find an ellipse for a given set of points in ℝ^2. Generally, a conic in ℝ^2 is implicitly defined as the zero set of f : ℝ^6 × ℝ^2 → ℝ for a constant parameter a = (α_1, …, α_6)^T ∈ ℝ^6:

f(a, ξ, η) = α_1 ξ^2 + α_2 ξη + α_3 η^2 + α_4 ξ + α_5 η + α_6. (5.1)

The equation f(a, ξ, η) = 0 can also be written with x = (ξ, η)^T as

0 = x^T A x + b^T x + c  with  A = [α_1, α_2/2; α_2/2, α_3],  b = (α_4, α_5)^T,  c = α_6. (5.2)

The eigenvalues λ_1, λ_2 of A characterize a conic uniquely. Thus, we need 4λ_1λ_2 = 4α_1α_3 - α_2^2 > 0 for an ellipse f(a, x) = 0. Furthermore, every scaled vector μa with μ ∈ ℝ∖{0} describes the same zero set of f. So, we can impose the ellipse constraint 4α_1α_3 - α_2^2 = 1. For n (n ≥ 6) given points (ξ_i, η_i)^T ∈ ℝ^2, we want to find a parameter a ∈ ℝ^6 which minimizes F : ℝ^6 → ℝ with

F(a) = Σ_{i=1}^n f(a, ξ_i, η_i)^2  subject to  4α_1α_3 - α_2^2 = 1. (5.3)

This ellipse fitting problem was established and solved by Fitzgibbon [7]. With the matrices D ∈ ℝ^{n×6} and C ∈ ℝ^{6×6},

D = [ξ_1^2, ξ_1η_1, η_1^2, ξ_1, η_1, 1; ξ_2^2, ξ_2η_2, η_2^2, ξ_2, η_2, 1; …; ξ_n^2, ξ_nη_n, η_n^2, ξ_n, η_n, 1],
C with entries C_13 = C_31 = 2, C_22 = -1, and zeros elsewhere, (5.4)

and F(a) = Σ_{i=1}^n f(a, ξ_i, η_i)^2 = ||Da||_2^2, we obtain the equivalent problem

min_{a ∈ ℝ^6} ||Da||_2^2  subject to  a^T C a = 1. (5.5)

For S = D^T D, this is a special case of (1.1). Assuming S is a regular matrix, and since the eigenvalues of C are -2, -1, 0, and 2, we know by Lemma 3.7 that the generalized eigenvalue problem (S - λC)a = 0 has exactly one positive eigenvalue λ*. By Theorem 3.13, a corresponding generalized eigenvector a* to λ* minimizes problem (5.5), and a* consists of the coefficients of an implicitly given ellipse.

A numerically stable, noniterative algorithm for this optimization problem was presented by Halir and Flusser [15]. Compared with Section 4, their method uses a special block decomposition of the matrices D and C.

5.2. Hyperbola Fitting

Instead of ellipses, O'Leary and Zsombor-Murray fit a hyperbola to a set of scattered data x_i ∈ ℝ^2 [9]. A hyperbola is a conic which can be uniquely characterized by 4λ_1λ_2 = 4α_1α_3 - α_2^2 < 0. So, we consider the constraint α_2^2 - 4α_1α_3 = 1 and obtain the optimization problem

min_{a ∈ ℝ^6} ||Da||_2^2  subject to  a^T(-C)a = 1, (5.6)

with D and C chosen as in Section 5.1. The matrix (-C) has two positive eigenvalues; in this case, a solution is given by a generalized eigenvector to the smallest value in σ+(S, -C). O'Leary and Zsombor-Murray, however, determine the best hyperbolic fit by evaluating κ_i = α_{2,i}^2 - 4α_{1,i}α_{3,i}, where the eigenvector a_i = (α_{1,i}, …, α_{6,i})^T is associated with a positive value of σ+(S, -C).
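Structurally, only the sign of the constraint matrix changes. The sketch below (ours, not from [9]) simply takes the smallest value of σ+(S, -C) as suggested by Theorem 3.13; the κ > 0 check can then be used to confirm the fit is hyperbolic:

```python
import numpy as np
from scipy.linalg import eig

def fit_hyperbola(x, y):
    """Direct hyperbola fit: same D and C as in Section 5.1, but with
    the constraint a^T (-C) a = 1; the eigenvector to the smallest
    positive generalized eigenvalue of (S, -C) is returned."""
    Dm = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    S = Dm.T @ Dm
    w, V = eig(S, -C)
    w = np.real(w)
    pos = np.where(np.isfinite(w) & (w > 1e-12))[0]
    k = pos[np.argmin(w[pos])]
    a = np.real(V[:, k])
    return a / np.sqrt(a @ (-C) @ a)    # enforce a^T (-C) a = 1
```

On noisy samples of one branch of x^2 - y^2 = 1, the result satisfies κ = α_2^2 - 4α_1α_3 > 0, confirming a hyperbola.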

5.3. Bookstein's Conic Fitting

In Bookstein's method, the conic constraint is restricted to

2(trace^2 A - 2 det A) = 2(λ_1^2 + λ_2^2) = 2α_1^2 + α_2^2 + 2α_3^2 = 1, (5.8)

where λ_1, λ_2 are the eigenvalues of A in f [6]. Here λ_{1,2} ∈ [-√2/2, √2/2] and at least one of them is different from 0. The constraint (5.8) is not a restriction to a particular class of conics. Hence, we determine an arbitrary conic which minimizes

F(a) = Σ_{i=1}^n f(a, ξ_i, η_i)^2  subject to  2α_1^2 + α_2^2 + 2α_3^2 = 1. (5.9)

The resulting data matrix D ∈ ℝ^{n×6} is the same as for Fitzgibbon's problem. The constraint matrix C ∈ ℝ^{6×6} is diagonal with entries (2, 1, 2, 0, 0, 0); that is, all eigenvalues of C are nonnegative. In the case of a regular matrix S, problem (5.9) is solved by a generalized eigenvector to the smallest value in σ+(S, C).

5.4. Approximation of Shrinked Aspheres

After the molding process in optical applications, the shrinkage of rotation-symmetric aspheres is implicitly described for x = (ξ, ζ)^T by

α_1 ζ^2 + α_2 ζ + α_3 + α_4 ξ^2 = 0  subject to  α_2^2 - 4α_1α_3 = 4r^2, (5.10)

where r ∈ ℝ∖{0} and a = (α_1, …, α_4)^T are asphere-specific constants [10]. For i = 1, …, n with n ≥ 4, the scattered data x_i = (ξ_i, ζ_i)^T ∈ ℝ^2 of a shrinked asphere are given in this approximation problem. Here, we are looking for the conic parameter a = (α_1, …, α_4)^T for a fixed value r_ref which minimizes

F(a) = Σ_{i=1}^n (α_1 ζ_i^2 + α_2 ζ_i + α_3 + α_4 ξ_i^2)^2  subject to  α_2^2 - 4α_1α_3 = 4 r_ref^2. (5.11)

Analogously to Fitzgibbon, we have the matrices D ∈ ℝ^{n×4} and C ∈ ℝ^{4×4} with

D = [ζ_1^2, ζ_1, 1, ξ_1^2; ζ_2^2, ζ_2, 1, ξ_2^2; …; ζ_n^2, ζ_n, 1, ξ_n^2],  C = [0, 0, -2, 0; 0, 1, 0, 0; -2, 0, 0, 0; 0, 0, 0, 0], (5.12)

and with F(a) = ||Da||_2^2 we get the optimization problem

min_{a ∈ ℝ^4} ||Da||_2^2  subject to  a^T C a = 4 r_ref^2. (5.13)

This is also an application of (1.1). The matrix C has the eigenvalues -2, 0, 1, and 2. So, the generalized eigenvalue problem (3.1) with regular S = D^T D ∈ ℝ^{4×4} has two positive values in σ+(S, C). By Theorem 3.13, a generalized eigenvector a* ∈ ℝ^4 to the smaller of these two values solves (5.13).
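The asphere problem follows the same pattern with a 4-column design matrix. The following is our own sketch (names and test data are illustrative, not from [10]), again assuming SciPy:

```python
import numpy as np
from scipy.linalg import eig

def fit_shrinked_asphere(xi, zeta, r_ref):
    """Fit a1*zeta^2 + a2*zeta + a3 + a4*xi^2 = 0 subject to
    a2^2 - 4*a1*a3 = 4*r_ref^2, via the smaller of the two positive
    generalized eigenvalues of (S - lambda*C) a = 0."""
    Dm = np.column_stack([zeta**2, zeta, np.ones_like(zeta), xi**2])
    C = np.zeros((4, 4))
    C[1, 1] = 1.0
    C[0, 2] = C[2, 0] = -2.0          # a^T C a = a2^2 - 4*a1*a3
    S = Dm.T @ Dm
    w, V = eig(S, C)
    w = np.real(w)
    pos = np.where(np.isfinite(w) & (w > 1e-12))[0]
    k = pos[np.argmin(w[pos])]        # smaller positive eigenvalue
    a = np.real(V[:, k])
    return a * np.sqrt(4*r_ref**2 / (a @ C @ a))
```

On noisy samples of the curve ζ^2 + ξ^2/4 = 1 (which satisfies the constraint with r = 1), the fit recovers the constraint exactly and the curve to within the noise level.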

Note that the coefficients α_i in problems (5.5) and (5.13) do not correspond to the same monomials ξ^k ζ^l; hence, we have different matrices D and C.

6. Conclusion

In this paper, we presented a least squares minimization problem subject to absolute quadratic constraints. We developed a closed theory with the main result that a minimum is given by a generalized eigenvector corresponding to the smallest positive generalized eigenvalue. Further, we showed a reduction to an ordinary eigensystem for numerical calculations. Finally, we studied four applications in conic approximation: Fitzgibbon's method for direct ellipse-specific fitting, O'Leary's direct hyperbola approximation, Bookstein's conic fitting, and an optical application to shrinked aspheres. All these problems reduce to the general optimization problem (1.1).

References

[1] X. Liu and J. Lu, "Least squares based iterative identification for a class of multirate systems," Automatica, vol. 46, no. 3, pp. 549–554, 2010.
[2] F. Ding, P. X. Liu, and G. Liu, "Multiinnovation least-squares identification for system modeling," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 40, no. 3, pp. 767–778, 2010.
[3] Y. Xiao, Y. Zhang, J. Ding, and J. Dai, "The residual based interactive least squares algorithms and simulation studies," Computers and Mathematics with Applications, vol. 58, no. 6, pp. 1190–1197, 2009.
[4] B. Bao, Y. Xu, J. Sheng, and R. Ding, "Least squares based iterative parameter estimation algorithm for multivariable controlled ARMA system modelling with finite measurement data," Mathematical and Computer Modelling, vol. 53, no. 9-10, pp. 1664–1669, 2011.
[5] F. Ding, P. X. Liu, and G. Liu, "Gradient based and least-squares based iterative identification methods for OE and OEMA systems," Digital Signal Processing, vol. 20, no. 3, pp. 664–677, 2010.
[6] F. L. Bookstein, "Fitting conic sections to scattered data," Computer Graphics and Image Processing, vol. 9, no. 1, pp. 56–71, 1979.
[7] A. W. J. Fitzgibbon, Stable segmentation of 2D curves, Ph.D. thesis, University of Edinburgh, 1997.
[8] W. Gander, "Least squares with a quadratic constraint," Numerische Mathematik, vol. 36, no. 3, pp. 291–307, 1981.
[9] P. O'Leary and P. Zsombor-Murray, "Direct and specific least-square fitting of hyperbolæ and ellipses," Journal of Electronic Imaging, vol. 13, no. 3, pp. 492–503, 2004.
[10] R. Schoene, D. Hintermann, and T. Hanning, "Approximation of shrinked aspheres," in International Optical Design Conference, vol. 6342 of Proceedings of SPIE, Vancouver, Canada, 2006.
[11] J. Jost, Postmodern Analysis, Universitext, Springer, Berlin, Germany, 1998.
[12] P. Kosmol, Optimierung und Approximation, de Gruyter Lehrbuch, Walter de Gruyter, Berlin, Germany, 1991.
[13] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[14] W. Gander, G. H. Golub, and R. Strebel, "Least-squares fitting of circles and ellipses," BIT Numerical Mathematics, vol. 34, no. 4, pp. 558–578, 1994.
[15] R. Halir and J. Flusser, "Numerically stable direct least squares fitting of ellipses," in Proceedings of the International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media, pp. 125–132, 1998.