On degrees of modular common divisors and the Big prime gcd algorithm

We consider a few modifications of the Big prime modular $\gcd$ algorithm for polynomials in $\Z[x]$. Our modifications are based on bounds of degrees of modular common divisors of polynomials, on estimates of the number of prime divisors of a resultant and on finding preliminary bounds on degrees of common divisors using auxiliary primes. These modifications are used to suggest improved algorithms for $\gcd$ calculation and for coprime polynomials detection. To illustrate the ideas we apply the constructed algorithms on certain polynomials, in particular, on polynomials from Knuth's example of intermediate expression swell.


Introduction
This work is one of the articles in which we would like to present parts of the new Introduction to computer algebra [11], which is currently under preparation. In [11] we try to give a "more algebraic" and detailed view of some areas of computer algebra, such as algorithms on Euclidean rings, extensions of fields, operators in spaces over finite fields, factorization in UFD's, etc.

The Big prime modular gcd algorithm is one of the first and most popular algorithms of computer algebra. In its classical form it allows one to calculate the greatest common divisor gcd(f(x), g(x)) for any non-zero polynomials f(x), g(x) ∈ Z[x]. There are a few modifications of this algorithm for other UFD's, such as multivariate polynomial rings. Attention to gcd calculation is partially explained by the first examples that were built to show the importance of applying algebraic methods in computer science. In particular, Knuth's well known example of intermediate expression swell discusses the polynomials

(1) f(x) = x^8 + x^6 − 3x^4 − 3x^3 + 8x^2 + 2x − 5, g(x) = 3x^6 + 5x^4 − 4x^2 − 9x + 21,

and it shows that calculation of gcd(f(x), g(x)) by the traditional Euclidean algorithm over the rational numbers generates very large integers to deal with, whereas consideration of these polynomials modulo p, that is, consideration of their images under the ring homomorphism ϕ_p : Z[x] → Z_p[x] (where Z_p[x] is the polynomial ring over the residue ring Z_p ≅ Z/pZ), very easily shows that gcd(f(x), g(x)) = 1 (see [7] and also [1,15,4,14]). We are going to use the polynomials (1) in our examples below.

The main idea of the Big prime modular gcd algorithm is that for the given polynomials f(x), g(x) ∈ Z[x] one may first consider their images f_p(x) = ϕ_p(f(x)), g_p(x) = ϕ_p(g(x)) ∈ Z_p[x] under ϕ_p. Unlike Z[x], the ring Z_p[x] is an Euclidean domain, since it is a polynomial ring over a field, so gcd(f_p(x), g_p(x)) can be computed in it by the well known Euclidean algorithm.
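The modular computation that settles Knuth's example takes only a few lines. The sketch below (the function name `poly_gcd_mod` and the list encoding of polynomials, highest coefficient first, are our own illustrative choices, not from any library) runs the Euclidean algorithm in Z_p[x]:

```python
# Polynomials are lists of integer coefficients, highest degree first.
def poly_gcd_mod(f, g, p):
    """Euclidean algorithm in Z_p[x] (p prime); returns some gcd, unnormalized."""
    def strip(h):
        i = next((i for i, a in enumerate(h) if a), len(h))
        return h[i:]
    f, g = strip([a % p for a in f]), strip([a % p for a in g])
    while g:
        inv = pow(g[0], -1, p)              # inverse of lc(g) in the field Z_p
        r = f
        while len(r) >= len(g):
            c = r[0] * inv % p              # eliminate the leading term of r
            r = strip([(a - c * b) % p for a, b in zip(r, g)] + r[len(g):])
        f, g = g, r
    return f

# Knuth's polynomials (1)
f = [1, 0, 1, 0, -3, -3, 8, 2, -5]          # x^8 + x^6 - 3x^4 - 3x^3 + 8x^2 + 2x - 5
g = [3, 0, 5, 0, -4, -9, 21]                # 3x^6 + 5x^4 - 4x^2 - 9x + 21

# A constant gcd modulo one suitable prime already certifies coprimality;
# p = 1031 is the prime used for these polynomials in Section 5.
e = poly_gcd_mod(f, g, 1031)
```

For the prime p = 1031 (and in fact for all but a small set of "bad" primes) the computed gcd is a non-zero constant, i.e. gcd(f_p(x), g_p(x)) ≈ 1, with none of the intermediate-expression swell of the rational Euclidean algorithm.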
There remains "to lift" a certain fold t · gcd(f_p(x), g_p(x)) of it to the ring Z[x] to reconstruct the pre-image gcd(f(x), g(x)). The "lifting" procedure consists of selecting a suitable value for the prime p, then finding in Z[x] an appropriate pre-image for gcd(f_p(x), g_p(x)), then checking if that pre-image divides both f(x) and g(x). If yes, it is the gcd(f(x), g(x)) we are looking for. If not, then a new p needs to be selected to repeat the process. Arguments based on resultants and on Landau-Mignotte bounds show that we can effectively choose p such that the number of required repetitions is "small".
The first aim of this work is to present in Sections 2-5 a slightly modified argumentation of the algorithm, based on comparison of the degrees of common divisors of f(x) and g(x) in Z[x] and of f_p(x) and g_p(x) in Z_p[x] (see Algorithm 5.1). This approach allows some simplification of a step of the algorithm: for some primes p we need not reconstruct the pre-image of t · gcd(f_p(x), g_p(x)); instead we immediately get an indication that this prime is not suitable, and we should proceed to a new p (see Remark 5.1).
Then in Section 6 we discuss the problem of whether the Big prime modular gcd algorithm could output the correct answer using just one prime p. The answer is positive, but for some reasons it should not be used to improve the algorithm (to make it work with one p), because it involves a too large prime (see Remark 6.3). Instead, we show that we can estimate the maximal number of p's (repetitions of steps) that may be used in the traditional Big prime modular gcd algorithm. For example, for the polynomials (1) of Knuth's example this number is at most 31. Estimates of this type can be found elsewhere in the literature; we just make the bound considerably smaller (see Remark 6.7).
The obtained bounds on the number of primes p are especially effective when we are interested not in the gcd itself, but just in detecting whether the polynomials f(x), g(x) ∈ Z[x] are coprime. We consider this in Section 7 (see Algorithm 7.1).
In Section 8 we consider four other ideas to modify the Big prime modular gcd algorithm. The first two ideas are based on checking the number of primes p. The third idea is based on using an auxiliary prime q to estimate the degree of gcd(f(x), g(x)) by means of the degree of gcd(f_q(x), g_q(x)) (see Algorithm 8.1). Example 8.1 shows how much better results we may get by this modification. The fourth idea combines both approaches: it uses a set of auxiliary primes q_1, …, q_{k+1} to correctly find the degree of gcd(f(x), g(x)), and then a modified version of the Landau-Mignotte bound to find a single big prime p by which we can calculate gcd(f_p(x), g_p(x)).
The arguments used here can be generalized to the case of polynomials over general UFD's. From the unique factorization in a UFD it easily follows that a gcd always exists, and it is easy to detect whether a given common divisor of maximal degree is a gcd. The less simple part is to find ways to compute the gcd (without having the prime-power factorization). That can be done for some classes of UFD's, such as multivariate polynomials over fields. The case of general UFD's will be considered later [12].

The gcd in polynomial rings and the degrees of common divisors
The problem of finding the greatest common divisor gcd(a, b) of any non-zero elements a, b in a ring R can be separated into two tasks: (1) finding out if gcd(a, b), in general, exists for a, b ∈ R; and then (2) finding an effective way to calculate gcd(a, b).
The Euclidean algorithm gives an easy answer to both of these tasks in any Euclidean domain, that is, an integral domain R possessing an Euclidean norm δ : R\{0} → N ∪ {0} such that δ(ab) ≥ δ(a) holds for any non-zero elements a, b ∈ R, and for any a, b ∈ R with b ≠ 0 there exist elements q, r ∈ R such that a = qb + r, where either r = 0, or r ≠ 0 and δ(r) < δ(b) [10,5,3,8,15,4].

The situation is less simple in non-Euclidean domains, even in such a widely used ring as the ring Z[x] of polynomials with integer coefficients. That Z[x] is not an Euclidean domain is easy to show by the elements x, 2 ∈ Z[x]. If Z[x] were an Euclidean domain, it would contain elements u(x), v(x) such that x · u(x) + 2 · v(x) = gcd(x, 2) = ±1, which is not possible.
The first of the two tasks mentioned above, namely the existence of the gcd, can be accomplished for Z[x] by proving that Z[x] is a UFD, that is, an integral domain in which every non-zero element a has a factorization a = ε p_1 ⋯ p_k, where ε ∈ R* is a unit (invertible) element of R, the elements p_i are prime for all i = 1, …, k, and where the factorization above is unique in the sense that if a has another factorization of that type, θ q_1 ⋯ q_s, where θ ∈ R* and the elements q_i are prime, then k = s and (perhaps after some reordering of the prime factors) the respective prime elements are associated: p_i ≈ q_i for all i = 1, …, k. For briefness, in the sequel we will often omit the phrase "perhaps after some reordering of the prime factors", and this will cause no confusion.
After merging the associated prime elements together, we get a unique factorization into prime-power elements:

(2) a = ν p_1^{α_1} ⋯ p_n^{α_n}, ν ∈ R*, α_i ∈ N, and p_i ≉ p_j for any i ≠ j; i, j = 1, …, n

(in some arguments below we may admit that some of the factors p_i^{α_i} participate with exponents α_i = 0; this makes some notations simpler). From this it is easy to see that in a UFD R the gcd(a, b) exists for any non-zero elements a, b ∈ R. Assume b ∈ R has the factorization b = κ p_1^{α'_1} ⋯ p_n^{α'_n}, κ ∈ R* (we use the same primes p_i in both factorizations because if, say, p_i does not actually participate in one of them, we can add it as p_i^{α_i} with α_i = 0). Then

(3) gcd(a, b) ≈ p_1^{min{α_1, α'_1}} ⋯ p_n^{min{α_n, α'_n}}.

This follows from the uniqueness of factorization in a UFD. For, if h is a common divisor of a, b, and if p_i is a prime divisor of h, then it also is a prime divisor of a and of b. The element p_i cannot participate in the factorization of h with a power greater than min{α_i, α'_i}, because then a (or b) would have an alternative factorization in which p_i occurs more than α_i (or α'_i) times.

The shortest way to see that Z[x] is a UFD is to apply Gauss's Theorem: if the ring R is a UFD, then the polynomial ring R[x] also is a UFD [10,5,2,8,15]. Since Z is a UFD (that fact is known as "the fundamental theorem of arithmetic"), Z[x] also is a UFD.
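Formula (3) can be made tangible for R = Z with a short script; `factorize` and `gcd_via_factorization` are our own illustrative names. It also previews a point made below: computing a gcd through factorizations works, but factoring is far more expensive than the Euclidean algorithm.

```python
from math import gcd

def factorize(n):
    """Prime-power factorization of an integer n > 1, as in (2): {p_i: alpha_i}."""
    fac, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            fac[p] = fac.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def gcd_via_factorization(a, b):
    """Formula (3): every common prime enters with the minimum of its exponents."""
    fa, fb = factorize(a), factorize(b)
    d = 1
    for p, alpha in fa.items():
        d *= p ** min(alpha, fb.get(p, 0))
    return d
```

For instance, 360 = 2^3 · 3^2 · 5 and 84 = 2^2 · 3 · 7 give gcd 2^2 · 3 = 12, in agreement with Euclid's algorithm (`math.gcd`).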
Clearly, gcd(a, b) is defined up to a unit multiplier from R*. For integers from R = Z or for polynomials from R = Z[x] this unit multiplier can only be −1 or 1; so to say, gcd(a, b) is defined "up to the sign ±1", because Z* = Z[x]* = {−1, 1}. And for polynomials from R = Z_p[x] the gcd(a, b) is defined up to any non-zero multiplier t ∈ Z_p* = {1, …, p − 1}. Taking this into account, we can use gcd(a, b) = 1 and gcd(a, b) ≈ 1 as equivalent notations, since associated elements are defined up to a unit multiplier. Notice that some sources prefer to additionally introduce a normal form of the gcd to distinguish one fixed instance of it. Instead of using that extra term, we will just in a few places refer to the "positive gcd", meaning that we take, say, 2 = gcd(6, 8), and not −2.
Furthermore, since the content cont(f(x)) of a polynomial f(x) is a gcd of some elements (the coefficients of the polynomial), the content and the primitive part pp(f(x)) = f(x)/cont(f(x)) can also be considered up to a unit multiplier. For a non-zero polynomial f(x) ∈ Z[x] we can choose cont(f(x)) so that sgn(cont(f(x))) = sgn(lc(f(x))), that is, cont(f(x)) has the same sign as the leading coefficient of f(x). Then the leading coefficient lc(pp(f(x))) of the primitive part pp(f(x)) = f(x)/cont(f(x)) will be positive. We will use this below without special notification.

Now we would like to restrict a little the algebraic background we use. The two main algebraic systems used in the Big prime modular gcd algorithm are the Euclidean domains and the UFD's. However, their usage is "asymmetric" in the sense that the Euclidean domains and the Euclidean algorithm are used in many parts of the Big prime modular gcd algorithm, whereas the UFD's are used just to prove that the gcd does exist. Moreover, it is easy to understand that (2) and (3) may hardly be effective tools to calculate a gcd, since they use factorization of elements into primes, while finding such a factorization is a more complicated task than finding just the gcd. Thus, it is reasonable to drop the UFD's from consideration and to obtain the factorization directly using Gauss's Lemma on primitive polynomials in Z[x]. By Gauss's Lemma, a product of two primitive polynomials is primitive in Z[x] [10,5,2,8,15]. So we present

(4) f(x) = cont(f(x)) · pp(f(x)) and g(x) = cont(g(x)) · pp(g(x)).

The following (Lemma 2.1) is easy to deduce from Gauss's Lemma. The unique factorization of any non-zero f(x) ∈ Z[x] is easy to obtain from the decompositions (5) above and from Lemma 2.1. Let us just outline it; the details can be found in [10,5,15,4,11]. By the fundamental theorem of arithmetic, cont(f(x)) can in a unique way be presented as a product of powers of primes: cont(f(x)) = ν p_1^{α_1} ⋯ p_n^{α_n}.
So, if deg f (x) = 0, then we are done.
If pp(f(x)) is not prime, then by repeatedly splitting it into products of factors of lower degree as many times as needed, we will eventually get a presentation of f(x) as a product of cont(f(x)) and of finitely many primitive prime polynomials q_i(x) of degrees greater than 0. We do not yet have the uniqueness of this decomposition, but we can still group the associated elements together to get the presentation:

(6) f(x) = ν p_1^{α_1} ⋯ p_n^{α_n} · q_1^{β_1}(x) ⋯ q_m^{β_m}(x).

If f(x) has another, alternative presentation of this sort and if t(x) is one of the primitive prime factors (of degree greater than 0) of that presentation, then t(x) divides the product of the q_i(x)'s and, by Lemma 2.1, is associated with one of them. We eliminate that pair from the two presentations and, if primitive prime factors of degree greater than 0 still remain, we repeat the process. If not, we turn to the other primitive prime polynomials (of degree greater than 0) dividing what remains from (6) after the eliminations. After finitely many steps (6) will become ν p_1^{α_1} ⋯ p_n^{α_n}, and from the other, alternative presentation only a constant will be left as well. So we apply the fundamental theorem of arithmetic one more time to get that (6) is the unique factorization.
We see that (6) is a particular case of (2). The proof above avoided the use of Gauss's Theorem and the formal definition of UFD's. And we see that the prime elements of Z[x] are of two types: prime numbers and primitive prime polynomials of degree greater than 0.
The existence of gcd(f(x), g(x)) for any two non-zero polynomials in Z[x] can be deduced from (6) in analogy with (3). If g(x) has the analogous presentation

(7) g(x) = κ p_1^{α'_1} ⋯ p_n^{α'_n} · q_1^{β'_1}(x) ⋯ q_m^{β'_m}(x),

then

(8) gcd(f(x), g(x)) ≈ p_1^{γ_1} ⋯ p_n^{γ_n} · q_1^{δ_1}(x) ⋯ q_m^{δ_m}(x), where γ_i = min{α_i, α'_i} and δ_j = min{β_j, β'_j} (i = 1, …, n; j = 1, …, m).

However, as we admitted earlier, (3) and (8) are not effective tools to calculate the gcd. We will turn to the gcd calculation algorithm in the next section.
Formulas (3) and (8) allow us to get some information that will be essential later. Observe that the following definition of gcd, often used in elementary mathematics, is no longer true for general polynomial rings: "d(x) is the greatest common divisor of f(x) and g(x) if it is their common divisor of maximal degree". For example, for f(x) = 12x^2 + 24x + 12 and g(x) = 8x + 8 the maximum of the degrees of their common divisors is 1; yet the degree-1 common divisor x + 1 is not a gcd, since the common divisor 4 does not divide it: the gcd is 4x + 4.
We can detect the cases when the divisor of highest degree is the gcd.
The lemma easily follows from (6), (7) and (8). We see that in the example above the condition was missing: cont(x + 1) = 1, but gcd(cont(f(x)), cont(g(x))) = gcd(12, 8) = 4 ≉ 1.

In the case when the polynomials are over a field, the situation is simpler. For any field K the polynomial ring K[x] is a UFD (and even an Euclidean domain). Any non-zero f(x) ∈ K[x] has a factorization

(9) f(x) = u · q_1^{β_1}(x) ⋯ q_m^{β_m}(x), u ∈ K*,

with prime polynomials q_j(x), j = 1, …, m, which is unique in the sense mentioned above. Since all non-zero scalars in K are units, what we in (6) above had as a product of some prime numbers actually "merges" in K into a unit. Comparing factorizations of type (9) for any non-zero polynomials f(x), g(x) ∈ K[x] we easily get:

Lemma 2.4. In K[x] any common divisor of f(x) and g(x) of maximal degree is a gcd(f(x), g(x)).

This, in particular, is true for the rings mentioned above, such as Z_p[x]. We will use this fact later to construct the Big prime modular gcd algorithm and its modifications.
The analog of Lemma 2.4 does not hold for Z[x] because in the factorization (8) we have the non-unit prime-power factors p_i^{γ_i}, which do participate in the factorization of d(x) = gcd(f(x), g(x)) but add nothing to the degree of d(x). This is why maximality of the degree is no longer the only criterion in Z[x] to detect whether a given h(x) is the gcd.

Some notations for modular reductions
The following notations, adopted from [11], are to make our arguments shorter and more uniform when we deal with numbers, polynomials and matrices. As above, let Z_p be the residue ring (the finite Galois field Z_p = F_p = {0, …, p − 1}) and let ϕ_p : Z → Z_p be the reduction homomorphism. We use the same symbol ϕ_p to denote the homomorphism ϕ_p : Z[x] → Z_p[x], where Z_p[x] is the ring of polynomials over Z_p, and ϕ_p maps each of the coefficients a_i of f(x) ∈ Z[x] to the remainder after division of a_i by p.
Similarly, we define the homomorphism ϕ_p : M_{m,n}(Z) → M_{m,n}(Z_p) of matrix rings, which maps each of the elements a_ij of a matrix A ∈ M_{m,n}(Z) to the remainder after division of a_ij by p.
Using the same symbol ϕ_p for the numeric, polynomial and matrix homomorphisms causes no misunderstanding below, and it is more convenient for several reasons. These homomorphisms are called "modular reductions" or just "reductions". We can also specify these homomorphisms as "numeric modular reduction", "polynomial modular reduction" or "matrix modular reduction" where needed [11]. If

(10) f(x) = a_0 x^n + · · · + a_n,

then ϕ_p(f(x)) = f_p(x) = ϕ_p(a_0) x^n + · · · + ϕ_p(a_n) = a_{0,p} x^n + · · · + a_{n,p} ∈ Z_p[x].
And for a matrix A ∈ M_{m,n}(Z) denote ϕ_p(A) = A_p ∈ M_{m,n}(Z_p). If A = (a_{i,j})_{m×n}, then A_p = (ϕ_p(a_{i,j}))_{m×n} = (a_{i,j,p})_{m×n}.
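The convention of one symbol for all three reductions can be sketched in a few lines; `phi_p` below is our own illustrative stand-in for ϕ_p, acting on an integer, a coefficient list (polynomial), or a list of rows (matrix):

```python
def phi_p(obj, p):
    """Modular reduction: an integer, a coefficient list (a polynomial), or a
    list of rows (a matrix) is mapped to its image modulo p, entry by entry."""
    if isinstance(obj, int):
        return obj % p          # the remainder after division by p
    return [phi_p(x, p) for x in obj]

f = [1, 0, -3, 2]               # x^3 - 3x + 2
A = [[5, -1], [9, 3]]           # a 2x2 integer matrix
```

For example, phi_p applied to f with p = 7 gives the coefficients of f_7(x) = x^3 + 4x + 2, and the matrix reduction acts entrywise in the same way.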

Problems in lifting the modular gcd to Z[x]
Now we turn to the second task mentioned earlier: effective calculation of the actual gcd(f(x), g(x)) for given non-zero polynomials f(x), g(x) ∈ Z[x]. The ring Z_p[x] is an Euclidean domain, unlike the ring Z[x]. So we can use the Euclidean algorithm to calculate the gcd for any non-zero polynomials in Z_p[x], including the modular images f_p(x) and g_p(x). Since the notation gcd(f_p(x), g_p(x)) is going to be used repeatedly, for briefness denote by e_p(x) the gcd calculated by the Euclidean algorithm for f_p(x), g_p(x). Let us stress that gcd(f_p(x), g_p(x)) is not determined uniquely, since for any non-zero t ∈ Z_p the product t · gcd(f_p(x), g_p(x)) also is a gcd for f_p(x), g_p(x). We are denoting just one of these gcd's (namely, the one computed by the Euclidean algorithm) by e_p(x). This e_p(x) is unique, since at each step of the Euclidean algorithm we have a unique action to take (to see this just consider the steps of the "long division" used to divide f_p(x) by g_p(x) over the field Z_p).
The main idea of the algorithm is to calculate e_p(x) ≈ gcd(f_p(x), g_p(x)) for some suitable p, and to reconstruct d(x) = gcd(f(x), g(x)) from it. We separate the process into four main problems that may occur, and show how to overcome each one to arrive at a correctly working algorithm.

Problem 1. Avoiding vanishing coefficients.
After the reduction ϕ_p some of the coefficients of f(x) and g(x) may change or even vanish. So their images f_p(x) = ϕ_p(f(x)) and g_p(x) = ϕ_p(g(x)) may keep very little information for reconstructing d(x) from e_p(x).
The first simple idea to avoid such losses is to take p larger than the absolute value of all coefficients of f(x) and g(x). This, however, is not enough, since a divisor h(x) of a polynomial f(x) may have coefficients larger than those of f(x). Moreover, using the cyclotomic factors of x^n − 1 = ∏_{k=1}^{n} (x − e^{2iπk/n}) for large enough n, one can get divisors of f(x) = x^n − 1 which have a coefficient larger than any pre-given number [5,15,11]. Since we do not know the divisors of f(x) and g(x), we cannot be sure if the above mentioned large p will be large enough to prevent loss of coefficients of h(x). To overcome this one can use the Landau-Mignotte bounds, as is done in [4,15,14]. For a polynomial f(x) given by (10) put ‖f(x)‖ = (a_0^2 + · · · + a_n^2)^{1/2}.

Let f(x) = a_0 x^n + · · · + a_n and h(x) = c_0 x^k + · · · + c_k be non-zero polynomials in Z[x]. If h(x) is a divisor of f(x), then:

(11) |c_i| ≤ 2^k · |c_0 / a_0| · ‖f(x)‖, i = 0, …, k.

The proof is based on calculations with complex numbers, and it can be found, for example, in [15,11]. We are going to use the Landau-Mignotte bounds in the following two shapes:

Corollary 4.3. If h(x) is a divisor of f(x), then

(12) |c_i| ≤ 2^{n−1} · ‖f(x)‖.

Proof. To obtain this from (11) first notice that |c_0/a_0| ≤ 1.
Finally, if k = deg h(x) ≤ n − 1 (k is unknown to us), then we can simply replace in (11) the value 2^k by 2^{n−1}.
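The bound (12) is straightforward to evaluate; in the sketch below (the function names are ours) we work with squared values to stay in exact integer arithmetic:

```python
def norm_sq(f):
    """||f||^2: the squared Euclidean norm of the coefficient vector."""
    return sum(a * a for a in f)

def lm_bound_sq(f):
    """Square of the bound (12): (2^(n-1) * ||f||)^2 for n = deg f."""
    n = len(f) - 1
    return 4 ** (n - 1) * norm_sq(f)

f = [1, 4, 3]                                # (x + 1)(x + 3)
h = [1, 1]                                   # its divisor x + 1
knuth_f = [1, 0, 1, 0, -3, -3, 8, 2, -5]     # f(x) from (1)
```

Every coefficient c of a divisor satisfies c^2 ≤ lm_bound_sq(f). For Knuth's f(x) this gives ‖f(x)‖ = √113 and a bound 2^7 · √113 < 1361, in line with the estimate N_f < 1408 appearing later in Example 6.2.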
Remark 4.4. In the literature the rather less accurate bound |c_i| ≤ 2^n ‖f(x)‖ is used, but the second paragraph of our proof above allows one to replace 2^n by 2^{n−1}. See also Remark 6.7.
Corollary 4.5. If h(x) = c_0 x^k + · · · + c_k is a common divisor of f(x) = a_0 x^n + · · · + a_n and g(x) = b_0 x^m + · · · + b_m, then

(13) |c_i| ≤ 2^{min{n,m}} · gcd(a_0, b_0) · min{‖f(x)‖/|a_0|, ‖g(x)‖/|b_0|} = N_{f,g}.

Proof. To obtain this from (11) just notice that if h(x) is a common divisor of f(x) and g(x), then its leading coefficient c_0 divides both a_0 and b_0, and deg h(x) ≤ min{n, m}.
Formula (13) provides the hint to overcome Problem 1 about vanishing coefficients, mentioned at the start of this subsection. Although the divisors h(x) of f(x) and g(x) are yet unknown, we can compute N_{f,g} and take p > N_{f,g}. If we apply the reduction ϕ_p for this p, we can be sure that none of the coefficients of h(x) has changed "much" under that homomorphism, for ϕ_p does not alter the non-negative coefficients of h(x), and it just adds p to all negative coefficients of h(x). The same holds true for d(x) = gcd(f(x), g(x)).

Example 4.6. If for some polynomials f(x), g(x) we have N_{f,g} = 15, we can take the prime, say, p = 17 > N_{f,g}. Assume we have somehow calculated d_17(x) = 12x^3 + 3x + 10. We can be sure that d(x) is not the pre-image 29x^3 − 17x^2 + 20x + 27, because d(x) cannot have coefficients greater than 15 by absolute value. But we still cannot be sure if the pre-image d(x) is 12x^3 + 3x + 10, or −5x^3 + 3x + 10, or maybe −5x^3 − 14x − 7.
It is easy to overcome this by just taking a larger value, p > 2 · N_{f,g}: if the coefficient c_i of d(x) is non-negative, then ϕ_p(c_i) = c_i < p/2, and if it is negative, then ϕ_p(c_i) = c_i + p > p/2. This provides us with the following very simple algorithm to reconstruct d(x) if we have already computed d_p(x) for a sufficiently large prime p.
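The reconstruction procedure just described (Algorithm 4.1) is essentially a symmetric lift of the residues; a minimal sketch, with our own function names:

```python
def lift(c, p):
    """Signed pre-image in (-p/2, p/2) of a residue c in {0, ..., p-1}."""
    return c if c < p / 2 else c - p

def lift_poly(h, p):
    """Algorithm 4.1: coefficient-wise reconstruction, valid once p > 2*N."""
    return [lift(c % p, p) for c in h]
```

With N = 15 as in Example 4.6 and a prime p = 37 > 2 · 15, the residues 12, 3, 10 lift to themselves, while a residue such as 32 lifts to 32 − 37 = −5, so the ambiguity of Example 4.6 disappears.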

Problem 3. Finding the correct fold of the modular gcd of right degree.
Now additionally assume the polynomials f(x), g(x) ∈ Z[x] to be primitive. Since cont(f(x)) and cont(g(x)) are defined up to the sign ±1, we can without loss of generality assume the leading coefficients of f(x), g(x) to be positive.
Below, in Problem 4, we will see that for some p the polynomial e_p(x), computed by the Euclidean algorithm in Z_p[x], may not be the image of d(x) and, moreover, its degree may be different from that of d(x). This means that by applying Algorithm 4.1 to e_p(x) we may not obtain d(x). Assume, however, we have a p which meets the condition p > 2 · N_{f,g} and for which:

(14) deg d(x) = deg e_p(x).

Even then e_p(x) need not be the image of d(x).

Example 4.7. For f(x) = x^2 + 4x + 3 and g(x) = x^2 + 2x + 1, whichever prime p > 4 we take, we will get by the Euclidean algorithm the last non-zero remainder e_p(x) = 2x + 2. By Corollary 2.3 a common divisor of maximal degree of f(x) and g(x) is x + 1, and we have d(x) = x + 1. So regardless of how large a p we choose, we will never get ϕ_p(x + 1) = 2x + 2; that is, e_p(x) is not the image of d(x), only a fold of it.
In other words, we know that the image d_p(x) is one of the folds t · e_p(x) of e_p(x) for some t ∈ {1, …, p − 1}, but we do not know which t it is.
The leading coefficient c_0 = lc(d(x)) of d(x) can also be assumed to be positive. Denote by w the positive gcd(a_0, b_0). Since both c_0 and w are not altered by ϕ_p, their fraction w/c_0 also is not altered. Take such a t that:

(15) lc(t · e_p(x)) = w.
Reconstructing the pre-image k(x) of t · e_p(x) by Algorithm 4.1, we will get a polynomial which is either d(x) or some fold of d(x). Since f(x), g(x) are primitive, it remains to pass to the primitive part d(x) = pp(k(x)).
The general case, when f(x), g(x) may not be primitive, can easily be reduced to this: for arbitrary f(x), g(x) take their decompositions by formula (4) and set

(16) r = gcd(cont(f(x)), cont(g(x))) ∈ Z.
Then assign f(x) = pp(f(x)), g(x) = pp(g(x)) and do the steps above for these new polynomials. After d(x) = pp(k(x)) is computed, we get the final answer as r · d(x) = r · pp(k(x)).

Notice that for Algorithm 4.1 we need p to be greater than 2|c_i| for any coefficient c_i of the polynomial we reconstruct. The bound p > 2 · N_{f,g} assures that p meets this condition for d(x). We, however, reconstruct not d(x) but a fold of it, which may have larger coefficients. One could overcome this point by taking p > w · 2 · N_{f,g}, but this is not necessary because, as we will see later, while the Big prime modular gcd algorithm works, the value of p will grow and this issue will be covered.
The idea to overcome this problem is to show that the number of primes p for which (14) fails is "small". So if the selected p is not suitable, we take another p and do the calculation again with the new prime. And we will not have to repeat these steps many times (we will return to this point in Section 6).
The proof of the following theorem and the definition of the resultant res(f(x), g(x)) (that is, of the determinant of the Sylvester matrix S_{f,g} of the polynomials f(x), g(x)) can be found, for example, in [10,15,8,11]. The resultant is a convenient tool to detect whether the given polynomials are coprime:

Theorem 4.9. Non-zero polynomials f(x), g(x) are coprime if and only if res(f(x), g(x)) ≠ 0.

The following fact in a slightly different shape can be found in [15] or [4]:

Corollary 4.10. Let f(x), g(x) ∈ Z[x] be non-zero polynomials with d(x) = gcd(f(x), g(x)), and let p be a prime dividing neither a_0 nor b_0 nor R = res(f(x)/d(x), g(x)/d(x)). Then (14) holds.

Proof. Since d_p(x) ≠ 0, we can consider the fractions f_p(x)/d_p(x) and g_p(x)/d_p(x). From the unique factorizations of f_p(x) and g_p(x) in the UFD Z_p[x] it is very easy to deduce that (14) fails exactly when f_p(x)/d_p(x) and g_p(x)/d_p(x) are not coprime in Z_p[x] or, by Theorem 4.9, when res(f_p(x)/d_p(x), g_p(x)/d_p(x)) = 0. The latter is the determinant of the Sylvester matrix S_{f_p/d_p, g_p/d_p}. Consider the matrix modular reduction ϕ_p (as mentioned earlier, we use the same symbol ϕ_p for numeric, polynomial and matrix reductions). Since ϕ_p(S_{f/d, g/d}) = S_{f_p/d_p, g_p/d_p}, and since the determinant of a matrix is a sum of products of its elements, we get ϕ_p(R) = R_p. So R_p can be zero if and only if R is divisible by p. The polynomials f(x)/d(x) and g(x)/d(x) are coprime in Z[x], and their resultant R is not zero by Theorem 4.9. And R cannot be a non-zero integer divisible by p, since that contradicts the condition of this corollary.

Corollary 4.10 shows that if for some p the equality (14) does not hold for the polynomials f(x), g(x) ∈ Z[x], then p divides either a_0 and b_0, or the resultant R. We do not know R, since we do not yet know d(x) to calculate the resultant R = res(f(x)/d(x), g(x)/d(x)). But, since the number of such primes p is finite, we can arrive at the right p after trying the process for a few primes. We will return to this again in Section 6.
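The Sylvester matrix and the resultant used in Theorem 4.9 and Corollary 4.10 can be sketched as follows (pure Python, exact arithmetic via Fraction; the function names are ours):

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix S_{f,g}: m shifted rows of f and n shifted rows of g,
    where n = deg f, m = deg g (coefficient lists, highest degree first)."""
    n, m = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (m - 1 - i) for i in range(m)]
    rows += [[0] * i + g + [0] * (n - 1 - i) for i in range(n)]
    return rows

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for j in range(len(M)):
        k = next((i for i in range(j, len(M)) if M[i][j]), None)
        if k is None:
            return 0                      # a zero column: singular matrix
        if k != j:
            M[j], M[k] = M[k], M[j]       # row swap flips the sign
            d = -d
        d *= M[j][j]
        for i in range(j + 1, len(M)):
            c = M[i][j] / M[j][j]
            M[i] = [a - c * b for a, b in zip(M[i], M[j])]
    return int(d)

def resultant(f, g):
    return det(sylvester(f, g))
```

Theorem 4.9 in action: the coprime x − 1 and x + 1 give resultant 2 ≠ 0, while the pair of Example 4.7, (x + 1)(x + 3) and (x + 1)^2, gives resultant 0, detecting the common factor.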

The Big prime modular gcd algorithm
The four steps of the previous section provide us with the following procedure. We keep all the notations from Section 4. Take the primitive polynomials f(x), g(x) ∈ Z[x]. Without loss of generality we may assume a_0, b_0 > 0. Take any p > 2 · N_{f,g}. Then compute e_p(x) by the Euclidean algorithm. Then choose t so that (15) holds. Construct k(x) by applying Algorithm 4.1 to t · e_p(x). If the primitive part d(x) = pp(k(x)) divides both f(x) and g(x), then the gcd for these primitive polynomials is found: d(x) = gcd(f(x), g(x)). That follows from the considerations about divisor degrees above: if f(x), g(x) had a common divisor h(x) of degree greater than deg d(x), then, since the degree of h(x) is not altered by ϕ_p, we would get deg e_p(x) ≥ deg h(x) > deg d(x) = deg e_p(x), a contradiction.

This means that if the constructed d(x) = pp(k(x)) does not divide f(x) or g(x), we have the case when p divides the resultant R. Then we just ignore the calculated polynomial, choose another p > 2 · N_{f,g} and redo the steps for it. Repeating these steps finitely many times, we will eventually arrive at the correct d(x) for the primitive polynomials f(x), g(x).
The case of arbitrary non-zero polynomials can easily be reduced to this by the arguments mentioned earlier: we should calculate d(x) for the primitive polynomials pp(f(x)) and pp(g(x)), and then output the final answer as r · d(x), where r is defined by (16). The process we described is the traditional form of the Big prime modular gcd algorithm.
Remark 5.1. Since our approach in Section 4 involved the maximality of degrees of the common divisors, we can shorten some of the steps of our algorithm. Let us store in a variable, say, D the minimal value for which we already know it is not deg gcd(f(x), g(x)). As an initial D we may take, say, D = min{deg f(x), deg g(x)} + 1. Each time we calculate e_p(x) = gcd(f_p(x), g_p(x)), check if deg e_p(x) is equal to or larger than the current D. If yes, we already know that we have an "inappropriate" p. Then we no longer need to use Algorithm 4.1 to reconstruct k(x) and to get d(x) = pp(k(x)). We just skip these steps and proceed to the next p. We reconstruct d(x) and check if d(x)|f(x) and d(x)|g(x) only when deg e_p(x) < D. Then, if d(x) does not divide f(x) or g(x), we have discovered a new bound D for deg gcd(f(x), g(x)). So we set D = deg e_p(x) and proceed to the next p. If in the next step we get deg e_p(x) ≥ D, we will again know that the steps of reconstruction of d(x) should be skipped.
We constructed the following algorithm:

14. choose a t such that lc(t · e_p(x)) = w;
15. call Algorithm 4.1 to calculate the pre-image k(x) of t · e_p(x);
16. …
Turning back to Remark 5.1, notice that for some prime numbers p we skip steps 14-18 of Algorithm 5.1 and directly jump to step 08. In fact, Remark 5.1 has a mainly theoretical purpose: to display how the usage of UFD properties and the comparison of divisor degrees may reduce some of the steps of the Big prime modular gcd algorithm.
In practical examples the set of primes we use contains few primes dividing R = res(f(x)/d(x), g(x)/d(x)), so we may not frequently get examples where steps 14-18 are skipped.
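Putting the pieces of Sections 4-5 together, the whole procedure can be sketched end to end. The sketch below follows the outline above but simplifies freely: the starting bound N is a crude stand-in for (13), the prime search is naive trial division, and all names are our own; the trial-division check keeps the result correct even when a chosen prime happens to divide the resultant.

```python
from math import gcd, isqrt

def strip(h):
    """Drop leading zeros of a coefficient list (highest degree first)."""
    i = next((i for i, a in enumerate(h) if a), len(h))
    return h[i:]

def content(f):
    """Content with the sign of the leading coefficient."""
    c = 0
    for a in f:
        c = gcd(c, a)
    return -c if f[0] < 0 else c

def pp(f):
    """Primitive part; its leading coefficient is positive."""
    c = content(f)
    return [a // c for a in f]

def divides(g, f):
    """True iff g divides f in Z[x]."""
    r = f[:]
    while len(r) >= len(g):
        if r[0] % g[0]:
            return False
        c = r[0] // g[0]
        r = strip([a - c * b for a, b in zip(r, g)] + r[len(g):])
    return not r

def gcd_mod(f, g, p):
    """e_p(x): the Euclidean algorithm in Z_p[x]."""
    f, g = strip([a % p for a in f]), strip([a % p for a in g])
    while g:
        inv = pow(g[0], -1, p)
        r = f
        while len(r) >= len(g):
            c = r[0] * inv % p
            r = strip([(a - c * b) % p for a, b in zip(r, g)] + r[len(g):])
        f, g = g, r
    return f

def primes_from(n):
    """Yield primes >= n by trial division (enough for a sketch)."""
    p = max(n, 2)
    while True:
        if all(p % q for q in range(2, isqrt(p) + 1)):
            yield p
        p += 1

def big_prime_gcd(f, g):
    """Hedged sketch of the Big prime modular gcd algorithm (Algorithm 5.1)."""
    r = gcd(abs(content(f)), abs(content(g)))     # gcd of the contents, formula (16)
    f, g = pp(f), pp(g)
    w = gcd(f[0], g[0])                           # gcd of the leading coefficients
    N = 2 ** min(len(f), len(g)) * (isqrt(min(sum(a * a for a in f),
                                              sum(a * a for a in g))) + 1)
    for p in primes_from(2 * N + 1):
        if f[0] % p == 0 or g[0] % p == 0:
            continue                              # p must not divide a_0 or b_0
        e = gcd_mod(f, g, p)
        t = w * pow(e[0], -1, p) % p              # fold: lc(t * e_p) = w, as in (15)
        k = [v if v < p / 2 else v - p            # symmetric lift (Algorithm 4.1)
             for v in (t * c % p for c in e)]
        d = pp(k)
        if divides(d, f) and divides(d, g):       # trial division certifies the answer
            return [r * c for c in d]
```

On the example of Section 2, gcd(12x^2 + 24x + 12, 8x + 8), this returns the coefficients of 4x + 4; on Knuth's polynomials (1) it returns the constant 1.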
For the polynomials (1) of Knuth's example we can take the prime p = 1031 > 2 · N_{f,g}. It is not hard to compute that gcd(f_1031(x), g_1031(x)) ≈ 1. So f(x) and g(x) are coprime. It is worth comparing p = 1031 with the much smaller values p = 67 and p = 37 obtained below for the same polynomials (1) in Example 8.1 using the modified Algorithm 8.1.
In [11] we also apply Algorithm 5.1 to other polynomials with cases when the polynomials are not coprime.

Estimating the prime divisors of the resultant
Although at the start of the Big prime modular gcd algorithm we cannot compute the resultant R = res(f(x)/d(x), g(x)/d(x)) for the given f(x), g(x) ∈ Z[x] (we do not know d(x)), we can nevertheless estimate the value of R and the number of its prime divisors. Denote:

(17) N_f = 2^{n−1} ‖f(x)‖ and N_g = 2^{m−1} ‖g(x)‖.

Then for any of their common divisors d(x) the following holds:

(18) |res(f(x)/d(x), g(x)/d(x))| ≤ (n + 1)^{m/2} (m + 1)^{n/2} N_f^m N_g^n = A_{f,g}.

Proof. By Corollary 4.3 the coefficients of the fractions f(x)/d(x) and g(x)/d(x) are bounded, respectively, by N_f = 2^{n−1} ‖f(x)‖ and N_g = 2^{m−1} ‖g(x)‖. Since the numbers of summands in these fractions are at most n + 1 and m + 1, respectively, the norms of the rows of the Sylvester matrix S_{f/d, g/d} are at most (n + 1)^{1/2} N_f and (m + 1)^{1/2} N_g. Applying Hadamard's maximal determinant bound [15] to the Sylvester matrix S_{f/d, g/d}, we get (18).

The bound of (18) is very rough. To see this apply it to the polynomials (1) of Knuth's example.

Example 6.2. For the polynomials (1) we have ‖f(x)‖ = √113 and ‖g(x)‖ = √570. So we can estimate N_f < 1408, N_g < 768 and N_{f,g} < 512. Thus A_{f,g} < 9^3 · 7^4 · 1408^6 · 768^8 ≈ 1.65 · 10^48 = ω, which is a too large number to comfortably operate with.

Remark 6.3. If we take

(19) p > 2 · A_{f,g},

then we will get that p ∤ R = res(f(x)/d(x), g(x)/d(x)), whatever the greatest common divisor d(x) may be. And, clearly, p ∤ w holds for w = gcd(a_0, b_0). So in this case Algorithm 5.1 will output the correct pp(k(x)) using just one p, and we will not have to take another p ∤ w after step 18. However, Example 6.2 shows why it is not reasonable to choose p by the rule (19) to have only one cycle in Algorithm 5.1: it is easier to go through a few cycles with smaller p's than to operate with a huge p, which is two times larger than the bound ω obtained in Example 6.2.
Nevertheless, the bound A_{f,g} may be useful if we remember that the process in Algorithm 5.1 concerned not the value of res(f(x)/d(x), g(x)/d(x)) but the number of its distinct prime divisors. Let us denote by p_k# the product of the first k primes: p_k# = p_1 · p_2 ⋯ p_k (where p_1 = 2, p_2 = 3, etc.). p_k# is sometimes called the "k'th primorial". The following is essential: the number of distinct prime divisors of a non-zero integer N with |N| ≤ A is at most k, where k is the largest number for which p_k# ≤ A.

The primorial (as a function of k) grows very rapidly. Say, for k = 10 it is more than six billion: p_10# = 6,469,693,230. This observation allows us to use the bound A_{f,g} in the following way: although the value of A_{f,g}, as a function of n = deg f(x), m = deg g(x) and of the coefficients of f(x) and g(x), grows rapidly, the number of distinct prime divisors of the resultant may not be "very large", thanks to the fact that p_k# also grows rapidly. Consider this for the polynomials and values from Example 6.2.

Example 6.6. It is easy to compute that p_30# = 3.1610054640417607788145206291544e+46 < ω and p_31# = 4.014476939333036189094441199026e+48 > ω, where ω is the large number from Example 6.2. This means that the number of prime divisors of R = res(f(x)/d(x), g(x)/d(x)), whatever the divisor d(x) may be, is not greater than 30. And whichever 30 + 1 = 31 distinct primes we take, at least one of them will not be a divisor of R. That is, Algorithm 5.1 for the polynomials of Knuth's example will output the correct answer in not more than 31 cycles. We cannot find 31 primes p ∤ w such that Algorithm 5.1 arrives at a wrong d(x) = pp(k(x)) on step 18 for all of them.
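The primorial counting argument above is easy to automate; `max_k` below (our own name) returns the largest k with p_k# ≤ A, i.e. an upper bound for the number of distinct prime divisors of any integer of absolute value at most A:

```python
def primes():
    """2, 3, 5, 7, ... by trial division (enough for a sketch)."""
    p = 2
    while True:
        if all(p % q for q in range(2, p)):
            yield p
        p += 1

def max_k(A):
    """The largest k with p_k# <= A: an upper bound for the number of distinct
    prime divisors of any integer N with 1 <= |N| <= A."""
    k, prod = 0, 1
    for p in primes():
        if prod * p > A:
            return k
        prod *= p
        k += 1
```

For example, max_k(6,469,693,230) returns 10, since p_10# = 6,469,693,230; consequently among any 11 distinct primes at least one does not divide a given non-zero N with |N| ≤ p_10#.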
Remark 6.7. Let us stress that estimates on the number of prime divisors of the resultant and an analog of Algorithm 7.1 can be found elsewhere, for example, in [15]. So the only novelty here is that we use slightly better values for N_f and N_g to get a 2^{n+m} times smaller bound A_{f,g}. Namely, in Corollary 4.3 we estimate |c_i| not by 2^n ‖f(x)‖ but by 2^{n-1} ‖f(x)‖ (see (12) and Remark 4.4). This makes the bound A_{f,g} in formula (18) 2^{n+m} times lower, since N_f and N_g appear in it m and n times, respectively.
7. An algorithm to check coprime polynomials

The first application of the bounds found in the previous section is an algorithm checking whether the given polynomials f(x), g(x) ∈ Z[x] are coprime. Present the polynomials as f(x) = cont(f(x)) · pp(f(x)) and g(x) = cont(g(x)) · pp(g(x)). If r = gcd(cont(f(x)), cont(g(x))) ≠ 1, then f(x), g(x) are not coprime, and we do not have to check the primitive parts at all.
If r = 1, then switch to the polynomials f(x) = pp(f(x)) and g(x) = pp(g(x)). By Corollary 6.5 the number of distinct prime divisors of res(f(x)/d(x), g(x)/d(x)) is less than or equal to k, where k is the largest number for which p_k# ≤ A_{f,g}.
And if gcd(f_{p_i}(x), g_{p_i}(x)) ≠ 1 for all i = 1, . . . , k + 1, then f_{p_i}(x) and g_{p_i}(x) are not coprime for at least one p_i which does not divide res(f(x)/d(x), g(x)/d(x)). This means that f(x) and g(x) are not coprime. We get the following algorithm:
01. Compute cont(f(x)) and cont(g(x)).
02. Calculate r = gcd(cont(f(x)), cont(g(x))) in the Euclidean domain Z.
03. If r ≠ 1 then
04. output the result: f(x) and g(x) are not coprime and stop.
05. Set a_0 = lc(f(x)) and b_0 = lc(g(x)).
06. Calculate w = gcd(a_0, b_0) in the Euclidean domain Z.
07. Set f(x) = pp(f(x)) and g(x) = pp(g(x)).
08. Compute the bound A_{f,g} for the polynomials f(x), g(x) by (18).
09. Find the maximal k for which p_k# ≤ A_{f,g}.
10. Set i = 1.
11. While i ≤ k + 1
12. choose a new prime p_i ∤ w;
13. apply the reduction ϕ_{p_i} to calculate the modular images f_{p_i}(x), g_{p_i}(x) ∈ Z_{p_i}[x];
14. calculate e_{p_i} = gcd(f_{p_i}(x), g_{p_i}(x)) in the Euclidean domain Z_{p_i}[x];
15. if deg e_{p_i} = 0
16. output the result: f(x) and g(x) are coprime and stop;
17. set i = i + 1.
18. Output the result: f(x) and g(x) are not coprime.
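The steps above can be sketched in Python as follows. This is a simplified model, not the exact pseudocode: polynomials are integer coefficient lists in descending degree, the content is assumed to be already removed, and prime selection simply walks through consecutive primes p with p ∤ w:

```python
from math import gcd

def poly_mod(f, p):
    """Reduce the coefficients mod p and strip leading zeros."""
    f = [c % p for c in f]
    while len(f) > 1 and f[0] == 0:
        f.pop(0)
    return f

def poly_rem(a, b, p):
    """Remainder of a modulo b in Z_p[x] (b nonzero mod p)."""
    a, inv = a[:], pow(b[0], -1, p)
    while len(a) >= len(b):
        c = a[0] * inv % p
        for i in range(len(b)):
            a[i] = (a[i] - c * b[i]) % p
        a.pop(0)                  # the leading coefficient is now zero
    return poly_mod(a, p) if a else [0]

def poly_gcd_mod(f, g, p):
    """Monic gcd of f and g in Z_p[x] by the Euclidean algorithm."""
    a, b = poly_mod(f, p), poly_mod(g, p)
    while b != [0]:
        a, b = b, poly_rem(a, b, p)
    inv = pow(a[0], -1, p)
    return [c * inv % p for c in a]

def next_prime(n):
    """Smallest prime greater than n (trial division; fine for small n)."""
    n += 1
    while any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)):
        n += 1
    return n

def coprime_over_Z(f, g, num_primes):
    """Try num_primes primes p with p not dividing w = gcd(lc f, lc g);
    report True as soon as some modular gcd has degree 0.  If
    num_primes >= k + 1, the answer False proves f, g are not coprime."""
    w, p, tried = gcd(f[0], g[0]), 1, 0
    while tried < num_primes:
        p = next_prime(p)
        if w % p == 0:            # skip primes dividing w
            continue
        if len(poly_gcd_mod(f, g, p)) == 1:
            return True           # deg e_p = 0: the images are coprime
        tried += 1
    return False

f = [1, 0, 1, 0, -3, -3, 8, 2, -5]   # Knuth's f(x)
g = [3, 0, 5, 0, -4, -9, 21]         # Knuth's g(x)
print(coprime_over_Z(f, g, 31))      # True
```

For Knuth's polynomials, 31 primes suffice by Example 6.6: at least one of them does not divide the resultant, so the loop is guaranteed to detect coprimality.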
Two important advantages of this algorithm are that we use much smaller primes p (we just require p ∤ w, not p > 2 · N_{f,g}), and that in Algorithm 7.1, unlike Algorithm 5.1, we never need to find t, to compute the preimage k(x) of t · gcd(f_p(x), g_p(x)), or to compute the primitive part pp(k(x)). Remark 7.2. As Knuth mentions in [7], in a probabilistic sense polynomials are much more likely to be coprime than integers. So it is reasonable first to test by Algorithm 7.1 whether the given polynomials f(x), g(x) are coprime, and only after that to apply Algorithm 5.1 to find their gcd in case they are not coprime. See also Algorithm 8.2, where we combine both approaches with a better bound for the prime p.

8. Other modifications of algorithms
The bounds mentioned in Section 6 can be applied to obtain modifications of Algorithm 5.1. Let us outline four ideas, of which only the last two will be written down as algorithms.
For the non-zero polynomials f(x), g(x) ∈ Z[x] let us again start by computing r = gcd(cont(f(x)), cont(g(x))) and switching to the primitive parts f(x) = pp(f(x)) and g(x) = pp(g(x)), assuming that their leading coefficients a_0 and b_0 are positive. Calculate N_f, N_g by Corollary 4.3, N_{f,g} by Corollary 4.5 and A_{f,g} by (18). Find the maximal k for which p_k# ≤ A_{f,g}. Then take any k + 1 primes p_1, . . . , p_{k+1}, each greater than 2 · N_{f,g}. We do not know d(x), but we do know that the number of prime divisors of R = res(f(x)/d(x), g(x)/d(x)) is less than or equal to k. So at least one of the primes p_1, . . . , p_{k+1} does not divide R. To find it, compute e_{p_i}(x) = gcd(f_{p_i}(x), g_{p_i}(x)) and its degree for all i = 1, . . . , k + 1. Take a p_i for which deg e_{p_i}(x) is minimal (in case there is more than one p_i with this property, take one of them, preferably the smallest of all).
By our construction, deg e_{p_i}(x) = deg gcd(f(x), g(x)) holds. So we can proceed to the next steps: choose a t such that lc(t · e_{p_i}(x)) = w = gcd(a_0, b_0); then find by Algorithm 4.1 the pre-image k(x) of t · e_{p_i}(x); then pass to its primitive part d(x) = pp(k(x)); and then output the final answer r · d(x).
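The selection rule can be sketched as follows. This is a simplified Python model with our own helper names; for illustration it uses small primes, whereas the text requires primes greater than 2 · N_{f,g}:

```python
def poly_mod(f, p):
    """Reduce the coefficients mod p and strip leading zeros."""
    f = [c % p for c in f]
    while len(f) > 1 and f[0] == 0:
        f.pop(0)
    return f

def poly_rem(a, b, p):
    """Remainder of a modulo b in Z_p[x] (b nonzero mod p)."""
    a, inv = a[:], pow(b[0], -1, p)
    while len(a) >= len(b):
        c = a[0] * inv % p
        for i in range(len(b)):
            a[i] = (a[i] - c * b[i]) % p
        a.pop(0)
    return poly_mod(a, p) if a else [0]

def deg_gcd_mod(f, g, p):
    """Degree of gcd(f_p, g_p) in Z_p[x]."""
    a, b = poly_mod(f, p), poly_mod(g, p)
    while b != [0]:
        a, b = b, poly_rem(a, b, p)
    return len(a) - 1

f = [1, 0, 1, 0, -3, -3, 8, 2, -5]   # Knuth's f(x)
g = [3, 0, 5, 0, -4, -9, 21]         # Knuth's g(x)
candidates = [5, 7, 11, 13, 17]
degrees = {p: deg_gcd_mod(f, g, p) for p in candidates}
# the smallest prime attaining the minimal modular gcd degree:
best = min(candidates, key=lambda p: degrees[p])
print(best, degrees[best])
```

Since Knuth's polynomials are coprime, at least one of these small primes does not divide the resultant, and the minimal degree found is 0; for a pair with a genuine common divisor the minimal degree equals deg gcd(f(x), g(x)) once the primes avoid the divisors of R.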
The advantage of this approach is that we do not have to go through steps 14-18 of Algorithm 5.1 for more than one prime p. Also, we do not have to take care of the variable D. But the disadvantage is that we have to compute e_{p_i}(x) for k + 1 large primes (whereas in Algorithm 5.1 the correct answer could be discovered after consideration of fewer primes). Clearly, this disadvantage is a serious obstacle, since repetition for k + 1 large primes consumes more labour than steps 14-18 of Algorithm 5.1. So this is just a theoretical idea, not a basis for an effective algorithm.
we did it in step 09 of Algorithm 7.1: k is the maximal number for which p_k# ≤ A_{f,g}.