Journal of Complexity, volume 88, article 101934

Factoring sparse polynomials fast

Alexander Demin, Joris van der Hoeven
Publication type: Journal Article
Publication date: 2025-06-01
Journal metrics: Scimago Q1, WoS Q1, SJR 1.115, CiteScore 3.1, Impact Factor 1.8
ISSN: 0885-064X, 1090-2708
van der Hoeven J., Lecerf G. (2024-11-30)
Given an algebraic germ of a plane curve at the origin, in terms of a bivariate polynomial, we analyze the complexity of computing an irreducible decomposition up to any given truncation order. With a suitable representation of the irreducible components, and whenever the characteristic of the ground field is zero or larger than the degree of the germ, we design a new algorithm that involves a nearly linear number of arithmetic operations in the ground field plus a small amount of irreducible univariate polynomial factorizations.
van der Hoeven J., Lecerf G. (2024-04-27)
Consider a multivariate polynomial $f \in K[x_1, \ldots, x_n]$ over a field K, which is given through a black box capable of evaluating f at points in $K^n$, or possibly at points in $A^n$ for any K-algebra A. The problem of sparse interpolation is to express f in its usual form with respect to the monomial basis. We analyze the complexity of various old and new algorithms for this task in terms of bounds D and T for the total degree of f and its number of terms. We mainly focus on the case when K is a finite field and explore possible speed-ups.
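To make the black-box model above concrete, here is a minimal Prony-style sparse interpolation sketch over the rationals, in the spirit of the classical Ben-Or–Tiwari method: probe f at powers of 2, find the term locator with Berlekamp–Massey, then solve a transposed Vandermonde system for the coefficients. It illustrates the problem statement, not the finite-field algorithms analyzed in the paper; the function names and the choice of evaluation point 2 are ours.

```python
from fractions import Fraction

def berlekamp_massey(s):
    """Minimal linear recurrence of the sequence s: returns (C, L) with
    C[0] == 1 and sum(C[i] * s[n - i] for i in range(L + 1)) == 0 for n >= L."""
    C, B = [Fraction(1)], [Fraction(1)]
    L, m, b = 0, 1, Fraction(1)
    for n in range(len(s)):
        d = s[n] + sum(C[i] * s[n - i] for i in range(1, L + 1))
        if d == 0:
            m += 1
            continue
        T = C[:]
        while len(C) < len(B) + m:
            C.append(Fraction(0))
        for i in range(len(B)):
            C[i + m] -= d / b * B[i]
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

def solve(A, y):
    """Gaussian elimination over the rationals (A is nonsingular)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = M[col][col]
        M[col] = [x / inv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def sparse_interpolate(f, T, D):
    """Recover [(exponent, coefficient)] of a univariate f over Q, given a
    black box for f, a term bound T and a degree bound D."""
    s = [Fraction(f(2 ** i)) for i in range(2 * T)]   # probes at powers of 2
    C, L = berlekamp_massey(s)
    # e is an exponent of f iff 2**e is a root of x^L + C[1] x^(L-1) + ... + C[L]
    exps = [e for e in range(D + 1)
            if sum(C[i] * Fraction(2) ** (e * (L - i)) for i in range(L + 1)) == 0]
    A = [[Fraction(2) ** (e * i) for e in exps] for i in range(len(exps))]
    return list(zip(exps, solve(A, s[:len(exps)])))
```

For f with T terms, only 2T probes are used, independently of the degree, which is the point of sparse interpolation.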
Ryan K., Heninger N. (2023-08-08)
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension n, $\omega \in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and C bounding the log of the condition number of the input basis B. This yields a running time of $O(n^{\omega}(p + n)^{1+\varepsilon})$ for precision $p = O(\log \Vert B \Vert_{\max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
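For contrast with the floating-point, precision-managed algorithm described above, the following is a textbook LLL sketch in exact rational arithmetic. It recomputes Gram-Schmidt from scratch after every update, so it illustrates the definitions (size reduction, the Lovász condition) rather than any of the paper's performance techniques.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    """Exact Gram-Schmidt: orthogonalized vectors Bs and coefficients mu."""
    n = len(B)
    mu = [[Fraction(0)] * n for _ in range(n)]
    Bs = []
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL on integer row vectors; simple, not fast."""
    B = [list(b) for b in B]
    n = len(B)
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b_k
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # swap and back up
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

The recursive algorithm above replaces this exact arithmetic with carefully managed floating-point precision, which is where its speed comes from.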
Chen T., Monagan M. (2022-09-21), Mathematics in Computer Science
Our goal is to develop fast parallel algorithms to factor multivariate polynomials with integer coefficients. The authors contributed a parallel sparse Hensel lifting algorithm (CMSHL) to factor multivariate polynomials in their sparse representation (Chen and Monagan, in Proceedings of CASC 2020, LNCS 12291, pp 150–169, Springer, Berlin, 2020). The dominating cost of CMSHL is polynomial evaluations. To reduce this cost, in this work we represent the polynomial to be factored by a black box. We give an algorithm that computes the factors of the polynomial in their sparse representation directly using our modified CMSHL algorithm. Our new algorithm requires fewer probes to the black box than the algorithm of Kaltofen and Trager from 1990. We have implemented our algorithm in Maple with major subroutines coded in C. We present timing benchmarks for two examples of factoring determinants of matrices of polynomials. Our algorithm is much faster than Maple's best determinant and factor algorithms on our benchmarks. We are able to compute the factors of $\det T_n$ for $n=16$ where $T_n$ is the $n \times n$ symmetric Toeplitz matrix with symbolic entries $x_1, x_2, \dots, x_n$.
Giorgi P., Grenet B., Perret du Cray A., Roche D.S. (2022-07-04)
Given a way to evaluate an unknown polynomial with integer coefficients, we present new algorithms to recover its nonzero coefficients and corresponding exponents. As an application, we adapt this interpolation algorithm to the problem of computing the exact quotient of two given polynomials. These methods are efficient in terms of the bit-length of the sparse representation, that is, the number of nonzero terms, the size of coefficients, the number of variables, and the logarithm of the degree. At the core of our results is a new Monte Carlo randomized algorithm to recover a polynomial f(x) with integer coefficients given a way to evaluate f(θ) mod m for any chosen integers θ and m. This algorithm has nearly-optimal bit complexity, meaning that the total bit-length of the probes, as well as the computational running time, is softly linear (ignoring logarithmic factors) in the bit-length of the resulting sparse polynomial. To our knowledge, this is the first sparse interpolation algorithm with soft-linear bit complexity in the total output size. For polynomials with integer coefficients, the best previously known results have at least a cubic dependency on the bit-length of the exponents.
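The algorithm above recovers f from many small probes f(θ) mod m whose total bit-length is softly linear in the output size. As a toy illustration of why evaluations determine the coefficients at all, here is the classical single-probe recovery at the point 2^B, under our simplifying assumption (not the paper's setting) that all coefficients are nonnegative and below 2^B.

```python
def recover_coeffs(eval_f, B, degree_bound):
    """Recover the coefficients 0 <= c_i < 2**B of f from the single
    probe f(2**B): the value is just the coefficient list in base 2**B.

    A toy with one exponential-size probe; the paper's algorithm instead
    uses many small modular probes of softly-linear total bit-length.
    """
    v = eval_f(1 << B)
    coeffs = []
    for _ in range(degree_bound + 1):
        coeffs.append(v & ((1 << B) - 1))   # next base-2**B digit
        v >>= B
    while coeffs and coeffs[-1] == 0:       # drop leading zeros
        coeffs.pop()
    return coeffs
```

The single probe has Θ(B·D) bits, which is why the real algorithms work hard to replace it with small modular evaluations.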
van der Hoeven J., Lecerf G. (2022-01-03)
The best known asymptotic bit complexity bound for factoring univariate polynomials over finite fields grows with $d^{1.5+o(1)}$ for input polynomials of degree d, and with the square of the bit size of the ground field. It relies on a variant of the Cantor–Zassenhaus algorithm which exploits fast modular composition. Using techniques by Kaltofen and Shoup, we prove a refinement of this bound when the finite field has a large extension degree over its prime field. We also present fast practical algorithms for the case when the extension degree is smooth.
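The random splitting step at the heart of Cantor–Zassenhaus can be seen in miniature in the following sketch, which finds the roots in F_p of a squarefree polynomial by taking gcds with $(x+a)^{(p-1)/2} - 1$ for random a. This is the schoolbook quadratic-arithmetic version (p an odd prime), not the $d^{1.5+o(1)}$ algorithm built on fast modular composition.

```python
import random

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def polymul(a, b, p):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] = (r[i + j] + x * y) % p
    return trim(r)

def polydivmod(a, b, p):
    a, db = a[:], len(b) - 1
    inv = pow(b[-1], -1, p)
    q = [0] * max(len(a) - db, 0)
    for i in range(len(a) - 1, db - 1, -1):
        c = a[i] * inv % p
        q[i - db] = c
        if c:
            for j in range(db + 1):
                a[i - db + j] = (a[i - db + j] - c * b[j]) % p
    return trim(q), trim(a[:db])

def polygcd(a, b, p):
    while b:
        a, b = b, polydivmod(a, b, p)[1]
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]      # monic

def polysub(a, b, p):
    r = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        r[i] = c
    for i, c in enumerate(b):
        r[i] = (r[i] - c) % p
    return trim(r)

def polypowmod(a, e, f, p):
    r, a = [1], polydivmod(a, f, p)[1]
    while e:
        if e & 1:
            r = polydivmod(polymul(r, a, p), f, p)[1]
        a = polydivmod(polymul(a, a, p), f, p)[1]
        e >>= 1
    return r

def roots(f, p):
    """All roots in F_p of squarefree f (coefficients low degree first)."""
    # keep only the part of f that splits into linear factors: gcd(f, x^p - x)
    f = polygcd(f, polysub(polypowmod([0, 1], p, f, p), [0, 1], p), p)
    return sorted(_split(f, p))

def _split(f, p):
    if len(f) <= 1:
        return []
    if len(f) == 2:                      # monic x + c: root is -c
        return [-f[0] % p]
    while True:                          # Cantor-Zassenhaus random split
        a = random.randrange(p)
        h = polysub(polypowmod([a, 1], (p - 1) // 2, f, p), [1], p)
        g = polygcd(f, h, p)
        if 0 < len(g) - 1 < len(f) - 1:
            return _split(g, p) + _split(polydivmod(f, g, p)[0], p)
```

Each random a separates the roots according to whether x + a is a quadratic residue, so every iteration splits f with probability about 1/2.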
van der Hoeven J., Lecerf G. (2021-05-20)
In this note, we present a variant of a probabilistic algorithm by Cuyt and Lee for the sparse interpolation of multivariate rational functions. We also present an analogous method for the computation of sparse gcds.
Chen T., Monagan M. (2020-10-17)
Sparse multivariate Hensel lifting (SHL) algorithms are used in multivariate polynomial factorization as efficient randomized algorithms. They improve on Wang’s classical multivariate Hensel lifting which can be exponential in the number of variables for sparse factors. In this work, we present worst case complexity analyses and failure probability bounds for two recently developed SHL algorithms. One of the algorithms solves the multivariate Diophantine equations using sparse interpolation, and the other interpolates the factors directly from bivariate images obtained using bivariate Hensel lifting. We have observed that a linear expression swell occurs in both approaches. We have modified the second approach to eliminate the expression swell. Our improvement also injects more parallelism into the sparse interpolation step. We have made a high-performance parallel implementation of our new SHL algorithm in Cilk C. We present timing benchmarks comparing our Cilk C implementation with the factorization algorithms in Maple and Magma. We obtain good parallel speedup and our algorithm is much faster than Maple and Magma on our benchmarks.
Monagan M., Tuncer B. (2020-07-01), Journal of Symbolic Computation
The standard approach to factor a multivariate polynomial in $\mathbb{Z}[x_1, x_2, \ldots, x_n]$ is to factor a univariate image in $\mathbb{Z}[x_1]$ then recover the multivariate factors from their univariate images using a process known as multivariate Hensel lifting. Wang's multivariate Hensel lifting recovers the variables one at a time. It is currently implemented in many computer algebra systems, including Maple, Magma and Singular. When the factors are sparse, Wang's approach can be exponential in the number of variables n. To address this, sparse Hensel lifting was introduced by Zippel and then improved by Kaltofen. Recently, Monagan and Tuncer introduced a new approach which uses sparse polynomial interpolation to solve the multivariate polynomial diophantine equations that arise inside Hensel lifting in random polynomial time. This approach is shown to be practical and faster than Zippel's and Kaltofen's algorithms and faster than Wang's algorithm for non-zero evaluation points. In this work we first present a complete description of the sparse interpolation used by Monagan and Tuncer and show that it runs in random polynomial time. Next we study what happens to the sparsity of multivariate polynomials when the variables are successively evaluated at numbers. We determine the expected number of remaining terms. We use this result to revisit and correct the complexity analysis of Zippel's original sparse interpolation. Next we present an average case complexity analysis of our approach. We have implemented our algorithm in Maple with some sub-algorithms implemented in C. We present some experimental data comparing our approach with Wang's method for both sparse and dense factors. The data shows that our method is always competitive with Wang's method and faster when Wang's method is exponential in n.
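Multivariate Hensel lifting is driven by the same quadratically convergent Newton iteration that lifts a simple root of a univariate polynomial from mod p to mod p^k. As a scalar illustration of that core step (not the factor-pair lifting that Wang's algorithm performs):

```python
def hensel_lift_root(f, df, r, p, k):
    """Lift r, a simple root of f mod p (so f'(r) != 0 mod p), to a root
    of f mod p**k by the quadratically convergent Newton/Hensel iteration."""
    m, target = p, p ** k
    while m < target:
        m = min(m * m, target)                    # precision doubles each step
        r = (r - f(r) * pow(df(r), -1, m)) % m    # Newton update mod m
    return r
```

For example, lifting the root 1 of x^2 - 7 from mod 3 up to mod 3^4 = 81 produces a modular square root of 7; only O(log k) iterations are needed because the precision doubles each time.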
van der Hoeven J., Lecerf G. (2020-02-01), Journal of Complexity
In 2008, Kedlaya and Umans designed the first multivariate multi-point evaluation algorithm over finite fields with an asymptotic complexity that can be made arbitrarily close to linear. However, it remains a major challenge to make their algorithm efficient for practical input sizes. In this paper, we revisit and improve their algorithm, while keeping this ultimate goal in mind. In addition we sharpen the known complexity bounds for modular composition of univariate polynomials over finite fields.
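A standard building block behind fast multi-point evaluation is the subproduct/remainder tree: since f(p) = f mod (x - p), reducing f down a tree of point-set products evaluates it at every point. A minimal sketch with schoolbook integer polynomial arithmetic (the asymptotically fast versions replace polymul/polyrem with FFT-based routines, and Kedlaya–Umans go further still for the multivariate case):

```python
def polymul(a, b):
    """Schoolbook product; coefficients low degree first."""
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

def subproduct(pts):
    """Product of (x - p) over the given points."""
    if len(pts) == 1:
        return [-pts[0], 1]
    mid = len(pts) // 2
    return polymul(subproduct(pts[:mid]), subproduct(pts[mid:]))

def polyrem(a, b):
    """Remainder of a modulo monic b."""
    a, db = list(a), len(b) - 1
    for i in range(len(a) - 1, db - 1, -1):
        c = a[i]
        if c:
            for j in range(db + 1):
                a[i - db + j] -= c * b[j]
    return a[:db]

def eval_many(f, pts):
    """Evaluate f at all points via the remainder tree: f(p) = f mod (x - p)."""
    if len(pts) == 1:
        r = polyrem(f, [-pts[0], 1])
        return [r[0] if r else 0]
    mid = len(pts) // 2
    left, right = pts[:mid], pts[mid:]
    return (eval_many(polyrem(f, subproduct(left)), left) +
            eval_many(polyrem(f, subproduct(right)), right))
```

Each level of the tree halves the degree of the polynomial being carried down, which is what makes the fast versions quasi-linear.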
Huang Q. (2019-07-08)
In this paper, we propose a new interpolation algorithm for a sparse multivariate polynomial represented by a straight-line program (SLP). Our algorithm is a Monte Carlo randomized algorithm and works over fields with large or zero characteristic. Let f be an n-variate polynomial given by a straight-line program, which has a degree bound D and a term bound T. For fields of large characteristic, the bit complexity of our algorithm is linear in n, T, log D in the Soft-Oh sense. Since n, T, log D are factors of the size of f, our algorithm is optimal in n, T and log D in the Soft-Oh sense. For fields of characteristic 0, the arithmetic complexity of our algorithm is also linear in n, T, log D in the Soft-Oh sense. But the arithmetic complexity does not account for coefficient growth and therefore might be a poor reflection of reality.
Lecerf G. (2019-05-01), Journal of Symbolic Computation
In their 1996 article, Lickteig and Roy introduced a fast "divide and conquer" variant of the subresultant algorithm which avoids coefficient growth in defective cases. The present article concerns the complexity analysis of their algorithm over effective rings endowed with a partially defined division routine. We achieve essentially the same kind of complexity bound as over effective fields. As a consequence we obtain new convenient complexity bounds for gcds, especially when coefficients are in abstract polynomial rings where evaluation/interpolation schemes are not supposed to be available.
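The object being accelerated here is the polynomial remainder sequence. Over a field such as Q the plain Euclidean algorithm already computes gcds, but its intermediate coefficients can grow badly; controlling that growth is exactly what subresultant-style algorithms do. A minimal monic-gcd sketch in exact rational arithmetic, for reference:

```python
from fractions import Fraction

def polyrem(a, b):
    """Remainder of a modulo b over Q; coefficients low degree first."""
    a = [Fraction(x) for x in a]
    db, lead = len(b) - 1, Fraction(b[-1])
    for i in range(len(a) - 1, db - 1, -1):
        c = a[i] / lead
        if c:
            for j in range(db + 1):
                a[i - db + j] -= c * b[j]
    a = a[:db]
    while a and a[-1] == 0:      # drop leading zeros
        a.pop()
    return a

def polygcd(a, b):
    """Monic gcd over Q by the plain Euclidean remainder sequence."""
    while b:
        a, b = b, polyrem(a, b)
    lead = a[-1]
    return [Fraction(c) / lead for c in a]
```

The Fraction denominators in the intermediate remainders are where the coefficient growth hides; subresultant sequences keep everything over the ring with provably small entries instead.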
