Hari Kailad

Coppersmith's Method


Coppersmith's method is widely used in many modern lattice-based attacks in cryptography. Factoring with partial knowledge, Boneh-Durfee, and low-public-exponent attacks all rely on this algorithm. This post describes how Coppersmith's method works.

Finding roots of polynomials over $\mathbb{R}$ or $\mathbb{Z}$ is relatively easy using resultants or Hensel lifting. However, there is no known method to find roots of polynomials $\bmod n$ without knowing the factorization of $n$. Coppersmith's method allows us to find "small" roots of these polynomials.
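For instance, in Sage, finding the integer roots of a toy polynomial (made up for illustration) is a one-liner, while no analogous general-purpose routine exists modulo a composite $N$ of unknown factorization:

# Toy Sage example: root finding over the integers is easy.
P.<x> = PolynomialRing(ZZ)
f = x**2 - 5*x + 6
print(f.roots())   # the integer roots 2 and 3, each with multiplicity 1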

Background: Lattices

A lattice is a discrete additive subgroup of $\mathbb{R}^n$ spanned by a set of linearly independent basis vectors. The lattice $\mathcal{L}(\bm{B})$ generated by a basis $\bm{B}$ is the set of all integer linear combinations of those basis vectors.

A lattice spanned by two basis vectors.

Definition (Lattice). Given a basis $\bm{B}$ of linearly independent vectors $\bm{b}_1, \ldots, \bm{b}_k$, the lattice $\mathcal{L}(\bm{B})$ is defined as the set of points

$$\mathcal{L}(\bm{B}) = \left\{ \sum_{i=1}^{k} z_i\bm{b}_i : z_i \in \mathbb{Z} \right\} $$

One important property of lattices for cryptography is the difficulty of the Shortest Vector Problem. Given a lattice spanned by an arbitrary basis, finding the shortest nonzero vector is computationally hard in high dimensions. However, there are algorithms which can approximately solve the shortest vector problem in polynomial time, such as LLL.

One important application of LLL is solving "somewhat short" linear equations.

Example. Consider the equation $ax + by = c$, where $x, y$ are "short" integers. Consider the lattice spanned by the rows of $$ \begin{bmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & -c \end{bmatrix} $$

The vector $(x, y, 0) = x \cdot (1, 0, a) + y \cdot (0, 1, b) + 1 \cdot (0, 0, -c)$ is an integer linear combination of the rows. Therefore, it lies in the lattice, and if it is "somewhat or approximately short", LLL can find it.
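As a quick sanity check, here is a toy Sage sketch of this example (the values of $a$, $b$, $x$, and $y$ are made up); one row of the LLL-reduced basis should be $\pm(x, y, 0)$:

# Toy Sage sketch: recover a short solution (x, y) to ax + by = c with LLL.
a, b = 12345, 67890
x, y = 7, 3                  # the "short" solution we hope to recover
c = a*x + b*y

B = Matrix(ZZ, [[1, 0, a],
                [0, 1, b],
                [0, 0, -c]])
print(B.LLL())               # one reduced row should be +/-(7, 3, 0)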

Coppersmith's Method

Consider a polynomial equation $$p(x) = x^k + a_{k - 1}x^{k - 1} + \ldots + a_0 = 0 \pmod{N}$$

and let $x_0$ be a root, $p(x_0) = 0 \pmod{N}$, where $|x_0| < X$ for some small bound $X$.

We would like to transform the polynomial $p(x)$ into a new polynomial $r(x)$ such that $r(x_0) = 0$ over the integers, since such an equation is easy to solve.

Lemma. Let $r(x) = \sum_{i=0}^{c} r_i x^i$ be a univariate polynomial of degree $c$ with integer coefficients. Let $h$ be a positive integer. Let $|f(x)|$ denote the $\ell_1$ norm of the coefficient vector of the polynomial $f$, so that $|r(xX)| = \sum_{i=0}^{c} |r_i| X^i$. Suppose that

  1. $|x_0| < X$
  2. $r(x_0) = 0 \pmod{N^h}$
  3. $| r(xX) | < N^h$

Then $r(x_0) = 0$ over the integers.

Proof. We note that

$$ |r(x_0)| = \left| \sum_{i=0}^{c} r_i x_0^i \right| \leq \sum_{i=0}^{c} |r_i| |x_0|^i \leq \sum_{i=0}^{c} |r_i| X^i = |r(xX)| < N^h $$

Since $r(x_0) = 0 \pmod{N^h}$, $r(x_0)$ is an integer multiple of $N^h$, and the only such multiple with $|r(x_0)| < N^h$ is $0$. Therefore, $r(x_0) = 0$ over the integers.

Intuitively, Condition 3 says that $r(x)$ is a "somewhat short" polynomial, and the Lemma says that a small modular root of such a polynomial is in fact a root over the integers. Therefore, if we can construct such an $r(x)$, we can find this small root through known polynomial solving techniques!

Constructing the polynomial

Back to our original polynomial $p(x)$. Using the previous lemma, we can now try to find a new polynomial $r$ for our $p$.

Consider the set of polynomials of the following form: $$ q_{u, v}(x) = N^{h - v}x^u(p(x))^v $$

Notice that for every $u \geq 0$ and every $0 \leq v \leq h$, we have the following: $$ q_{u, v}(x_0) = 0 \pmod{N^h} $$ This is because $p(x_0)$ is a multiple of $N$, so $(p(x_0))^v$ is a multiple of $N^v$, and the factor $N^{h - v}$ supplies the remaining power of $N$.
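To make this concrete, here is a small Sage check (the modulus $N$, the coefficient $a_1$, and the root $x_0$ below are all made up) that the shift polynomials really do vanish modulo $N^h$ at $x_0$:

# Toy Sage check that the shift polynomials vanish mod N^h at x0.
N  = 10007 * 10009                 # made-up composite modulus
x0 = 1337                          # small planted root
h  = 3
P.<x> = PolynomialRing(ZZ)
a1 = 424242
a0 = (-x0**2 - a1*x0) % N          # force p(x0) = 0 (mod N)
p  = x**2 + a1*x + a0

for v in range(h + 1):
    for u in range(3):
        q = N**(h - v) * x**u * p**v
        assert q(x0) % N**h == 0   # q_{u,v}(x0) = 0 (mod N^h)
print("all q_{u,v}(x0) vanish mod N^h")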

If every single $q_{u, v}(x_0) = 0 \pmod{N^h}$, any integer linear combination of the $q$ polynomials also evaluates to $0 \pmod{N^h}$ at $x_0$. Consider the polynomial $$ q(x) = \sum_{u, v} c_{u,v} q_{u, v}(x), \qquad c_{u,v} \in \mathbb{Z} $$ As a linear combination, we see that $q(x_0) = 0 \pmod{N^h}$. Note that this satisfies Condition 2 of the Lemma. Since we know $x_0$ is bounded, Condition 1 is also satisfied. Our goal is to find coefficients $c_{u,v}$ such that Condition 3 is satisfied as well.

We see here that we must solve for a linear combination of polynomials which is "somewhat short".

Sweeping a lot of annoying bounds and math under the rug, we can select a bound $X$ (roughly $X \approx N^{1/k}$, where $k$ is the degree of $p$) such that the third condition is satisfied by the short vector that LLL finds.
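For a rough sense of where this bound comes from, take the shifts $0 \leq u < k$ and $0 \leq v < h$ (matching the Sage code below), and ignore LLL's $2^{O(w)}$ approximation factor and other constants. The lattice of coefficient vectors of the $q_{u,v}(xX)$ is triangular, so

$$ w = hk, \qquad \det \bm{B} = N^{k h (h+1)/2} \, X^{hk(hk-1)/2} $$

LLL finds a vector of length roughly $\det(\bm{B})^{1/w}$, so Condition 3 roughly requires

$$ \det(\bm{B})^{1/w} = N^{(h+1)/2} X^{(hk-1)/2} < N^h \iff X < N^{(h-1)/(hk-1)} $$

which approaches $N^{1/k}$ as $h$ grows.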

Therefore, in polynomial time, we can find this "small" root $x_0$ such that $p(x_0) = 0 \pmod{N}$.
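Putting everything together, here is a rough end-to-end Sage sketch of the construction above (the modulus, the polynomial, and the parameters $X$ and $h$ are all made up and chosen loosely rather than tuned to the provable bounds):

# Rough end-to-end Sage sketch of the Howgrave-Graham construction.
N  = 10007 * 10009                  # made-up composite modulus
x0 = 137                            # small planted root, |x0| < X
P.<x> = PolynomialRing(ZZ)
a1 = 424242
a0 = (-x0**2 - a1*x0) % N
p  = x**2 + a1*x + a0               # p(x0) = 0 (mod N)

X, h = 512, 3
k = p.degree()

# Shift polynomials q_{u,v}(x) = N^(h-v) * x^u * p(x)^v
q = [N**(h - v) * x**u * p**v for v in range(h) for u in range(k)]

# Lattice whose rows are the coefficient vectors of q_i(xX)
B = Matrix(ZZ, len(q), h*k)
for i in range(len(q)):
    for j in range(q[i].degree() + 1):
        B[i, j] = q[i][j] * X**j

B = B.LLL()

# The shortest reduced row is the coefficient vector of r(xX);
# undo the scaling by X^j and find the integer roots of r.
r = sum((B[0][j] // X**j) * x**j for j in range(h*k))
print(r.roots())                    # should include the planted root 137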

In summary:

  • We first construct a set of polynomials $q_{u,v}$ such that $q_{u,v}(x_0) = 0 \pmod{N^h}$.
  • We then use LLL to search the integer linear combinations of this set for a polynomial $r$ whose (scaled) coefficient vector is short.
  • By the Lemma, $r(x_0) = 0$ over the integers, so we can then solve using Hensel lifting or resultants.

Interesting Facts

This formulation of Coppersmith's method is not actually due to Coppersmith, but to Howgrave-Graham. Coppersmith's original method involved a different basis (equivalent via row operations to a basis of the dual lattice).

Sagemath implements this algorithm in the small_roots function as seen here:

# Shift polynomials q_{u,v}(x) = N^(m-v) * x^u * f(x)^v, where m plays the
# role of h above and delta is the degree of f
g = [x**j * N**(m-i) * f**i for i in range(m) for j in range(delta)]

# Each row of B is the coefficient vector of one g_i(xX):
# the coefficient of x^j, scaled by X^j
B = Matrix(ZZ, len(g), delta*m)
for i in range(B.nrows()):
    for j in range(g[i].degree()+1):
        B[i,j] = g[i][j]*X**j

# Reduce the lattice; short rows correspond to "short" polynomials
B = B.LLL(**kwds)

We see here that this forms a matrix whose rows are the coefficient vectors of the $q$ polynomials (scaled by powers of $X$), and reduces it using LLL. The short rows of the reduced basis give linear combinations which meet the conditions of the Lemma, letting us recover the root over the integers.
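As a usage example (a made-up stereotyped-message setup with a low public exponent; the primes and the planted value are arbitrary), small_roots can be called directly on a polynomial over the integers mod $N$:

# Hypothetical usage sketch of Sage's small_roots; all values are made up.
p_ = next_prime(2**512)
q_ = next_prime(2**512 + 2**256)
N  = p_ * q_

prefix = 0x1337 << 400              # known part of the message
secret = 0xdeadbeef                 # small unknown part we want to recover
c = power_mod(prefix + secret, 3, N)

P.<x> = PolynomialRing(Zmod(N))
f = (prefix + x)**3 - c             # monic, f(secret) = 0 (mod N)
print(f.small_roots(X=2**40))       # expect the planted value 0xdeadbeef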

References

[1] Finding a small root of a univariate modular equation - Don Coppersmith
[2] Finding small roots of univariate modular equations revisited - Nicholas Howgrave-Graham