Last time we covered an operation in the LWE encryption scheme called modulus switching, which allows one to switch from one modulus to another, at the cost of introducing a small amount of extra noise, roughly $\sqrt{n}$, where $n$ is the dimension of the LWE ciphertext.

This time we’ll cover a more sophisticated operation called key switching, which allows one to switch an LWE ciphertext from being encrypted under one secret key to another, without ever knowing either secret key.

Reminder of LWE

This section is a literal repetition of the last article. The LWE encryption scheme I’ll use has the following parameters: a ciphertext modulus $q$ (a power of 2, e.g., $q = 2^{32}$), an LWE dimension $n$, and a discrete Gaussian error distribution $D$ with mean zero and a fixed standard deviation $\sigma$.

An LWE secret key is defined as a vector in $\{0, 1\}^n$ (uniformly sampled). An LWE ciphertext is defined as a vector $a = (a_1, \dots, a_n)$, sampled uniformly over $(\mathbb{Z} / q\mathbb{Z})^n$, and a scalar $b = \langle a, s \rangle + m + e$, where $e$ is drawn from $D$ and all arithmetic is done modulo $q$. Note that $e$ must be small for the encryption to be valid.
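To make this concrete, here’s a minimal Python sketch of the scheme above. The specific values of $q$, the dimension, and $\sigma$ are hypothetical choices for illustration, not production-secure parameters.

```python
import random

q = 2**32        # ciphertext modulus (2^32, matching the examples later)
sigma = 2**12    # error standard deviation (hypothetical, not production-secure)

def keygen(dim):
    # A secret key is a uniform binary vector.
    return [random.randint(0, 1) for _ in range(dim)]

def encrypt(msg, s):
    # msg should be stored in the most significant bits, e.g., msg = bit << 31.
    a = [random.randrange(q) for _ in range(len(s))]
    e = round(random.gauss(0, sigma))
    b = (sum(ai * si for ai, si in zip(a, s)) + msg + e) % q
    return a + [b]

def phase(c, s):
    # b - <a, s>: the first step of decryption, returning msg + e (mod q).
    a, b = c[:-1], c[-1]
    return (b - sum(ai * si for ai, si in zip(a, s))) % q
```

Full decryption would finish by rounding the phase to the nearest encodable message, which succeeds so long as the error is small enough.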

Sometimes I will denote by $\textup{LWE}_s(x)$ the LWE encryption of plaintext $x$ under the secret key $s$, and it should be understood that this is a fixed (but arbitrary) draw from the distribution of LWE ciphertexts described above.

Main idea: homomorphically almost-decrypt

The main idea is to encrypt each entry of the original secret key using the new secret key (this collection of encryptions is jointly called a key-switching key), and then use this to homomorphically evaluate the first step of the decryption function (i.e., compute $b - \langle a, s \rangle$). The result is an encryption of the (noisy) message under the new key.

First we’ll show how this works in a naïve sense. In particular, doing what I said in the last paragraph verbatim won’t work because the error will grow too large. But we’ll do it anyway, measure the error, and the remainder of the article will show how the gadget decomposition can be used to reduce the error.

Key switching, without gadget decompositions

Start with an LWE ciphertext for the plaintext $m$. Call it

$\displaystyle c = (a_1, \dots, a_n, b) \in (\mathbb{Z}/q\mathbb{Z})^{n+1}$

where

$\displaystyle b = \left ( \sum_{i=1}^n a_i s_i \right ) + m + e_{\textup{original}}$

and $s = (s_1, \dots, s_n) \in \{ 0,1\}^n$ is the secret key. Now say we have another secret key, possibly of a different dimension, $t = (t_1, \dots, t_m) \in \{ 0, 1\}^m$ (apologies for overloading $m$: here it is the dimension of the new key, while elsewhere $m$ is the plaintext message; context should disambiguate), and we would like to switch the ciphertext $c$ to a ciphertext $c'$ which encrypts the same underlying message $m$, but under the new secret key $t$. That is, we would like to write

$\displaystyle c' = (a'_1, \dots, a'_m, b') \in (\mathbb{Z}/q\mathbb{Z})^{m+1}$

where

$\displaystyle b' = \left ( \sum_{i=1}^m a'_i t_i \right ) + m + e_{\textup{original}} + e_{\textup{new}}$

implying that there is possibly some additional error introduced as a result. As usual, so long as the total error in the ciphertext remains small enough (and $m$ is stored in the most significant bits of the underlying integer space), the result will still be a valid LWE ciphertext.

Define the key switching key $\textup{KSK}(s, t)$ as follows (I will omit the $s, t$ and just call it KSK from now on):

$\displaystyle \textup{KSK} = \{ \textup{KSK}_i = \textup{LWE}_t(s_i) = (x_{i, 1}, \dots, x_{i, m}, y_i) \mid i=1, \dots, n\}$

In other words, $\textup{KSK}_i$ encrypts bit $s_i$, and $y_i = \langle x_i, t \rangle + s_i + e_i$ makes it a valid LWE encryption.

Now the algorithm to switch keys is merely as follows (where the first vector has $m$ leading zeros to ensure the dimensions align):

$\displaystyle c' = (0, \dots, 0, b) - \sum_{i=1}^n a_i \textup{KSK}_i$

This is computing a linear combination of the $\textup{KSK}_i$. The specific linear combination is the first step of LWE decryption ($b - \langle a, s \rangle$), but performed on ciphertexts of $b$ and the $s_i$. Note that $(0, \dots, 0, b)$ is a valid (but insecure) LWE ciphertext of $b$ under any secret key, because we’re pretending the LWE samples and the error were all sampled as zero, an unlikely but coherent outcome that is used to jumpstart homomorphic computations in more places than just key switching. So if you wanted to, you could write $c'$ as follows, to highlight how we’re computing additions and linear scalings of LWE ciphertexts.

$\displaystyle c' = \textup{LWE}_t(b) - \sum_{i=1}^n a_i \textup{LWE}_t(s_i)$

This should be enough to show that $c'$ is a valid LWE encryption (if we accept that adding and scaling preserves LWE validity). But to warm up for the rest of the article we’ll reprove it with a slightly different technique. This will also help us understand the error growth. Because LWE naturally admits sums and scalar products with correspondingly added error, we expect the error to grow proportionally to the number of additions and the magnitudes of the $a_i$’s. And you may already be able to tell that because the $a_i$’s are uniform $\mathbb{Z}/q\mathbb{Z}$ elements, this error will be far too large to be useful. Let’s make this explicit now.

To show it’s a valid LWE encryption, we define the function $\varphi_s$, which maps any LWE ciphertext $c = (a_1, \dots, a_n, b)$ to $\varphi_s(c) = b - \langle a, s \rangle$. Some authors call $\varphi_s$ the “phase” function, but I think of it as a close friend: the first step of the decryption function for LWE (the second step would be rounding off the error). Critically, an LWE encryption is valid if and only if $\varphi_s(c) = m + e$ (provided $e$ is sufficiently small).

Because $\varphi_t$ is a linear function, it factors through the definition of $c'$ nicely, and we get

$\displaystyle \begin{aligned} \varphi_t(c') &= \varphi_t((0, \dots, 0, b)) - \sum_{i=1}^n a_i \varphi_t(\textup{KSK}_i) \\ &= b - \sum_{i=1}^n a_i (y_i - \langle x_i, t \rangle) \\ &= b - \sum_{i=1}^n a_i (s_i + e_i) \end{aligned}$

where (reminder) $e_i$ is the error sample from $\textup{KSK}_i$’s definition. Distributing the $a_i$ across $(s_i + e_i)$ simplifies everything nicely:

$\displaystyle \begin{aligned} &= b - \sum_{i=1}^n a_i s_i - \sum_{i=1}^n a_i e_i \\ &= m + e_{\textup{original}} - \sum_{i=1}^n a_i e_i \end{aligned}$

Now, as we foreshadowed, $e_{\textup{new}} = -\sum_{i=1}^n a_i e_i$ is simply too large. A typical LWE ciphertext will have error at least 1 (or it would be useless), and if $q = 2^{32}$, the $a_i$’s would be of magnitude roughly $2^{31}$, so summing even two such terms would corrupt a 1-bit message stored in the most significant bit of the plaintext.
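To see the blow-up concretely, here’s a sketch of the naive procedure, reusing the helpers from the earlier snippet (the dimensions 512 and 600 are arbitrary illustrative choices):

```python
def keyswitch_key(s, t):
    # Encrypt each bit of the old key s under the new key t.
    return [encrypt(si, t) for si in s]

def naive_keyswitch(c, ksk):
    a, b = c[:-1], c[-1]
    new_dim = len(ksk[0]) - 1
    # Start from the trivial ciphertext (0, ..., 0, b) and subtract a_i * KSK_i.
    cp = [0] * new_dim + [b]
    for ai, ksk_i in zip(a, ksk):
        cp = [(x - ai * y) % q for x, y in zip(cp, ksk_i)]
    return cp

s, t = keygen(512), keygen(600)
c = encrypt(1 << 31, s)                         # a 1-bit message in the top bit
c2 = naive_keyswitch(c, keyswitch_key(s, t))
bit = ((phase(c2, t) + (1 << 30)) >> 31) & 1    # round off the error
print(bit)    # essentially a coin flip: the error dominates the message
```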

The way to deal with this is to use a bit decomposition.

Key switching, with gadget decompositions

Recall from the gadget decomposition article that the core function of a gadget decomposition is to preserve the ultimate value of a dot product while making the vectors being multiplied larger (spending space/time) but also making the size of the coefficients of one of the vectors smaller (reducing the accumulation of error due to that dot product).

This is exactly the approach we’ll take here. The “dot product” in question is $(a_1, \dots, a_n) \cdot \textup{KSK}$ (where KSK is viewed as a matrix), and we’ll expand each value $a_i$ into a vector of its digits in a base-$B$ number system, while modifying the key switching key so that those missing powers of $B$ are part of the LWE encryption. This will result in replacing the error term that looked like $\sum_{i=1}^n a_i e_i$ with an error term like $\sum_{i=1}^n c B e_i$ for some small constant $c$ (it will turn out to be roughly $L$, the number of digits, typically smaller than $B$).

More specifically, define decomposition parameters as a triple of numbers $(B, k, L)$. The number $B$ is a power of 2 no bigger than $q/2$, and $L$, or the number of levels of the decomposition, is the positive integer such that $B^L = q$ (this is forced by the choice of $B$). Then finally, $k$ is a number between $0$ and $L-1$ describing the “lowest level” (or least-significant digit) included in the decomposition.

An error-free decomposition sets the parameter $k=0$, and is defined simply as the base-$B$ representation of a number. For example, suppose $q = 2^{32}$ and $(B, k, L) = (256, 0, 4)$, and we’re decomposing $x=2^{32} - 2$. Then $\textup{Decomp}_{256, 0, 4}(x) = (254, 255, 255, 255)$. I subtracted 2 to emphasize that the digits are little-endian (the right-most entry is the most significant, representing the $256^3$ place).

An approximate decomposition is one with $k > 0$. For example, suppose $(B, k, L) = (256, 2, 4)$ and again $x=2^{32} - 2$. Setting $k=2$ means that we represent this number as if it were $(0, 0, 255, 255)$, wiping out the two least significant digits. The error of this approximation is $65534 = 254 + 255 \cdot 256^1$. As we will see, an approximate decomposition may help reduce overall error by splitting the newly introduced error into a sum of two terms, where $k$ scales the two terms differently.
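Here’s a sketch of both decompositions in Python (assuming, as above, that $B$ is a power of 2 and $B^L = q$); the function names are my own:

```python
def decompose(x, B, k, L):
    # Little-endian base-B digits of x, dropping the k least significant digits.
    digits = []
    for _ in range(L):
        digits.append(x % B)
        x //= B
    return digits[k:]

def recompose(digits, B, k):
    # Reconstructs x with its k least significant digits zeroed out.
    return sum(d * B**(j + k) for j, d in enumerate(digits))

assert decompose(2**32 - 2, 256, 0, 4) == [254, 255, 255, 255]   # error-free
assert recompose(decompose(2**32 - 2, 256, 2, 4), 256, 2) == 2**32 - 2 - 65534
```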

Let’s go through the key-switching key derivation again, using an error-free decomposition $(B, 0, L)$. First, re-define the key switching key as follows.

$\displaystyle \textup{KSK} = \{ \textup{KSK}_{i, j} = \textup{LWE}_t(s_i B^j) \mid i=1, \dots, n ; j = 0, \dots, L-1\}$

Note that this increases the dimension of the key-switching key by 1. Previously the key-switching key was a list of LWE ciphertexts (2-dimensional array of numbers), and now it’s a 3-dimensional array, with the new dimension corresponding to the decomposition digit $j$.

Because the powers of $B$ are attached to the message inside the encryption, they will factor out and allow us to reconstruct the original $a_i$’s, but they will not scale the error terms, because the error is added to the already-scaled message $s_i B^j$ during encryption.

Next, to perform the key switch, define $\textup{Decomp}(a_i) = (a_{i,0}, \dots, a_{i,L-1})$ and compute

$\displaystyle c' = (0, \dots, 0, b) - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} \textup{KSK}_{i,j}$

This is the same as the original key switch, but the extra summation accounts for the extra dimension introduced by the gadget decomposition. Then we can repeat the same $\varphi_t$ trick and see how the original $a_i$’s are reconstructed.

$\displaystyle \begin{aligned} \varphi_t(c') &= b - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} \varphi_t(\textup{KSK}_{i,j}) \\ &= b - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} (s_i B^j + e_i) \\ &= b - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} s_i B^j - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} e_i \\ &= b - \sum_{i=1}^n a_i s_i - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} e_i \\ &= m + e_{\textup{original}} - \sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} e_i \end{aligned}$

One key ingredient above is noticing that in $\sum_{i=1}^n \sum_{j=0}^{L-1} a_{i,j} s_i B^j$, the $s_i$ factors out of the innermost sum, and what you have left is $\sum_{j=0}^{L-1} a_{i,j} B^j$, which is exactly how to reconstruct $a_i$ from its base-$B$ digits.

The second key ingredient is that the innermost term on the second line is $a_{i,j} (s_i B^j + e_i)$, which means that only the digits $a_{i,j}$ multiply the error terms, not the powers of $B$, and so the final error can be bounded using the largest allowable value of a single digit, $B-1$, resulting in the new error being at most $L (B-1) \sum_{i=1}^n e_i$. For a Gaussian centered at zero, the expectation of these errors is zero, and using standard bounding arguments like Chernoff bounds, you can prove that with high probability this new error is at most $L(B-1) \sigma \sqrt{2n \log n}$, where $\sigma$ is the standard deviation of the error distribution.
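In code, the gadget-based key switch might look like the following sketch, reusing the helpers from the earlier snippets. With $k = 0$ it implements the error-free version just analyzed; setting $k > 0$ gives the approximate variant we turn to next.

```python
def gadget_ksk(s, t, B, k, L):
    # KSK[i][j] encrypts s_i * B^(j+k) under t, matching the digits
    # that decompose keeps (j = 0 corresponds to the B^k place).
    return [[encrypt(si * B**(j + k), t) for j in range(L - k)] for si in s]

def gadget_keyswitch(c, ksk, B, k, L):
    a, b = c[:-1], c[-1]
    new_dim = len(ksk[0][0]) - 1
    cp = [0] * new_dim + [b]
    for ai, row in zip(a, ksk):
        for digit, ct in zip(decompose(ai, B, k, L), row):
            cp = [(x - digit * y) % q for x, y in zip(cp, ct)]
    return cp

B, k, L = 256, 0, 4      # error-free decomposition
c3 = gadget_keyswitch(c, gadget_ksk(s, t, B, k, L), B, k, L)
bit = ((phase(c3, t) + (1 << 30)) >> 31) & 1
print(bit)    # 1 with high probability: the error stays far below 2^31
```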

Now, finally, we can run through this argument one more time, but using an approximate decomposition. This merely changes the sum’s lower bound from $j=0$ to $j=k$. Start by calling $\tilde{a}_i = \sum_{j=k}^{L-1} a_{i,j} B^j$, the approximation of $a_i$ from its most significant digits. Then the error of this approximation is $a_i - \tilde{a}_i = \sum_{j=0}^{k-1} a_{i,j} B^j$, a relatively small quantity at most $B^k - 1$ (when each $a_{i,j} = B-1$ is as large as possible).

$\displaystyle \begin{aligned} \varphi_t(c') &= b - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} \varphi_t(\textup{KSK}_{i,j}) \\ &= b - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} (s_i B^j + e_i) \\ &= b - \sum_{i=1}^n s_i \sum_{j=k}^{L-1} a_{i,j} B^j - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} e_i \\ &= b - \sum_{i=1}^n s_i \tilde{a}_i - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} e_i \end{aligned}$

Mentally zoom in on the first sum $\sum_{i=1}^n s_i \tilde{a}_i$. Use the trick of adding zero to get

$\displaystyle \sum_{i=1}^n s_i \tilde{a}_i = \sum_{i=1}^n s_i (a_i + \tilde{a}_i - a_i) = \sum_{i=1}^n s_i a_i - \sum_{i=1}^n s_i(a_i - \tilde{a}_i)$

The term $\sum_{i=1}^n s_i(a_i - \tilde{a}_i)$ is part of our new error term. Recalling that the secret key bits are binary (so roughly half of them are 1) and each $a_i - \tilde{a}_i$ is at most $B^k - 1$, you should think of this term as roughly $\frac{n}{2} B^k$ in the worst case (more precisely, $\frac{n}{2} (B^k - 1)$, and about half that in expectation, since each $a_i - \tilde{a}_i$ is uniform in $[0, B^k - 1]$).

Continuing, we arrive at

$\displaystyle \begin{aligned} \varphi_t(c') &= b - \sum_{i=1}^n a_i s_i - \sum_{i=1}^n s_i(a_i - \tilde{a}_i) - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} e_i \\ &= m + e_{\textup{original}} - \sum_{i=1}^n s_i(a_i - \tilde{a}_i) - \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} e_i \end{aligned}$

Rough error analysis

Now the choice of $k$ admits a tradeoff that one can optimize for to minimize the total newly introduced error. I’m going to switch to a sloppy mode of math to heuristically navigate this tradeoff.

The triangle inequality lets us bound the magnitude of the error by the sum of the magnitudes of the parts, i.e., the error is bounded from above by

$\displaystyle \left | \sum_{i=1}^n s_i(a_i - \tilde{a}_i) \right | + \left | \sum_{i=1}^n \sum_{j=k}^{L-1} a_{i,j} e_i \right |$

The left term is like $\frac{n}{2} B^k$ as we stated earlier, and with high probability it’s at most $(n/2 + \sqrt{n \log n}) (B^k - 1)$. The right term is at most $(L-k)B \sum_{i=1}^n e_i$ (taking the worst-case size of $a_{i,j}$, and rounding $B-1$ up to $B$ for simplicity), and with high probability the sum of the $e_i$ is like $\sigma \sqrt{2n \log n}$, making the whole term bounded by $(L-k)B \sigma \sqrt{2n \log n}$. So we want to minimize the sum

$\displaystyle (n/2 + \sqrt{n \log n}) (B^k - 1) + (L-k)B \sigma \sqrt{2n \log n}$

We could try to explicitly optimize this for $k$, treating the other terms as constant, but it won’t be nice because $k$ is present in both a linear term and an exponent. We could also just stare at it and think. The approximation error (the term on the left) is going to get exponentially larger as $k$ grows, so we want to keep $k$ relatively small. But on the other hand, the standard deviation $\sigma$ should be much larger than $n$ to keep LWE secure. This is effectively what we’re trying to suppress: error that grows like $O(n)$ is small enough to deal with, but error that grows like $\omega(n)$ is problematic. Increasing $k$ gives us a meager (but nontrivial) means to reduce the coefficient $(L-k)$ on that part of the error, in exchange for growth in the other, $\Theta(n)$-sized term.

I admit, as of the time of this writing I still don’t understand how to set production security parameters for LWE. Is it still linear in $n$? Super-linear? Not sure. I’m betting future Jeremy will clarify this for me in another article. Even if it were linear in $n$, the right term multiplies $\sigma$ by $\sqrt{n \log n}$, which makes the whole thing super-linear, whereas the left term only adds a square-root correction. So the tradeoff in $k$ should still help.

Until I understand LWE security, I won’t have the asymptotics I need to analyze this further. Moreover, the allowed values of $B, k$ are so small that we can brute-force evaluate all options. For example, if $B = 16$ then $k$ can be between 0 and 7. And realistically, if $n \approx 2^{10}$, then letting $k = 4$ makes the first term roughly $2^{26}$, which leaves only about 6 bits for the message (further reduced by any error introduced by the second term).
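Here’s a sketch of that brute force, evaluating the heuristic bound from the previous section for each $k$ (with the hypothetical parameters used throughout, and taking $\log$ as the natural log):

```python
import math

def error_bound(n, B, L, k, sigma):
    # (n/2 + sqrt(n log n)) (B^k - 1) + (L - k) B sigma sqrt(2 n log n)
    approx_term = (n / 2 + math.sqrt(n * math.log(n))) * (B**k - 1)
    noise_term = (L - k) * B * sigma * math.sqrt(2 * n * math.log(n))
    return approx_term + noise_term

n, B, L, sigma = 2**10, 16, 8, 2**12   # hypothetical parameters
for k in range(L):
    print(k, math.log2(error_bound(n, B, L, k, sigma)))  # bound in bits
best_k = min(range(L), key=lambda k: error_bound(n, B, L, k, sigma))
```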

Thanks to Cathie Yun and Asra Ali for providing feedback on an early draft of this article.

Until next time!