
# The Gadget Decomposition in FHE

Lately I’ve been studying Fully Homomorphic Encryption, which is the miraculous ability to perform arbitrary computations on encrypted data without learning any information about the underlying message. It’s the most comprehensive private computing solution that can exist (and it does exist!).

The first FHE scheme by Craig Gentry was based on ideal lattices and was considered very complex (I never took the time to learn how it worked). Some later schemes (GSW = Gentry-Sahai-Waters) are based on matrix multiplication, and are conceptually much simpler. Even more recent FHE schemes build on GSW or use it as a core subroutine.

All of these schemes inject random noise into the ciphertext, and each homomorphic operation increases the noise. Once the noise gets too big, you can no longer decrypt the message, so every now and then you must apply a process called “bootstrapping” that reduces noise. Bootstrapping also tends to be the performance bottleneck of any FHE scheme, and this bottleneck is why FHE is not yet considered practical.

To help reduce noise growth, many FHE schemes like GSW use a technical construction dubbed the *gadget decomposition*. Despite the terribly vague name, it’s a crucial tool for limiting noise growth. When it shows up in a paper, it’s usually remarked as “well known in the literature,” and the details you’d need to implement it are omitted. It’s one of *those* topics.

So I’ll provide some details. The code from this post is on GitHub.

## Binary digit decomposition

To create an FHE scheme, you need to support two homomorphic operations on ciphertexts: addition and multiplication. Most FHE schemes admit one of the two operations trivially. If the ciphertexts are numbers as in RSA, you multiply them as numbers and that multiplies the underlying messages, but addition is not known to be possible. If ciphertexts are vectors as in the “Learning With Errors” scheme (LWE)—the basis of many FHE schemes—you add them as vectors and that adds the underlying messages. (The “Error” in LWE is synonymous with “random noise”; I’ll use the term “noise.”) In LWE and most FHE schemes, a ciphertext hides the underlying message by adding random noise, and addition of two ciphertexts adds the corresponding noise. After too many unmitigated additions, the noise will grow so large it obstructs the message. So you stop computing, or you apply a bootstrapping operation to reduce the noise.

Most FHE schemes also allow you to multiply a ciphertext by an unencrypted constant $x$, but then the noise scales by a factor of $x$, which is undesirable if $x$ is large. So you either need to limit the coefficients of your linear combinations by some upper bound, or use a version of the gadget decomposition.

The simplest version of the gadget decomposition works like this. Instead of encrypting a message $m$, you would encrypt $m, 2m, 4m, \dots, 2^{k-1} m$ for some choice of $k$, and then to multiply by $x$ you write the binary digits of $x = \sum_i b_i 2^i$ and you compute $\sum_i b_i \textup{Enc}(2^i m)$. If the noise in each encryption is $E$, and summing ciphertexts sums noise, then this trick reduces the noise growth from $O(Ex)$ to $O(Ek)$, at the cost of tracking $k$ ciphertexts. (Calling the noise $E$ is a bit of an abuse—in reality the error is sampled from a random distribution—but hopefully you see my point.)

Some folks call the mapping $\textup{PowersOf2}(m) = (m, 2m, 4m, \dots, 2^{k-1} m)$, and for the sake of this article let’s call $\textup{Bin}(x) = (b_0, b_1, \dots, b_{k-1})$ the operation of writing a number $x$ in terms of its binary digits (note, the first digit is the least-significant bit, i.e., it’s a little-endian representation). Then PowersOf2 and Bin expand an integer product $mx = \langle \textup{PowersOf2}(m), \textup{Bin}(x) \rangle$ into a dot product, while shifting powers of 2 from one side to the other.

This inspired the following “proof by meme” that I can’t resist including.

Working out an example, if the message is $m = 5$ and $x = 7$ with $k = 4$, then $\textup{PowersOf2}(5) = (5, 10, 20, 40)$ and $\textup{Bin}(7) = (1, 1, 1, 0)$ (again, little-endian), and the dot product is $5 + 10 + 20 = 35 = 5 \cdot 7$.
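As a quick sanity check, the identity can be sketched in a few lines of Python (the function names just mirror the article’s notation, and the message and multiplier are arbitrary small values):

```python
# Sketch of the PowersOf2/Bin identity: an integer product becomes a dot
# product of a powers-of-2 vector with a little-endian bit vector.
def powers_of_2(m, k=32):
    return [m << i for i in range(k)]

def bin_digits(x, k=32):
    return [(x >> i) & 1 for i in range(k)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

assert dot(powers_of_2(5), bin_digits(7)) == 5 * 7
```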

## A generalized gadget construction

One can generalize the binary digit decomposition to different bases, or to vectors of messages instead of a single message, or to include a subset of the digits for varying approximations. I’ve been puzzling over an FHE scheme that does all three. In my search for clarity I came across a paper of Genise, Micciancio, and Polyakov called “Building an Efficient Lattice Gadget Toolkit: Subgaussian Sampling and More,” in which they state a nice general definition.

**Definition:** For any finite additive group $A$, an $A$-*gadget* of size $w$ and quality $\beta$ is a vector $\mathbf{g} \in A^w$ such that any group element $u \in A$ can be written as an integer combination $u = \langle \mathbf{g}, \mathbf{x} \rangle$ where $\mathbf{x} = (x_1, \dots, x_w)$ has norm at most $\beta$.

The main groups considered in my case are $A = \mathbb{Z}_q$, where $q$ is usually $2^{32}$ or $2^{64}$, i.e., unsigned int sizes on computers for which we get free modulus operations. In this case, a $\mathbb{Z}_q^n$-gadget is a matrix $G \in \mathbb{Z}_q^{n \times w}$, and the representation $\mathbf{x} \in \mathbb{Z}^w$ of $u \in \mathbb{Z}_q^n$ satisfies $G \mathbf{x} = u$.

Here $n$ and $q$ are fixed, and $w, \beta$ are traded off to make the chosen gadget scheme more efficient (smaller $w$) or better at reducing noise (smaller $\beta$). An example of how this could work is shown in the next section by generalizing the binary digit decomposition to an arbitrary base $B$. This allows you to use fewer digits to represent the number ($w = \log_B q$ of them), but each digit may be as large as $B - 1$ and so the quality is $\beta = B - 1$.

One commonly-used construction is to convert an $A$-gadget to an $A^n$-gadget using the Kronecker product. Let $\mathbf{g} \in A^w$ be an $A$-gadget of quality $\beta$. Then the following matrix is an $A^n$-gadget of size $nw$ and quality $\beta$:

$$G = I_n \otimes \mathbf{g}^T = \begin{pmatrix} \mathbf{g}^T & & \\ & \ddots & \\ & & \mathbf{g}^T \end{pmatrix}$$

Blank spaces represent zeros, for clarity.

An example with $n = 2$ and $A = \mathbb{Z}_{16}$. The $\mathbb{Z}_{16}$-gadget is $\mathbf{g} = (1, 2, 4, 8)$. This has size $4$ and quality $\beta = 1$. Then for a $\mathbb{Z}_{16}^2$-gadget, we construct

$$G = \begin{pmatrix} 1 & 2 & 4 & 8 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 2 & 4 & 8 \end{pmatrix}$$

Now given a vector, say $(9, 14) \in \mathbb{Z}_{16}^2$, we write it as follows, where each little-endian binary representation is concatenated into a single vector:

$$\mathbf{x} = (1, 0, 0, 1, 0, 1, 1, 1)$$

And finally, $G \mathbf{x} = (9, 14)$.

To use the definition more rigorously, if we had to write the matrix above as a gadget “vector,” it would be the vector of columns in order from left to right, $\mathbf{g} = ((1,0), (2,0), (4,0), (8,0), (0,1), (0,2), (0,4), (0,8))$. Since the vector $\mathbf{x}$ can be at worst all 1’s, its norm is at most $\beta = 1$, as claimed above.
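Here is a sketch of the Kronecker-product construction in Python for $\mathbb{Z}_{16}^2$ with the binary gadget, using $(9, 14)$ as an example vector; the helper names below are my own, not standard API:

```python
# A sketch of the Kronecker-product gadget for Z_16^2, using the binary
# gadget g = (1, 2, 4, 8). The helper names here are my own.
def gadget_matrix(n, g):
    """Build G = I_n (tensor) g^T as a list of rows; blanks are zeros."""
    w = len(g)
    rows = []
    for i in range(n):
        row = [0] * (n * w)
        row[i * w:(i + 1) * w] = g
        rows.append(row)
    return rows

def decompose(v, w):
    """Concatenate the little-endian binary digits of each entry of v."""
    return [(entry >> j) & 1 for entry in v for j in range(w)]

G = gadget_matrix(2, [1, 2, 4, 8])
x = decompose([9, 14], 4)
recovered = [sum(g_ij * x_j for g_ij, x_j in zip(row, x)) % 16 for row in G]
assert recovered == [9, 14]
```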

## A signed representation in base B

As we’ve seen, the gadget decomposition trades a larger ciphertext size for reduced noise. With integers modulo $2^{32}$, this can be fine-tuned a bit more by using a larger base. Instead of PowersOf2 we could define PowersOfB, where $B = 2^b$ and $b$ divides 32. For example, with $B = 2^8 = 256$, we would only need to track 4 ciphertexts. And the gadget decomposition of the number we’re multiplying by would be the little-endian digits of its base-$B$ representation. The cost here is that the maximum entry of the decomposed representation is 255.

We can fine-tune this a little bit more by using a *signed* base-$B$ representation. To my knowledge this is not the same thing as what computer programmers normally refer to as a signed integer, nor does it have anything to do with the *two’s complement* representation of negative numbers. Rather, instead of the normal base-$B$ digits $d_i \in \{0, 1, \dots, B-1\}$ for a number $x = \sum_i d_i B^i$, the signed representation chooses digits $d_i \in \{-B/2, \dots, B/2 - 1\}$.

Computing the digits is slightly more involved, and it works by shifting large coefficients by $-B$, and “absorbing” the impact of that shift into the next more significant digit. E.g., if $B = 256$ and $x = 2^{10} - 1 = 1023$ (all 1s up to the 10th bit), then the unsigned little-endian base-$256$ representation of $x$ is $(255, 3)$. The corresponding *signed* base-$256$ representation subtracts $256$ from the first digit, and adds 1 to the second digit, resulting in $(-1, 4)$. This works in general because of the following “add zero” identity, where $d_i$ and $d_{i+1}$ are two successive digits in the unsigned base-$B$ representation of a number:

$$d_i B^i + d_{i+1} B^{i+1} = (d_i - B) B^i + (d_{i+1} + 1) B^{i+1}$$

Then if the new digit $d_{i+1} + 1 \geq B/2$, you’d repeat and carry the 1 to the next higher coefficient.

The result of all this is that the maximum absolute value of a digit in the signed representation is halved relative to the unsigned representation, which reduces the noise growth at the cost of a slightly more complex representation (from an implementation standpoint). Another side effect is that the largest representable number is less than $2^{32}$. If you try to apply this algorithm to such a large number, the most significant digit would need to be shifted, but there is no successor digit to carry to. Rather, if there are $k$ digits in the unsigned base-$B$ representation, the maximum number representable in the signed version has all digits set to $B/2 - 1$. In our example with $B = 256$ and 32 bits, the largest digit is 127. The formula for the max representable integer is $\sum_{i=0}^{k-1} (B/2 - 1) B^i = (B/2 - 1) \frac{B^k - 1}{B - 1}$.

```
max_digit = base // 2 - 1
max_representable = (
    max_digit * (base ** (num_bits // base_log) - 1) // (base - 1)
)
```

A simple Python implementation computes the signed representation, with code copied below, in which $B$ is `base` and $b$ is `base_log`.

```
from typing import List

def signed_decomposition(
        x: int, base_log: int, total_num_bits=32) -> List[int]:
    result = []
    base = 1 << base_log
    digit_mask = (1 << base_log) - 1
    base_over_2_threshold = 1 << (base_log - 1)
    carry = 0
    for i in range(total_num_bits // base_log):
        unsigned_digit = (x >> (i * base_log)) & digit_mask
        if carry:
            unsigned_digit += carry
            carry = 0
        signed_digit = unsigned_digit
        if signed_digit >= base_over_2_threshold:
            signed_digit -= base
            carry = 1
        result.append(signed_digit)
    return result
```
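To sanity-check the algorithm, we can verify the round trip: recombining the signed digits recovers the original number mod $2^{32}$, and every digit stays in the signed range. (A compact version of the decomposition is repeated here so the snippet runs standalone.)

```python
# Round-trip check: recombining the signed digits recovers x mod 2^32,
# and every digit lies in [-B/2, B/2 - 1].
def signed_decomposition(x, base_log, total_num_bits=32):
    result, carry = [], 0
    base = 1 << base_log
    for i in range(total_num_bits // base_log):
        digit = ((x >> (i * base_log)) & (base - 1)) + carry
        carry = 0
        if digit >= base // 2:
            digit -= base
            carry = 1
        result.append(digit)
    return result

def recompose(digits, base_log):
    return sum(d << (i * base_log) for i, d in enumerate(digits)) % (1 << 32)

for x in [1023, 12345678, 2**31 + 5]:
    digits = signed_decomposition(x, 8)
    assert recompose(digits, 8) == x
    assert all(-128 <= d <= 127 for d in digits)
```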

In a future article I’d like to demonstrate the gadget decomposition in action in a practical setting called *key switching*, which allows one to convert an LWE ciphertext encrypted with a key $s$ into an LWE ciphertext encrypted with a different key $t$. This operation increases noise, and so the gadget decomposition is used to reduce noise growth. Key switching is used in FHE because some operations (like bootstrapping) have the side effect of switching the encryption key.

Until then!

# Group Actions and Hashing Unordered Multisets

I learned of a neat result due to Kevin Ventullo that uses group actions to study the structure of hash functions for unordered sets and multisets. This piqued my interest because a while back a colleague asked me if I could think of any applications of “pure” group theory to practical computer programming that were not cryptographic in nature. He meant, *not* including rings, fields, or vector spaces whose definitions happen to be groups when you forget the extra structure. I couldn’t think of any at the time, and years later Kevin has provided me with a tidy example. I took the liberty of rephrasing his argument to be a bit simpler (I hope), but I encourage you to read Kevin’s post to see the reasoning in its original form.

Hashes are useful in programming, and occasionally one wants to hash an unordered set or multiset in such a way that the hash does not depend on the order the elements were added to the set. Since collection types are usually generic, one often has to craft a hash function for a `set<T>` or `multiset<T>` that relies on a hash function `hash(t)` of an unknown type `T`. To make things more efficient, rather than requiring one to iterate over the entire set each time a hash is computed, one might seek out a hash function that can be incrementally updated as new elements are added, and provably does not depend on the order of insertion.

For example, starting with a hash of zero and adding the hashes of elements as they are added (modulo $2^{64}$) has this incremental, order-ignorant property, because addition is commutative and sums can be regrouped. XOR-ing the bits of the hashes is similar. However, both of these strategies have downsides.

For example, if you adopt the XOR strategy for a multiset hash, then any element that has an **even** quantity in the multiset will hash the same as if it were not in the set at all (or as if it were in the set with some other even quantity). This is because `x XOR x == 0`. On the other hand, if you use the addition approach and an element hashes to zero, its inclusion in any set has no effect on the hash. In Java the integer hash is the identity, so zero would be undetectable as a member of a multiset of ints in any quantity. Less drastically, a multiset with all even counts of elements will always hash to a multiple of 2, and this makes it easier to find hash collisions.
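Both degeneracies are easy to reproduce. Here’s a sketch using Python’s built-in `hash`, which (like Java’s) is the identity on small nonnegative ints:

```python
# Demonstrating the two degeneracies described above.
from functools import reduce
from operator import xor

def xor_hash(multiset):
    return reduce(xor, (hash(x) for x in multiset), 0)

def additive_hash(multiset, modulus=2**64):
    return sum(hash(x) for x in multiset) % modulus

# An even quantity of an element is invisible to the XOR hash:
assert xor_hash([7, 7, 3]) == xor_hash([3])
# An element hashing to zero is invisible to the additive hash:
assert additive_hash([0, 0, 0, 5]) == additive_hash([5])
```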

A natural question arises: given the constraint that the hash function is accumulative and commutative, can we avoid such degenerate situations? In principle the answer should obviously be *no*, just by counting. I.e., the set of all unordered sets of 64-bit integers has size $2^{2^{64}}$, while the set of 64-bit hashes has size merely $2^{64}$. You will have many, many hash collisions, and would need a much longer hash to avoid them in principle.

More than just “no,” we can characterize the structure of such hash functions. They impose an abelian group structure on the set of hashes. And due to the classification theorem of finite abelian groups, up to isomorphism (and for 64-bit hashes) that structure consists of addition on blocks of bits with various power-of-2 moduli, and the blocks are XOR’ed together at the end.

To give more detail, we need to review some group theory, write down the formal properties of an accumulative hash function, and prove the structure theorem.

## Group actions, a reminder

This post will assume basic familiarity with group theory as covered previously on this blog. Basically, this introductory post defining groups and actions, and this followup post describing the classification theorem for commutative (abelian) groups. But I will quickly review group actions since they take center stage today.

A group defines some abstract notion of symmetries in a way that’s invertible. But a group is really meaningless by itself; groups are only useful when they “act” upon a set. For example, a group of symmetries of the square acts upon the square to actually rotate its points. When you have multiple group structures to consider, it makes sense to formally separate the group structure from the set it acts on.

So a group action is formally a triple of a group $G$, a set $X$, and a homomorphism $\phi: G \to S_X$, where $S_X$ is the *permutation group* (or *symmetry group*) of $X$, i.e., the set of all bijections $X \to X$ under composition. The permutation group of a set encompasses every possible group that can act on $X$. In other words, every group is a subgroup of a permutation group. In our case, $G$ and $\phi$ define a subgroup of symmetries on $X$ via the image of $\phi$. If $\phi$ is not injective, some of the structure in $G$ is lost. The homomorphism determines which parts of $G$ are kept and how they show up in the codomain. The first isomorphism theorem says how: $G / \ker \phi \cong \textup{im}(\phi)$.

This relates to our hashing topic because an accumulative hash—and a nicely behaved hash, as we’ll make formal shortly—creates a group action on the set of possible hash values. The image of that group action is the “group structure” imposed by the hash function, and the accumulation function defines the group operation in that setting.

## Multisets as a group, and nice hash functions

An appropriate generalization of multisets whose elements come from a base set $S$ forms a group. This generalization views a multiset as a “counting function” $f: S \to \mathbb{Z}$ with finitely many nonzero values. The empty set is the function that assigns zero to each element. A positive value of $f(s)$ implies the entry $s$ shows up in the multiset $f(s)$ times. And a negative value is like membership “debt,” which is how we represent inverses, or equivalently *set difference* operations. The inverse of a multiset $f$, denoted $-f$, is the multiset with all counts negated elementwise. Since integer-valued functions generally form a group under pointwise addition, these multisets also do. Call this group $M$. We will freely use the suggestive notation $f + s$ to denote the addition of $f$ and the function that is 1 on $s$ and 0 elsewhere. Similarly for $f - s$.

$M$ is isomorphic to the free abelian group on $S$ (because an instance of a multiset only has finitely many members). Now we can define a hash function with three pieces:

- An arbitrary **base hash function** $\textup{hash}: S \to H$, where $H$ is the set of hash values.
- An arbitrary **hash accumulator** $\textup{acc}: H \times H \to H$.
- A **seed**, i.e., a choice for the hash of the empty multiset.

With these three data we want to define a multiset hash function $h$ recursively as

- $h(\emptyset) = \textup{seed}$,
- $h(f + s) = \textup{acc}(h(f), \textup{hash}(s))$,
- $h(f - s)$, defined below via an inverse of the accumulator.

In order for the second bullet to lead to a well-defined hash, we need the property that the accumulation order of individual elements does not matter. Call a hash accumulator **commutative** if, for all $a, b, c$,

$$\textup{acc}(\textup{acc}(a, b), c) = \textup{acc}(\textup{acc}(a, c), b)$$

This extends naturally to being able to reorder any sequence of hashes being accumulated.

The third bullet is a bit more complicated. We need to be able to use the accumulator to “de-accumulate” the hash of an element $s$, even when the multiset that gave rise to that hash didn’t have $s$ in it to start.

Call a hash accumulator **invertible** if, for a fixed hash $y$, the map $x \mapsto \textup{acc}(x, y)$ is a bijection. That is, accumulating the hash $y$ onto two sets with different hashes will not cause a hash collision. Invertibility guarantees the existence of an inverse map (even if it’s not easily computable). This allows us to finish the third bullet point.

- Fix $s$ and let $g$ be the inverse of the map $x \mapsto \textup{acc}(x, \textup{hash}(s))$. Then $h(f - s) = g(h(f))$.

Though we don’t have a specific definition for the inverse above, we don’t need it because we’re just aiming to characterize the structure of this soon-to-emerge group action. Though, in all likelihood, if you implement a hash for a multiset, it should support incremental hash updates when removing elements, and that formula would apply here.

This gives us the well-definition of $h(f)$ for any multiset $f$. Commutativity allows us to define $h(f)$ by decomposing $f$ arbitrarily into its constituent elements (with multiplicity), and applying $\textup{acc}$ or its inverse in any order.
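A minimal sketch satisfying all three properties uses addition mod $2^{64}$ as the accumulator, 0 as the seed, and subtraction as the inverse map (all function names below are my own, for illustration):

```python
# A commutative, invertible, accumulative multiset hash: the accumulator is
# addition mod 2^64, the seed is 0, and removal uses subtraction (the
# accumulator's inverse).
MODULUS = 2**64

def empty_hash():
    return 0  # the seed

def add_element(h, s):
    return (h + hash(s)) % MODULUS

def remove_element(h, s):
    return (h - hash(s)) % MODULUS

h1 = empty_hash()
for s in ["a", "b", "a"]:
    h1 = add_element(h1, s)

h2 = empty_hash()
for s in ["a", "a", "b"]:  # same multiset, different insertion order
    h2 = add_element(h2, s)

assert h1 == h2
# De-accumulating "b" leaves the hash of the multiset {a, a}:
assert remove_element(h1, "b") == add_element(add_element(empty_hash(), "a"), "a")
```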

## A group action emerges

Now we have a well-defined hash function $h$ on a free abelian group.

However, $h$ is **not** a homomorphism. There’s no reason hash accumulation needs to mesh well with addition of hashes. Instead, the family of operations “combine a hash with the hash of some fixed multiset” defines a group action on the set of hashes $H$. Let’s suppose for simplicity that $h$ is surjective, i.e., that every hash value can be achieved as the hash of some multiset. Kevin’s post gives the more nuanced case when this fails, and in that case you work within $\textup{im}(h)$ instead of all of $H$.

The group action is defined formally as a homomorphism

$$\phi: M \to S_H$$

where $\phi(f)$ is the permutation $x \mapsto \textup{acc}(x, h(f))$. Equivalently, we start from $x \in H$, pick some multiset $f'$ with $h(f') = x$, and output $h(f' + f)$.

The map $\phi$ is a homomorphism. The composition of two accumulations is order-independent because $\textup{acc}$ is commutative. This is how we view $\textup{acc}$ as “the binary operation” in the image of $\phi$, because combining the two permutations $\phi(f_1)$ and $\phi(f_2)$ gives the permutation $\phi(f_1 + f_2)$.

And now we can apply the first isomorphism theorem, that

$$M / \ker \phi \cong \textup{im}(\phi) \leq S_H$$

This is significant because any quotient of an abelian group is abelian, and this quotient is finite because $S_H$ is finite. This means that the group $\textup{im}(\phi)$ is isomorphic to

$$\mathbb{Z}/2^{k_1} \oplus \mathbb{Z}/2^{k_2} \oplus \dots \oplus \mathbb{Z}/2^{k_m}$$

where $k_1 + k_2 + \dots + k_m = 64$ (surjectivity and invertibility make the action on $H$ transitive and free, so the group has order $|H| = 2^{64}$), and where the operation in each component is the usual addition modulo $2^{k_i}$. The $i$-th summand corresponds to a block of $k_i$ bits of the hash, and within that block the operation is addition modulo $2^{k_i}$. Here the “block” structure is where XOR comes in. Each block can be viewed as a bitmask with zeros outside the block, and two members are XOR’ed together, which allows the operations to apply to each block independently.

For example, the group might be $\mathbb{Z}/2^4 \oplus \mathbb{Z}/2^{26} \oplus \mathbb{Z}/2 \oplus \mathbb{Z}/2$ for a 32-bit hash. The first block corresponds to 32-bit unsigned integers whose top 4 bits may be set but all other bits are zero. Addition is done within those four bits modulo $2^4 = 16$, leaving the other bits unchanged. Likewise, the second component has the top four bits zero and the bottom two bits zero, but the remaining 26 bits are summed mod $2^{26}$. XOR combines the bits from different blocks.

In one extreme case, you have only one block, your group is just $\mathbb{Z}/2^{64}$, and the usual addition combines hashes. In the other extreme, each bit is its own block, your group is $(\mathbb{Z}/2)^{64}$, and the operation is bitwise XOR.
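Concretely, blockwise combination can be sketched as follows; the particular layout (4 + 26 + 1 + 1 bits of a 32-bit hash) is just one possible choice, and the mask arithmetic is my own illustration:

```python
# A 32-bit hash split into blocks Z/2^4 (+) Z/2^26 (+) Z/2 (+) Z/2,
# combined with addition mod 2^width inside each block.
BLOCKS = [(28, 4), (2, 26), (1, 1), (0, 1)]  # (shift, bit width) per block

def combine(h1, h2):
    out = 0
    for shift, width in BLOCKS:
        mask = (1 << width) - 1
        a = (h1 >> shift) & mask
        b = (h2 >> shift) & mask
        out |= ((a + b) & mask) << shift  # addition mod 2^width per block
    return out

# Blockwise combination is commutative.
assert combine(0x12345678, 0x9ABCDEF0) == combine(0x9ABCDEF0, 0x12345678)
# Width-1 blocks reduce to XOR: (a + b) mod 2 == a ^ b on single bits.
assert combine(0b01, 0b01) & 0b11 == 0
```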

Note, if instead of 64 bits we used a hash of some other length $n$, then in the direct sum decomposition above, $2^n$ would be the product of the sizes of the components. Choosing a power-of-2 hash length maximizes the number of different structures you can have.

## Implications for hash function designers

Here’s the takeaway.

First, if you’re trying to design a hash function that avoids the degeneracies mentioned at the beginning of this article, then it will *have* to break one of the properties listed. This could happen, say, by maintaining additional state.

Second, if you’re resigned to use a commutative, invertible, accumulative hash, then you *might as well* make this forced structure explicit, and just pick the block structure you want to use in advance. Since no clever bit shifting will allow you to outrun this theorem, you might as well make it simple.

Until next time!

# Carnival of Mathematics #197

Welcome to the 197th Carnival of Mathematics!

197 is an unseemly number, as you can tell by its Wikipedia page, which currently says that it has “**indiscriminate**, **excessive**, or **irrelevant** examples.” How deviant. It’s also a repfigit, which means if you start a Fibonacci-type sequence with the digits 1, 9, 7, and then continue with each term being the sum of the previous three, then 197 shows up in the sequence. Indeed: **1, 9, 7**, 17, 33, 57, 107, **197**, …
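The repfigit (also called Keith number) property is easy to check directly; a quick sketch:

```python
# Seed the sequence with the digits of n, then extend by summing the last
# len(digits) terms; n is a repfigit if the sequence ever hits n exactly.
def is_repfigit(n):
    seq = [int(d) for d in str(n)]
    while seq[-1] < n:
        seq.append(sum(seq[-len(str(n)):]))
    return seq[-1] == n

assert is_repfigit(197)
```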

## Untangling the unknot

Keenan Crane et al. showcased a new paper that can untangle tangled curves quickly, and can do things like generate Hilbert-type space-filling curves on surfaces. It’s a long thread with tons of links to videos and reading materials, covering energy functions, functional analysis, Sobolev methods, and a custom inner product.

## Folding equilateral triangles without measuring

Dave Richeson shows off a neat technique for folding equilateral triangles using just paper and no measurements. Replies in the thread show the geometric series that converges to the right 60 degree angle.

## Shots fired at UMAP and t-SNE

Lior Pachter et al. study what sorts of structure are preserved by dimensionality reduction techniques like UMAP (which I have also used in a previous article) by comparing them against a genomics dataset with understood structure. They make some big claims about how UMAP and t-SNE destroy important structure, and they show how to contrive the dimensionality-reduction plot to look like an elephant even when there’s no elephantine structure in the data.

I’m no expert, but perhaps one best-case scenario for UMAP enthusiasts would be that their analysis only applies when you go from very high dimensions down to 2 just so you can plot a picture. But if you stop at an intermediate number of dimensions, you might still preserve a lot of the meaningful structure. Either way, they make a convincing pitch for Johnson-Lindenstrauss-style random linear reductions, which I’ve also covered here. Their paper is on bioRxiv.

## Studying the Sieve

Ben Peters Jones took up Grant Sanderson’s math video challenge and released a series of videos studying the Sieve of Eratosthenes.

## Additional submissions

- Tanya Khovanova remarks on their Number Gossip, that 1331 is the only nontrivial cube that can be written as .
- Ganit Charcha explains the math behind Montgomery Multiplication, i.e., modular exponentiation without computing a division. Super useful in cryptography!
- Anne Schwartz asked about your favorite math teaching articles, and received a big response with lots of interesting articles.
- Colin Beveridge shows off their favorite bag of tricks for determining which triangular numbers are also squares.
- An oldie, but new to me, was the series of computational notebooks, 12 Steps to Navier Stokes, which teaches the basics of Computational Fluid Dynamics.
- Nisar Khan has been drawing pictures of random number generators.

Be sure to submit fun math you find in September to the next carnival host!

# Searching for RH Counterexamples — Exploring Data

We’re ironically searching for counterexamples to the Riemann Hypothesis.

- Setting up Pytest
- Adding a Database
- Search Strategies
- Unbounded integers
- Deploying with Docker
- Performance Profiling
- Scaling up
- Productionizing

In the last article we added a menagerie of “production readiness” features like continuous integration tooling (automating test running and static analysis), alerting, and a simple deployment automation. Then I let it loose on AWS, got *extremely* busy with buying a house, forgot about this program for a few weeks (no alerts means it worked flawlessly!), and then saw my AWS bill.

So I copied the database off AWS using pg_dump (piped to gzip), terminated the instances, and inspected the results. A copy of the database is here. You may need git-lfs to clone it. If I wanted to start the search back up again, I could spin the instances back up and use `gunzip | psql` to restore the database, and it would start back up from where it left off. A nice benefit of all the software engineering work done thus far.

This article will summarize some of the data, show plots, and try out some exploratory data analysis techniques.

## Summary

We stopped the search mid-way through the set of numbers with 136 prime divisors.

The largest number processed was

```
1255923956750926940807079376257388805204
00410625719434151527143279285143764977392
49474111379646103102793414829651500824447
17178682617437033476033026987651806835743
3694669721205424205654368862231754214894
07691711699791787732382878164959602478352
11435434547040000
```

(one number, with line breaks inserted for display)

Which in factored form is the product of these terms

```
2^8 3^7 5^4 7^4 11^3 13^3 17^2 19^2 23^2 29^2
31^2 37^2 41^2 43^1 47^1 53^1 59^1 61^1 67^1 71^1
73^1 79^1 83^1 89^1 97^1 101^1 103^1 107^1 109^1 113^1
127^1 131^1 137^1 139^1 149^1 151^1 157^1 163^1 167^1 173^1
179^1 181^1 191^1 193^1 197^1 199^1 211^1 223^1 227^1 229^1
233^1 239^1 241^1 251^1 257^1 263^1 269^1 271^1 277^1 281^1
283^1 293^1 307^1 311^1 313^1 317^1 331^1 337^1 347^1 349^1
353^1 359^1 367^1 373^1 379^1 383^1 389^1 397^1 401^1 409^1
419^1 421^1 431^1 433^1 439^1 443^1 449^1 457^1 461^1 463^1
467^1 479^1 487^1 491^1 499^1 503^1 509^1 521^1 523^1 541^1
547^1 557^1 563^1 569^1 571^1 577^1
```

The best witness—the number with the largest witness value—was

```
38824169178385494306912668979787078930475
9208283469279319659854547822438432284497
11994812030251439907246255647505123032869
03750131483244222351596015366602420554736
87070007801035106854341150889235475446938
52188272225341139870856016797627204990720000
```

(again one number, with line breaks inserted for display)

which has witness value 1.7707954880001586, which is still significantly smaller than the needed 1.782 to disprove RH.

The factored form of the best witness is

```
2^11 3^7 5^4 7^3 11^3 13^2 17^2 19^2 23^2 29^2
31^2 37^1 41^1 43^1 47^1 53^1 59^1 61^1 67^1 71^1
73^1 79^1 83^1 89^1 97^1 101^1 103^1 107^1 109^1 113^1
127^1 131^1 137^1 139^1 149^1 151^1 157^1 163^1 167^1 173^1
179^1 181^1 191^1 193^1 197^1 199^1 211^1 223^1 227^1 229^1
233^1 239^1 241^1 251^1 257^1 263^1 269^1 271^1 277^1 281^1
283^1 293^1 307^1 311^1 313^1 317^1 331^1 337^1 347^1 349^1
353^1 359^1 367^1 373^1 379^1 383^1 389^1 397^1 401^1 409^1
419^1 421^1 431^1 433^1 439^1 443^1 449^1 457^1 461^1 463^1
467^1 479^1 487^1 491^1 499^1 503^1 509^1 521^1 523^1 541^1
547^1 557^1 563^1
```

The average search block took 4m15s to compute, while the max took 7m7s and the min took 36s.

The search ran for about 55 days (hiccups included), starting at 2021-03-05 05:47:53 and stopping at 2021-04-28 15:06:25. The total AWS bill—including development, and periods where the application was broken but the instances still running, and including instances I wasn’t using but forgot to turn off—was **$380.25**. When the application was running at its peak, the bill worked out to about **$100/month**, though I think I could get it much lower by deploying fewer instances, after we made the performance optimizations that reduced the need for resource-heavy instances. There is also the possibility of using something that integrates more tightly with AWS, such as serverless jobs for the cleanup, generate, and process worker jobs.

## Plots

When in doubt, plot it out. I started by writing an export function to get the data into a simpler CSV, which for each number $n$ only stored $\log(n)$ and the witness value.

I did this once for the final computation results. I’ll call this the “small” database because it only contains the largest witness value in each block. I did it again for an earlier version of the database before we introduced optimizations (I’ll call this the “large” database), which had all witness values for all superabundant numbers processed up to 80 prime factors. The small database was only a few dozen megabytes in size, and the large database was ~40 GiB, so I had to use postgres cursors to avoid loading the large database into memory. Moreover, the generated CSV was about 8 GiB in size, and so it required a few extra steps to sort it and get it into a format that could be plotted in a reasonable amount of time.

First, I used GNU `sort` to sort the file by the first column:

```
sort -t , -n -k 1 divisor_sums.csv -o divisor_sums_sorted.csv
```

Then, I needed to do some simple operations on massive CSV files, including computing a cumulative max, and filtering down to a subset of rows that are sufficient for plotting. After trying to use pandas and vaex, I realized that the old awk command line tool would be great at this job. So I wrote a simple awk script to process the data, and compute data used for the cumulative max witness value plots below.
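The core of that pass—computing a running max and keeping only the rows where it increases—could equivalently be sketched in Python (the two-column row format here, $\log(n)$ then witness value, is my assumption about the CSV layout):

```python
# Keep only the rows that strictly increase the running max witness value,
# which is all that's needed for a cumulative-max plot.
def cumulative_max(rows):
    best = float("-inf")
    out = []
    for log_n, witness in rows:
        if witness > best:
            best = witness
            out.append((log_n, witness))
    return out

assert cumulative_max([(1, 1.5), (2, 1.4), (3, 1.7)]) == [(1, 1.5), (3, 1.7)]
```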

Then finally we can use `vaex` to create two plots. The first is a heatmap of witness value counts. The second is a plot of the cumulative max witness value. For the large database:

And for the small database:

Note, the two ridges disagree slightly (the large database shows a longer flat line than the small database for the same range), because the superabundant enumeration doesn’t go in increasing order of $n$. So larger witness values in the $\log(n)$ range 400–500 are found later.

## Estimating the max witness value growth rate

The next obvious question is whether we can fit the curves above to estimate how far we might have to look to find the first witness value that exceeds the desired 1.782 threshold. Of course, any such extrapolation depends on the appropriateness of the underlying model.

A simple first guess would be split between two options: the real data is asymptotic, like a reciprocal curve approaching some number less than 1.782 (and hence this approach cannot disprove RH), or the real data grows slowly but unboundedly (perhaps doubly-logarithmically, like $\log \log n$), eventually surpassing 1.782 (and RH is false). For both cases, we can use scipy’s curve fitting routine as in this pull request.
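Since the doubly-logarithmic model is linear in its two parameters, a least-squares fit doesn’t strictly need scipy; here’s a pure-Python sketch (the input is assumed to be values of $\log(n)$ paired with max witness values, and the data below is made up for illustration):

```python
import math

# Least-squares fit of witness ~ a + b * log(log(n)). The model is linear
# in (a, b), so ordinary least squares suffices for a sketch.
def fit_loglog(log_ns, witnesses):
    xs = [math.log(t) for t in log_ns]  # t is log(n), so this is log(log(n))
    mean_x = sum(xs) / len(xs)
    mean_y = sum(witnesses) / len(witnesses)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, witnesses))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Made-up data that follows the model exactly is recovered exactly.
log_ns = [10, 100, 1000]
ys = [1.5 + 0.01 * math.log(t) for t in log_ns]
a, b = fit_loglog(log_ns, ys)
assert abs(a - 1.5) < 1e-9 and abs(b - 0.01) < 1e-9
```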

For the large database (roughly using log n < 400 since that’s when the curve flatlines due to the enumeration order), we get a reciprocal fit of

and a logarithmic fit of

The estimated asymptote is around 1.7757 in the first case, and the second case estimates we’d find an RH counterexample at around .

For the small database of only sufficiently large witness values, this time going up to about , the asymptotic approximation is

And the logarithmic approximation is

Now the asymptote is slightly lower, at 1.7748, and the logarithmic model approximates the counterexample can be found at approximately .

Both of the logarithmic approximations suggest that if we want to find an RH counterexample, we would need to look at numbers with thousands of prime factors. The first estimate puts a counterexample at about , the second at , so let’s say between 1k and 10k prime factors.

Luckily, we can actually jump forward in the superabundant enumeration to exactly the set of candidates with a given number $k$ of prime factors. So it might make sense to jump ahead to, say, 5k prime factors and search in that region. However, the size of a level set of the superabundant enumeration still grows exponentially in $k$. Perhaps we should (heuristically) narrow down the search space by looking for patterns in the distribution of prime factors for the best witness values we’ve found thus far. Perhaps the values of $n$ with the best witness values tend to have a certain concentration of prime factors.

## Exploring prime factorizations

At first, my thought was to take the largest witness values, look at their prime factorizations, and try to see a pattern when compared to smaller witness values. However, other than the obvious fact that the larger witness values correspond to larger numbers (more and larger prime factors), I didn’t see an obvious pattern from squinting at plots.

To go in a completely different direction, I wanted to try out the UMAP software package, a very nice and mathematically sophisticated library for high dimensional data visualization. It’s properly termed a *dimensionality reduction* technique, meaning it takes as input a high-dimensional set of data, and produces as output a low-dimensional embedding of that data that tries to maintain the same shape as the input, where “shape” is in the sense of a certain Riemannian metric inferred from the high dimensional data. If there is structure among the prime factorizations, then UMAP should plot a pretty picture, and perhaps that will suggest some clearer approach.

To apply this to the RH witness value dataset, we can take each pair $(n, w(n))$ of a number and its witness value, and associate it with a new (high dimensional) data point that pairs the witness value with the number’s prime factorization,

$(w(n), e_1, e_2, \dots, e_k)$,

where $n = p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k}$ is the factorization over the first $k$ primes, with zero exponents included so that all points have the same dimension. This pull request adds the ability to factorize and export the witness values to a CSV file as specified, and this pull request adds the CSV data (using git-lfs), along with the script to run UMAP, the resulting plots shown below, and the saved embeddings as `.npy` files (numpy arrays).
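Building those feature vectors is straightforward; here's a minimal, pure-Python sketch (the helper names and the fixed dimension of five primes are my choices for illustration, not the repository's actual code):

```python
def first_primes(k):
    # trial-division sieve for the first k primes
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def feature_vector(n, witness, primes):
    # exponent of each small prime in n, zero-filled to a fixed dimension,
    # prefixed by the witness value
    exps = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return [witness] + exps

primes = first_primes(5)
print(feature_vector(360, 1.75, primes))  # 360 = 2^3 * 3^2 * 5 -> [1.75, 3, 2, 1, 0, 0]
```

Each row of the CSV is one such vector, and the exponent columns are what UMAP embeds.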

When we do nothing special to the data and run it through UMAP, we see this plot.

It looks cool, but if you stare at it for long enough (and if you zoom in when you generate the plot yourself in matplotlib) you can convince yourself that it’s not finding much useful structure. The red dots (lower witness values) dominate, and the blue dots are spread haphazardly throughout the red regions. The “ridges” along the chart are probably due to how the superabundant enumeration skips lots of numbers, and that’s why it thins out on one end: since the enumeration is not uniform, fewer of the processed numbers are that large.

It also seemed like there was too much data. The plot above has some 80k points on it. After filtering down to just those points whose witness values are bigger than 1.769, we get a more manageable plot.

This is a bit more reasonable. You can see a stripe of blue dots going through the middle of the plot.

Before figuring out how that blue ridge might relate to the prime factor patterns, let’s take this a few steps further. Typically in machine learning contexts, it helps to normalize your data, i.e., to transform each input dimension into a standard Z-score with respect to the set of values seen in that dimension, subtracting the mean and dividing by the standard deviation. Since the witness values are so close to each other, they’re a good candidate for such normalization. Here’s what UMAP plots when we normalize the witness value column only.
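The normalization step itself is simple; here's a minimal numpy sketch of z-scoring a chosen column (the data values below are made up, with the first column playing the role of the witness value):

```python
import numpy as np

def zscore_columns(X, cols):
    # normalize the selected columns to zero mean and unit variance
    X = X.astype(float).copy()
    for c in cols:
        mu, sigma = X[:, c].mean(), X[:, c].std()
        X[:, c] = (X[:, c] - mu) / sigma
    return X

# toy data: witness value followed by two prime-exponent columns
data = np.array([[1.770, 2, 1], [1.772, 3, 0], [1.776, 5, 2], [1.774, 4, 1]])
normalized = zscore_columns(data, cols=[0])
# normalized[:, 0] now has mean ~0 and std ~1; this array is what gets
# fed to umap.UMAP().fit_transform for the plots below
```

Passing `cols=range(data.shape[1])` instead normalizes every column, which is the "normalize *all* the columns" variant discussed next.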

Now this is a bit more interesting! Here the colormap on the right is in units of standard deviation of witness values. You can see a definite bluest region, and it appears that the data is organized into long brushstrokes, where the witness values increase as you move from one end of the stroke to the other. At worst, this suggests that the dataset has structure that a learning algorithm could discover.

Going even one step further, what if we normalize *all* the columns? Well, it’s not as interesting.

If you zoom in, you can see that the same sort of “brushstroke” idea is occurring here too, with blue on one end and red on the other. It’s just harder to see.

We would like to study the prettiest picture and see if we can determine what pattern of prime numbers the blue region has, if any. The embedding files are stored on GitHub, and I put up (one version of) the UMAP visualization as an interactive plot via this pull request.

I’ve been sitting on this draft for a while, and while this article didn’t make a ton of headway, the pictures will have to do while I’m still dealing with my new home purchase.

Some ideas for next steps:

- Squint harder at the distributions of primes for the largest witness values in comparison to the rest.
- See if a machine learning algorithm can regress witness values based on their prime factorizations (and any other useful features I can derive). Study the resulting hypothesis to determine which features are the most important. Use that to refine the search strategy.
- Try searching randomly in the superabundant enumeration around 1k and 10k prime factors, and see if the best witness values found there match the log-log regression.
- Since witness values above a given threshold seem to be quite common, and because the UMAP visualization shows some possible “locality” structure for larger witness values, there is reason to think that if there is a counterexample to RH then there are probably many. So a local search method (e.g., local neighborhood search/discrete gradient ascent with random restarts) might give us a better sense of whether we are on the right track.
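That last idea can be sketched as a tiny hill climber over exponent vectors. The move set and the objective below are hypothetical stand-ins: the real objective would be the witness value computed from the factorization, and random restarts would wrap the climb in a loop over random starting points.

```python
def neighbors(exps):
    # assumed move set: bump one exponent by +/-1, keeping the sequence
    # non-negative and non-increasing (the shape of a superabundant candidate)
    for i in range(len(exps)):
        for d in (-1, 1):
            cand = list(exps)
            cand[i] += d
            if cand[i] >= 0 and all(cand[j] >= cand[j + 1] for j in range(len(cand) - 1)):
                yield cand

def hill_climb(start, score):
    # greedy discrete ascent: take the first improving neighbor until stuck
    cur = list(start)
    improved = True
    while improved:
        improved = False
        for nb in neighbors(cur):
            if score(nb) > score(cur):
                cur, improved = nb, True
                break
    return cur

# toy concave stand-in for the witness value
score = lambda e: sum(e) - 0.1 * sum(x * x for x in e)
print(hill_climb([3, 2, 1], score))  # -> [5, 5, 5]
```

If larger witness values really do cluster, a climber like this should drift toward the good regions without enumerating a whole level set.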

Until next time!