In the last article we set up pytest for a simple application that computes divisor sums and tries to disprove the Riemann Hypothesis. In this post we’ll show how to extend the application as we add a database dependency. The database stores the computed sums so we can analyze them after our application finishes.
As in the previous post, I’ll link to specific git commits in the final code repository to show how the project evolves. You can browse or check out the repository at each commit to see how it works.
Interface before implementation
The approach we’ll take highlights a principle of good testing and good software design: separate components by thin interfaces, so that the implementations of those interfaces can change later without requiring updates to lots of client code.
We’ll take this to the extreme by implementing and testing the logic for our application before we ever decide what sort of database we plan to use! In other words, the choice of database will be our last choice, making it inherently flexible to change. That is, first we iron out a minimal interface that our application needs, and then choose the right database based on those needs. This is useful because software engineers often don’t understand how the choice of a dependency (especially a database dependency) will work out long term, particularly as a prototype starts to scale and hit application-specific bottlenecks. Couple this with the industry’s trend of chasing hot new fads, and eventually you realize no choice is sacred. Interface separation is the software engineer’s only defense, and their most potent tool for flexibility. As a side note, Tom Gamon summarizes this attitude well in a recent article, borrowing the analogy from a 1975 investment essay The Winner’s Game by Charles Ellis. Some of his other articles reinforce the idea that important decisions should be made as late as possible, since that is the only time you know enough to make those decisions well.
Our application has two parts so far: adding new divisor sums to the database, and loading divisor sums for analysis. Since we’ll be adding to this database over time, it may also be prudent to summarize the contents of the database, e.g., to report the largest integer computed so far. This suggests the following first-pass interface, implemented in this commit.
```python
from abc import ABC, abstractmethod
from typing import List


class DivisorDb(ABC):
    @abstractmethod
    def load(self) -> List[RiemannDivisorSum]:
        '''Load the entire database.'''
        pass

    @abstractmethod
    def upsert(self, data: List[RiemannDivisorSum]) -> None:
        '''Insert or update data.'''
        pass

    @abstractmethod
    def summarize(self) -> SummaryStats:
        '''Summarize the contents of the database.'''
        pass
```
RiemannDivisorSum and SummaryStats are dataclasses, special classes intended to have restricted behavior: storing data and providing simple derivations of that data. For us this provides a more stable interface, because the contents of the return values can change over time without disrupting other code. For example, we might eventually want to store the set of divisors alongside their sum. Compare this to returning a list or tuple, which is brittle when consumed with things like tuple assignment.
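To make this concrete, here is a rough sketch of what those dataclasses might look like. The names `n`, `witness_value`, and `largest_computed_n` appear elsewhere in this post; `divisor_sum` and `largest_witness_value` are my guesses at the remaining fields, not the repository’s exact definitions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class RiemannDivisorSum:
    '''One computed record: n, its divisor sum, and the witness value.
    Field names beyond n and witness_value are illustrative guesses.'''
    n: int
    divisor_sum: int
    witness_value: float


@dataclass(frozen=True)
class SummaryStats:
    '''Aggregate facts about the database contents; None when empty.'''
    largest_computed_n: Optional[int]
    largest_witness_value: Optional[float]
```

Because the fields are named, adding a new field later (say, the divisor set itself) won’t break callers the way positional tuple unpacking would, and `frozen=True` additionally guards against accidental mutation.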
The other interesting tidbit about the commit is the use of abstract base classes (“ABC”, an awful name choice). Python has limited support for declaring an “interface” the way many other languages do. The pythonic convention was always to use its “duck typing” feature: just call whatever methods you want on an object, and any object that has those methods can be used in that spot. The mantra was, “if it walks like a duck and talks like a duck, then it’s a duck.” However, there was no way to say “a duck is any object that has a `quack` method, and those are the only allowed duck functions.” As a result, I often saw folks tie their code to one particular duck implementation. That said, there were some mildly cumbersome third-party libraries that enabled interface declarations. Better, recent versions of Python introduced the abstract base class as a means to enforce interfaces, and structural subtyping (`typing.Protocol`) to make interfaces interact with type hints when subtyping directly is not feasible (e.g., when the implementations live in different codebases).
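For illustration (the duck classes here are hypothetical, not from the repository), a structural interface via `typing.Protocol` looks like this: any class with a matching `quack` method satisfies the type, with no inheritance required.

```python
from typing import List, Protocol


class SupportsQuack(Protocol):
    def quack(self) -> str: ...


class Mallard:
    # No subclassing of SupportsQuack: the matching method signature
    # alone makes this class acceptable wherever SupportsQuack is expected.
    def quack(self) -> str:
        return "quack"


def chorus(ducks: List[SupportsQuack]) -> str:
    # A type checker verifies each duck structurally, at analysis time.
    return " ".join(d.quack() for d in ducks)
```

An ABC, by contrast, enforces the interface at runtime: instantiating a subclass that forgot to implement an abstract method raises a `TypeError`.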
Moving on, we can implement an in-memory database that can be used for testing. This is done in this commit. One crucial aspect of these tests is that they do not rely on the knowledge that the in-memory database is secretly a dictionary. That is, the tests use only the `DivisorDb` interface and never inspect the underlying dict. This allows the same tests to run against all implementations, e.g., using `pytest.mark.parametrize`. Also note the in-memory database is neither thread safe nor atomic, but for our purposes this doesn’t really matter.
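As a minimal sketch of such an in-memory implementation (with simplified stand-ins for the repository’s dataclasses, and without the `DivisorDb` base class to keep it self-contained): keying the dict by `n` makes `upsert` a plain assignment.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass(frozen=True)
class RiemannDivisorSum:
    n: int
    divisor_sum: int
    witness_value: float


@dataclass(frozen=True)
class SummaryStats:
    largest_computed_n: Optional[int]
    largest_witness_value: Optional[float]


class InMemoryDivisorDb:
    '''A dict-backed database for tests. The real class subclasses
    DivisorDb; the dict key is n, so re-upserting n overwrites it.'''

    def __init__(self) -> None:
        self.data: Dict[int, RiemannDivisorSum] = {}

    def load(self) -> List[RiemannDivisorSum]:
        return list(self.data.values())

    def upsert(self, data: List[RiemannDivisorSum]) -> None:
        for rds in data:
            self.data[rds.n] = rds

    def summarize(self) -> SummaryStats:
        if not self.data:
            return SummaryStats(None, None)
        return SummaryStats(
            largest_computed_n=max(self.data),
            largest_witness_value=max(
                r.witness_value for r in self.data.values()),
        )
```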
Injecting the Interface
With our first-pass database interface and implementation, we can write the part of the application that populates the database with data. A simple serial algorithm that computes divisor sums in batches of 100k until the user hits Ctrl-C is done in this commit.
```python
def populate_db(db: DivisorDb, batch_size: int = 100000) -> None:
    '''Populate the db in batches.'''
    starting_n = (db.summarize().largest_computed_n or 5040) + 1
    while True:
        ending_n = starting_n + batch_size
        db.upsert(compute_riemann_divisor_sums(starting_n, ending_n))
        starting_n = ending_n + 1
```
I only tested this code manually. The reason is that the line computing `starting_n` from the database summary is the only significant behavior not already covered by the `InMemoryDivisorDb` tests. (Of course, that line had a bug, later fixed in this commit.) I’m also expecting to change this code soon, and spending time on testing versus implementing features is a tradeoff that should not always fall on the side of testing.
Next let’s swap in a SQL database. We’ll use sqlite3, which comes packaged with Python and so needs no dependency management. The implementation in this commit exposes the same interface as the in-memory database, but the implementation is full of SQL queries. With this, we can upgrade our tests to run identically on both implementations. The commit looks large, but really I just indented all the existing tests and added the `pytest.mark.parametrize` annotation to the class definition (with a corresponding method argument on each test). This avoids adding a parametrize annotation to every individual test function. That wouldn’t be all that bad, but each new test would require the writer to remember the annotation, whereas the class-level annotation systematically requires the extra method argument.
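The pattern looks roughly like the following sketch, where database rows are simplified to (n, witness_value) pairs to keep it short, and where the real factory list would also include a sqlite-backed factory.

```python
import pytest


class InMemoryDivisorDb:
    '''Minimal stand-in; the real class implements the full DivisorDb
    interface with proper dataclass rows.'''

    def __init__(self):
        self.data = {}

    def upsert(self, rows):
        self.data.update({n: w for (n, w) in rows})

    def load(self):
        return sorted(self.data.items())


# One parametrize annotation on the class applies to every test method,
# so a new test only has to accept the db_factory argument, rather than
# its author having to remember a per-test annotation.
@pytest.mark.parametrize("db_factory", [InMemoryDivisorDb])
class TestDivisorDb:
    def test_upsert_then_load(self, db_factory):
        db = db_factory()
        db.upsert([(5040, 1.75)])
        assert db.load() == [(5040, 1.75)]

    def test_upsert_overwrites(self, db_factory):
        db = db_factory()
        db.upsert([(5040, 1.0)])
        db.upsert([(5040, 2.0)])
        assert db.load() == [(5040, 2.0)]
```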
And finally, we can switch the database population script to use the SQL database implementation. This is done in this commit. Notice how simple it is, and how it doesn’t require any extra testing.
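As a rough sketch of what sits behind the same interface (the table name `RiemannDivisorSums` and the `witness_value` column appear in the query below; the rest of the schema, the use of plain tuples for rows, and the abridged summary method are my simplifications), a sqlite-backed implementation can use `INSERT OR REPLACE` against a primary key on `n` to get upsert semantics:

```python
import sqlite3
from typing import List, Optional, Tuple


class SqliteDivisorDb:
    '''Sketch of a sqlite-backed DivisorDb; schema details are assumed.'''

    def __init__(self, db_name: str = ":memory:") -> None:
        self.connection = sqlite3.connect(db_name)
        self.connection.execute(
            '''CREATE TABLE IF NOT EXISTS RiemannDivisorSums (
                   n INTEGER PRIMARY KEY,
                   divisor_sum INTEGER,
                   witness_value REAL
               )''')

    def upsert(self, rows: List[Tuple[int, int, float]]) -> None:
        # The primary key on n turns re-inserting an existing n into
        # an update; the with-block commits the transaction.
        with self.connection:
            self.connection.executemany(
                '''INSERT OR REPLACE INTO RiemannDivisorSums
                   (n, divisor_sum, witness_value) VALUES (?, ?, ?)''',
                rows)

    def load(self) -> List[Tuple[int, int, float]]:
        cursor = self.connection.execute(
            'SELECT n, divisor_sum, witness_value '
            'FROM RiemannDivisorSums ORDER BY n')
        return cursor.fetchall()

    def largest_computed_n(self) -> Optional[int]:
        # Abridged summary; the real summarize() builds a SummaryStats.
        row = self.connection.execute(
            'SELECT MAX(n) FROM RiemannDivisorSums').fetchone()
        return row[0]
```

Because the tests touch only the interface, a sketch like this can be dropped into the parametrized test class with no changes to the test bodies.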
After running it a few times and getting a database with about 20 million rows, we can apply the simplest possible analysis: showing the top few witness values.
```
sqlite> select n, witness_value from RiemannDivisorSums where witness_value > 1.7 order by witness_value desc limit 100;
10080|1.7558143389253
55440|1.75124651488749
27720|1.74253672381383
7560|1.73991651920276
15120|1.73855867428903
110880|1.73484901030336
720720|1.73306535623807
1441440|1.72774021157846
166320|1.7269287425473
2162160|1.72557022852613
4324320|1.72354665986337
65520|1.71788900114772
3603600|1.71646721405987
332640|1.71609697536058
10810800|1.71607328780293
7207200|1.71577914933961
30240|1.71395368739173
20160|1.71381061514181
25200|1.71248203640096
83160|1.71210965310318
360360|1.71187211014506
277200|1.71124375582698
2882880|1.7106690212765
12252240|1.70971873843453
12600|1.70953565488377
8648640|1.70941081706371
32760|1.708296575835
221760|1.70824623791406
14414400|1.70288499724944
131040|1.70269370474016
554400|1.70259313608473
1081080|1.70080265951221
```
We can also confirm John’s claim that “the winners are all multiples of 2520,” as the best non-multiple-of-2520 up to 20 million is 18480, whose witness value is only about 1.69.
This multiple-of-2520 pattern is probably because 2520 is a highly composite number, i.e., it has more divisors than any smaller number, so its multiples tend to have large divisor sums. Digging in a bit further, it seems the smallest counterexample, if it exists, is necessarily a superabundant number. Such numbers have a nice structure described here that suggests a search strategy better than trying every number.
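To make the superabundant condition concrete, here is a brute-force sketch (far too slow for a real search, which would instead exploit the prime-factorization structure mentioned above): n is superabundant when σ(n)/n strictly exceeds σ(m)/m for every m < n.

```python
def divisor_sum(n: int) -> int:
    '''Sum of all positive divisors of n (the sigma function),
    by trial division up to sqrt(n).'''
    total = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i
        i += 1
    return total


def superabundant_up_to(limit: int):
    '''Brute-force list of superabundant numbers <= limit: keep n
    whenever sigma(n)/n beats the best ratio seen so far.'''
    best = 0.0
    result = []
    for n in range(1, limit + 1):
        ratio = divisor_sum(n) / n
        if ratio > best:
            best = ratio
            result.append(n)
    return result
```

Running `superabundant_up_to(60)` yields `[1, 2, 4, 6, 12, 24, 36, 48, 60]`, and every entry past 6 is indeed divisible by 12, hinting at the divisor-rich structure that 2520 takes to an extreme.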
Next time, we can introduce the concept of a search strategy as a new component to the application, and experiment with different search strategies. Other paths forward include building a front-end component, and deploying the system on a server so that the database can be populated continuously.