A Numerical Mystery From the 19th Century Finally Gets Solved
By Leila Sloman
August 15, 2022
Two mathematicians have proven Patterson’s conjecture, which was designed to explain a strange pattern in sums involving prime numbers.
"It turns out that Gauss sums are magical, that they just do wonderful things for God knows what reason."
Kristina Armitage for Quanta Magazine
In the early 1950s, a group of researchers at the Institute for Advanced Study embarked on a high-tech project. At the behest of John von Neumann and Herman Goldstine, the physicist Hedvig Selberg programmed the IAS’s 1,700-vacuum-tube computer to calculate curious mathematical sums whose origins stretched back to the 18th century.
The sums were related to quadratic Gauss sums, named for the famed mathematician Carl Friedrich Gauss. Gauss would choose some prime number p, then sum up numbers of the form
$latex e^{\frac{2\pi i n^2}{p}}$,
where n runs over the whole numbers from 0 up to p − 1.
Since their inception, quadratic Gauss sums have proved invaluable for tasks like counting solutions to certain types of equations. "It turns out that Gauss sums are magical, that they just do wonderful things for God knows what reason," said Jeffrey Hoffstein, a mathematician at Brown University.
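One instance of that magic, easy to see for yourself, is Gauss’s own evaluation: for an odd prime p, the quadratic Gauss sum equals exactly $latex \sqrt{p}$ when p leaves a remainder of 1 upon division by 4, and exactly $latex i\sqrt{p}$ when the remainder is 3. The short Python sketch below is an illustration added here, not something from the article; it simply checks this numerically for a few small primes.

```python
# Illustration only: compute quadratic Gauss sums numerically for a few small primes.
# Gauss proved the sum is sqrt(p) when p ≡ 1 (mod 4) and i*sqrt(p) when p ≡ 3 (mod 4).
import cmath
import math

def quadratic_gauss_sum(p):
    """Sum of exp(2*pi*i*n^2/p) over n = 0, 1, ..., p - 1."""
    return sum(cmath.exp(2j * math.pi * (n * n % p) / p) for n in range(p))

for p in (5, 7, 11, 13, 17, 19):
    g = quadratic_gauss_sum(p)
    print(f"p = {p:2d} (p mod 4 = {p % 4}):  sum ≈ {g.real:+.4f} {g.imag:+.4f}i,  sqrt(p) ≈ {math.sqrt(p):.4f}")
```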
In the mid-19th century, the German mathematician Ernst Eduard Kummer was toying with a close relative of these quadratic Gauss sums, in which the $latex n^2$ in the exponent is replaced by an $latex n^3$. Kummer noticed that these cubic Gauss sums tended to collect near particular values to a surprising degree — a keen observation that would fuel nearly two centuries of inquiry in number theory.
Unlike their quadratic cousins, cubic Gauss sums have no known simple formula, so their values are hard to pin down without computing them directly. Lacking such a formula, Kummer set about calculating cubic Gauss sums — and calculating, and calculating. "It was very common for them to do these kind of heroic computations by hand back then," said Matthew Young, a mathematician at Texas A&M University. After plowing through 45 sums, corresponding to the first 45 non-trivial primes (those that leave a remainder of 1 when divided by 3; for every other prime, the sum is simply zero), Kummer finally gave up.
Surveying his results, Kummer noticed something interesting. In theory, the sums could be anything between −1 and 1 (after being "normalized" — divided by a suitable constant). But when he did the calculations, he discovered that they were distributed in an odd way. Half the results were between ½ and 1, and only a sixth of them were between −1 and −½. They appeared to cluster around 1.
Kummer laid out his observations, along with a conjecture: If you somehow managed to plot all of the infinitely many cubic Gauss sums, you’d see most of them between ½ and 1; fewer between −½ and ½; and still fewer between −1 and −½.
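Kummer’s tally is easy to repeat today. The sketch below is a modern illustration, not Kummer’s or the IAS team’s actual procedure, and it assumes the standard normalization of dividing each sum by $latex 2\sqrt{p}$ (a stand-in for the "suitable constant" above), which confines every value to the interval from −1 to 1. It computes the normalized cubic Gauss sum for the first 45 non-trivial primes and counts how many land in each of Kummer’s three ranges.

```python
# Illustration: repeat Kummer's tally for the first 45 non-trivial primes (p ≡ 1 mod 3).
# For such p the cubic Gauss sum is a real number between -2*sqrt(p) and 2*sqrt(p);
# dividing by 2*sqrt(p) normalizes it into [-1, 1]. For p ≡ 2 mod 3 the sum is just 0.
import cmath
import math

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, math.isqrt(m) + 1))

def normalized_cubic_gauss_sum(p):
    s = sum(cmath.exp(2j * math.pi * pow(n, 3, p) / p) for n in range(p))
    return s.real / (2 * math.sqrt(p))  # the imaginary parts cancel out

primes = [p for p in range(2, 600) if is_prime(p) and p % 3 == 1][:45]
values = [normalized_cubic_gauss_sum(p) for p in primes]

print("between 1/2 and 1:   ", sum(v > 0.5 for v in values))
print("between -1/2 and 1/2:", sum(-0.5 <= v <= 0.5 for v in values))
print("between -1 and -1/2: ", sum(v < -0.5 for v in values))
```

Run on these 45 primes, the counts should come out lopsided toward the top range, echoing the pattern Kummer saw.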
Selberg, von Neumann and Goldstine set out to test this on their early computer. Selberg programmed it to calculate the cubic Gauss sums for all the non-trivial primes less than 10,000 — around 600 sums in all. (Goldstine and von Neumann would go on to author the paper; her contributions would end up relegated to a line of acknowledgment at the end.) They discovered that as the primes got bigger, the normalized sums became less inclined to cluster near 1. With convincing evidence that Kummer’s conjecture was wrong, mathematicians began to try to understand cubic Gauss sums in a deeper way that went beyond mere computation.
That process is now complete. In 1978, the mathematician Samuel Patterson ventured a solution to Kummer’s mathematical mystery, but couldn’t prove it. Then last fall, two mathematicians from the California Institute of Technology proved Patterson’s conjecture, at long last providing closure to Kummer’s musings from 1846.
Patterson first became hooked on the problem as a graduate student at the University of Cambridge in the 1970s. His conjecture was motivated by what happens when numbers are randomly placed anywhere between −1 and 1. If you add up N of these random numbers, the typical size of the sum will be $latex \sqrt{N}$ (it could be positive or negative). Likewise, if cubic Gauss sums were scattered evenly from −1 to 1, you’d expect N of them to add up to roughly $latex \sqrt{N}$.
With this in mind, Patterson added up N cubic Gauss sums, ignoring (for the moment) the requirement to stick to the prime numbers. He found that the sum was around $latex N^{5/6}$ — bigger than $latex \sqrt{N}$ (which can be written as $latex N^{1/2}$), but less than N. This value implied that the sums behaved like random numbers but with a weak force pressuring them toward positive values, called a bias. As N got larger and larger, the randomness would start to overwhelm the bias, and so if you somehow looked at all of the infinitely many cubic Gauss sums at once, they’d appear evenly distributed.
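The $latex \sqrt{N}$ benchmark itself is easy to check numerically. The snippet below is only an illustration of that random-sum heuristic, not of Patterson’s actual argument: it adds up N values drawn uniformly at random from −1 to 1 and compares the size of the total with $latex \sqrt{N}$ and with the faster-growing $latex N^{5/6}$.

```python
# Illustration of the heuristic: sums of N random values from [-1, 1] typically have
# size on the order of sqrt(N), which falls well short of N^(5/6) once N is large.
import math
import random

random.seed(1)
for N in (10**3, 10**4, 10**5, 10**6):
    total = abs(sum(random.uniform(-1.0, 1.0) for _ in range(N)))
    print(f"N = {N:>7}:  |sum| ≈ {total:9.1f}   sqrt(N) ≈ {math.sqrt(N):9.1f}   N^(5/6) ≈ {N ** (5 / 6):9.1f}")
```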
This seemingly explained everything: Kummer’s computations showing a bias, as well as the IAS computations refuting one.
But Patterson wasn’t able to do the same calculations for prime numbers, so in 1978, he officially wrote it down as a conjecture: If you add up the cubic Gauss sums for prime numbers, you should get the same $latex N^{5/6}$ behavior.
Soon after giving a talk about his work on the Kummer problem, Patterson was contacted by a graduate student named Roger Heath-Brown, who suggested incorporating techniques from prime number theory. The two teamed up and soon published an advance on the problem, but they still couldn’t show that Patterson’s predicted $latex N^{5/6}$ bias was accurate for primes.
Over the ensuing decades, there was little progress. Finally, at the turn of the millennium, Heath-Brown made another breakthrough, in which a tool he’d developed called the cubic large sieve played an essential role.
The cubic large sieve uses a series of calculations to relate the sum of cubic Gauss sums to a different sum. With this tool, Heath-Brown was able to show that if you add up the cubic Gauss sums for primes less than N, the result can’t be much bigger than $latex N^{5/6}$. But he thought that he could do better — that the sieve itself could be improved. If it could, it would lower the bound to $latex N^{5/6}$ exactly, thus proving Patterson’s conjecture. In a short line of text, he sketched out what he thought the best possible formula for the sieve would be.
Even with this new tool in hand, mathematicians were unable to advance further. Then two decades later, a lucky encounter between the Caltech postdoc Alexander Dunn and his supervisor Maksym Radziwiłł marked the beginning of the end. Before Dunn began his position in September of 2020, Radziwiłł proposed they work on Patterson’s conjecture together. But with the Covid-19 pandemic still raging, research and teaching continued remotely. Finally, in January 2021, chance — or fate — intervened when the two mathematicians unexpectedly bumped into each other in a Pasadena parking lot. "We cordially chatted, and we agreed that we should start meeting and talking math," wrote Dunn in an email. By March, they were working diligently on a proof of Patterson’s conjecture.
"It was exciting to work on, but extremely high risk," said Dunn. "I mean, I remember coming to my office at, like, 5 a.m. every morning straight for four or five months."
Dunn and Radziwiłł, like Heath-Brown before them, found the cubic large sieve indispensable for their proof. But as they used the formula that Heath-Brown had written down in his 2000 paper — the one he believed to be the best possible sieve, a conjecture that the number theory community had come to believe was true — they realized something wasn’t right. "We were able to prove that 1 = 2, after very, very complicated work," said Radziwiłł.
At that point, Radziwiłł was sure the mistake was theirs. "I was kind of convinced that we basically have an error in our proof." Dunn convinced him otherwise. The cubic large sieve, contrary to expectations, could not be improved on.
Armed with the knowledge that the cubic large sieve could not be improved, Dunn and Radziwiłł recalibrated their approach to Patterson’s conjecture. This time, they succeeded.
"I think that was the main reason why nobody did this, because this [Heath-Brown] conjecture was misleading everybody," said Radziwiłł. "I think if I told Heath-Brown that his conjecture is wrong, then he probably would figure out how to do it."
Dunn and Radziwiłł posted their paper on September 15, 2021. In the end, their proof relied on the generalized Riemann hypothesis, a famously unproved conjecture in mathematics. But other mathematicians see this as only a minor drawback. "We would like to get rid of the hypothesis. But we’re happy to have a result that’s conditional anyway," said Heath-Brown, who is now a professor emeritus at the University of Oxford.
For Heath-Brown, Dunn and Radziwiłł’s work is more than just a proof of Patterson’s conjecture. With its unexpected insight into the cubic large sieve, their paper brought a surprise ending to a story he’s been part of for decades. "I’m glad that I didn’t actually write in my paper, ‘I am sure that one can get rid of this,’" he said, referring to the bit of the sieve that Dunn and Radziwiłł discovered was essential. "I just said, ‘It would be nice if one can get rid of this. It seems possible you should be able to.’ And I was wrong — not for the first time."