
January 1, 2023

2023

 

2023 = 17 x 17 x 7 (A008865).

2023 is the product of the sum of its digits and the square of the sum of the squares of its digits: 2 + 0 + 2 + 3 = 7; 2² + 0² + 2² + 3² = 17; and 7 x 17 x 17 = 2023 (A257766).

2023 is a number that cannot be written as the sum of 3 squares.

2023 is a multiple of 7 whose sum of digits is equal to 7 (A063416).

2023 is the smallest of three consecutive numbers each divisible by a square (A070258).

2023 is the remainder when 7 to the 7th power is divided by 7! (A063709).
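
All of these claims are quick to verify; here is a short Python check (helper names are my own):

```python
# Quick checks of the 2023 facts above.
from math import factorial

digits = [2, 0, 2, 3]
s = sum(digits)                      # 7, the digit sum
sq = sum(d * d for d in digits)     # 17, the sum of squared digits
assert s * sq * sq == 2023          # 7 x 17 x 17

# 2023 is the remainder when 7^7 is divided by 7!
assert 7**7 % factorial(7) == 2023

# A multiple of 7 whose digit sum is 7
assert 2023 % 7 == 0 and s == 7

# 2023 cannot be written as a sum of three squares (brute force)
assert not any(a*a + b*b + c*c == 2023
               for a in range(45)
               for b in range(a, 45)
               for c in range(b, 45))

# 2023, 2024, 2025 are each divisible by a square greater than 1
def divisible_by_square(n):
    return any(n % (k * k) == 0 for k in range(2, int(n**0.5) + 1))

assert all(divisible_by_square(n) for n in (2023, 2024, 2025))
```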

January 1, 2022

2022

 

2022 = 2 x 3 x 337

2022 is MMXXII in Roman numerals.

2022 is a number divisible by the sum of its digits (A003219). It is the first of four consecutive numbers, each divisible by the sum of its digits: 2022, 2023, 2024, and 2025 (A141769).

2022 is the sum of two consecutive prime numbers: 2022 = 1009 + 1013 (A206329).

2022 has the same set of digits in base 3 (2202220) and base 10 (A037422).

In base 5, 2022 contains each of the digits from 0 to 4: 31042.
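
These base-conversion and digit-sum facts are also easy to confirm; here is a minimal Python sketch (the helper name is my own):

```python
# Quick checks of the 2022 facts above.
n = 2022
assert 2 * 3 * 337 == n

# Divisible by its digit sum, as are the next three numbers.
assert all(m % sum(int(d) for d in str(m)) == 0 for m in range(2022, 2026))

# Sum of two consecutive primes.
assert 1009 + 1013 == n

def to_base(m, b):
    """Render a positive integer m as a digit string in base b."""
    digits = ""
    while m:
        digits = str(m % b) + digits
        m //= b
    return digits

assert to_base(n, 3) == "2202220"
assert set(to_base(n, 3)) == set(str(n))      # same digit set as base 10
assert sorted(to_base(n, 5)) == list("01234")  # all of 0-4 appear in base 5
```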

March 18, 2021

How to Prove a Theorem So No One Else Can Claim It

The trouble with sitting down at a computer keyboard to enter a password is that someone may be looking over your shoulder. Because your password could be stolen as you type it, the computer system isn't completely secure.

But if you could somehow provide the computer with information that persuades the computer you know the password without actually giving away the password itself, you would be on your way to solving the security problem.

Furthermore, if no onlooker could reconstruct the password from the information you gave the computer, no one could break into the system—at least by using a purloined password.

The mathematical basis for such a scheme is available, and it depends on something called a "zero-knowledge" proof.

The idea is that a "prover" has found a proof for a theorem and wants to let a "verifier" know that he knows the proof without revealing the proof itself. The verifier can ask a special question that requires the equivalent of a yes-or-no answer. If the prover really knows the proof, he can answer the question correctly every time it is asked.

If the prover doesn't know the proof, he has only a 50 percent chance of being right each time. After, say, a dozen tries, the chances of fooling the verifier get very small.
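
The arithmetic behind that shrinking chance is simple: guessing right k times in a row happens with probability 1/2 raised to the k.

```python
# Chance that a prover without the proof answers correctly k times in a row.
for k in (1, 5, 12, 20):
    print(k, 0.5 ** k)
# After a dozen tries the chance is 1/4096, about 0.00024.
```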

Neither the question nor the possible answers give away even a hint of the proof itself—hence, the term zero-knowledge proof.

The concept of zero-knowledge proofs was first defined in 1985 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff. A year later, Micali, Oded Goldreich, and Avi Wigderson took a major step forward by showing that such a scheme works for a large class of mathematical theorems.

The three researchers demonstrated the procedure for a mathematical coloring problem, which involves ensuring that no two linked points in certain networks of connected points have the same color.

Manuel Blum then showed how to give an efficient zero-knowledge proof for any mathematical theorem.

One of the more difficult steps, Blum noted, is finding the right question for the verifier to ask. Once this is done, a zero-knowledge scheme can handle any theorem posed within any logic system.

All a verifier finds out is that a theorem is provable and that the prover actually knows the proof. And because the prover has to use at least as many words as the proof itself contains, he gives away an upper limit for the proof's length.

To show how the scheme works, Blum chose an example from graph theory. Any network of points (or nodes) connected by lines (or edges) is called a graph. In Blum's example, the graph consists of a star-shaped pattern of lines linking 11 points.

The prover has found a continuous path among the links that passes only once through each of the 11 points on the graph and returns to where it started. This special type of path is called a Hamiltonian cycle.

The prover's aim is to persuade a verifier that such a path is known without giving the verifier the slightest idea of how to construct the path.

To do this in Blum's example, the prover privately marks 11 nodes along the circumference of a circle and labels them randomly from 1 to 11. The nodes on the circle are then connected in the same way as the points in the original graph. That is, lines would join nodes 1 and 5, 2 and 6, and so on.


Deciding whether a particular graph—a network of points and connecting lines—has a Hamiltonian cycle is often difficult. This involves finding a continuous path along the links that passes only once through each of the graph's points and then returns to where it started. In this example, such a path exists through the points in the order 1, 5, 6, 2, 8, 4, 10, 11, 9, 3, 7, and then back to 1. For a zero-knowledge proof, the star-shaped graph (above) can be redrawn so that all the points fall on the circumference of an imaginary circle (below) yet the connecting lines still join the same points as in the original graph.


The resulting diagram is covered up by, say, an erasable opaque film like that used on some lottery or contest tickets.

The verifier can ask the prover to uncover the complete graph, which shows that all the points are properly linked, or she can ask to see the Hamiltonian cycle.

In the latter case, the prover erases enough of the film to reveal just the lines that make up the cycle. He can do this only if he knows the right path. However, because the nodes are still covered up, the verifier doesn't know the actual path from point to point. All she can verify is that a Hamiltonian cycle exists. This process can be repeated as many times as the verifier wishes.

Because the prover doesn't know whether the verifier will ask for the graph or the cycle, he has to be ready for either choice and therefore must know the cycle. Failure to produce either the correct graph or the cycle during any turn is equivalent to a wrong answer, and the verifier then knows that either the prover is lying or he doesn't actually have the proof.
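
A round-by-round simulation makes the soundness argument concrete. This toy model (an illustration only, not cryptography) tracks just one thing: whether the answer a cheating prover prepared in advance happens to match the verifier's random challenge.

```python
import random

# Each round, a cheater who lacks the Hamiltonian cycle can prepare a
# convincing answer for only one of the two possible challenges and
# must guess which one the verifier will ask.
def cheater_survives(rounds):
    for _ in range(rounds):
        prepared = random.choice(("graph", "cycle"))    # cheater's guess
        challenge = random.choice(("graph", "cycle"))   # verifier's coin flip
        if prepared != challenge:
            return False   # caught: cannot answer the unprepared question
    return True

random.seed(1)
trials = 100_000
survived = sum(cheater_survives(12) for _ in range(trials))
print(survived / trials)   # near 0.5**12, about 0.00024
```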

As outlined, this scheme sounds somewhat cumbersome. But the opaque film used in the example can be replaced by encryption schemes to hide information. Thus, proof checking can be done electronically when the whole procedure is encoded as strings of binary digits. This makes it possible to use this concept for password protocols and in cryptological games like tossing a coin by telephone or exchanging secret keys.

And in the sometimes turbulent world of mathematics research, it gives a wary mathematician a way to claim credit for being the first to find a particular proof without the necessity of giving away the proof's details. All that someone else can find out, until the proof itself is finally revealed, is that a particular theorem is provable.

February 24, 2021

Puzzling Groups and Ramsey Numbers

Throughout his long, itinerant life, Paul Erdős (1913-1996) spent most of his waking hours and, apparently, all his sleeping hours doing mathematics. He was a superb problem solver, and his phenomenal memory allowed him to cite exact references to thousands of mathematical papers, including their page numbers.

"If you don't know how to attack a particular problem, ask Erdős" was the constant refrain among his colleagues and collaborators. He was a one-man clearinghouse for information on the status of unsolved problems in the fields of mathematics that attracted his attention (see "Paul Erdos and an Infinity of Problems").

By the time he died in 1996, Erdős had also earned a reputation as the greatest problem poser of all time. He had the knack of asking just the right mathematical question—one that was neither so simple that it could be solved in a minute nor so difficult that a solution was unattainable. Ramsey theory, which offered a wide variety of challenges, was one of his joys.

A typical question involving Ramsey theory concerns how many numbers (see "Pigeonhole Congestion"), points (see "Planes of Budapest"), or specified objects are needed to guarantee the presence of a certain structure or pattern. You look for the smallest "universe" that automatically contains a given object.

For example, it may be the smallest group of arbitrarily positioned stars that would include a convex quadrilateral or the number of people required to ensure the attendance of three strangers or three acquaintances at a gathering (see "Party Games").

Fascinated by such problems, Paul Erdős particularly liked those he called "party puzzles," which involve different combinations of strangers and acquaintances brought together at some sort of gathering. But he didn't usually think of these problems in terms of people—except as a way of describing them informally.

Erdős reasoned in terms of points, lines, and colors—all elements of a branch of mathematics known as graph theory. In general, a graph (in this context) consists of an array of points, or vertices, connected by straight lines, which are often called edges.

To convert a party puzzle into a graph problem, you draw a point for each person present. You then draw lines between the points, with red lines joining two people who know each other and blue lines for two people who are strangers. When all pairs of points are joined, the resulting network of points and lines is known as a complete graph.


Depending on the relationships specified in a given case, such a graph may contain only red lines, only blue lines, or a mixture of red and blue lines joining the points. The problem comes down to proving that no matter how the lines are colored, you can't avoid producing, in the case of six points (or people), either a red triangle (representing three mutual acquaintances) or a blue triangle (three strangers).

Such a monochromatic triangle is guaranteed for a complete graph of six points, but not necessarily for a graph of five or fewer points. Hence, the minimum number of points that necessarily has the requisite pattern is six, often designated by the so-called Ramsey number R(3,3), where the first number inside the parentheses gives the number of acquaintances and the second gives the number of strangers.
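
Both halves of the R(3,3) = 6 claim can be checked exhaustively; this Python sketch (function names are my own) colors every edge of a complete graph both ways and looks for monochromatic triangles:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """True if some triangle has all three edges the same color."""
    edges = list(combinations(range(n), 2))
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def ramsey_33_holds(n):
    """True if every 2-coloring of K_n contains a monochromatic triangle."""
    m = n * (n - 1) // 2   # number of edges, so 2**m colorings to check
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=m))

print(ramsey_33_holds(5))   # False: K5 admits a triangle-free coloring
print(ramsey_33_holds(6))   # True: unavoidable on six points
```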

What about subgroups of four strangers or four acquaintances? It turns out that an array of 17 points is insufficient to guarantee that four points will be linked by the same color. Only a graph of 18 or more points will always do the job. Thus, R(4,4) = 18.


In a graph representing seventeen people (left), it's possible to color the lines so that no four points are connected by a network of lines that are completely blue or completely red. In contrast, in a graph representing eighteen people (right), there is always such a network, meaning that a group of four mutual acquaintances or four mutual strangers will always be present in such a gathering.

In other words, at a dinner party of 18 there will always be at least four mutual acquaintances or four mutual strangers, but that may not necessarily happen when only 17 people are present.

It may be tempting to consider Ramsey's theorem in drawing up a guest list to ensure an appropriate balance of novelty and familiarity in some random gathering—guests selected, perhaps, by sticking pins in a phone book.

The trouble is that calculating the minimum number of partygoers to have—say, seven acquaintances or nine strangers—can be extraordinarily difficult. Though Ramsey's theorem guarantees the existence of such arrangements, determining their actual size is an onerous task.

The chief problem is that the number of cases that must be checked escalates rapidly with each step up in the size of the group. For example, a group of five represents 1,024 (2¹⁰) different cases, a group of six has 32,768 (2¹⁵) possibilities, and a group of seven has 2,097,152 (2²¹) possibilities.

A prodigious amount of effort involving considerable mathematical ingenuity has gone into producing the rather scant table of Ramsey numbers known in the year 2000 (below). In several cases, mathematicians have been able to find the answer only to within a range of the true value—all for the sake of solving an intriguing puzzle rather than for any particular practical application.


A Ramsey number represents the minimum number of people needed to guarantee the presence of a given number of mutual acquaintances and a given number of mutual strangers. In some cases, mathematicians have not yet pinpointed the actual Ramsey number and can give only the range within which the number must fall.

In the last few decades, computers have played a significant role in determining Ramsey numbers. For example, in 1990, Brendan McKay and Zhang Ke Min relied on computers to help them sift through thousands upon thousands of possibilities to determine R(3,8).

A little later, McKay collaborated with Stanislaw P. Radziszowski in a concerted effort to find R(4,5). Checking all possible graphs was out of the question. Instead, McKay and Radziszowski focused on developing efficient procedures for mathematically "gluing" together small graphs, which could be checked more easily, to make graphs large enough for the problem at hand.

Because their search technique involved using the same computer program many times with different data, the researchers could readily partition their task into small pieces. This meant that they could do the necessary computations on many desktop computers rather than having to rely on a single supercomputer.

Both of the institutions where the researchers were based had a large number of workstations located in staff offices and student laboratories. Many of these machines were often idle during the night or on weekends. By commandeering these resources at odd times, the mathematicians could have as many as 110 computers working simultaneously on the problem.

By early 1993, McKay and Radziszowski had their answer: R(4,5) = 25. At that time, however, the prospect of cracking the next candidate, R(5,5), appeared bleak.

Erdős liked to tell an allegorical tale about the difficulties of determining Ramsey numbers. Suppose an evil spirit were to come to Earth and declare, "You have two years to find R(5,5). If you don't, I will exterminate the human race." In such an eventuality, it would be wise to get all the computers in the world together to solve the problem. "We might find it in two years," Erdős predicted.

But if the evil spirit were to ask for R(6,6), Erdős noted, it would be best to forgo any attempt at the computation and instead launch a preemptive strike against the spirit to destroy him before he destroys us.

On the other hand, "if we could get the right answer just by thinking, we wouldn't have to be afraid of him, because we would be so clever that he couldn't do any harm," Erdős concluded.

Erdős himself was a master of innovative techniques for solving problems, sometimes even exploiting randomness and probability to establish the existence of structures that he was determined to corral in his many party puzzles.


January 12, 2021

The Limits of Mathematics

At the beginning of the 20th century, the German mathematician David Hilbert (1862–1943) advocated an ambitious program to formulate a system of axioms and rules of inference that would encompass all mathematics, from basic arithmetic to advanced calculus. His dream was to codify the methods of mathematical reasoning and put them within a single framework.



Hilbert insisted that such a formal system of axioms and rules should be consistent, meaning that you can't prove an assertion and its opposite at the same time. He also wanted a scheme that is complete, meaning that you can always prove a given assertion either true or false. He argued that there had to be a clear-cut mechanical procedure for deciding whether a certain proposition follows from a given set of axioms.

Hence, it would be possible, though not actually practical, to run through all possible propositions, starting with the shortest sequences of symbols, and check which ones are valid. In principle, such a decision procedure would automatically generate all possible theorems in mathematics.

What Hilbert was saying is that "we can solve a problem if we are clever enough and work at it long enough," mathematician Gregory J. Chaitin wrote in his 1998 book The Limits of Mathematics: A Course on Information Theory and the Limits of Formal Reasoning. "He didn't believe that in principle there was any limit to what mathematics could achieve."

In the 1930s, Kurt Gödel (1906–1978), followed by Alan Turing (1912–1954) and others, proved that no such decision procedure is possible for any system of logic made up of axioms and propositions sufficiently sophisticated to encompass the kinds of problems that mathematicians work on every day.

"More precisely, what Gödel discovered was that the plan fails even if you just try to deal with elementary arithmetic, with the numbers 0, 1, 2, 3,… and with multiplication and addition," Chaitin wrote in his 2004 paper META MATH! The Quest for Omega. "Any formal system that tries to contain the whole truth and nothing but the truth about addition, multiplication, and the numbers 0, 1, 2, 3,… will have to be incomplete."

In Gödel's realm, no matter what the system of axioms or rules is, there will always be some assertion that can be neither proved nor invalidated within the system. Indeed, mathematics is full of conjectures—assertions awaiting proof—with no assurance that definitive answers even exist.

Turing's argument involved mathematical entities known as real numbers. Suppose you happen upon the number 1.6180339887. It looks vaguely familiar, but you can't quite place it. You would like to find out whether this particular sequence of digits is special in some way, perhaps as the output of a specific formula or the value of a familiar mathematical constant.

It turns out that the given number is the value, rounded off, of the so-called golden ratio, which can also be written as (1 + √5)/2, an example of a real number. Given that expression, which represents an infinite number of decimal digits, you can compute its value to any number of decimal places. Going in the opposite direction from the given rounded-off number to the expression, however, is much more difficult and problematic.
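
Given the closed form, computing more digits is routine; for instance, with Python's decimal module (the precision here is an arbitrary choice):

```python
from decimal import Decimal, getcontext

# Evaluate (1 + sqrt(5)) / 2 to 30 significant digits.
getcontext().prec = 30
phi = (1 + Decimal(5).sqrt()) / 2
print(phi)   # 1.6180339887... carried out to 30 digits
```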

For example, it's possible that if the mystery number were available to an extra decimal place, the final digit would no longer match the decimal digits of the golden ratio. You would have to conclude that the given number is not an approximation of the golden ratio. Indeed, the extended string of digits could represent the output of a completely different expression or formula, or even part of a random sequence. It's impossible to tell for sure. There isn't enough information available.

To sort through the relationship between random sequences and the types of numbers that mathematicians and scientists use in their work, Chaitin defined the "complexity" of a number as the length of the shortest computer program (or set of instructions) that would spew out the number.

"The minimum number of bits—what size string of zeros and ones—needed to store the program is called the algorithmic information content of the data," Chaitin wrote in the article "The Limits of Reason," published in the March 2006 Scientific American. "Thus, the infinite sequence of numbers 1, 2, 3,… has very little algorithmic information; a very short computer program can generate all those numbers."

"It does not matter how long the program must take to do the computation or how much memory it must use—just the length of the program in bits counts," he added.

Similarly, suppose a given number consists of 100 1s. The instruction to the computer would be simply "print 1, 100 times." Because the program is substantially shorter than the sequence of 100 1s that it generates, the sequence is not considered random. If a sequence is disorderly enough that any program for printing it out cannot be shorter than the sequence itself, the sequence counts as algorithmically random. Hence, an algorithmically random sequence is one for which there is no compact description.
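
Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a rough feel for the distinction. Here zlib is only a crude proxy for algorithmic information content, not a measure of it:

```python
import random
import zlib

# A highly patterned string has a short description ("print 1, 100,000
# times"); a random one does not.
patterned = b"1" * 100_000
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(100_000))

print(len(zlib.compress(patterned)))   # just a few hundred bytes
print(len(zlib.compress(noisy)))       # nearly 100,000: no compact description
```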

Interestingly, the number pi (the ratio of a circle's circumference to its diameter), which is expressed by an infinite number of digits, has little algorithmic information content because a computer can use a relatively small program to generate the number, digit by digit: 3.14159…. On the other hand, a random number with merely 1 million digits has a much larger amount of algorithmic information.

Chaitin proved that no program can generate a number more complex than itself. In other words, "a 1-pound theory can no more produce a 10-pound theorem than a 100-pound pregnant woman can birth a 200-pound child," he liked to say.

Conversely, Chaitin also showed that it is impossible for a program to prove that a number more complex than the program is random. Hence, to the extent that the human mind is a kind of computer, there may be a type of complexity so deep and subtle that the intellect could never grasp it. Whatever order may lie in the depths would be inaccessible, and it would always appear to us as random.

At the same time, proving that a sequence is random presents insurmountable difficulties. There's no way to be sure that we haven't overlooked a hint of order that would allow even a small compression in the computer program that produces the sequence.

From a mathematical point of view, Chaitin's result suggests that we are far more likely to find randomness than order within certain domains of mathematics. Indeed, his complexity version of Gödel's theorem states: Although almost all numbers are random, there is no formal axiomatic system that will allow us to prove this fact.

Chaitin's work indicates that there is an infinite number of mathematical statements that you can make about, say, arithmetic that can't be reduced to the axioms of arithmetic. So there's no way to prove whether the statements are true or false by using arithmetic. In Chaitin's view, that's practically the same as saying that the structure of arithmetic is random.

"What I've constructed and exhibited are mathematical facts that are true… by accident," he said. "They're mathematical facts which are analogous to the outcome of a coin toss…. You can never actually prove logically whether they're true."

This doesn't mean that anarchy reigns in mathematics, only that mathematical laws of a different kind might apply in certain situations. In such cases, statistical laws hold sway and probabilities describe the answers that come out of equations. Such problems arise when you ask whether an equation involving only whole numbers has an infinite number of whole-number solutions, a finite number, or none at all.

"In the same way that it is impossible to predict the exact moment at which an individual atom undergoes radioactive decay, mathematics is powerless to answer particular questions," Chaitin stated. "Nevertheless, physicists can still make reliable predictions about averages over large ensembles of atoms. Mathematicians may in some cases be limited to a similar approach."

That makes mathematics much more of an experimental science than many mathematicians would like to admit.

Chaitin went further. Human creativity is absolutely necessary for mathematical work, he argued, and "intuition cannot be eliminated from mathematics."

Originally posted March 6, 2006

January 1, 2021

2021

 

2021 = 43 x 47 (the product of two consecutive prime numbers: A006094). As the product of two primes, 2021 is a semiprime.

2021 = 45² − 4 (A028347).

The Euler polynomial x² + x + 41 generates an unbroken sequence of 40 primes, starting with x = 0. 2021 is the third composite number that the formula produces (for x = 44) (A145292).

2021 is the sum of the first 24 pairs of consecutive primes (A102724).

The sum of the divisors of 2021 (1, 43, 47, 2021) is palindromic: 2112.
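
A few lines of Python (helper names are my own) confirm these claims:

```python
# Quick checks of the 2021 facts above.
def is_prime(m):
    return m > 1 and all(m % k for k in range(2, int(m**0.5) + 1))

n = 2021
assert n == 43 * 47 and is_prime(43) and is_prime(47)
assert n == 45**2 - 4

# Euler's polynomial x^2 + x + 41: primes for x = 0..39, and 2021 is
# its third composite value, at x = 44.
euler = lambda x: x * x + x + 41
assert all(is_prime(euler(x)) for x in range(40))
composites = [x for x in range(50) if not is_prime(euler(x))]
assert composites[:3] == [40, 41, 44] and euler(44) == n

# The divisor sum is the palindrome 2112.
divisor_sum = sum(d for d in range(1, n + 1) if n % d == 0)
assert divisor_sum == 2112 and str(divisor_sum) == str(divisor_sum)[::-1]
print("all checks pass")
```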

November 18, 2020

Lucky Numbers

Hunting for prime numbers, those evenly divisible only by themselves and 1, requires a sieve to separate them from the rest. For example, the sieve of Eratosthenes, named for a Greek mathematician of the third century B.C., generates a list of prime numbers by the process of elimination.

To find all prime numbers less than, say, 100, the hunter writes down all the integers from 2 to 100 in order (1 doesn't count as a prime). First, 2 is circled, and all multiples of 2 (4, 6, 8, and so on) are struck from the list. That eliminates composite numbers that have 2 as a factor.

The next unmarked number is 3. That number is circled, and all multiples of 3 are crossed out. The number 4 is already crossed out, and its multiples have also been eliminated. Five is the next unmarked integer. The procedure continues in this way until only prime numbers are left on the list. Though the sieving process is slow and tedious, it can be continued to infinity to identify every prime number.
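
The procedure translates directly into a few lines of Python (a sketch; the function name is my own):

```python
# Sieve of Eratosthenes, exactly as described: take the next unmarked
# number and strike out all of its multiples.
def eratosthenes(limit):
    alive = [True] * (limit + 1)
    alive[0] = alive[1] = False   # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if alive[p]:
            for multiple in range(p * p, limit + 1, p):
                alive[multiple] = False
    return [n for n, keep in enumerate(alive) if keep]

print(eratosthenes(100))   # [2, 3, 5, 7, ..., 97]: the 25 primes below 100
```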

Other types of sieves isolate different sequences of numbers. Around 1955, the mathematician Stanislaw Ulam (1909-1984) identified a particular sequence made up of what he called "lucky numbers," and mathematicians have been playing with them ever since.

Starting with a list of integers, including 1, the first step is to cross out every second number: 2, 4, 6, 8, and so on, leaving only the odd integers. The second integer not crossed out is 3. Cross out every third number not yet eliminated. This gets rid of 5, 11, 17, 23, and so on.

The third surviving number from the left is 7; cross out every seventh integer not yet eliminated: 19, 39,…. Now, the fourth number from the beginning is 9. Cross out every ninth number not yet eliminated, starting with 27.

This particular sieving process yields certain numbers that permanently escape getting "killed." That's why Ulam called them "lucky."

Lucky numbers less than 200: 1, 3, 7, 9, 13, 15, 21, 25, 31, 33, 37, 43, 49, 51, 63, 67, 69, 73, 75, 79, 87, 93, 99, 105, 111, 115, 127, 129, 133, 135, 141, 151, 159, 163, 169, 171, 189, 193, 195.
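
A minimal sketch of this sieve in Python (the function name and termination condition are my own choices); note that the culling is by position in the surviving list, not by value:

```python
# The lucky-number sieve described above.
def lucky_numbers(limit):
    numbers = list(range(1, limit + 1, 2))   # first pass removes the evens
    i = 1
    while i < len(numbers) and numbers[i] <= len(numbers):
        step = numbers[i]               # next surviving sieve value
        del numbers[step - 1::step]     # cross out every step-th survivor
        i += 1
    return numbers

luckies = lucky_numbers(200)
print(luckies[:10])   # [1, 3, 7, 9, 13, 15, 21, 25, 31, 33]

# The lucky analogue of Goldbach's conjecture, checked for small cases:
lucky_set = set(lucky_numbers(1000))
assert all(any(n - a in lucky_set for a in lucky_set)
           for n in range(2, 1000, 2))
```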

What's remarkable is that the "luckies," though generated by a sieve based entirely on a number's position in an ordered list, share many properties with primes. For example, there are 25 primes less than 100, and 23 luckies less than 100.

Indeed, it turns out that primes and luckies come up about equally often within given ranges of integers. The distances between successive primes and the distances between successive luckies also keep increasing as the numbers increase. In addition, the number of twin primes—primes that differ by 2—is close to the number of twin luckies.

Perhaps the most famous problem involving primes still unsolved is Goldbach's conjecture, which states that every even number greater than 2 is the sum of two primes. Luckies are featured in a similar conjecture, also unsolved: Every even number is the sum of two luckies. Computer searches have so far not found an exception.

Martin Gardner described many more features of lucky numbers in a delightful article, "Lucky numbers and 2187," in a 1997 issue of The Mathematical Intelligencer. "There is a classic proof by Euclid that there is an infinity of primes," he wrote. "Although it is easy to show there is also an infinity of lucky numbers, the question of whether an infinite number of luckies are primes remains, as far as I know, unproved."


How did the topic of lucky numbers happen to come up? The house where Gardner grew up in Tulsa, Okla., had the address 2187 S. Owasso. "Of course I never forgot this number," he said. It also happens to be one of the lucky numbers.

Gardner's imaginary friend, the noted numerologist Dr. Irving Joshua Matrix, can readily find additional remarkable properties associated with that number. Exchange the last two digits of 2187 to make 2178, multiply by 4, and you get 8712, the second number backward.

Take 2187 from 9999 and the result is 7812, the number in reverse. Moreover, the first four digits of the constant e, 2718, and the number of cubic inches in a cubic foot, 12³ = 1728, are each permutations of 2187!

However, to those inclined to seeing meaning in certain numbers, Dr. Matrix issued the following warning: "Every number has endless unusual properties."

Originally posted September 8, 1997

November 2, 2020

Digits, Squares, and Cycles

Fascinating patterns lurk among the digits of whole numbers.

Pick a positive integer, say 57. Square each of the digits, then add the squares together: 5² + 7² = 25 + 49 = 74. Do the same thing with the digits of 74: 7² + 4² = 49 + 16 = 65. Keep repeating the procedure, using successive sums of squares.


You end up with the following sequence of numbers: 57, 74, 65, 61, 37, 58, 89, 145, 42, 20, 4, 16, 37, 58, 89, 145, 42, 20, 4, 16, 37, and so on. Notice that after a few steps, the numbers begin repeating themselves in a definite cycle (or loop): 37, 58, 89, 145, 42, 20, 4, and 16.

Try another number, say 88. The following sequence arises: 88, 128, 69, 117, 51, 26, 40, 16, 37, 58, 89, 145, 42, 20, 4, 16, 37, 58, 89, 145, 42, and so on. It takes a little longer to get there and the cycle's entry point (16) is different, but the same repeating set of numbers comes up.

Amazingly, it turns out that for a given positive integer, this procedure always leads to one of just two cycles: either the cycle of numbers noted above or a string of 1s. For instance, starting with 19 produces the sequence 82, 68, 100, 1, 1, 1, and so on. Beginning with 123,457 leads to the repeating set of numbers 4, 16, 37, 58, 89, 145, 42, and 20.
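
The procedure is easy to automate; this Python sketch (function names are my own) follows the iteration until it detects the repeating loop:

```python
# Iterate the sum-of-powered-digits map and report the cycle it enters.
def step(n, power=2):
    return sum(int(d) ** power for d in str(n))

def trajectory_cycle(n, power=2):
    seen = []
    while n not in seen:
        seen.append(n)
        n = step(n, power)
    return seen[seen.index(n):]   # the repeating loop

print(trajectory_cycle(57))   # [37, 58, 89, 145, 42, 20, 4, 16]
print(trajectory_cycle(19))   # [1]
```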

Mathematician Eugene D. Nichols first encountered a mathematical proof establishing this cyclic digital behavior in a Polish book of problems by Hugo Steinhaus, published in 1958. However, Steinhaus proved the theorem only for squaring the digits. "I got curious about what would happen when you did it to the third power, fourth power, and so on," Nichols said.

For example, suppose you use 3 as the exponent and start with the positive integer 57. You get 5³ + 7³ = 125 + 343 = 468. Continuing the procedure generates the following numbers: 792 (4³ + 6³ + 8³), 1080, 513, 153, 153, 153, and so on. In general, this procedure always leads to some sort of cycle, and there are nine possible cycles. The longest cycles contain only three numbers: 55, 250, and 133; and 160, 217, and 352.

"We wanted to find out whether there is any relationship between the number of loops we get and the power to which the digits are raised," Nichols said. Collecting data up to the fifteenth power, "we couldn't discern any kind of relationship, except one. For the odd powers, there are more loops than for the even powers."
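
The census of cycles can be automated. For cubes, the image of any number with at most four digits is at most 4 x 9³ = 2,916, and larger numbers strictly decrease under the map, so every cycle lies below 2,917; scanning all starting points up to that bound finds every cycle. A Python sketch (function names are my own):

```python
# Find all cycles of the sum-of-cubed-digits map.
def step(n):
    return sum(int(d) ** 3 for d in str(n))

def cycle_of(n):
    seen = []
    while n not in seen:
        seen.append(n)
        n = step(n)
    cyc = seen[seen.index(n):]          # the repeating loop
    i = cyc.index(min(cyc))
    return tuple(cyc[i:] + cyc[:i])     # canonical rotation for dedup

cycles = {cycle_of(n) for n in range(1, 2917)}
print(len(cycles))                      # 9 distinct cycles
print(sorted(c for c in cycles if len(c) == 3))
# [(55, 250, 133), (160, 217, 352)]
```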

Jerry Glynn and Theodore W. Gray devoted a section of their book The Beginner's Guide to Mathematica Version 3 to the problem and suggested ways of calculating the necessary sequences and searching for patterns.

Glynn had learned about the problem from Nichols during a lull at a mathematics meeting. "He started me off on the problem with a conspiratorial tone and a gleam in his eye," Glynn recalled. "He also started very, very slowly so I followed the steps easily until I could move ahead on my own. I was trapped and no longer bored."

In the Beginner's Guide, Gray noted, "Readers should be made aware that Jerry is having some sort of religious experience. He seems awed by these numbers. It is quite remarkable that the cycle lengths are so short, given how large the numbers are."

There's still lots to explore in this niche of number theory, and there's no proof yet that the procedure eventually cycles for every number and every exponent.

Originally posted February 2, 1998

October 20, 2020

Medieval Harmony

Philippe de Vitry (1291-1361) was one of the most prominent figures in medieval music. He was the author of an important music theory text, Ars Nova, which introduced new rhythmic schemes and musical notation. He had a deep knowledge of philosophy, rhetoric, and mathematics.

In many ways, de Vitry's interests and accomplishments reflected the Pythagorean view that music is a subdivision of arithmetic, as shown, for example, in the simple mathematical relations between pitch and length of a string (see Circles of Dissonance). His work honored the dictum of the Roman philosopher Boethius (480-524) that "music is number made audible."

One of de Vitry's observations concerned numbers that can be written as a power of 2 multiplied by a power of 3. In modern terms, such numbers would be expressed as 2ⁿ × 3ᵐ, where n and m are integers. He called them harmonic numbers.

Here's a table of all harmonic numbers less than 1000. The top row holds the powers of 3; each succeeding row doubles the one above it.

  1    3    9   27   81  243  729
  2    6   18   54  162  486
  4   12   36  108  324  972
  8   24   72  216  648
 16   48  144  432
 32   96  288  864
 64  192  576
128  384
256  768
512

De Vitry focused on pairs of harmonic numbers that differ by 1: 1, 2; 2, 3; 3, 4; and 8, 9. As it happens, those particular pairs correspond to musically significant ratios, representing an octave, fifth, fourth, and whole tone. He wondered whether there are any other such pairs of harmonic numbers.
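Both the table of harmonic numbers and de Vitry's pairs are easy to reproduce by brute force; a minimal Python sketch:

```python
# Harmonic numbers (2^n * 3^m) below 1000, and the pairs differing by 1.
limit = 1000
harmonic = sorted(
    2**n * 3**m
    for n in range(limit.bit_length())   # 2^10 already exceeds 1000
    for m in range(7)                    # 3^7 already exceeds 1000
    if 2**n * 3**m < limit
)
pairs = [(a, b) for a, b in zip(harmonic, harmonic[1:]) if b - a == 1]
print(pairs)
```

The search turns up exactly the four pairs de Vitry noted, and raising the limit adds no more, just as Gersonides proved.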

In 1342, Levi ben Gerson, also known as Gersonides (1288-1344), proved that the original four pairs of harmonic numbers are the only ones that differ by 1. Here's how his proof went (using modern notation).

If two harmonic numbers differ by 1, one must be odd and the other even. The only odd harmonic numbers are powers of 3 (top row of chart, above). Hence, one of the two harmonic numbers must be a power of 3 and the other a power of 2 (first row and first column of chart). The basic task then involves solving two equations:

2ⁿ = 3ᵐ + 1 and 2ⁿ = 3ᵐ − 1.

Gersonides had the idea of looking at remainders after division of powers of 3 and powers of 2 by 8. For example, 27 divided by 8 gives a remainder of 3. For powers of 3, the remainders all turn out to be 1 or 3: 1 when the power is even and 3 when it is odd. The remainders for powers of 2 are 1, 2, 4, then 0 for all powers higher than 2.

For 2ⁿ = 3ᵐ + 1, when m is odd, 3ᵐ has remainder 3, and 2ⁿ = 3ᵐ + 1 must then have remainder 4, so n = 2 and m = 1. That gives the consecutive harmonic numbers 3, 4. When m is even, the equation gives the consecutive harmonic numbers 1, 2.

For 2ⁿ = 3ᵐ − 1, when m is odd, 3ᵐ has remainder 3, so 2ⁿ = 3ᵐ − 1 has remainder 2; as a result, n = 1 and m = 1, giving the consecutive harmonic numbers 2, 3.

The final case, when m is even, is a little trickier and requires substituting 2k for m, then solving the equation 2ⁿ = 3²ᵏ − 1 = (3ᵏ − 1)(3ᵏ + 1). That gives the consecutive harmonic numbers 8, 9. QED.

Interestingly, the theorem about harmonic numbers proved by Gersonides has connections with Fermat's last theorem and with the abc conjecture, a topic of some interest among number theorists.
 
Originally posted January 25, 1999

October 12, 2020

Absolutely Abnormal

Identifying the normal (or even the abnormal) in mathematics can pose serious difficulties.

In 1909, mathematician Émile Borel (1871-1956) introduced the concept of normality as one way to characterize the resemblance between the digits of a mathematical constant such as pi (the ratio of a circle's circumference to its diameter) and a sequence of random numbers.



If a number is normal, digit sequences of the same length occur with the same frequency. A constant would be considered normal to base 10 if any single digit in its decimal expansion appears one-tenth of the time, any two-digit combination one-hundredth of the time, any three-digit combination one-thousandth of the time, and so on.

In the case of pi, you would expect the digit 7 to appear 1 million times among the first 10 million decimal digits of pi. It actually occurs 1,000,207 times—close to the expected value. Each of the other digits also turns up with approximately the same frequency, showing no significant departure from predictions. (See "Is Pi Normal?" by Stan Wagon.)

A number is said to be "absolutely normal" if its digits are normal not only to base 10 but also to every integer base greater than or equal to 2. In base 2, for example, the digits 1 and 0 would appear equally often.

Borel established that there are lots of normal numbers. Finding a specific example of a normal number, however, proved much more difficult.

In 1933, D.G. Champernowne showed that the carefully constructed number 0.12345678910111213…, created by writing all the positive integers in a row as a single decimal, is normal to base 10. You can construct analogous normal numbers for other bases.
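You can build Champernowne's number yourself and watch the digit frequencies settle toward one-tenth each; a quick Python sketch (the helper name is mine):

```python
from collections import Counter

def champernowne_digits(n_ints):
    """Digits of Champernowne's constant: 1, 2, 3, ... written in a row."""
    return "".join(str(i) for i in range(1, n_ints + 1))

digits = champernowne_digits(10000)
freq = Counter(digits)   # count of each digit; frequencies approach 1/10
```

At this modest length the counts are still lopsided (leading digits favor 1), but the skew shrinks as more integers are appended.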

At the same time, although it is known that almost all real numbers are absolutely normal, no one has yet proved even a single, "naturally occurring" real number to be absolutely normal.

Mathematician Greg Martin turned his attention to the opposite extreme—real numbers that are normal to no base whatsoever. He described his successful search for such an "absolutely abnormal" number in the October 2001 American Mathematical Monthly.

To start with, every rational number is absolutely abnormal. For example, the fraction 1/7 can be written in decimal form as 0.1428571428571…. The digits 142857 just repeat themselves. Indeed, the expansion of a rational number to any base b eventually repeats. Hence, a rational number "is about as far from being simply normal to the base bᵏ as it can be," Martin remarked.

Martin focused on constructing a specific irrational absolutely abnormal number. He nominated the following candidate, expressed in decimal form, for the honor:

α = 0.6562499999956991999999…9999998528404201690728…

The middle portion (shown in bold) of the given fragment of α consists of 23,747,291,559 9's.

Martin's formulation of this number and proof of its absolute abnormality involved so-called Liouville numbers, named for Joseph Liouville (1809-1882).

One example of a Liouville number is 0.1100010000000000000000010000…, where there is a 1 in places 1, 2, 6, 24, …, n!, and a 0 everywhere else. Liouville had introduced such numbers as examples of transcendental numbers—real numbers that are not roots of polynomial equations with integer coefficients.

The construction that he found, Martin concluded, "easily generalizes to a construction giving uncountably many absolutely abnormal numbers."

Originally posted November 5, 2001

October 10, 2020

Testing for Divisibility


The crisp new dollar bill that I have just taken from my wallet bears the serial number 24598176. It's easy to tell that the number is exactly divisible by 2 but not by 5. Is it divisible by 3? by 4? by 11?
 
In a 1962 Scientific American article, Martin Gardner noted that during the 15th and 16th centuries in Renaissance Europe, the rules for checking whether one number is divisible by another without a remainder were widely known, particularly because of their usefulness in reducing large-number fractions to their lowest terms.

The advent of decimal arithmetic reduced the need for such tests. "Few people, including many mathematicians, know all the simple rules by which large numbers can be tested quickly for divisibility by numbers 1 through 12," Gardner remarked.

More recently, Eli Maor observed in an article on divisibility that, at one of his presentations, none of the teachers present appeared to know the quick divisibility test for 11.

Nowadays, in a world where calculators are ubiquitous, divisibility rules may seem little more than quaint relics of a distant past. Nonetheless, they can be handy for solving digital puzzles, reducing fractions, and as targets for algorithm development.

Here's a set of rules for numbers from 2 to 12.

2 A number is divisible by 2 if and only if its last digit is even (ending in 0, 2, 4, 6, or 8).

For example, because the final digit 6 is an even number, 24598176 is divisible by 2.

3 Sum the digits. If the result is more than one digit, sum again and continue until one digit remains. If the final digit (called the digital root of the number) is a multiple of 3 (3, 6, or 9), the number is divisible by 3. This also means that you can scramble the digits of any multiple of 3 and still have a number that is a multiple of 3.

For example, 2 + 4 + 5 + 9 + 8 + 1 + 7 + 6 = 42; 4 + 2 = 6. Because the digital root of 24598176 is 6, which is divisible by 3, the original number must be divisible by 3.
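The digital-root recipe behind the tests for 3 and 9 takes only a few lines of Python (the function names are mine):

```python
def digital_root(n):
    """Repeatedly sum the decimal digits of n until one digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

# Rule 3: divisible by 3 iff the digital root is 3, 6, or 9.
# Rule 9: divisible by 9 iff the digital root is 9.
def divisible_by_3(n):
    return digital_root(n) in (3, 6, 9)
```

For the serial number 24598176 the digital root is 6, so the test reports divisibility by 3 but not by 9.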

4 A number is divisible by 4 if and only if the number represented by the last two digits is a multiple of 4.

Because 76 is divisible by 4, 24598176 is divisible by 4.

5 A number is divisible by 5 if and only if its last digit is 0 or 5.

6 A number is divisible by 6 if and only if it is divisible by both 2 and 3 (that is, it's an even number divisible by 3).

The number 24598176 is divisible by both 2 and 3, so it must be divisible by 6.

7 Multiply the last digit of the number by 2. Subtract that product from the number obtained by deleting the last digit of the original number. Repeat the two steps as necessary. The original number is divisible by 7 if and only if the resulting number is divisible by 7.

For example, 6 ✕ 2 = 12; 2459817 − 12 = 2459805; 5 ✕ 2 = 10; 245980 − 10 = 245970; 0 ✕ 2 = 0; 24597 − 0 = 24597; 7 ✕ 2 = 14; 2459 − 14 = 2445; 5 ✕ 2 = 10; 244 − 10 = 234; 4 ✕ 2 = 8; 23 − 8 = 15. Because 15 is not divisible by 7, neither is 24598176.

8 A number is divisible by 8 if and only if the number represented by its last three digits is a multiple of 8.

Because 176 is divisible by 8, 24598176 is divisible by 8.

9 A number is divisible by 9 if and only if it has a digital root of 9.

For example, because the digital root of 24598176 is 6 (see 3, above), the number is not divisible by 9.

10 A number is divisible by 10 if and only if it ends in 0.

11 Take the digits from right to left, alternately subtracting and adding. The original number is divisible by 11 if and only if the resulting number is divisible by 11. (Assume that 0 is divisible by 11.)

For example, 6 − 7 + 1 − 8 + 9 − 5 + 4 − 2 = −2. Because −2 is not divisible by 11, 24598176 is not divisible by 11.
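The rule for 11 amounts to a one-line alternating sum; a Python sketch (the function names are mine):

```python
def alternating_sum(n):
    """Alternately subtract and add the digits of n, taken right to left."""
    digits = [int(d) for d in str(n)][::-1]
    return sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))

def divisible_by_11(n):
    return alternating_sum(n) % 11 == 0
```

For 24598176 the alternating sum is −2, so the number is not a multiple of 11; for 121 (11 squared) the sum is 0, which counts as divisible.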

12 A number is divisible by 12 if and only if it is divisible by both 3 and 4.

The number 24598176 is divisible by both 3 and 4, so it must be divisible by 12.

In the case of 7, the rule is sufficiently complicated that it takes nearly as much effort to check for divisibility as it would take to perform the division operation itself. Indeed, dozens of algorithms for testing for 7 have been devised over the years, but none of them represents a significant time-saver.

In the 2002 article "Divisibility: A Problem Solving Approach Through Generalizing and Specializing," Rina Zazkis described a student investigation of how one algorithm for testing for 7 (the one mentioned above) could be generalized to other prime numbers.

Surprisingly, the same algorithm can be applied to determine divisibility by 3. For 24598176, the resulting number is 15 (see 7, above), which is divisible by 3.

Variants of this recipe, which Zazkis described as a "trimming" algorithm, work for other prime numbers.

19 Multiply the last digit of the number by 2. Add that product to the number obtained by deleting the last digit of the original number. Repeat the two steps as necessary. The original number is divisible by 19 if and only if the resulting number is divisible by 19.

For example, 6 ✕ 2 = 12; 2459817 + 12 = 2459829; 9 ✕ 2 = 18; 245982 + 18 = 246000; 0 ✕ 2 = 0; 24600 + 0 = 24600; 0 ✕ 2 = 0; 2460 + 0 = 2460; 0 ✕ 2 = 0; 246 + 0 = 246; 6 ✕ 2 = 12; 24 + 12 = 36. Because 36 is not divisible by 19, the original number is not divisible by 19.

It's possible to devise a divisibility rule based on some sort of trimming algorithm for any prime, p, except 2 and 5. "The existence of a trimming algorithm for p depends on the existence of a multiple of p which is larger by 1 or smaller by 1 than a multiple of 10," Zazkis noted.
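A generic version of the trimming algorithm, covering both the subtract-for-7 and add-for-19 variants above, might look like this in Python (the interface is my own, not Zazkis's):

```python
def trim_divisible(n, p, mult, sign):
    """Trimming divisibility test: repeatedly replace n with
    n // 10 + sign * mult * (last digit), then check directly.

    Works when mult * 10 - sign is a multiple of p:
    p = 7 uses mult=2, sign=-1 (21 = 2*10 + 1 is a multiple of 7);
    p = 19 uses mult=2, sign=+1 (19 = 2*10 - 1)."""
    while n >= 100:
        n = n // 10 + sign * mult * (n % 10)
    return n % p == 0
```

Called with the serial number 24598176, both variants trim down to the small results worked out above (15 for 7, 36 for 19) and correctly report non-divisibility.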

Can a similar algorithm be used to determine divisibility by a composite number? Are there other algorithms that can be generalized to work for primes?

The realm of divisibility tests has all sorts of avenues for further investigation and algorithm development.

Originally posted August 19, 2002

September 6, 2020

Pick a Digit, Any Digit

One of the most amazing mathematical results of the last few decades was the discovery of a surprisingly simple formula for computing digits of the number pi (π). Unlike previously known methods, this one allows you to calculate isolated digits—without computing and keeping track of all the preceding numbers.

No one had previously even conjectured that such a digit-extraction algorithm for pi was possible.

The only catch is that the formula works for hexadecimal (base 16) or binary digits but not for decimal digits. Thus, it's possible to determine that the forty billionth binary digit of pi is 1, followed by 00100100001110…. However, there's no way to convert these numbers into decimal form without knowing all the binary digits that come before the given string.

In hexadecimal form, the number pi is written as 3.243F6A8885A308D313198A2E0…, where the letters stand in for the hexadecimal equivalent of the base-10 numbers 10 (A), 11 (B), 12 (C), 13 (D), 14 (E), and 15 (F). It's straightforward to convert a hexadecimal expression into binary form but not into decimal form.

The novel scheme for computing individual hexadecimal digits of pi was found by David H. Bailey, Peter B. Borwein, and Simon M. Plouffe.

The method is based on the following new formula for pi:

pi = Σ (k = 0 to ∞) 1/16ᵏ [4/(8k + 1) − 2/(8k + 4) − 1/(8k + 5) − 1/(8k + 6)]

Computing individual hexadecimal digits using that formula relies on a venerable technique known as the binary algorithm for exponentiation. Bailey, Borwein, and Plouffe provided details of how that procedure works in a report titled "On the rapid computation of various polylogarithmic constants."
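Here's a rough Python sketch of digit extraction via the formula (a standard textbook-style implementation, not the authors' own code); the head of each series is kept as a fractional part using modular exponentiation, and the rapidly shrinking tail is summed in plain floating point:

```python
def bbp_pi_hex(d, count=6):
    """Hex digits of pi starting at position d+1 after the hexadecimal
    point, via the Bailey-Borwein-Plouffe formula."""
    def series(j):
        # fractional part of the sum over k of 16^(d-k) / (8k + j)
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1
        while True:
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0

    x = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        digits += "0123456789ABCDEF"[int(x)]
        x -= int(x)
    return digits
```

With d = 0 the routine reproduces the opening hexadecimal digits 243F6A…; floating-point precision limits how many digits past position d can be trusted, so serious computations use more careful arithmetic.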

A year after the discovery of the formula, Fabrice Bellard used it to calculate the 100 billionth hexadecimal digit of pi: 9, followed by C381872D2…. Later he computed the trillionth digit: 8, followed by 7F72B1DC…. In binary form, the corresponding result is 1, followed by 000011111110111…. The main calculation required a month of computation on more than 20 workstations and personal computers.

The basic Bailey-Borwein-Plouffe algorithm can also be used to compute the nth digit of certain other transcendental numbers, such as log(2) and π². Log(2), for example, can be determined from the series: 1/2 + 1/8 + 1/24 + …, where the kth term is 1/(k·2ᵏ).

Physicist David John Broadhurst reported in 1998 the discovery of formulas for determining isolated hexadecimal or binary digits of several additional numerical constants of considerable interest to mathematicians, including Catalan's constant and values of the zeta function: ζ(3), and ζ(5).

It was a spectacular piece of work, Jonathan Borwein commented at the time. "His tools were a marvelous combination of experiment, numeric and symbolic computation, followed by a mix of computer and human proofs."

The decimal digits of Catalan's constant, named for the Belgian mathematician Eugène Charles Catalan (1814-1894), can be calculated from the series: 1 − 1/9 + 1/25 − 1/49 + …, where the kth term is (−1)ᵏ/(2k + 1)².
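Summing that series directly shows how it converges; a one-line Python sketch:

```python
# Partial sum of G = 1 - 1/9 + 1/25 - 1/49 + ...; for an alternating
# series the error is below the first omitted term, about 6e-12 here.
G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(200000))
```

Two hundred thousand terms already pin down G = 0.915965594… to well past nine decimal places, though far faster series exist for large-scale computations.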

Broadhurst found a formula that could be used to obtain isolated hexadecimal and binary digits. Interestingly, mathematicians have not yet proved that Catalan's constant (0.915965594177…) is an irrational number—that is, not expressible as a fraction, or rational number. The number itself was known in the year 2020 to 600 billion decimal digits.

"It illustrates that there's a world of difference between being able to come up with methods of computing a constant quickly and necessarily being able to prove something about its transcendence or irrationality," Borwein said. "This highlights the difference between what we can prove, what we have good evidence for, and what we just know is true even though a proof appears hopelessly out of our reach."

What apparently makes it feasible to find formulas for certain constants and not others is that each successfully tackled number, including pi, is the value of a logarithm, Borwein noted. Broadhurst's interest in the formulas lay in the context of applying the theory of mathematical knots to physics, specifically quantum field theory.

Plouffe, for one, also found a way to compute individual decimal digits of pi without calculating preceding digits. That made it possible to compute a particular decimal digit of pi using a pocket calculator, Plouffe said. However, his algorithm is fairly slow and clearly impractical for determining the millionth or billionth decimal digit of pi.

Nonetheless, there's no evidence yet that an efficient algorithm for computing isolated decimal digits of pi doesn't exist.

Originally posted March 2, 1998

August 25, 2020

Sequence Puzzles

Given a sequence consisting of the whole numbers 1, 4, 9, 16, 25, 36, and 49, what number comes next in the sequence?

The most likely answer is 64—the next number in a sequence of squares of consecutive integers, starting with 1.

Such sequence puzzles are a staple of textbook exercises, brainteaser collections, and various intelligence and aptitude tests. Some sequences are easy to figure out, some have multiple interpretations, and others can require considerable head-scratching before the pattern becomes evident.

Neil A.J. Sloane has been collecting number sequences ever since he was a graduate student at Cornell University in the 1960s (see the Quanta Magazine article "The Connoisseur of Number Sequences" and the Numberphile video "What Number Comes Next?").

Sloane described nearly 6,000 examples in his 1995 book The Encyclopedia of Integer Sequences and has added thousands upon thousands of additional examples to an online extension of the book, The On-Line Encyclopedia of Integer Sequences (OEIS).


One useful feature of Sloane's OEIS compendium is the ability to enter a set of numbers and search for information about that sequence. For example, suppose you enter the numbers 1, 2, 3, 6, 11, 23, 47, 106, 235. Among other things, the results page tells you that the next term is 551, that this sequence is associated with trees having n nodes, and that there is a formula for calculating the sequence's terms.

As a way to demonstrate his online encyclopedia, Sloane had a practice of assembling entertaining pages of sequence puzzles. One set starts off with several simple classic sequences (perfect squares, Fibonacci numbers, and so on), then quickly moves into more perplexing territory.

For example, what do you make of the sequence 1, 2, 3, 7, 43, 1807, 3263443? It turns out that the terms are members of a variant of Sylvester's sequence, and the sequence's next member is 10650056950807.
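Sylvester's rule is that each term is t² − t + 1 of the previous one; the puzzle's variant simply prepends a 1. A quick Python sketch (the function name is mine):

```python
def sylvester(count):
    """Sylvester's sequence: 2, 3, 7, 43, ...; each term is
    t*t - t + 1 of the previous term t."""
    seq = [2]
    while len(seq) < count:
        t = seq[-1]
        seq.append(t * t - t + 1)
    return seq
```

Seven terms are enough to reach 10650056950807, and the doubling of digit counts at each step shows why such puzzles stop where they do.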

At the OEIS puzzle page, when you give up, you can just click on a link to see the answer. That's a handy shortcut you don't have available when you're taking your SATs.


Originally posted May 19, 2003

July 7, 2020

A Special Sequence

Remember the puzzling countdown (see "A Consequential Countdown"): 55, 34, 21, 13, 8, 5, 3, 2, 1, 1?

Look at it in reverse order: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55. Do you see a pattern? Can you predict what number would come next?

The numbers belong to a famous sequence named for the Italian mathematician Fibonacci, who lived more than 700 years ago.

Leonardo of Pisa, also known as Fibonacci. He lived from about 1170 to 1240.

Each consecutive number is the sum of the two numbers that precede it. Thus, 1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8, and so on.

The ninth number of the Fibonacci sequence is 34 and the tenth is 55, so the eleventh is 89. What is the twelfth Fibonacci number? What is the sixteenth?

TRY IT!
Here are some additional number sequences. See if you can fill in the missing numbers and figure out the rules for making each sequence.

1, 3, 4, 7, 11, 18, 29, 47, ____, 122, …
3, 6, 12, 24, 48, ____, 192, 384, …
1, 3, 6, 10, 15, 21, 28, 36,____, 55, 66, 78, …
1, 4, 9, 16, ____, 36, 49, 64, 81, 100, …

Number sequences have intrigued mathematicians for centuries. The first sequence above gives the Lucas numbers, honoring the French mathematician Édouard Lucas, who studied the Fibonacci sequence.

Lucas worked out what would happen if you started with any two whole numbers, then followed the Fibonacci rule. He discovered many interesting new sequences and number patterns.
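The rule Lucas generalized is simple to program; a tiny Python sketch (the function name is mine):

```python
def fibonacci_rule(a, b, count):
    """Start with a and b; each later term is the sum of the previous two."""
    seq = [a, b]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq
```

Starting with 1 and 1 gives the Fibonacci numbers; starting with 1 and 3 gives the Lucas numbers; any other pair of starting numbers gives a new sequence built by the same rule.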

Mathematician Neil Sloane has been collecting number sequences ever since he was a student at Cornell University in the 1960s. He described nearly six thousand examples in his book The Encyclopedia of Integer Sequences. The current online version (The On-Line Encyclopedia of Integer Sequences) catalogs more than 300,000 sequences.

Mathematicians and other researchers use his book and database as a reference for counting or tabulating things that involve number sequences, from the number of atoms in various molecules to different types of knots.


Answers:
The twelfth Fibonacci number is 144. The sixteenth is 987.
The missing numbers are: 76, 96, 45, and 25.
The first set of numbers is an example of a Lucas sequence that begins with 1 and 3, then 1 + 3 = 4, 3 + 4 = 7, 4 + 7 = 11, 7 + 11 = 18, and so on.
In the second sequence, the numbers double again and again.
In the third, you start with 1, add 2 to get 3, then add 3 to get 6, then add 4 to the new answer to get 10, and so on.
The fourth sequence consists of consecutive perfect squares: 1 ✕ 1 = 1, 2 ✕ 2 = 4, 3 ✕ 3 = 9, and so on.

July 6, 2020

A Consequential Countdown

"Clean up your room right this minute," your mother says. "You may not go out until all your stuff is put away."

You flop onto your bed, dreading the prospect of having to sort through all those cards and papers and books and games and socks and shoes and…

…just thinking about it makes you feel tired. You are starting to feel sleepy…very sleepy…and your eyes close….

Suddenly you open your eyes and realize you must have dozed off. Gazing up at the flat, dull ceiling of your bedroom, you remember the mess you were supposed to clean up. Now it's dark outside. Your curtains are still open, and through the window you see a sky full of stars.


You hear a noise that sounds like a distant motor. Soon it grows louder than an air conditioner, then louder than a lawn mower, then even louder than a huge truck. Your bed starts to move, as if powered by its own engine. The mattress folds up, propping you in a sitting position.

Your eyes search the room. Where is all your stuff? Who cleaned up the mess? Your desk is now covered with high-tech displays and a keyboard instead of papers and books. Mounted high on the wall in front of you is a huge screen.

"PREPARE FOR TAKEOFF!" blares a voice from your bedroom radio. The number "55" flashes on the screen, and the voice roars "FIFTY-FIVE!"

A second later, the voice from your radio shrieks "THIRTY-FOUR!" as the flashing number of the screen changes to "34."

"TWENTY-ONE!"

"THIRTEEN!" If this is a countdown, you wonder, why does it skip so many numbers?

"EIGHT!"

"FIVE!"

"THREE!" You shiver with trepidation as an automatic seat belt locks over you.

"TWO!" Your heart pounds faster and faster.

"ONE!" You shut your eyes and clench your fists.

"ONE!" One again? Is the countdown stuck?

EXPLOSION! You feel the whole room shudder as it takes off.

Your body feels pressed down into the seat, as if there is a tremendous amount of gravity. Looking out the window, you realize that you are heading into space.

After a couple of minutes, the motor quiets down. Your body starts to feel eerily light. Unbuckling the seat belt, you find that you can push off from your seat and float over to the window.

Clutching the windowsill, you can make out Earth's blue oceans, white clouds, and brown and green continents as the planet grows smaller and smaller.


Then you gaze into space and see thousands of stars and dozens of spiral-shaped galaxies.


Spiral galaxy M81. NASA/JPL-Caltech/ESA/Harvard-Smithsonian CfA

Why are there so many spirals? Why did the countdown skip numbers and repeat the number 1? What in the cosmos is going on here?

July 2, 2020

Mean Median Surprise

Start with three numbers, say 5, 17, and 23. Their median (middle value) is 17. Find a fourth number so that the arithmetic mean of all four is 17. This number must be 23 (4 ✕ 17 – 5 – 17 – 23).

Repeat the process. The median of 5, 17, 23, and 23 is halfway between 17 and 23 (20). Find a fifth number so that the mean of all five numbers is 20. This number is 32 (5 ✕ 20 – 5 – 17 – 23 – 23).

Repeat the process. The median of 5, 17, 23, 23, and 32 is 23. Find a sixth number so that the mean of all six is 23. The sixth number must be 38.

Continuing the process, you get the sequence 5, 17, 23, 23, 32, 38, 23, 23, 23, 23,…. The sequence reaches a constant value!

The same thing happens if you start with the numbers 6, 46, and 78. You get the sequence 6, 46, 78, 54, 66, 74, 96, 108, 102, 110, 96, 100, 195, 213, 96, 96, 96,….

It also happens with 13, 41, and 53. You get the sequence 13, 41, 53, 57, 71, 83, 67, 71, 102, 112, 89, 93, 71, 71, 71,….

"To our surprise, the same thing happened to every sequence we examined, with whatever three numbers we started," Harris S. Schultz and Ray C. Shiflett reported in an article in the May 2005 College Mathematics Journal. They dubbed these strings M&m sequences, for "mean and median."

In these sequences, "we calculate the median of the list of the first k values and choose the k + 1 value so that the mean of the first k + 1 values equals this median," the mathematicians noted.

An M&m sequence is considered stable if it eventually reaches a constant value. The length of the sequence is the number of terms it takes to get to the repeating value for the first time. For example, the sequence starting with 6, 46, and 78 has a stable value of 96 and its length is 15.

The starting numbers don't have to be integers. The numbers 5, 5.5, and 33.9, for example, yield a sequence of length 73 and stable value –4.65625.
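An M&m sequence generator takes only a few lines; this Python sketch uses exact fractions so non-integer starts like 5, 5.5, 33.9 don't suffer rounding (the stopping test, five equal trailing terms, is my own heuristic, not the authors'):

```python
from fractions import Fraction
from statistics import median

def mm_sequence(start, max_len=500):
    """Grow a list by the mean-median rule: pick each new term so the
    mean of the extended list equals the median of the current list."""
    seq = [Fraction(x) for x in start]
    while len(seq) < max_len:
        m = median(seq)
        seq.append(m * (len(seq) + 1) - sum(seq))
        if len(seq) >= 5 and len(set(seq[-5:])) == 1:
            break   # heuristic stop: five equal trailing terms
    return seq
```

Starting from 5, 17, 23 the generator reproduces the sequence above and settles at 23; starting from 6, 46, 78 it settles at 96.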

Schultz and Shiflet conjectured that every M&m sequence is stable. In investigating the problem, the mathematicians proved a variety of results that fall short of the ultimate goal but provide useful insights into what's going on.

It's obvious that any sequence that starts with three identical numbers is constant. It's also easy to show that if two of the values are the same, the M&m sequence has length 5. Various other intriguing patterns also emerged.

Schultz and Shiflett proved some stability results, particularly for sequences that start with 0, x, and x + 1, where x is greater than or equal to 1.

"Our hope is that readers will be motivated to study and explore these M&m sequences," Schultz and Shiflett wrote. The question remains: Is every M&m sequence stable?

Originally posted May 30, 2005

June 27, 2020

Prime Listening

The human ear has remarkable capabilities—so remarkable that we generally accept as routine such formidable tasks as recognizing a voice, picking out a single word in a cacophony of cocktail-party chatter, or hearing a flute's sweet tone in the midst of an orchestral romp.

Our sense of hearing can integrate disparate sounds into a harmonious whole or detect subtle nuances buried in noise. It has amazing powers of pattern recognition. Indeed, by using sound to represent masses of data or jumbles of numbers, we can take advantage of that capability to identify regularities or make subtle distinctions.

Mathematician Chris K. Caldwell has developed a scheme for listening to sequences of primes—to hear both simple patterns and perplexing irregularities found among those numbers.

"Multimedia allows the use of sight and sound, so why not use sound?" Caldwell asks. "After all, don't we use our ears to detect patterns as we listen to the car motor for a problem or shake a box to determine its contents?"

The MIDI (Musical Instrument Digital Interface) specification often used in computer programs to represent musical notes assigns a number to each note on a keyboard. According to this recipe, middle C is 60, C-sharp is 61, D is 62, and so on, for a total of 128 notes. One could use that correspondence directly to "play" primes. For example, 67 would then be G.

There are infinitely many primes, however, so Caldwell's strategy is to divide each prime by a fixed number, then play just the remainder—a novel application of that standard number-theory tool known as modular arithmetic.

For example, if the divisor, or modulus, is 7, then for the primes {2, 3, 5, 7, 11, 13, 17, 19, 23, …}, one would play {2, 3, 5, 0, 4, 6, 3, 5, 2, …}. Because those particular notes on the MIDI scale would be too low in frequency to be audible, Caldwell adds a constant, such as 55. Hence, the first prime, 2, is played as the note A. There are seven possible notes.

Primes modulo seven

In listening to the primes modulo 7, you find that all seven notes are played, but the lowest note occurs just once. That lowest note is the prime number 7. Any other number that leaves a remainder of 0 when divided by 7 must be a multiple of 7, so it can't be a prime. A theorem of Peter Gustav Lejeune Dirichlet (1805-1859) on primes in arithmetic progression guarantees that all the other notes are heard infinitely often when one plays all the primes.
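Mapping primes to MIDI notes the way Caldwell describes takes only a sieve and a remainder; a Python sketch (the function name and variable names are mine):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

modulus, offset = 7, 55   # offset lifts the remainders into audible range
notes = [p % modulus + offset for p in primes_up_to(23)]
# MIDI note 57 is the A below middle C
```

The list of note numbers could be fed to any MIDI library or synthesizer; trying other moduli in place of 7 reveals the other patterns described above.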

Trying various prime and composite divisors reveals other patterns, and one can check which notes are played infinitely often, which ones are played just once, and which ones are never played for each modulus.

It's possible to use the duration of notes to represent the gaps between prime numbers. There's no gap between the first two primes, 2 and 3. There's a gap of one between 3 and 5, a gap of one between 5 and 7, and a gap of three between 7 and 11. According to the prime number theorem, the average gap length up to a certain prime equals the value of the natural logarithm of that prime.

Caldwell makes the note representing a particular prime last a time proportional to the size of the gap to the next prime in the sequence, divided by the natural logarithm of that prime.

Caldwell's "Prime Number Listening Guide" offers his weirdly tuneful "primes modulo 41 with prime gap tempo" and other primal sounds and even a way to make your own prime music.

There may be more to come. Caldwell has started to develop a program to play the factorizations of sequences of numbers. "The low primes are used to determine what percussion instrument to play, the next larger primes are mapped to one instrument, and the larger ones (modulo some base) are mapped to a second," he says. "If the integer is divisible by a power (greater than 1) of the prime, then that prime's 'note' is played louder."

You may one day hear more of the concert of theorems that mathematicians have so painstakingly gleaned from their studies of integers and the primes.

Originally posted June 22, 1998