February 25, 2021

Dartboard Estimates

Throwing darts at a target may sound like a curiously haphazard way to solve a mathematical problem. Properly applied as a kind of intelligent guessing, however, it can become a highly effective technique for obtaining answers to certain problems in Ramsey theory (see "Playing Fields of Logic") and in other areas of mathematics.

Suppose that instead of the usual rings and diagonals, a dartboard features a target in the shape of a triangle inside a circle. If you were to throw darts at the board without aiming, the darts would land randomly within the circle and the triangle. By counting up the number of darts that land anywhere on the target and those that land only within the triangle, you can estimate the triangle's size relative to that of the circle.

For example, if 10 out of 100 randomly thrown darts fall inside the triangle, the triangle has approximately one-tenth the area of the circle.

For a dartboard consisting of a triangle inside a circle inside a square, the proportion of randomly thrown darts that land inside the triangle provides an estimate of the triangle's area relative to that of the square. You can also estimate the circle's area relative to that of the square or find an approximate value of π.

Such poorly played games of darts can provide remarkably good estimates of a variety of mathematical quantities, including π (pi), the ratio of the circumference of a circle to its diameter. Just make a target in which a circle of a given radius fits snugly inside a square with sides equal to twice the circle's radius.

The number of darts that land inside the circle divided by the total number that hit the square provides an estimate of the value of π divided by four. The more inaccurate the aim and the larger the number of throws, the better the estimate becomes (see also "Buffon's Needling Ants" and "Targeting Pi").
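As a rough sketch of the dartboard experiment, random throws are easy to simulate. The short Python example below (an illustration under the stated setup, with a unit circle inscribed in a 2-by-2 square) multiplies the hit fraction by four:

```python
import random

def estimate_pi(n_darts: int, seed: int = 1) -> float:
    """Throw n_darts uniformly at the square [-1, 1] x [-1, 1]; the
    fraction landing inside the inscribed unit circle estimates pi/4."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_darts):
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # dart landed inside the circle
            hits += 1
    return 4.0 * hits / n_darts

print(estimate_pi(100_000))
```

With 100,000 darts the estimate typically lands within a few hundredths of π; accuracy improves slowly, roughly as one over the square root of the number of throws.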

Random choices can also play a prominent role in mathematical proofs. In 1947, Paul Erdős (1913-1996) used a probabilistic method to show, without actually determining the Ramsey number itself, that a given Ramsey number has to be greater than a certain value (see "Puzzling Groups and Ramsey Numbers").

The darts of Erdős's thought experiment were flips of a coin. In effect, he started with a room of mutual strangers, brought each of the pairs together, and introduced them only if a coin flip came up heads. The result was a random mixture of strangers and fresh acquaintances, linked in various ways.

By an ingenious chain of reasoning, Erdős then demonstrated that the probability of getting a party with a desired mix of strangers and acquaintances is practically certain beyond a particular size of group.

However, although Erdős could demonstrate the existence of the right kinds of patterns among the random pairings of his thought experiment, he could give no hint as to how to find such pairings or draw and color the relevant graphs and subgraphs to determine the Ramsey number.

Using mathematical recipes that involve the equivalent of coin flips has proved a useful strategy for solving a wide range of problems in mathematics, from testing whether a number is prime (evenly divisible only by itself and 1) to estimating such mathematical quantities as π.
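One such coin-flip recipe is the Fermat primality test, among the simplest randomized primality checks (a minimal sketch only; practical code would use the stronger Miller-Rabin test). Each random trial picks a base a and checks Fermat's little theorem:

```python
import random

def probably_prime(n: int, trials: int = 20, seed: int = 1) -> bool:
    """Fermat test: if n is prime, pow(a, n - 1, n) == 1 for every base
    a coprime to n. A composite n fails this for most random bases, so
    surviving many random trials makes primality very likely.
    (Rare Carmichael numbers, such as 561, can fool this test.)"""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    rng = random.Random(seed)
    for _ in range(trials):
        a = rng.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a witness: n is certainly composite
    return True  # no witness found: n is probably prime

print([n for n in range(2, 50) if probably_prime(n)])
```

Run on the numbers below 50, it lists the primes: a definite "no" when a witness turns up, and a high-probability "yes" otherwise.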

Such randomized algorithms are also extremely useful in computer science for sorting data, recognizing patterns, generating images, and searching databases on a computer. In many cases, random choices or paths shorten the task at hand and help to avoid cases of computational gridlock.

The same thing can happen when two people walking in opposite directions along a sidewalk both step to the same side to avoid colliding, then do it again and again. They get trapped in an embarrassing dance until the stalemate is somehow finally broken.

A coin toss can solve the problem for both the inadvertent dancers and a computer that is engaged in resolving conflicting instructions or deciding which of two or more equally likely courses to take.

Probabilities also enter into Monte Carlo simulations for modeling the behavior of atoms in a gas and other physical systems (see also "Random Bits"). You can toss coins or throw dice to obtain answers for both mathematical and physical questions, and, with the help of a computer, this can sometimes be done surprisingly quickly and efficiently.

Ironically, Erdős himself never worked with a computer. His insights, however, suggested novel probabilistic methods of tackling tough problems well suited to the powerful, ubiquitous computers of today.

Next: Group Thoughts

February 24, 2021

Puzzling Groups and Ramsey Numbers

Throughout his long, itinerant life, Paul Erdős (1913-1996) spent most of his waking hours and, apparently, all his sleeping hours doing mathematics. He was a superb problem solver, and his phenomenal memory allowed him to cite exact references to thousands of mathematical papers, including their page numbers.

"If you don't know how to attack a particular problem, ask Erdős" was the constant refrain among his colleagues and collaborators. He was a one-man clearinghouse for information on the status of unsolved problems in the fields of mathematics that attracted his attention (see "Paul Erdos and an Infinity of Problems").

By the time he died in 1996, Erdős had also earned a reputation as the greatest problem solver of all time. He had the knack of asking just the right mathematical question—one that was neither so simple that it could be solved in a minute nor so difficult that a solution was unattainable. Ramsey theory, which offered a wide variety of challenges, was one of his joys.

A typical question involving Ramsey theory concerns how many numbers (see "Pigeonhole Congestion"), points (see "Planes of Budapest"), or specified objects are needed to guarantee the presence of a certain structure or pattern. You look for the smallest "universe" that automatically contains a given object.

For example, it may be the smallest group of arbitrarily positioned stars that would include a convex quadrilateral or the number of people required to ensure the attendance of three strangers or three acquaintances at a gathering (see "Party Games").

Fascinated by such problems, Paul Erdős particularly liked those he called "party puzzles," which involve different combinations of strangers and acquaintances brought together at some sort of gathering. But he didn't usually think of these problems in terms of people—except as a way of describing them informally.

Erdős reasoned in terms of points, lines, and colors—all elements of a branch of mathematics known as graph theory. In general, a graph (in this context) consists of an array of points, or vertices, connected by straight lines, which are often called edges.

To convert a party puzzle into a graph problem, you draw a point for each person present. You then draw lines between the points, with red lines joining two people who know each other and blue lines for two people who are strangers. When all pairs of points are joined, the resulting network of points and lines is known as a complete graph.

Depending on the relationships specified in a given case, such a graph may contain only red lines, only blue lines, or a mixture of red and blue lines joining the points. The problem comes down to proving that no matter how the lines are colored, you can't avoid producing, in the case of six points (or people), either a red triangle (representing three mutual acquaintances) or a blue triangle (three strangers).

Such a monochromatic triangle is guaranteed for a complete graph of six points, but not necessarily for a graph of five or fewer points. Hence, the minimum number of points that necessarily contains the requisite pattern is six, designated by the so-called Ramsey number R(3,3), where the first number inside the parentheses gives the number of acquaintances and the second gives the number of strangers.
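The six-point claim is small enough to verify by exhaustive search: a complete graph on six points has 15 edges, hence 2¹⁵ = 32,768 possible red/blue colorings. A Python sketch along these lines (an added illustration, not a reconstruction of any historical computation):

```python
from itertools import combinations

def has_mono_triangle(n: int, coloring: int, edges: list) -> bool:
    """coloring is a bitmask over the edge list: bit = 1 red, 0 blue."""
    color = {e: (coloring >> i) & 1 for i, e in enumerate(edges)}
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def ramsey_33_holds(n: int) -> bool:
    """True if every 2-coloring of K_n has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, c, edges) for c in range(2 ** len(edges))
    )

print(ramsey_33_holds(5))  # False: some coloring of K5 avoids it
print(ramsey_33_holds(6))  # True: none of the 32,768 colorings avoids it
```

The five-point case fails (the pentagon coloring is one counterexample), while all 32,768 colorings of six points contain a one-color triangle, confirming R(3,3) = 6.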

What about subgroups of four strangers or four acquaintances? It turns out that an array of 17 points is insufficient to guarantee that four points will be linked by the same color. Only a graph of 18 or more points will always do the job. Thus, R(4,4) = 18.

In a graph representing seventeen people (left), it's possible to color the lines so that no four points are connected by a network of lines that are completely blue or completely red. In contrast, in a graph representing eighteen people (right), there is always such a network, meaning that a group of four mutual acquaintances or four mutual strangers will always be present in such a gathering.

In other words, at a dinner party of 18 there will always be at least four mutual acquaintances or strangers, but that may not necessarily happen when only 17 people are present.

It may be tempting to consider Ramsey's theorem in drawing up a guest list to ensure an appropriate balance of novelty and familiarity in some random gathering—guests selected, perhaps, by sticking pins in a phone book.

The trouble is that calculating the minimum number of partygoers to have—say, seven acquaintances or nine strangers—can be extraordinarily difficult. Though Ramsey's theorem guarantees the existence of such arrangements, determining their actual size is an onerous task.

The chief problem is that the number of cases that must be checked escalates rapidly with each step up in the size of the group. For example, a group of five represents 1,024 (2¹⁰) different cases, a group of six has 32,768 (2¹⁵) possibilities, and a group of seven has 2,097,152 (2²¹) possibilities.

A prodigious amount of effort involving considerable mathematical ingenuity has gone into producing the rather scant table of Ramsey numbers known in the year 2000 (below). In several cases, mathematicians have been able to find the answer only to within a range of the true value—all for the sake of solving an intriguing puzzle rather than for any particular practical application.

A Ramsey number represents the minimum number of people needed to guarantee the presence of a given number of mutual acquaintances and a given number of mutual strangers. In some cases, mathematicians have not yet pinpointed the actual Ramsey number and can give only the range within which the number must fall.

In the last few decades, computers have played a significant role in determining Ramsey numbers. For example, in 1990, Brendan McKay and Zhang Ke Min relied on computers to help them sift through thousands upon thousands of possibilities to determine R(3,8).

A little later, McKay collaborated with Stanislaw P. Radziszowski in a concerted effort to find R(4,5). Checking all possible graphs was out of the question. Instead, McKay and Radziszowski focused on developing efficient procedures for mathematically "gluing" together small graphs, which could be checked more easily, to make graphs large enough for the problem at hand.

Because their search technique involved using the same computer program many times with different data, the researchers could readily partition their task into small pieces. This meant that they could do the necessary computations on many desktop computers rather than having to rely on a single supercomputer.

Both of the institutions where the researchers were based had a large number of workstations located in staff offices and student laboratories. Many of these machines were often idle during the night or on weekends. By commandeering these resources at odd times, the mathematicians could have as many as 110 computers working simultaneously on the problem.

By early 1993, McKay and Radziszowski had their answer: R(4,5) = 25. At that time, however, the prospect of cracking the next candidate, R(5,5), appeared bleak.

Erdős liked to tell an allegorical tale about the difficulties of determining Ramsey numbers. Suppose an evil spirit were to come to Earth and declare, "You have two years to find R(5,5). If you don't, I will exterminate the human race." In such an eventuality, it would be wise to get all the computers in the world together to solve the problem. "We might find it in two years," Erdős predicted.

But if the evil spirit were to ask for R(6,6), Erdős noted, it would be best to forgo any attempt at the computation and instead launch a preemptive strike against the spirit to destroy him before he destroys us.

On the other hand, "if we could get the right answer just by thinking, we wouldn't have to be afraid of him, because we would be so clever that he couldn't do any harm," Erdős concluded.

Erdős himself was a master of innovative techniques for solving problems, sometimes even exploiting randomness and probability to establish the existence of structures that he was determined to corral in his many party puzzles.

Previously: Planes of Budapest
Next: Dartboard Estimates

February 22, 2021

Planes of Budapest

Nearly every Sunday during the winter of 1933 in Budapest, a small group of students would meet somewhere in the city at a park or cafe to discuss mathematics. The gathering typically included Paul Erdős (1913-1996), George Szekeres (1911-2005), and Esther Klein (1910-2005).

In addition to feeding their passion for mathematics, the students enjoyed exchanging personal gossip and talking politics. In a way, they had their own, informal university without walls, and they relished the chance to explore new ideas.

On one particular occasion, Klein challenged the group to solve a curious problem in plane geometry that she had recently encountered. Suppose there are five points positioned anywhere on a flat surface, as long as no three of the points form a straight line. When four points are joined, they form a quadrilateral—a four-sided figure.

Klein had noticed that given five points, four of them always appeared to define a convex quadrilateral. The term convex means that the figure has no indentations. Thus, a square is convex, but a crescent quadrilateral (an angular boomerang or arrowhead shape with four vertices) is not.

Here, ACDE is a convex quadrilateral, but ABCE (shaded) is not because of the indentation at B.

Klein asked if anyone could prove that a convex quadrilateral has to be present in any given set of five points lying in a plane, with no three points in a line. After letting her friends ponder the problem, Klein explained her proof.

She reasoned that there are three different ways in which a convex polygon encloses all five points. You can imagine the points as nails sticking out of a board with an elastic band stretched so that it rings the largest possible number of nails.

The simplest case occurs when a convex polygon results from joining four points to create a quadrilateral, which fences in the remaining point and automatically satisfies the requirement.

In the second case, if the convex polygon is a pentagon incorporating all five points, then any four of these points can be connected to form a quadrilateral.

Finally, if the convex polygon includes just three points to create a triangle, the two points left inside the triangle define a line that splits the triangle so that two of the triangle's points are on one side of the line. These two points plus the two interior points automatically form a convex quadrilateral.

Inspired by Klein's argument, Szekeres went on to generalize the result, proving that among points randomly scattered across a flat surface, you can always find a set that forms a particular polygon—if there are enough points. For example, you can always find a convex pentagon in an array of nine points but not among just eight points.

Szekeres and Erdős then proposed that if the number of points that lie in a plane is 1 + 2ᵏ⁻², you can always select k points so that they form a convex polygon with k sides. Thus, for a quadrilateral, k is 4, and the number of points required will be 1 + 2⁴⁻² = 1 + 2² = 5. For a convex pentagon, k is 5, and the number of points required will be 9.
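Both halves of this story can be probed in a few lines of Python (a sketch only; the cross-product convexity test is a standard computational-geometry device, not part of the original account): the formula's predicted values, and Klein's five-point claim checked on random point sets.

```python
from itertools import combinations
import random

def conjectured_points(k: int) -> int:
    """Erdos-Szekeres conjecture: 2**(k - 2) + 1 points in general
    position should force a convex polygon with k sides."""
    return 2 ** (k - 2) + 1

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (sign gives turn direction)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_quadrilateral(pts) -> bool:
    """Four points form a convex quadrilateral exactly when none of
    them lies inside the triangle formed by the other three."""
    for i in range(4):
        a, b, c = [pts[j] for j in range(4) if j != i]
        d1, d2, d3 = cross(a, b, pts[i]), cross(b, c, pts[i]), cross(c, a, pts[i])
        if (d1 > 0) == (d2 > 0) == (d3 > 0):
            return False  # pts[i] is strictly inside triangle abc
    return True

def contains_convex_quad(points) -> bool:
    return any(convex_quadrilateral(four) for four in combinations(points, 4))

print(conjectured_points(4), conjectured_points(5))  # 5 9

rng = random.Random(1)
ok = all(
    contains_convex_quad([(rng.random(), rng.random()) for _ in range(5)])
    for _ in range(1000)
)
print(ok)  # True: every trial of 5 random points yields a convex quadrilateral
```

Random floating-point coordinates are in general position with overwhelming probability, so every one of the thousand trials contains a convex quadrilateral, just as Klein proved.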

In describing this occasion in a memoir written many years later, Szekeres recalled: "We soon realized that a simple-minded argument would not do and there was a feeling of excitement that a new type of geometrical problem [had] emerged from our circle which we were only too eager to solve."

Erdős dubbed it the "happy end" problem. The ending he had in mind was the subsequent marriage of Szekeres and Klein. The problem also turned out to be an application of a theorem developed by Frank Ramsey (see "Playing Fields of Logic").

Interestingly, no one has yet proved the conjecture concerning the precise number of points required to guarantee the presence of a convex polygon. Indeed, the problem is solved only for the values k = 3, 4 (due to Klein), and 5. So, no one has even demonstrated that any set of 17 points in the plane always contains six points that are the vertices of a convex hexagon.

Erdős himself later played a central role in the transformation of Ramsey theory, which involves the application of Ramsey's theorem and related propositions, from a collection of isolated results into a coherent body of work.

Though pursued mainly for the thought-provoking mathematical puzzles that it suggests, Ramsey theory has begun to play a role in the design of vast computer and telephone networks. Techniques used in the theory can help engineers and computer scientists to take advantage of patterns that inevitably arise in large systems in which data storage, retrieval, and transmission have a significant component.

In the meantime, work has continued on the original conjecture proposed by Erdős and Szekeres. Expressed mathematically, the problem asks: For any integer n greater than or equal to 3, determine the smallest positive integer N(n) such that any set of at least N(n) points in general position in the plane (that is, no three of the points are on a line) contains n points that are the vertices of a convex n-gon.

Despite the efforts of many mathematicians, the Erdős-Szekeres problem is still far from being solved. At the same time, the unsolved problem has suggested a wide variety of related questions worthy of exploration. Some mathematicians have looked for other types of patterns among sets of points, and others have generalized the question to higher dimensions and other situations.

That's the rich legacy of an elegant problem first suggested many decades earlier and still attracting mathematical attention.

February 21, 2021

Playing Fields of Logic

Ramsey theory owes its name to Frank Plumpton Ramsey, an English mathematician, philosopher, and economist. His father, Arthur Stanley Ramsey, was a professor of mathematics and the president of Magdalene College at the University of Cambridge.

Frank Ramsey (1903-1930). MAA Convergence Portrait Gallery.

Frank Ramsey was born in 1903 and spent nearly his entire life in Cambridge. After he graduated in 1925 as Cambridge University's top math scholar, he remained at the university, quickly developing a reputation as an enthusiastic lecturer who could present ideas with great clarity. Applying his keen intellect, he made significant, wide-ranging contributions to the study of theoretical economics and other fields.

One of Ramsey's contemporaries, philosopher George E. Moore (1873-1958), wrote that Ramsey "combined very exceptional brilliance with very great soundness of judgment in philosophy. He was an extraordinarily clear thinker: no one could avoid more easily than he the sort of confusions of thought to which even the best philosophers are liable, and he was capable of apprehending clearly, and observing consistently, the subtlest distinction."

Ramsey was fascinated by questions of logic. He had been deeply influenced by Bertrand Russell (1872-1970), a philosopher and mathematician who believed that mathematics is synonymous with logic. Russell's ambitious goal was to prove that "all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all propositions are deducible from a very small number of fundamental logical principles."

These words appear in Russell's book The Principles of Mathematics, in which he set forth his landmark thesis.

Russell was keenly interested in how we come to know what we know—how we draw valid conclusions from evidence. For him, such questions required close study of logic puzzles and paradoxes to help expose potential cracks in structures of belief.

"A logical theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve the same purpose as is served by experiments in physical science," Russell argued.

Russell was not the only one who was concerned about logic and the foundations of mathematics. At the beginning of the 20th century, David Hilbert (1862-1943) had argued that there had to be a clear-cut procedure for deciding whether a given logical proposition follows from a given set of axioms (an axiom is a statement that is assumed to be true without proof).

Ramsey set out to demonstrate that such a decision process exists for at least one particular type of mathematical problem. As one of the steps in his argument, he considered relationships among sets of whole numbers, and he proved that if the number of objects in a set is sufficiently large and each pair of objects has one of a number of relations, there is always a subset containing a certain number of objects where each pair has the same relation.

In other words, there are patterns implicit in any sufficiently large set or structure. We can see Ramsey's theorem applied in cases in which the objects are groups of six people (see "Party Games") and whole numbers in increasing and decreasing sequences (see "Pigeonhole Congestion").

Though Ramsey was probably aware that his theorem could be applied to many different areas of mathematics, he focused on its application to mathematical logic. He showed that, given a set of axioms, certain true relationships must exist within the logical system.

Ramsey read his paper containing the result, "On a Problem of Formal Logic," to the London Mathematical Society in 1928, and it was published two years later.

Ramsey died in 1930, at the age of 26, as a result of complications associated with abdominal surgery for a grave illness. Yet he left an astonishing legacy of achievement in several fields. Ironically, the mathematical work for which he is now best known attracted little attention in his day.

Moreover, subsequent developments showed that Ramsey's success in finding a decision procedure for a particular class of mathematical problems was not representative of the general case.

Just a few years after Ramsey's death, Kurt Gödel (1906-1978), followed by Alan Turing (1912-1954) and others, proved that no such decision procedure was possible for any logical system of axioms and propositions sufficiently sophisticated to encompass the kinds of problems that mathematicians work on every day.

Typically, it's possible to make true statements or propose theorems that you can't prove within the given system without adding additional axioms.

Although Ramsey's theorem is accurately attributed to Frank Ramsey, its initial popularization stemmed from its application in geometry and the playful antics of a group of young mathematicians in Hungary in the 1930s.

Next: Planes of Budapest

February 18, 2021

Pigeonhole Congestion

Sorting the mail that comes into an office generally requires that each piece be slipped into the appropriate slot of an array of pigeonholes—one for each employee. Suppose that a small business needs ten such slots. When 11 pieces of mail arrive, one or more of the slots will have to contain at least two items.

So, if there are more pigeons than holes, some of the pigeons have to double up. Expressed mathematically, the so-called pigeonhole principle is a handy idea for proving theorems, and it often comes up in Ramsey theory to demonstrate the inevitability of certain patterns (see "Party Games").

Pigeons-in-holes image by en:User:BenFrantzDale and en:User:McKay, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4658682

Perhaps the simplest possible application of the pigeonhole principle concerns groups of men and women. If there are three people present in a gathering, at least two must be men or at least two must be women.

The same principle can be applied to any allocation of a given type of object, whether it is balls dropped into boxes, people slotted into a number of categories, or numbers meeting particular criteria.

For a more subtle, nontrivial use of the pigeonhole principle, we can look at sequences of numbers. Consider, for example, the set of numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this case, the numbers are written in an increasing sequence. Each successive number is larger than its predecessor.

The numbers can also be written as a decreasing sequence: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0.

Now suppose the order of these numbers is randomly scrambled. Mathematicians argue that the resulting sequence of numbers will always contain a shorter sequence (or subsequence) of at least four increasing or at least four decreasing numbers.

For example, in the scrambled set 1, 2, 8, 0, 3, 6, 9, 4, 5, 7, the numbers 1, 2, 8, 9 represent an increasing subsequence. The scrambled sequence 8, 4, 9, 2, 1, 6, 3, 7, 0, 5 happens to contain a decreasing subsequence consisting of 8, 4, 2, 1, 0.

How can we be sure that an arbitrary sequence of ten different numbers will have an increasing or decreasing subsequence containing at least four numbers?

The number of differently ordered sets containing ten members is 10 ✕ 9 ✕ 8 ✕ 7 ✕ 6 ✕ 5 ✕ 4 ✕ 3 ✕ 2 ✕ 1 (or 10!), which equals 3,628,800. As in the party-of-six example, the array of possible arrangements is too large for each one to be checked individually.

The most direct, elegant proof of this claim hinges on the pigeonhole principle. Consider the sequence 3, 0, 6, 2, 1, 8, 7, 9, 5, 4. The idea is to put the numbers in an array of slots so that consecutive numbers in each row increase going to the right and in each column decrease going down (below). The first number goes into the slot in the upper-left corner.

The next number, 0, is smaller than 3, so it goes into the slot below 3. The following number, 6, is larger than 0, so it can't go in the slot beneath 0. It goes in the slot to the right of 3.

The number 2 is larger than 0 but smaller than 6, so it fits into the slot to the right of 0 and below 6. Similarly, 1 is smaller than 2 and 6 and larger than 0, so the only place it fits according to the allocation rule is in the slot below 2.

Each number in turn can be slotted into the array, and from the result it's easy to read off both increasing (3, 6, 8, 9) and decreasing (8, 7, 5, 4) subsequences of length four (below).

Because every number has to fit into its own slot, the diagram suggests that the very best that anyone can do is to cram nine of the numbers into a three-by-three corner, leaving the tenth number to complete either an increasing or decreasing subsequence of length four.

In general, there has to be at least one such subsequence in any scrambled set of numbers.

For strings of 101 arbitrarily ordered numbers, you will always find increasing or decreasing subsequences of length 11 or more, but that's not necessarily true of a sequence of just 100 different, scrambled numbers.

Generally, it's possible to prove that for n² + 1 different numbers in a sequence, you will always get increasing or decreasing subsequences of length at least n + 1.

In the example given above, n = 3, so sequences have 3² + 1, or 10, members and subsequences have a length of at least 3 + 1, or 4.
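The guarantee is also easy to probe by computer. The Python sketch below (an added illustration) measures the longest increasing or decreasing subsequence by simple dynamic programming and checks random shuffles of ten numbers:

```python
import random

def longest_monotone(seq) -> int:
    """Length of the longest increasing or decreasing subsequence,
    found by quadratic dynamic programming."""
    n = len(seq)
    inc = [1] * n  # inc[i]: longest increasing subsequence ending at seq[i]
    dec = [1] * n  # dec[i]: longest decreasing subsequence ending at seq[i]
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i]:
                inc[i] = max(inc[i], inc[j] + 1)
            if seq[j] > seq[i]:
                dec[i] = max(dec[i], dec[j] + 1)
    return max(inc + dec)

rng = random.Random(1)
digits = list(range(10))
ok = True
for _ in range(2000):
    rng.shuffle(digits)
    ok = ok and longest_monotone(digits) >= 4
print(ok)  # the pigeonhole argument says this is always True
```

Two thousand random shuffles of 0 through 9 all contain a monotone subsequence of length at least four, exactly as the slot-array argument predicts.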

The pigeonhole principle is one tool that mathematicians can use to identify certain patterns that are inevitably present in a random array. In a more general sense, it also applies to everyday situations.

Anyone who frequently plays solitaire card games has undoubtedly noticed that when the cards start falling into place early, the game more often than not proceeds rapidly and open slots get filled, leaving chance a smaller field on which to confound the ultimate result. In some games, there's no need to unveil the last few hidden cards because a successful outcome is already guaranteed.

So, patterns and predictability can arise in a variety of ways even in the realm of pure chance. Ramsey theory involves establishing the guarantees for such unexpectedly regular behavior among sequences of numbers, in geometric arrays of dots and lines, and in mathematical logic.

Previously: Party Games
Next: Playing Fields of Logic

February 11, 2021

The Long Run

Quite often, in a race game governed strictly by chance, the player who starts out ahead stays ahead for most, if not all, of the race. This striking feature is worth examining more closely.

In a two-player coin-flipping game, heads and tails will each win half the time, on average (see "Rolls and Flips"). But in a game of a thousand flips, when the total number of heads versus the total number of tails is at issue, there is a surprisingly good chance (roughly 40 percent) that one player will be ahead in the cumulative scoring more than 90 percent of the time.

In other words, the proportion of time that, say, the number of heads exceeds the number of tails can be very high. Although it's equally likely that either of the two players will be in the lead at any given moment, one of them will probably have the lead nearly the whole time.
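This behavior, a consequence of what probabilists call the arcsine law, can be simulated directly. The Python sketch below (an added illustration) plays many thousand-flip games and reports how often one side held the lead for more than 90 percent of the flips; the arcsine law puts that share near 40 percent, far higher than intuition about frequent lead changes would suggest.

```python
import random

def lead_fraction(n_flips: int, rng: random.Random) -> float:
    """Play one game of n_flips fair-coin tosses; return the fraction
    of flips during which the dominant side was strictly ahead."""
    diff = 0         # running total of heads minus tails
    heads_ahead = 0  # flips at which heads led strictly
    tails_ahead = 0  # flips at which tails led strictly
    for _ in range(n_flips):
        diff += 1 if rng.random() < 0.5 else -1
        if diff > 0:
            heads_ahead += 1
        elif diff < 0:
            tails_ahead += 1
    return max(heads_ahead, tails_ahead) / n_flips

rng = random.Random(1)
games = [lead_fraction(1000, rng) for _ in range(2000)]
share = sum(f > 0.9 for f in games) / len(games)
print(f"share of games with one side ahead >90% of the time: {share:.2f}")
```

Lopsided leads are thus the rule rather than the exception: a substantial fraction of games are dominated nearly start to finish by whichever side happens to pull ahead early.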

Such a counterintuitive reality hinges on the fact that, like dice, cards, and roulette wheels, coins don't have memories. Each coin toss is independent of all other tosses. Even if the first few tosses happen to result in more heads than tails, the probability of getting either a head or a tail on each subsequent toss remains precisely ½.

Whatever slight trend gets established at the beginning doesn't necessarily get washed out, even though, in the long run, the proportion of heads to tails generally gets closer and closer to ½ as the number of tosses increases.

Gamblers and other game players often incorrectly reason that because the numbers of heads and tails even out over the long run, a sequence of eight heads out of ten means that tails are somehow more likely to come up in the next ten or twenty tosses to establish a proper balance. They imagine that the probabilities involved are sometimes elastic, stretching more tightly or loosening as necessary to bring the results back in line with expected trends.

What people overlook is the fact that the proportion of heads and tails also approaches ½ if, in subsequent tosses, heads and tails appear equally often—as they should.

Suppose heads have shown up eight times and tails twice in the first ten flips. The proportion of heads is 0.8 at this stage. In the next two thousand turns, the players happen to toss a thousand heads (for a total of 1,008) and a thousand tails (a total of 1,002). Heads are still in the lead, but the proportion of heads is now considerably closer to 0.5 (or ½).
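The arithmetic of this example is easy to replay. The sketch below (an added illustration; exact counts vary with the random seed) starts from eight heads in ten flips and lets a fair coin run for two thousand more:

```python
import random

rng = random.Random(1)
heads, flips = 8, 10  # the lopsided start described above
for _ in range(2000):
    heads += rng.random() < 0.5  # True counts as 1
    flips += 1
tails = flips - heads
print(f"after {flips} flips: {heads} heads, {tails} tails, "
      f"proportion of heads = {heads / flips:.3f}")
```

No compensating excess of tails is needed: the early surplus of heads simply gets swamped, and the proportion drifts toward 0.5 on its own.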

Thus, flipping four heads in a row with a fair coin doesn't make it any more likely that the next flip will be tails. But it does shift the expected outcome: after those four heads, the most likely final count in 100 flips is 52 heads rather than 50.

Similarly, in a simple, two-player race game involving a single die, the player who's ahead at any given time has the higher probability of winning. Even at a hundred squares from the goal, a player merely four spaces behind faces odds of two to one against winning if it's his opponent's turn to throw the die.

Gamblers at roulette tables in casinos fall into the same kind of trap when they keep track of the numbers that happen to come up on the wheel. They usually believe that they can identify patterns in the sequences of numbers that come up.

However, if the roulette wheel is truly unbiased, such efforts are fruitless. What happens on one turn of the wheel has no bearing on what occurs on the next.

Because the casino holds one or two numbers on the wheel as its own, the casino bank is certain to beat the majority of players in the long run. Those gamblers who continue to play long enough will be ruined sooner or later, and no system of betting and number or color selection can prevent such an outcome.

Of course, apparent patterns do inevitably come up in the distribution of the numbers, but these ephemeral patterns don't mean a thing unless the wheel is imperfect.

The 19th-century Russian novelist Fyodor Dostoyevsky (1821-1881) became seriously hooked on roulette in 1863, when he won a considerable fortune on his first foray into playing the wheel. In a letter to his brother Misha, Dostoyevsky wrote:

My dear Misha, in Wiesbaden, I invented a system, used it in actual play, and immediately won 10,000 francs. The next morning I got excited, abandoned the system and immediately lost. In the evening I returned to the system, observed it strictly, and quickly and without difficulty won back 3,000 francs.
When I arrived in Baden-Baden I went to the tables and in a quarter of an hour won 600 francs. This goaded me on. Suddenly I began to lose, could no longer keep my head, and lost every farthing.

So it went in Dostoyevsky's succession of roulette episodes across Europe—a journey marked by requests for loans, unpaid hotel bills, and pawned watches—until 1871, when he finally quit.

Dostoyevsky's masterful short novel The Gambler, written in haste in 1866 to settle a debt, presents a striking portrait of an individual irresistibly drawn into Lady Fortune's web. Dostoyevsky describes a gambler's obsession with the zero slot on the roulette wheel:

It was, of course, a rare happening for zero to come up three times out of some ten or so, but there was nothing particularly astounding about it. I had myself seen zero turn up three times running two days before, and on that occasion one of the players, zealously recording all the coups on a piece of paper, had remarked aloud that no earlier than the previous day that same zero had come out exactly once in twenty-four hours.

Dostoyevsky was certainly not the first to write about the lure of the luck of the draw. Girolamo Cardano (1501-1576), a brilliant, highly regarded physician and mathematician with an impressive array of intellectual achievements to his credit, was another player who found gambling irresistible.

Around 1564, Cardano wrote The Book on Games of Chance, in which he presented competent analyses of backgammon and several other dice games, along with a variety of card games that included an early version of poker. He also excelled at chess, which at that time was accompanied by betting and handicapping.

In a section on who should play and when, Cardano offered the following advice about moderation in gambling:

So, if a person be renowned for wisdom, or if he be dignified by a magistracy or any other civil honor or by a priesthood, it is all the worse for him to play…. You should play rarely and for short periods, in a suitable place, for small stakes, and on suitable occasions, or at a holiday banquet.

It was advice that Cardano himself had trouble heeding. Erratic, argumentative, and obsessed, he gambled incessantly, sending his family into the Milan poorhouse for long stretches. In his brutally frank autobiography, he remarked:

During many years—for more than forty years at the chess boards and twenty-five years of gambling—I have played not off and on but, as I am ashamed to say, every day. Thereby I have lost esteem, my worldly goods, and my time. There is no corner of refuge for my defense, except if someone wishes to speak for me, it should be that I did not love the game but abhorred the circumstances which made me play: lies, injustices, and poverty, the insolence of some, the confusion in my life, the contempt, my sickly constitution and unmerited idleness, the latter caused by others.

The irrational superstitions and strategies of gamblers testify to the extraordinarily vast landscapes of possibility in games of chance. Unpredictability reigns, with patterns appearing out of nowhere, then mysteriously disappearing. Anything is possible, yet the possibilities are doled out in measured doses.

Faced with the astonishing scope of chance, the compulsive gambler lives on the brink, ever focused on the next spin of the wheel or roll of the die.

Using the mathematics of chance, you can predict outcomes, establish conditions for guaranteeing certain results, or, at the very least, show whether any particular game is worth playing.

Games provide structured situations in which people can compute probabilities, often with great precision, and test their hypotheses. They can then bring this knowledge to bear not only in games but also in everyday situations in which uncertainty and randomness play a role. Understanding the laws of probability can guide their decisions in a wide range of matters.

Of course, games teach nothing unless you pay attention to the odds and learn to deal with them. But even precise calculations may not be enough in the face of human greed and deceit.

In his 1753 novel The Adventures of Ferdinand, Count Fathom, Tobias Smollett (1721-1771) noted that his hero, Fathom, learned to "calculate all the chances with the utmost exactness and certainty," only to be fleeced by a team of swindlers. A knowledge of mathematics is no proof against human folly.

The theory of probability combines commonsense reasoning with calculation. It domesticates luck, making it subservient to reason. It stands as an attempt to master the vagaries of uncontrollable circumstance in the face of unavoidable ignorance.

Nonetheless, the tantalizing patterns that come and go among the infinite possibilities of randomness have a way of entangling the human mind. Randomness and pattern intermingle in mysterious ways. Where there is one, the other inevitably lurks.

Previously: Change of Face

February 9, 2021

Change of Face

The serious gamblers in casinos hang out at the craps tables. The basic rules of this two-dice game are simple, but the bewildering array of options for betting on various outcomes creates a fast-paced, insidiously seductive pastime, in which a heady brew of chance, intuition, experience, calculation, and superstition comes into play.

The shooter tosses two dice. If a total of seven or eleven comes up on a beginning roll, the shooter and those wagering with him win whatever amount they bet. If a two, three, or twelve total (called craps) shows up, the shooter and his companions lose.

Players betting against the shooter win if a two or three comes up. They neither lose nor win for a double six (twelve). Any of the remaining six totals (four, five, six, eight, nine, and ten) on a beginning roll starts off a different sequence of play, with different possible bets.

Suppose a shooter replaces the standard dice with a pair of new dice whose faces are marked as follows: 1, 2, 2, 3, 3, 4 and 1, 3, 4, 5, 6, 8. Should the other players or the casino management object?

First, we can check whether the probabilities of the various sums with the new dice are different. We can do this by displaying in a table all the ways in which each sum from two to twelve can be obtained with the two different pairs of dice.

Substituting a pair of "weird" dice with faces labeled 1, 2, 2, 3, 3, 4 and 1, 3, 4, 5, 6, 8 for a pair of standard dice has no effect on games in which only the sums matter. For both sets of dice, there's only one way to roll a sum of two, two ways to roll a sum of three, and so on, as shown in charts of sums corresponding to the possible outcomes for standard dice (left) and weird dice (right). In games in which rolling doubles plays a role, however, there is a difference. There are six ways to roll doubles with standard dice and only four ways to roll doubles with weird dice (shaded squares).

Interestingly, the unusually marked dice (known as Sicherman dice) and the standard dice have exactly the same frequencies for the possible sums. There's only one way to roll a two, two ways to roll a three, and so on.

Although the numbering on the weird dice is completely different from that on the standard dice, all the odds are exactly the same for rolling any given sum. You could make the substitution at the craps table as far as the sums are concerned (see "Weird Dice" by Joseph Gallian).

The trouble is that craps betting puts certain combinations, such as doubles, in special roles (for example, the double six on a beginning roll). With the weird dice, there's no way to get a double two, five, or six, and there are two ways to get a double three.
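A brute-force enumeration confirms both observations, the identical sum frequencies and the differing counts of doubles:

```python
from collections import Counter
from itertools import product

standard = ([1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6])
weird    = ([1, 2, 2, 3, 3, 4], [1, 3, 4, 5, 6, 8])  # Sicherman dice

def sums(pair):
    """Frequency of each sum over all 36 outcomes."""
    return Counter(a + b for a, b in product(*pair))

def doubles(pair):
    """Number of outcomes in which both dice show the same face."""
    return sum(1 for a, b in product(*pair) if a == b)

assert sums(standard) == sums(weird)      # identical sum frequencies
print(doubles(standard), doubles(weird))  # 6 4
```

The sums match outcome for outcome, but the weird dice give only four ways to roll doubles, and no double two, five, or six at all.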

Changing the dice would have a considerable impact on many board games in which rolling doubles or other special combinations affects a player's fate (see "Weird Dice").

In Monopoly, players buy, sell, rent, and trade real estate in cutthroat competition to bankrupt their opponents. They take turns throwing a pair of dice, with the totals indicating how many spaces to proceed along an outside track that includes 22 color-coded properties, four railroads, two utilities, a Luxury Tax square, an Income Tax square, three Chance squares, and three Community Chest squares. Corner squares are marked Go, Just Visiting/In Jail, Free Parking, and Go to Jail.

Players start at Go. A double warrants a second throw, but three consecutive doubles sends a player directly to the In Jail square. To get out of jail, the player must throw a double. If she succeeds, whatever sum she gets decides how many spaces to advance along the board (see also "Monopoly Cheat Sheet").

However, using the nonstandard dice gives a lower probability of rolling doubles (only 4 out of 36 instead of 6 out of 36). Moreover, the chances of landing on a square six spaces away go up twofold, and the chances of landing four, ten, or twelve spaces away on a move out of jail are zero.

Thus, if you happen to own the property St. James Place, which is six spaces away from jail, you are likely to collect lots of rent from players escaping jail. On the other hand, the owner of Virginia Avenue (four spaces away from jail) loses out on the extra business.

In fact, according to calculations made in one study, the change in dice moves St. James Place up from tenth to sixth among the most frequently visited spaces in the game. Virginia Avenue descends from 24th to 27th in the rankings.

It's possible to prove mathematically that the weird dice represent the only alternative numbering of a pair that provides the same sum probabilities as standard dice. Of course, it doesn't matter how you arrange the six numbers on each die, though you can opt for symmetry by placing the numbers so that each set of opposite sides totals five or nine.

You can work out alternative numbering schemes not only for cubic dice but also for dice in the shape of a tetrahedron (four triangular faces), an octahedron (eight triangular faces), a dodecahedron (12 pentagonal faces), or an icosahedron (20 triangular faces). (See "Renumbering the Faces of Dice" by Duane M. Broline.)

For example, a pair of standard octahedrons marked 1, 2, 3, 4, 5, 6, 7, and 8 has the same sum probabilities as a pair with one die marked 1, 3, 5, 5, 7, 7, 9, 11 and the other marked 1, 2, 2, 3, 3, 4, 4, 5. Either set would work in a two-dice game.

There are also other ways to get the same sum results as two standard cubic dice. For example, you can use a tetrahedral die labeled 1, 1, 4, 4 combined with a spinner with 18 equal segments labeled 1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, and 8. This combination yields the same sum probabilities as an ordinary pair of cubes labeled 1 through 6.
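Both equivalences are easy to confirm by enumerating outcomes exactly:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def dist(*randomizers):
    """Exact probability distribution of the sum of independent randomizers."""
    outcomes = list(product(*randomizers))
    total = Counter()
    for combo in outcomes:
        total[sum(combo)] += Fraction(1, len(outcomes))
    return dict(total)

# The octahedral pair with the same sums as two standard 1-8 octahedrons.
std_oct = dist(range(1, 9), range(1, 9))
alt_oct = dist([1, 3, 5, 5, 7, 7, 9, 11], [1, 2, 2, 3, 3, 4, 4, 5])
assert std_oct == alt_oct

# The tetrahedron-plus-spinner combination versus two ordinary cubes.
spinner = [1, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 8]
assert dist([1, 1, 4, 4], spinner) == dist(range(1, 7), range(1, 7))
```

Using exact fractions sidesteps any floating-point quibbles: the distributions match sum for sum.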

Using weird dice brings a fresh perspective to games of chance. It provides a welcome opportunity for examining the interaction between rules and randomizers.

Next: The Long Run

February 8, 2021

Climbing and Sliding

The origins of the game of Chutes and Ladders (or its Snakes and Ladders counterpart) go back many centuries to India and a board game called moksha patam (a Hindu concept that is akin to heaven and hell). Designed as a way of instructing children in religious values, the game graphically depicts the coexistence of good and evil and illustrates the impact of chance on human affairs.

In the game, checkerboard pilgrims follow a winding path that is shortened by virtuous deeds and lengthened by sinful acts or wrongdoing. They triumph when they climb ladders that represent faith, reliability, and generosity, and they fail when they slither down snakes that signify disobedience, vanity, and greed. A die dictates the fortunes of the road.

The game's design and rules have varied over the years. Typically, a grid ten squares long and ten squares wide, the game board features several ladders and snakes (or chutes). The number of shortcuts and detours, their length, and their placement all affect how the game proceeds.

A classic Indian design may have twelve snakes and eight ladders, with one long snake reaching back down to the starting square and a ladder of more modest length extending to the winning square at the top of the board.

In modern Western versions of the game, the moralizing usually takes on a gentler cast, and the snakes are often replaced by chutes (slides). Usually, the number of ladders equals the number of snakes, though in some upwardly mobile versions the ladders predominate.

Example of a Chutes and Ladders game board.

In most of these game designs, the distribution and number of snakes and ladders probably reflect trial-and-error experience when it comes to a combination that makes a pleasing game.

However, although rolls of a die determine the moves, it's possible to use mathematics to analyze the game and figure out where to place the snakes and ladders to ensure that the game is neither too long nor too short.

Of course, you could test a board design by playing hundreds of games and recording the results. Alternatively, you can use a mathematical approach named for mathematician Andrey A. Markov (1856-1922) to compute the average length of a game.

Markov studied systems of objects that change from one state to another, according to specified probabilities. When an initial state can lead to subsequent states, and these states, in turn, can lead to additional states, the result is a Markov chain.

Suppose we examine a game of Chutes and Ladders in which there's only one player. Initially the player's token is off the board, and we call this condition state 0. The first roll of a die produces either one, two, three, four, five, or six, meaning that the player can move his (or her) token to one of the squares numbered 1, 2, 3, 4, 5, or 6 after the first roll.

If the player is lucky and rolls a one, he goes to square 1, and then up the ladder to square 38 without an additional turn. Rolling a four would again put him on a ladder, this time taking him to square 14.

So, as the result of one turn the player can end up in state 2, 3, 5, 6, 14, or 38, with an equal probability of 1/6. Squares 1 and 4 aren't considered states because the token doesn't come to rest there. Similarly, the top of any chute doesn't count as a state.

For the second roll of the die, the game can be in one of six states, and we must consider each possibility separately.

Suppose, for example, that the first throw puts the player in state 2. From there, the second roll moves him to square 3, 4, 5, 6, 7, or 8. Because 4 is the bottom of a ladder, the player immediately ascends to 14. So, starting from state 2, the player can move to state 3, 5, 6, 7, 8, or 14 with equal probability.

If the first roll happened to put the player in state 14, the second roll would bring him to square 15, 16, 17, 18, 19, or 20 with equal probability. However, landing on square 16 would plunge him back to 6. Hence, from state 14 the player can move with equal probability to state 6, 15, 17, 18, 19, or 20.

After two rolls, the game can be in a total of 21 states. These states, however, are not equally likely. For example, there's only one way to reach state 44 at the end of two rolls: by rolling a one and then a six. Therefore, the probability of ending up in state 44 after two rolls is 1/6 ✕ 1/6 = 1/36.

In contrast, there are three ways of getting to state 31: by rolling three and six, by rolling five and four, and by rolling six and three. Thus, the probability of being in state 31 after two throws is (1/6 ✕ 1/6) + (1/6 ✕ 1/6) + (1/6 ✕ 1/6) = 3/36.
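These two-roll figures can be verified by enumeration. The sketch below encodes only the jumps described in this passage (ladders from squares 1, 4, and 9 up to 38, 14, and 31, plus the chute from 16 down to 6), which are the only ones reachable in the first two rolls:

```python
from collections import Counter
from fractions import Fraction

# Jumps reachable in the first two rolls of the game.
jumps = {1: 38, 4: 14, 9: 31, 16: 6}

def roll_from(square):
    """States reachable in one roll, after sliding along any jump."""
    for pips in range(1, 7):
        landing = square + pips
        yield jumps.get(landing, landing)

states = Counter()
p = Fraction(1, 36)  # each two-roll sequence is equally likely
for first in roll_from(0):
    for second in roll_from(first):
        states[second] += p

print(len(states))  # 21 distinct states after two rolls
print(states[44])   # 1/36
print(states[31])   # 1/12, i.e. 3/36
```

The enumeration reproduces the 21 states and the probabilities 1/36 for state 44 and 3/36 for state 31.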

Such calculations can be carried out for three, four, or more rolls to create a complicated Markov chain showing how the game progresses from one set of states to another. For any position of the token, a player can readily determine the probability of ending up on that square.

You can also use the same technique to analyze particular situations. For instance, to place a chute so as to minimize a player's chance of climbing a particular ladder, you can construct a table giving the probabilities of climbing a ladder when a ladder is a given number of spaces ahead, first with no chutes in the way and then with a single chute on one of the intervening squares in front of the ladder.

A player who is one square in front of the ladder has one chance in six of rolling a one to get to the ladder, giving a probability of 1/6, or about .17. A player two squares in front can get to the ladder by rolling either a two (probability 1/6) or two ones (1/36), for a combined probability of 1/6 + 1/36, or about .19.

A player who starts three squares in front must roll a three (1/6), a two and a one (1/36), a one and a two (1/36), or three ones (1/216), for a total probability of about .23 of reaching the ladder.

If there is no chute, the probability of reaching the ladder turns out to be highest when the player is six spaces away.

You might think that placing a single chute immediately in front of a ladder would serve as a strong guard, but calculating the probabilities of landing there instead of on the ladder shows it to be the least effective position. A chute six squares in front is much more difficult to circumvent without missing the ladder as well.

Some of these factors come into play in some well-known versions of Chutes and Ladders. In one version, the top row, or last ten squares, on the board has three chutes. One chute is two squares in front of the finish line. A second is five squares in front, and a third is seven in front.

These chutes turn out to be effective guards, and more often than not a player wins by taking the ladder from square 80 directly to the winning 100th square, completely bypassing the treacherous final stretch.


Indeed, the likelihood of such a finish is reflected in the fact that all three chutes in the top row land the player in the row that leads to the shortcut ladder at square 80, with no other obstacles standing in the way.

An alternative version, called Gigantik® Snakes & Ladders, has a very different set of snakes and ladders. The large game board has two places in which a guard snake immediately precedes a ladder, one near the beginning and another near the end. It also has two lengthy snakes slithering down from the top row and no ladder directly to the finishing square, so that games tend to take longer than those on a standard Chutes and Ladders board.

The theory of Markov chains offers a powerful method of analyzing probabilistic behavior in a variety of systems. Nowadays computers can readily handle large numbers of states, and researchers can quickly analyze the outcomes and fairness of such games as Chutes and Ladders and Monopoly.

The same techniques can also be used in science to model the wanderings of molecules drifting in air, the foraging patterns of birds, and the fluctuations of prices on a stock market.

Previously: Tumbling Dice
Next: Change of Face

February 7, 2021

Tumbling Dice

Like coins (see "Coin Toss Randomness"), cubic dice are subject to physical laws. An unscrupulous player can take advantage of this physics to manipulate chance. A cheat, for instance, can control a throw by spinning a die so that a particular face remains uppermost or by rolling it so that two faces stay vertical. In each case, the maneuver reduces the number of possible outcomes.

A grossly oversized die, in particular, is quite vulnerable to such manipulation. The standardized size of dice used in casinos may well represent a compromise configuration—based on long experience—that maximizes the opportunity for fairness. Casinos and gambling regulations specify the ideal dimensions and weight of dice.

The size of a die can affect how easily it can be manipulated. The two red dice (right) were precisely machined for casino use.

A cheat can also doctor a die to increase the probability of or even guarantee certain outcomes. References to "loaded" dice, in which one side is weighted so that a particular face falls uppermost, have been found in the literature of ancient Greece. Nowadays casino dice are transparent to reduce the chances of such a bias being introduced.

Casino operators detect loaded dice by dropping them carefully into a glass of water. A loaded die will tend to turn while descending through the water, whereas a fair die will sink with little rotation. These computer simulations show a fair die tumbling on all six sides (top), a loaded die constrained to tumble on just four sides (middle), and a fair die constrained to tumble on four sides (bottom). The motion of a loaded die tends to be slightly more erratic than that of a fair die.

Even without deliberately creating a bias, it's difficult to manufacture dice accurately without introducing some asymmetry or nonuniformity. Manufacturers of casino dice take great pains to assure quality.

Typically 0.75 inch wide, a die is precisely sawed from a rectangular rod of cellulose or some other transparent plastic. Pits are drilled about 0.017 inch deep into the faces of the cube, and the recesses are then filled in with paint of the same weight as the plastic that has been drilled out. The edges are generally sharp and square.

In contrast, ordinary store-bought dice, like those used in children's games, generally have recessed spots and distinctly rounded edges. Because much less care goes into the fabrication of such dice, they are probably somewhat biased.

Achieving fairness is even more difficult with polyhedral dice that have eight, twelve, or twenty faces, each of which must be manufactured and finished to perfection.

A 12-sided (or dodecahedral) die.

In principle, an unbiased cubic die produces six possible outcomes. It makes sense to use a mathematical model in which each face has an equal probability of showing up. You can then calculate other probabilities, including how often a certain number is likely to come up.

Several decades ago, statistician Frederick Mosteller had an opportunity to test the model against the behavior of real dice. A man named William H. Longcor, who had an obsession with tossing dice, came to him with an amazing offer to record the results of millions of tosses (see "A Passion for Tossing Dice").

Mosteller accepted the offer, and some time later he received a large crate of big manila envelopes, each of which contained the results of 20,000 tosses with a single die and a written summary showing how many runs of different kinds had occurred.

"The only way to check the work was by checking the runs and then comparing the results with theory," Mosteller recalled. "It turned out [Longcor] was very accurate." Indeed, the results highlighted some errors in the then-standard theory of the distribution of runs.

Because the data had been collected using both casino dice from Las Vegas and ordinary store-bought dice, it was possible to compare their performance not only with theory but also with each other and with a computer that simulated dice tossing.

As it turned out, the computer proved to have a poor random-number generator (see "Random Bits"), whereas the Las Vegas dice were very close to perfect in comparison with theory.

A mathematical model allows us to analyze the behavior of dice in both the short term and the long run and to study how the randomness of tumbled dice interacts with the rules of various games to favor certain strategies and results.

In some versions of Chutes and Ladders (or its Snakes and Ladders counterpart), each player must roll a six in order to start the game and then roll again to make his or her first move. It may actually take a while for that particular number to materialize, but more often than not each player will succeed in four or fewer throws.

The mathematics of chance reveals why. On each of four rolls of a single die, the probability that six will come up is 1/6. Because each roll of a die is independent of any other roll, the probability that six will not come up in four rolls is 5/6 ✕ 5/6 ✕ 5/6 ✕ 5/6, or 625/1296. Hence, the chance of getting a six is 1 − 625/1296 = 671/1296 = .5177, a probability of more than 50 percent.
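The exact value falls out of a few lines with Python's fractions module:

```python
from fractions import Fraction

# Chance of rolling at least one six in four throws of a single die.
p_no_six = Fraction(5, 6) ** 4   # 625/1296
p_six = 1 - p_no_six
print(p_six)                     # 671/1296
print(round(float(p_six), 4))    # 0.5177
```

Working in exact fractions reproduces the 671/1296 figure before converting to a decimal.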

During the 17th century, a favorite gambling game in the salons of Paris involved betting with even odds that a player would throw at least one six in four rolls of a single die. The calculated probabilities demonstrate that anyone who made the bet could win handsomely in the long run, especially if the unsuspecting victim believed intuitively that it would take six rolls to get the desired result.

One variation of the game involved rolling a pair of dice to obtain a double six in 24 throws. In this case, the gambler would lose in the long run if he offered even money on the possibility.

One of the players who were puzzled by such an outcome was writer Antoine Gombaud, Chevalier de Méré (1607-1684). An old gamblers' rule said that two dice come up in six times as many ways as one die cast. Thus, if four is the critical number of throws in a game with one die to reach favorable odds, then six times four, or 24, would be the critical number of throws in a game with two dice.

Gombaud suspected that 24 wasn't the right answer, and he worked out an alternative solution to the problem, looking at the 36 different throws possible with two dice. He wasn't entirely sure of his conclusion, however, so he asked his acquaintance Blaise Pascal (1623-1662), a mathematician and philosopher, to check the reasoning.

Pascal determined that the probability of rolling a double six in 24 throws turns out to be less than ½. No double six comes up in 35 out of the 36 possible outcomes for two dice. Throwing the dice 24 times means that the probability of not getting a double six is 35/36 multiplied by itself 24 times, or (35/36)²⁴ = .5086.

So, the probability of winning is only 1 − .5086, or .4914. Clearly, betting with even odds on such an outcome becomes a losing proposition in the long run.
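Pascal's reasoning takes one line per step in code:

```python
from fractions import Fraction

# Chance of at least one double six in 24 throws of a pair of dice.
p_no_double_six = Fraction(35, 36) ** 24  # no double six in any throw
p_win = 1 - p_no_double_six
print(round(float(p_no_double_six), 4))   # 0.5086
print(round(float(p_win), 4))             # 0.4914
```

At .4914, the even-money wager on a double six in 24 throws loses in the long run, just as Pascal concluded.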

Possible results of throws with two dice. Rolling a fair die has six possible outcomes. In rolling two dice, there are six possible outcomes for the second die, which gives a total of 36 pairs. What's the probability of rolling two dice to obtain, say, five? Because the total comes up in four different pairs (shaded), the probability is 4/36, or 1/9.

Even with one die, however, it often takes a player many more than four throws to roll a particular number. On one occasion in five, a player will not succeed within nine throws, and on one occasion in a hundred not within 25 throws of the die.

Previously: Rolls and Flips

February 6, 2021

Dice Bias

Dice represent an intriguing example of the interplay between randomness, chance, and physical law.

The simplest mathematical model of a standard die is based on the assumption that the die is a perfect cube. A cube has six faces, and each face has an equal probability of coming up—one in six (1/6).

A store-bought die, however, isn't really a perfect cube. Such dice, like those used in children's games, generally have recessed spots and distinctly rounded edges. Imperfections introduced during manufacture and use, which alter the die's center of mass (or gravity), also undoubtedly introduce biases.

One bias apparent to anyone who has observed the distribution of spots on a die involves the six dimples on one face and the single dimple on the opposite face. Clearly, more material has been drilled out of one face than the other. The mass distribution suggests that 6 should come up slightly more often than the other numbers.

How large is the bias? You could try to find out by experiment, but a lot of extraneous factors can confound the results. You could also create a mathematical model of a dimpled die and compute the odds. That's what Ed Pegg Jr. did in developing models for calculating probabilities involving irregular objects.

Here's where physics comes in.

Imagine putting the die flat on a table. By tipping the die, it's possible to roll the die over to another face. Pegg calculated the minimum amount of energy required for each possible tip. The amount of energy needed to tip the die from 3 to 6 is slightly less than the energy needed to tip it from 3 to 1, for example.

In a toss, the die bounces around randomly, losing energy with each bounce. Eventually, the die doesn't have enough energy to escape from a certain orientation. By calculating centers of gravity for different numbers of drilled pits and using those data to create a Markov matrix for each bounce, Pegg came up with the following results.

Computed results for 10,000 tosses of a die with drilled dimples.

Dice used in gambling casinos are shaped much more precisely than those sold in novelty stores and with many board games. To ensure fairness, the drilled pits are filled with paint of the same weight as the plastic that was drilled out to make the spots (see "A Passion for Tossing Dice").

A casino die.

Pegg has spent a lot of time thinking about dice. For example, fair dice don't have to be in the shape of regular polyhedra, such as cubes, icosahedra, and so on.

Pegg compiled a list (with illustrations) of all possible fair dice—ones in which each face has the same relationship to all other faces and each face has the same relationship with the center of gravity. Along with the regular tetrahedron, for instance, you can also use the four-faced polyhedra known as the isosceles tetrahedron and the scalene tetrahedron.

A four-sided die in the shape of a regular tetrahedron.

Pegg listed 30 different shapes for fair dice. "I'd love to see someone make the exotic dice in the list," he remarked.

The fact that dice are subject to physical law also comes up in another context. Pegg observed that, in wargaming and other chancy pursuits, "gamers have an uncanny ability to influence dice." How players take advantage of physical effects to manipulate dice might make for an interesting study, he added.

Originally posted October 26, 1998

February 5, 2021

Rolls and Flips

"The die is cast."
Attributed by Suetonius to Julius Caesar (100-44 B.C.)

Dice are among the oldest known randomizers used in games of chance.

In 49 B.C., when Julius Caesar ordered his troops across the river Rubicon to wage civil war in Italy, the alea of the well-known proverb he quoted already had the standard form of the die we use today: a cube engraved or painted with one to six dots, arranged so that the number of dots on opposite faces totals seven and the faces marked with one, two, or three dots go counterclockwise around a corner.

More than two thousand years earlier, the nobility of the Sumerian city of Ur in the Middle East played with tetrahedral dice. Carefully crafted from ivory or lapis lazuli, each die was marked on two of its four corners, and players presumably counted how many marked or unmarked tips faced upward when these dice were tossed.

Egyptian tombs have yielded four-sided pencils of ivory and bone, which could be flung down or rolled to see which side faces upward. Cubic dice were used for games and gambling in classical Greece and in Iron Age settlements of northern Europe.

Because it has only two sides, a coin is the simplest kind of die. Typically, the two faces of a coin are made to look different (heads or tails), and this distinguishing feature plays a key role in innumerable pastimes in which a random decision hinges on the outcome of a coin toss.

How random is a coin toss? Using the equations of basic physics and the laws of motion, it's possible to predict how long it takes a coin to drop from a known height. Apart from a small deviation due to measurement error, the moment at which it will hit can be worked out precisely.

On the other hand, a properly flipped coin tossed sufficiently high spins so often during its flight that calculating whether it lands heads or tails is practically impossible, even though the whole process is governed by well-defined physical laws (see "Coin Toss Randomness").

Despite the unpredictability of individual flips, however, the results of coin tossing are not haphazard. For a large number of tosses, the proportion of heads is very close to ½.

In the eighteenth century, naturalist Georges-Louis Leclerc, Comte de Buffon (1707-1788), tested this notion by experiment. He tossed a coin 4,040 times, obtaining 2,048 heads (a proportion of 0.5069).

During World War II, an English mathematician held as a prisoner of war in Germany passed the time in the same way, counting 5,067 heads in ten thousand coin tosses.
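Experiments like Buffon's and the prisoner of war's are easy to replay in simulation. The sketch below (the function name and fixed seed are my own choices, used only for illustration) tosses a simulated fair coin and reports the proportion of heads, which settles toward ½ as the number of tosses grows.

```python
import random

def head_proportion(n_tosses, seed=1):
    """Toss a fair coin n_tosses times and return the fraction of heads."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The proportion drifts toward 0.5 as the number of tosses grows.
for n in (100, 10_000, 1_000_000):
    print(n, head_proportion(n))
```

With a million simulated tosses, the proportion of heads typically falls within a fraction of a percent of ½, echoing the historical tallies above.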

Such data suggest that a well-tossed fair coin is a satisfactory randomizer for achieving an equal balance between two possible outcomes. However, this equity of outcome doesn't necessarily apply to a coin that moves along the ground after a toss.

An uneven distribution of mass between the two sides of a coin and the nature of its edge can bias the outcome to favor, say, tails over heads. A U.S. penny of a certain vintage, spinning on a surface rather than in the air, for example, comes up heads only about 20 percent of the time (see "Penny Bias").

To ensure an equitable result, it's probably wise to catch a coin before it lands on some surface and rolls, spins, or bounces to a stop.

Empirical results from coin-tossing experiments support the logical assumption that each possible outcome of a coin toss has a probability of ½, or .5. Once we make this assumption, we can build abstract models that capture the probabilistic behavior of tossed coins—both the randomness of individual tosses and the special kind of order that emerges from the process.

Consider what happens when a single coin is tossed repeatedly. On the first toss, the outcome is either a head or a tail. Two tosses have four (2 ✕ 2) possible outcomes, each with a probability of ¼ (or .25), and three tosses have eight (2 ✕ 2 ✕ 2) possible outcomes.

In general, the number of possible outcomes can be found by multiplying together as many 2s as there are tosses.
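This multiplication rule can be checked by listing the outcomes directly. A quick Python sketch (the helper name is my own) enumerates every sequence of heads and tails for a given number of tosses:

```python
from itertools import product

def all_outcomes(n):
    """List every possible head/tail sequence for n tosses."""
    return list(product("HT", repeat=n))

print(len(all_outcomes(1)))  # 2
print(len(all_outcomes(2)))  # 4
print(len(all_outcomes(3)))  # 8
# In general, n tosses yield 2 ** n possible outcomes.
```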

You can readily investigate the likelihood that certain patterns will appear in large numbers of consecutive tosses. For example, if a coin is tossed, say, 250 times, what's the longest run of consecutive heads that's likely to arise?

A simple argument gives us a rough estimate. Except on the first toss, a run of heads can begin only after a toss showing tails. Thus, because a tail is likely to come up about 125 times in 250 tosses, there are 125 opportunities to start a string of heads.

For about half of those tails, the next toss will be a head. This gives us around sixty-three potential head runs. Roughly half the time, the first head will be followed by a second one. So, around thirty-two runs will consist of two heads or more.

About half of these will contain at least one additional head, meaning that we will get sixteen runs of three heads or more, eight runs of at least four heads, four runs of at least five heads, two runs of six heads or more, and one run of seven heads or more.
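The halving argument above can be put to the test by simulation. This sketch (the names and seed are illustrative choices of mine) records the longest run of heads in each of many 250-toss sessions:

```python
import random
from itertools import groupby

def longest_head_run(n_tosses, rng):
    """Return the longest run of consecutive heads in n_tosses fair tosses."""
    tosses = [rng.random() < 0.5 for _ in range(n_tosses)]  # True = heads
    # groupby splits the sequence into maximal runs; keep only head runs.
    return max((sum(1 for _ in g) for k, g in groupby(tosses) if k), default=0)

rng = random.Random(7)
runs = [longest_head_run(250, rng) for _ in range(1000)]
print(sum(runs) / len(runs))  # average longest run; the argument predicts about 7
```

Averaged over many simulated sessions, the longest run of heads in 250 tosses does indeed come out near seven, in line with the rough estimate.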

That result surprises many people. Those asked to write down a string of heads or tails that looks random rarely include sequences of more than four or five heads (or tails) in a row.

In fact, it's generally quite easy to distinguish a human-generated sequence from a truly random sequence because the one that is written down by a human typically incorporates an insufficient number of long runs.

So, although an honest coin tends to come up heads about half the time, there's a good chance it will fall heads in a sequence of two, three, or four tosses. The chances of that happening ten times in a row are much smaller, but it can still happen.

That's what makes it hard to decide, just from a record of the outcomes of a short sequence of tosses, whether such a string is a chance occurrence or represents evidence that the coin is biased to come up heads more often than not.

Previously: The Die Is Cast