## February 5, 2021

### Rolls and Flips

*Iacta alea est.* ("The die is cast.")

Attributed by Suetonius to Julius Caesar (100-44 B.C.)

Dice are among the oldest known randomizers used in games of chance.

In 49 B.C., when Julius Caesar ordered his troops across the river Rubicon to wage civil war in Italy, the alea of the well-known proverb he quoted already had the standard form of the die we use today: a cube engraved or painted with one to six dots, arranged so that the number of dots on opposite faces totals seven and the faces marked with one, two, or three dots go counterclockwise around a corner.

More than two thousand years earlier, the nobility of the Sumerian city of Ur in the Middle East played with tetrahedral dice. Carefully crafted from ivory or lapis lazuli, each die was marked on two of its four corners, and players presumably counted how many marked or unmarked tips faced upward when these dice were tossed.

Egyptian tombs have yielded four-sided pencils of ivory and bone, which could be flung down or rolled to see which side faces upward. Cubic dice were used for games and gambling in classical Greece and in Iron Age settlements of northern Europe.

Because it has only two sides, a coin is the simplest kind of die. Typically, the two faces of a coin are made to look different (heads or tails), and this distinguishing feature plays a key role in innumerable pastimes in which a random decision hinges on the outcome of a coin toss.

How random is a coin toss? Using basic physics and the laws of motion, it's possible to predict how long a coin takes to drop from a known height. Apart from a small deviation due to measurement error, the moment it hits can be worked out precisely.

On the other hand, a properly flipped coin tossed sufficiently high spins so often during its flight that calculating whether it lands heads or tails is practically impossible, even though the whole process is governed by well-defined physical laws (see "Coin Toss Randomness").

Despite the unpredictability of individual flips, however, the results of coin tossing are not haphazard. Over a large number of tosses, the proportion of heads comes very close to ½.

In the eighteenth century, naturalist Georges-Louis Leclerc, Comte de Buffon (1707-1788), tested this notion by experiment. He tossed a coin 4,040 times, obtaining 2,048 heads (a proportion of 0.5069).

During World War II, the mathematician John Kerrich, interned in Denmark, passed the time in the same way, counting 5,067 heads in ten thousand coin tosses.
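You don't need a prison camp's worth of spare time to repeat these experiments. A quick sketch using Python's `random` module (function name `toss_proportion` is illustrative) shows the proportion of heads settling near ½ as the number of tosses grows:

```python
import random

def toss_proportion(n, seed=None):
    """Toss a simulated fair coin n times; return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# With 10,000 tosses the result hovers near 0.5, much like
# Buffon's 0.5069 and Kerrich's 0.5067.
print(toss_proportion(10_000, seed=1))
```

The seed makes the run repeatable; different seeds give slightly different proportions, all clustering around ½.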

Such data suggest that a well-tossed fair coin is a satisfactory randomizer for achieving an equal balance between two possible outcomes. However, this equity of outcome doesn't necessarily apply to a coin that moves along the ground after a toss.

An uneven distribution of mass between the two sides of a coin and the nature of its edge can bias the outcome to favor, say, tails over heads. A U.S. penny of a certain vintage, spinning on a surface rather than in the air, for example, comes up heads only about 20 percent of the time (see "Penny Bias").

To ensure an equitable result, it's probably wise to catch a coin before it lands on some surface and rolls, spins, or bounces to a stop.

Empirical results from coin-tossing experiments support the logical assumption that each possible outcome of a coin toss has probability ½, or .5. Once we make this assumption, we can build abstract models that capture the probabilistic behavior of tossed coins—both the randomness of individual tosses and the special kind of order that emerges from the process.

Consider what happens when a single coin is tossed repeatedly. On the first toss, the outcome is either a head or a tail. Two tosses have four (2 ✕ 2) possible outcomes, each with a probability of ¼ (or .25), and three tosses have eight (2 ✕ 2 ✕ 2) possible outcomes.

In general, the number of possible outcomes can be found by multiplying together as many 2s as there are tosses.
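This doubling is easy to see by enumerating the outcomes directly; a brief sketch (the helper `outcomes` is illustrative):

```python
from itertools import product

def outcomes(n):
    """All possible head/tail sequences for n tosses of one coin."""
    return list(product("HT", repeat=n))

print(len(outcomes(2)))   # 2 x 2 = 4 outcomes
print(len(outcomes(3)))   # 2 x 2 x 2 = 8 outcomes
print(2 ** 10)            # ten tosses: 1024 outcomes
```

Each of the 2ⁿ sequences is equally likely, so every one has probability (½)ⁿ.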

You can readily investigate the likelihood that certain patterns will appear in large numbers of consecutive tosses. For example, if a coin is tossed, say, 250 times, what's the longest run of consecutive heads that's likely to arise?

A simple argument gives us a rough estimate. Except on the first toss, a run of heads can begin only after a toss showing tails. Thus, because a tail is likely to come up about 125 times in 250 tosses, there are 125 opportunities to start a string of heads.

For about half of those tails, the next toss will be a head. This gives us around sixty-three potential head runs. Roughly half the time, the first head will be followed by a second one. So, around thirty-two runs will consist of two heads or more.

About half of these will contain at least one additional head, meaning that we will get sixteen runs of three heads or more, eight runs of at least four heads, four runs of at least five heads, two runs of six heads or more, and one run of seven heads or more.
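The halving argument above predicts that a run of about seven heads is likely somewhere in 250 tosses. A simulation bears this out; here's a sketch (names are illustrative) that tosses 250 times, records the longest run of heads, and averages over many trials:

```python
import random

def longest_head_run(n, rng):
    """Longest run of consecutive heads in n simulated fair tosses."""
    best = current = 0
    for _ in range(n):
        if rng.random() < 0.5:   # head: extend the current run
            current += 1
            best = max(best, current)
        else:                    # tail: the run is broken
            current = 0
    return best

rng = random.Random(0)
runs = [longest_head_run(250, rng) for _ in range(1000)]
print(sum(runs) / len(runs))   # typically near 7, as the estimate suggests
```

Individual trials vary quite a bit—runs of five or of ten heads both show up—but the average longest run lands close to the back-of-the-envelope answer.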

That result surprises many people. When asked to write down a string of heads and tails that looks random, people rarely include sequences of more than four or five heads (or tails) in a row.

In fact, it's generally quite easy to distinguish a human-generated sequence from a truly random one, because a sequence written down by a human typically contains too few long runs.

So, although an honest coin tends to come up heads about half the time, there's a good chance it will fall heads two, three, or four times in a row. The chances of that happening ten times in a row are much smaller, but it can still happen.
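The arithmetic behind those chances is just repeated halving: each additional head multiplies the probability by ½. A tiny sketch (the function name is illustrative):

```python
def prob_all_heads(k):
    """Probability that k independent fair tosses all land heads."""
    return 0.5 ** k

print(prob_all_heads(2))    # 0.25
print(prob_all_heads(4))    # 0.0625
print(prob_all_heads(10))   # 0.0009765625 -- about 1 chance in 1,024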

That's what makes it hard to decide, just from the record of a short sequence of tosses, whether such a string is a chance occurrence or evidence that the coin is biased to come up heads more often than not.
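One way to frame that decision is to ask how often a fair coin would produce a count of heads as lopsided as the one observed. Under the fairness assumption, exactly k heads in n tosses has probability C(n, k)·(½)ⁿ; a sketch using the binomial coefficient from Python's standard library:

```python
from math import comb

def prob_k_heads(n, k):
    """Probability of exactly k heads in n tosses of a fair coin."""
    return comb(n, k) * 0.5 ** n

# Eight heads in ten tosses looks suspicious, yet a fair coin does
# it about 4.4 percent of the time: 45/1024.
print(prob_k_heads(10, 8))
print(prob_k_heads(10, 5))   # five heads: the single most likely count
```

A short sequence simply doesn't carry enough information to rule out chance, which is the article's point: even quite unbalanced strings arise fairly often from a perfectly honest coin.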

Previously: The Die Is Cast