One of the classic problems of planning ahead concerns a traveling salesman who must visit customers in a number of cities scattered across the country and then return home. The problem is to find the shortest possible route visiting each city only once.
Listing all possible routes, calculating the length of each one, and picking the shortest is one possible strategy for solving the traveling salesman problem. For a large number of cities, however, such a brute-force approach takes too long.
Of course, individual instances of the salesman's problem may have simple solutions that work well under certain conditions. For example, first visiting all locations close to the starting point, then moving farther afield may work.
There's no guarantee, however, that such a strategy works in every possible case. Indeed, proceeding from the origin to the nearest point, then to the nearest point to that, and so on, doesn't generally give the shortest path.
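The nearest-point strategy can be compared with exhaustive search on a toy example. The following sketch uses an invented one-dimensional "map" (cities on a line) chosen purely for illustration; the city coordinates are not from the text.

```python
# Sketch: nearest-neighbor strategy vs. exhaustive search on a toy map.
# Cities sit on a number line; distances are plain absolute differences.
from itertools import permutations

start = 0.0
cities = [1.0, 2.0, -1.5]

def path_length(order):
    """Total distance of an open tour: start, then each city in order."""
    total, pos = 0.0, start
    for c in order:
        total += abs(c - pos)
        pos = c
    return total

# Greedy: always go to the nearest unvisited city.
remaining, pos, greedy = list(cities), start, []
while remaining:
    nxt = min(remaining, key=lambda c: abs(c - pos))
    greedy.append(nxt)
    remaining.remove(nxt)
    pos = nxt

greedy_len = path_length(greedy)  # 0 -> 1 -> 2 -> -1.5 covers 5.5
best_len = min(path_length(p) for p in permutations(cities))  # 5.0
print(greedy_len, best_len)
```

Here the greedy route travels 5.5 units, while starting with the one outlying city first yields 5.0, showing why always heading for the nearest point can fail.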
Problems similar to the traveling salesman problem crop up in many practical situations, such as designing networks to link an array of points, routing telephone calls, allocating resources, setting timetables and schedules, planning budgets, and packing objects in bins.
It turns out that vervet monkeys can also solve the traveling salesman problem—albeit to a limited extent. In 1997, Audrey E. Cramer and C.R. Gallistel reported in an article in the journal Nature that, rather than always heading for the nearest food supply, the monkeys apparently plan their next three visits to minimize distance and travel time.
Vervet monkey (Chlorocebus pygerythrus). Wikipedia
Vervet monkeys eat fruit of various kinds, foraging on patchily distributed supplies. They typically visit several sites in a single trip. "Here we pose a question about their route-choosing mechanism: How many future destinations along the route influence the choice of the next destination?" Cramer and Gallistel asked.
To find out, the researchers modeled their experiment on one performed more than two decades earlier with chimpanzees. Psychologist Emil W. Menzel carried juvenile chimpanzees around an outdoor field while he hid fruit in 18 randomly placed locations. He then released the chimps and recorded the routes taken to collect the fruits.
Menzel reported in a paper in Science that the apes remembered most of the hiding places and the type of food in each, and they rarely rechecked a place they had already emptied of food. Moreover, their search pattern bore no relation to the baiting sequence and tended to minimize the distance traveled.
Cramer and Gallistel's subjects were four adult vervet monkeys, who were carried around as grapes were hidden in six or eight small holes, randomly selected from 25 holes spaced at 1.53-meter intervals in a 5 x 5 grid.
The monkeys ignored the order in which the grapes were hidden and, like the apes, tended to minimize the distance traveled. "However, they never visited more than six hiding places, confirming earlier reports that monkeys and apes differ dramatically in how many potential destinations they can keep in mind at once," the researchers noted.
"A computer algorithm that always chose the closest site as the next destination did as well as our subjects, suggesting that such a mechanism would be adequate," they reported. "However, performance on arrays chosen specifically to test for the influence of destinations beyond the nearest showed that the choice of the next destination was strongly influenced by at least two farther destinations."
In one experiment, monkeys visited the four vertices of a diamond, one of which was the starting point. When a monkey started and ended at the same corner, it didn't simply choose the next nearest site but typically planned ahead, following the shortest route around the diamond (first to a middle location, then the far point, next to the other middle location, and back to the start).
A grape was put in the hole at the monkey's starting point (S) after it had reached the first site, requiring a return to the starting point later on in the visit sequence. The diamond route (SABCS) is shorter than the zigzag route (SACBS).
For both monkeys and chimps, future destinations affect the decision on which site to visit next.
A traveling salesman must visit customers in a given number of cities scattered across the country. The problem is to find the shortest route that takes the traveler just once to each city before returning home.
Listing all possible routes, calculating the length of each one, and picking the shortest is one possible strategy for solving the traveling salesman problem. For a large number of cities, however, such a brute-force approach requires a huge amount of computation.
For example, to find the shortest possible route to visit 10 cities, a computer would have to evaluate 362,880 (9 × 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1) possibilities. As the number of cities increases, the number of possible routes skyrockets.
Optimal route for 10 cities.
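The route-count arithmetic is easy to check: with the starting city fixed, a tour of n cities can visit the remaining n − 1 in any order.

```python
# Quick check of the counting argument: (n - 1)! possible routes,
# counting the two directions of travel separately, as the text does.
from math import factorial

def route_count(n_cities):
    return factorial(n_cities - 1)

print(route_count(10))  # 362880, the figure quoted for 10 cities
print(route_count(20))  # 19!, over 121 quadrillion possibilities
```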
Nonetheless, researchers have found ways to compute optimal tours for a remarkably large number of cities. One such route, worked out in May 2004, visits all 24,978 cities, towns, and villages in Sweden. The tour is about 72,500 kilometers long. No shorter route is possible.
This image represents an optimal tour through 13,509 cities in the United States. Courtesy of William Cook
Mathematician Robert Bosch and student Adrianne Herman developed a computer program that brought the traveling salesman problem into the world of art. They used the routes that result from solving the problem to create continuous-line drawings. See "One Continuous Line."
Art teachers are fond of asking beginning students to make such drawings. The idea is to look at an object, place the tip of your pencil on a sheet of paper, then make a drawing of the object without letting the pencil's tip leave the paper until the picture is finished. That's a lot like finding an optimal route that links a large number of cities.
In their procedure for creating continuous-line drawings, Bosch and Herman started with a black-and-white digital image. Each pixel of the image had a grayscale value between 0 (completely black) and 255 (completely white). The picture was then divided into squares, each one consisting of a block of pixels. Next, the average darkness of each square was computed.
A blank digital "canvas" was then divided into squares to match those of the original image. The computer randomly placed points (representing cities) within each square in such a way that the points were uniformly distributed within the square. The number of points placed in each square was related to the square's darkness. Distances between the points were then computed.
To find an approximate solution to the corresponding traveling salesman problem, Bosch and Herman used a method developed by William Cook and his collaborators. The resulting tour was a continuous-line drawing of the target image.
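The point-placement step can be sketched in a few lines. The grid values, the `max_points` cap, and the function name below are invented for illustration; this is not Bosch and Herman's actual code, just the idea of scattering more "cities" into darker squares.

```python
# Sketch: place random points in each square, more points where darker.
import random

def place_points(gray, max_points=4, cell=1.0, seed=0):
    """gray: 2D list of average darkness values in [0, 1] (1 = black).
    Returns (x, y) points, with more points in darker squares."""
    rng = random.Random(seed)
    points = []
    for row, values in enumerate(gray):
        for col, darkness in enumerate(values):
            k = round(darkness * max_points)  # cities for this square
            for _ in range(k):
                x = (col + rng.random()) * cell  # uniform within square
                y = (row + rng.random()) * cell
                points.append((x, y))
    return points

pts = place_points([[0.0, 0.5], [1.0, 0.25]])  # tiny 2 x 2 "image"
```

Feeding the resulting point set to a traveling-salesman solver then yields the continuous-line drawing.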
Approximately solving the traveling salesman problem for a set of 27,486 carefully placed cities produced an impressive likeness of the Mona Lisa. Courtesy of Robert Bosch.
Magnified view of the route required to create part of a continuous-line portrait of the Mona Lisa. Courtesy of Robert Bosch.
Bosch used the technique to create a number of continuous-line portraits, including ones of Marilyn Monroe and other people. His artful routes sometimes involved more than 100,000 cities.
So, the same sorts of techniques used for solving optimization problems—which range from routing telephone calls and constructing schedules to allocating resources and designing networks—can also play a role in creating ingenious artworks.
A well-known portrait of Carl Friedrich Gauss has been copied many times over the years, with variants even appearing on bank notes and postage stamps. One recent rendition of the portrait was constructed entirely from 48 complete sets of double-nine dominoes.
A portrait of Carl Friedrich Gauss (above) constructed from 48 complete sets of dominoes (detail, below). Courtesy of Robert Bosch.
The work of mathematician Robert Bosch, this remarkable creation and others like it represented the successful application of a novel algorithm for approximating target images as arrays of dominoes.
To create a domino portrait, Bosch started with an image—a picture of Gauss, Abraham Lincoln, Marilyn Monroe, John Lennon, or some other figure—and a certain number of complete sets of dominoes. His goal was to position the dominoes in such a way that, when seen from a distance, the assemblage looked like the target image.
What made this task especially challenging was the self-imposed requirement of using complete sets. Hence, when Bosch worked with 48 sets, he used precisely 48 blank dominoes, 48 dominoes that have a blank square and a square with one dot, and so on. To make this work, the process required delicate compromises on the placement of constituent dominoes.
To find optimal positions for the dominoes, Bosch applied a mathematical technique known as integer programming. Long used in operations research, integer programming has aided schedule construction, route selection, and a variety of other large-scale planning tasks.
Bosch's computer program started by dividing the target image into an array of squares. Some squares were completely white, others were completely black, and the rest were various shades of gray. White squares were assigned the value 9, black squares had the value 0, and gray squares were given intermediate values.
A domino is made up of two squares. A complete set consists of 55 dominoes—10 that are doubles and 45 that are not doubles. Each double has two possible orientations (horizontal or vertical), and each nondouble has four possible orientations. Clearly, double-blank dominoes would most likely go where the grayscale value is 0, and double-nine dominoes would most likely go where the grayscale value is 9.
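The set arithmetic above can be verified directly: a double-nine set has one domino for each unordered pair of values from 0 through 9.

```python
# Enumerate a double-nine domino set as unordered value pairs.
from itertools import combinations_with_replacement

tiles = list(combinations_with_replacement(range(10), 2))
doubles = [t for t in tiles if t[0] == t[1]]

print(len(tiles), len(doubles), len(tiles) - len(doubles))  # 55 10 45
```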
Using integer programming, Bosch's software determined the optimal placement and orientation of each of the available dominoes. In effect, it selected the arrangement that most closely matched the numbers that described the original image. Choosing a larger number of sets of dominoes enabled the rendering of a more accurate approximation of the original image.
A portion of an image was divided into squares (above) with grayscale values between 0 and 9 (below).
Bosch's computer program then worked out the arrangement of dominoes (below) that gave the best possible match with the image's grayscale values. Courtesy of Robert Bosch.
One tradition that flourished more than 200 years ago in Japan, during its period of isolation from the western world, involved Euclidean geometry. Scholars and others would inscribe geometric problems on wooden tablets, then hang the tablets under the eaves of Shinto shrines and Buddhist temples as offerings. Such a tablet is called a sangaku, which means "mathematical tablet" in Japanese.
More than 800 tablets have survived. Many of them feature drawings and problems that concern tangent circles.
Here's one example. Suppose three circles are tangent to one another and rest on a base line. Find a relationship among the radii of the three circles.
Sangaku circle problem.
Problems involving tangent circles can often be solved using the Descartes circle theorem, named for French mathematician René Descartes (1596-1650). In a letter of November 1643 to Princess Elisabeth of Bohemia, Descartes developed a formula relating the curvatures of four circles, each of which touches the other three. He defined the curvature (or bend) of a circle as the reciprocal of its radius. Hence, if the radius of a circle is one-fifth (1/5) that of another, its curvature is 5 times that of the larger circle.
Given four mutually tangent circles with curvatures a, b, c, and d, the Descartes circle equation specifies that 2(a² + b² + c² + d²) = (a + b + c + d)².
Descartes tangent circle configuration.
The same formula holds for three touching circles nested within a fourth circle. In this case, however, the curvature of the outer circle is negative, because the other circles touch it from the inside rather than the outside. The formula also applies to configurations in which one or two of the touching circles are replaced by straight lines. A line is considered a circle with infinite radius and zero curvature.
Given three tangent circles, there are precisely two additional circles that are tangent to all three. If the original circles have curvatures a, b, and c, and the additional circles have curvatures d and d ', the following simple relationship holds: d + d ' = 2(a + b + c).
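Both relations are easy to check numerically. Solving the Descartes equation for d as a quadratic gives d = a + b + c ± 2√(ab + bc + ca); the example curvatures 2, 2, 3 below were chosen for illustration because they yield integer answers.

```python
# Numerical check of the Descartes circle equation and the d + d' relation.
from math import sqrt, isclose

def fourth_curvatures(a, b, c):
    """The two circles tangent to all three: d = a+b+c +/- 2*sqrt(ab+bc+ca)."""
    root = 2 * sqrt(a * b + b * c + c * a)
    return a + b + c + root, a + b + c - root

a, b, c = 2, 2, 3
d, d_prime = fourth_curvatures(a, b, c)  # 15 and -1

# Descartes circle equation for (a, b, c, d)
assert isclose(2 * (a**2 + b**2 + c**2 + d**2), (a + b + c + d) ** 2)
# Sum of the two solutions
assert isclose(d + d_prime, 2 * (a + b + c))
```

The negative curvature −1 corresponds to a circle of radius 1 that surrounds the other three, touching each from the inside.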
Interestingly, Descartes considered only one of the possible circle configurations, and his proof of the curvature relations was incomplete. In 1826, mathematician Jakob Steiner (1796-1863) independently discovered and proved the same result, around the same time that Japanese scholars were posing questions about tangent circles.
In 1841, Philip Beecroft, an English amateur mathematician, rediscovered the Descartes circle formula. Then it was discovered again in 1936 by Frederick Soddy (1877-1956), who had won a Nobel prize in 1921 for his discovery of isotopes. Soddy expressed the theorem in the form of a poem, "The Kiss Precise," which was published in the journal Nature. Here's a portion of one verse:
Four circles to the kissing come,
The smaller are the benter.
The bend is just the inverse of
The distance from the centre…
The sum of the squares of all four bends
Is half the square of their sum.
In another verse, Soddy announced his discovery of the analogous formula for five spheres in three dimensions.
The sphere is much the gayer,
And now besides the pair of pairs
A fifth sphere in the kissing shares.
In one more recent twist, Allan R. Wilks and his collaborators found a remarkable formula relating the curvatures and coordinates of tangent circles. They also extended these results to spheres and higher-dimensional analogs of circles and to circles on hyperbolic and spherical surfaces.
Apollonius of Perga (c. 260-190 B.C.) pondered tangent circles more than 2,000 years ago. They have long been a staple of geometry textbooks and the subject of countless exercises, and they continue to fascinate.
Some mathematical problems are easy to describe but turn out to be notoriously difficult to solve. Nonetheless, despite their reputed difficulty and repeated warnings from those who had failed to solve them in the past, these infamous problems continue to lure mathematicians into hours, days, and even years of futile labor.
In a 2002 presentation on "mathematical problems between order and chaos," Jeffrey C. Lagarias highlighted three such notorious unsolved problems. Susceptible himself to the lure of these tantalizing conundrums, Lagarias admitted that he could have subtitled his talk "some problems I wish I could solve."
In a game of mathematical billiards, a ball moves across a table of a given shape at a constant speed forever. There's no friction to slow the ball down. It simply travels in a straight line until it hits a cushion and rebounds according to the rule that the angle of reflection equals the angle of incidence.
What makes the mathematical game interesting is the table's geometry. Depending on the ball's initial position and direction, its path after repeated reflections can vary considerably within the confines of tables having different shapes. For example, on a circular table, a ball can follow paths that never penetrate an inner circular region of a certain diameter in the middle of the table. A stadium-shaped table leads to unpredictable paths reminiscent of chaos (see "Billiards in the Round" and "Mirror Bounces").
Billiards trajectory within an equilateral triangle.
In the case of tables bounded by straight lines, one unsolved problem concerns triangular tables: Is it true that every triangular table has a periodic orbit—in other words, a billiard-ball path that repeats itself?
In the case of triangles in which all three interior angles are acute, the answer is known to be "yes." In 1992, Michael D. Boshernitzan hinted that there may exist some sort of polygon (or triangle) that has no periodic orbit. However, no one has yet pinned down its identity—if it actually exists.
A Kolakoski sequence is a string of numbers that describes itself. Such a sequence is formed from a pair of 1-digit numbers, m and n. When m = 1 and n = 2, the resulting sequence is made up of "blocks" of single and double 1s and 2s, where a block is a single digit or a pair of digits different from the digit or pair of digits in the preceding block.
To construct the sequence, start with the single digit 1 (the first block). The single 1 means that a block of length one follows the initial block. Because the first block is 1, the second block must be 2 to give the two-block sequence 12.
Now, the single 2 of the second block means that the third block has length two, so 11 is added to the sequence to give 1211. The addition of the two 1s means that the fourth and fifth blocks each have length one, giving first 12112 and then 121121. The addition of 21 means that the sixth block has length two and the seventh block length one: 121121221.
Kolakoski sequence (first 42 digits):
K = 121121221221121122121121221121121221221121…
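The block-by-block construction above translates directly into code: the nth term of the sequence gives the length of block n + 1, and blocks alternate between 1s and 2s.

```python
# Generate the sequence exactly as constructed in the text.
def kolakoski(n_digits):
    seq = [1]        # the first block is the single digit 1
    digit, i = 2, 0  # next block's digit; index of the term giving its length
    while len(seq) < n_digits:
        seq.extend([digit] * seq[i])
        digit = 3 - digit  # alternate 1 <-> 2
        i += 1
    return seq[:n_digits]

prefix = "".join(map(str, kolakoski(42)))
print(prefix)  # 121121221221121122121121221121121221221121
```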
Here are some unanswered questions about this mysterious sequence.
Is there a formula for the nth term?
If a certain string (for example, 21221) occurs in the sequence, must it occur again?
If a certain string occurs in the sequence, must the same digits in reverse order (12212) also occur?
If a certain string occurs in the sequence, must a new string in which 1s and 2s are swapped (12112) also occur?
In the long run, do as many 1s occur as do 2s? In other words, is the limiting frequency of 1s precisely 0.5? So far, Vasek Chvatal has been able to show that the limiting frequency is between 0.499 and 0.501.
Intriguingly, a variant of the Kolakoski sequence involving the digits 1 and 3 may shed light on describing where atoms are located in unusual materials known as quasicrystals.
The 3x + 1 conjecture
The so-called 3x + 1 conjecture has baffled mathematicians for more than 80 years. Also known as the Collatz problem, it concerns a sequence of positive integers.
Start with any positive integer n.
If n is even, divide it by 2 to give n' = n/2. If n is odd, multiply it by 3 and add 1 to give n' = 3n + 1.
For example, starting with 5, you get the following sequence: 5, 16, 8, 4, 2, 1, 4, 2, 1,…
Starting with 11, you get the sequence: 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, 4, 2, 1,…
The numbers generated by these rules are sometimes called "hailstone numbers" because their values fluctuate wildly up and down (as if suspended in turbulent air) before they finally crash (to the ground) as the repeating string 4, 2, 1.
Computational experiments suggest that such a sequence will always eventually crash. However, for some starting integers, it takes a huge number of steps to reach the repeating cycle.
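The rules are simple enough to try for yourself. Starting from 27, for example, the hailstone sequence climbs as high as 9,232 before crashing down to 1, taking 111 steps in all.

```python
# Follow the hailstone rules until the trajectory reaches 1.
def hailstone(n):
    path = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        path.append(n)
    return path

traj = hailstone(27)
print(len(traj) - 1, max(traj))  # 111 steps; peak value 9232
```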
Will you always end up in a repeating cycle? No one has yet found a counterexample, and computations have verified that none exists for starting integers up to at least 10¹⁵.
Other questions concerning the 3x + 1 sequence have proved equally elusive.
Between order and chaos
All three examples fall within an area of mathematics known as dynamical systems—which deals with the evolution over time of systems such as groups of planets or sets of billiard balls moving according to certain rules.
"These are dangerous problems," Lagarias warned. Attempting them can be risky to your health and career. He also offered the following "advice" about deciding which mathematical problems are worth pursuing.
Question: How can you tell whether a problem is hopelessly difficult?
Answer: Judgment.
Question: How do you develop good judgment?
Answer: Good judgment comes from experience; experience comes from bad judgment.
Have you heard the one about an itinerant entertainer traveling with a wolf, a goat, and a basket of cabbages?
The showman comes to a river and finds a small boat that holds only himself and one passenger. For obvious reasons, he can't leave the wolf alone with the goat, or the goat with the cabbages. How does he get his cargo safely to the other side? (See "Tricky Crossings.")
Such puzzles and their many variants are ways of dressing up a relatively straightforward mathematical problem. Since the days of ancient Egypt and Babylon, instructors have often used such devices to turn routine mathematical exercises into problems that tickle and challenge the mind (see "Problems to Sharpen the Young" and "The Ladies' Diary").
Mathematical puzzles and games remain remarkably popular. Puzzle addicts throughout the world snap up many of the hundreds of such books published every year. A wide range of magazines feature puzzle pages, often including sudoku and other venerable time sinks.
The appearance of a new, ingenious puzzle can stir up frenzied activity. Within three years of its introduction, sales of Rubik's cubes exceeded 100 million.
Amusement is one of humankind's strongest motivating forces. Although mathematicians sometimes belittle a colleague's work by labeling it "recreational" mathematics, much serious (and useful) mathematics has come out of recreational problems, which test mathematical logic and reveal mathematical truths.
Philosopher and mathematician Bertrand Russell noted, "A logical theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve much the same purpose as is served by experiments in physical science."
Probability theory originated in questions about gambling. Ingenious solutions to tiling problems are seen in the folk art of many cultures and in physical theory for describing crystals.
Similar connections between recreational problems and important mathematical questions occur in number theory, geometry, graph theory, and many other areas of mathematics.
Mathematical physicist John L. Synge summed it up: "In submitting to your consideration the idea that the human mind is at its best when playing, I am myself playing, and that makes me feel that what I am saying may have in it an element of truth."
Of all the popes who have headed the Catholic Church over the centuries, only one has been a mathematician. Gerbert of Aurillac (c. 946–1003) was Europe's leading mathematician before becoming Pope Sylvester II in 999.
Gerbert of Aurillac.
In the article "When the Pope was a Mathematician," published in the College Mathematics Journal, Leigh Atkinson noted that Gerbert received practically no instruction in mathematics when he went to school at a monastery close to Aurillac, a village in south-central France.
Among the very few mathematical works that might have been available to Gerbert was an oft-copied, widely circulated manuscript called Propositiones ad acuendos juvenes. The Latin title can be translated as "Problems to sharpen the young." It's usually attributed to Alcuin of York (735–804).
The text contains 53 (or so) word problems (with solutions), in no particular pedagogical order. Among the most famous of these problems are four that involve river crossings (see "Tricky Crossings"), including the problem of three jealous husbands (each of whom can't let another man be alone with his wife), the problem of the wolf, goat, and cabbage, and the problem of "the two adults and two children where the children weigh half as much as the adults."
In Atkinson's opinion, the most interesting problem in the collection is the following (translated from the Latin):
There is a ladder that has 100 steps. One dove sat on the first step, two doves on the second, three on the third, four on the fourth, five on the fifth, and so on up to the hundredth step. How many doves were there in all?
Alcuin's solution is to note that there are 100 doves on the first and 99th steps, 100 more on the second and 98th, and so on for all the pairs of steps, except the 50th and 100th. That gives 49 × 100 + 50 + 100 = 5,050 doves.
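The pairing argument is the same trick famously attributed to the young Gauss, and it is easy to confirm:

```python
# Alcuin's pairing argument: 49 pairs of steps summing to 100 doves each,
# plus the unpaired 50th and 100th steps.
pairs = sum(100 for k in range(1, 50))  # steps k and 100 - k, k = 1..49
total = pairs + 50 + 100

assert total == sum(range(1, 101))
print(total)  # 5050
```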
Problem 43 was "composed for rebuking" troublesome students, and there's no given solution.
A certain man has 300 pigs. He ordered all of them slaughtered in 3 days, but with an uneven number killed each day. What number were to be killed each day?
"One wonders how long the students needed to realize that three odd numbers can't add up to 300," Atkinson commented.
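The parity argument can also be confirmed by brute force: three odd numbers always sum to an odd number, so they can never total 300.

```python
# Exhaustive search for three positive odd numbers summing to 300.
solutions = [
    (a, b, 300 - a - b)
    for a in range(1, 300, 2)           # odd pigs on day one
    for b in range(1, 300 - a, 2)       # odd pigs on day two
    if 300 - a - b > 0 and (300 - a - b) % 2 == 1
]
print(solutions)  # [] -- no solution exists
```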
Problem 26 offers a bit of a calculus flavor.
There is a field that is 150 feet long. At one end stood a dog; at the other, a hare. The dog chased the hare. Whereas the dog went 9 feet per stride, the hare went only 7. How many feet and how many leaps did the dog take in pursuing the fleeing hare until it was caught?
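Under the usual reading of the problem, dog and hare leap simultaneously, so each leap closes the gap by 9 − 7 = 2 feet:

```python
# Dog gains 2 feet per leap on a 150-foot head start.
head_start = 150
gain_per_leap = 9 - 7

leaps = head_start // gain_per_leap  # 75 leaps
feet = leaps * 9                     # 675 feet run by the dog
print(leaps, feet)
```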
Problem 12 is the forerunner of what are now known as barrel-sharing brainteasers.
A certain father died and left as an inheritance to his three sons 30 glass flasks, of which 10 were full of oil, another 10 were half full, while another 10 were empty. Divide the oil and flasks so that an equal share of the commodities should equally come down to the three sons, both of oil and glass.
Problem 5 from the Alcuin collection represents the first known appearance in Europe of a type often called the "hundred fowls" problem, from a 5th-century version that features 100 birds (cocks, hens, and chicks). By Alcuin's time, the problem and several variants were already available in Indian and Arabic texts.
A merchant wanted to buy 100 pigs for 100 pence. For a boar, he would pay 10 pence; for a sow, 5 pence; while he would pay 1 penny for a couple of piglets. How many boars, sows, and piglets must there have been for him to have paid exactly 100 pence for the 100 animals?
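A short exhaustive search settles the merchant's purchase. Note the price of piglets: 1 penny buys a couple, so piglets come in pairs at half a penny each, and doubling all the prices keeps the arithmetic in whole pennies.

```python
# Brute-force search for the hundred fowls (here, hundred pigs) problem.
solutions = []
for boars in range(0, 11):        # 10 pence each, so at most 10 affordable
    for sows in range(0, 21):     # 5 pence each
        piglets = 100 - boars - sows
        # total cost doubled: 20b + 10s + 1 per piglet (= 1/2 penny each)
        cost_twice = 20 * boars + 10 * sows + piglets
        if piglets >= 0 and piglets % 2 == 0 and cost_twice == 200:
            solutions.append((boars, sows, piglets))
print(solutions)  # [(1, 9, 90)]
```

So the merchant bought 1 boar, 9 sows, and 90 piglets: 10 + 45 + 45 = 100 pence for 100 animals.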
The collection contains six other versions of this problem.
Problem 52 is another one that has survived in various forms to this day.
A certain head of household ordered that 90 modia of grain be taken from one of his houses to another 30 leagues away. Given that this load of grain can be carried by a camel in three trips and that the camel eats one modium per league, how many modia were left over at the end of the journey?
Modern versions are sometimes described as "jeep" problems because they involve a jeep in the desert with n cans of fuel and a distant destination.
Browsing the problems (and solutions) in Propositiones ad acuendos juvenes provides fascinating glimpses of various aspects of life in medieval times. And it testifies to the enduring power of puzzles in mathematical education.
"Recreational problems have immense pedagogic utility," David Singmaster once commented. "Some of these problems have fascinated students for nearly 4,000 years and should continue to do so indefinitely."
Square tiles can be readily fitted together to cover the rectangular floor of a bathroom. If the floor happens to have dimensions that are whole-number multiples of a tile's width, you don't even have to trim any tiles to complete the pattern.
If each tile consists not of a single square but of a certain number of squares joined together along their edges to form a unit, the problem of determining whether you can use such tiles to cover a rectangular floor flawlessly gets more complicated.
Indeed, mathematicians have proved that the general question of whether it's possible to cover the plane (or even a smaller region, such as a rectangle) with identical copies of a given finite set of tiles (or even a single geometric figure) is, in principle, computationally undecidable.
In other words, there's no cookbook recipe or handbook procedure that you can routinely apply to indicate whether you can fit together copies of an arbitrary shape to form a rectangle.
Mathematicians, however, have solved a variety of special cases of the tiling problem in two dimensions, particularly those that involve shapes known as polyominoes—forms that cover connected squares on a checkerboard. The term polyomino was coined in 1953 by mathematician Solomon W. Golomb.
There is only one type of monomino (the unit square by itself) and one domino (two squares stuck together along one edge). There are two distinct trominoes (three squares), five different tetrominoes (four squares), and twelve pentominoes (five squares). As the number of connected squares increases, the number of different possible configurations grows rapidly.
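The counts quoted above can be reproduced by a small enumeration program. The sketch below counts free polyominoes (shapes regarded as the same under rotation and reflection) by growing them one square at a time; the function names are mine, not Golomb's.

```python
# Count free polyominoes by repeated one-square growth plus canonicalization.
def canonical(cells):
    """Canonical form of a polyomino over all 8 rotations/reflections."""
    forms = []
    pts = list(cells)
    for _ in range(4):
        pts = [(y, -x) for x, y in pts]             # rotate 90 degrees
        for q in (pts, [(-x, y) for x, y in pts]):  # and its mirror image
            minx = min(x for x, _ in q)
            miny = min(y for _, y in q)
            forms.append(tuple(sorted((x - minx, y - miny) for x, y in q)))
    return min(forms)

def free_polyominoes(n):
    shapes = {canonical([(0, 0)])}  # start from the monomino
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for x, y in shape:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    cell = (x + dx, y + dy)
                    if cell not in shape:
                        grown.add(canonical(list(shape) + [cell]))
        shapes = grown
    return len(shapes)

print([free_polyominoes(n) for n in range(1, 6)])  # [1, 1, 2, 5, 12]
```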
Polyominoes are the basis for thousands of mathematical puzzles. One involves proving that polyominoes of a certain shape can be laid down to form either an infinite strip or a complete rectangle.
With square or rectangular pieces, the problem is easy to solve. But a piece made up of six squares, connected so that five lie in one row with a sixth attached to a row's second square, for example, is much more difficult to deal with. So is the case of a polyomino consisting of seven squares, with five in one row, and two in the second and third places in an adjacent row.
One type of polyomino problem involves showing whether any number of tiles in the shape of a given polyomino can be fitted together to form a rectangle. Such rectangular arrangements have been found for many polyominoes, including trominoes and tetrominoes (top). Finding such an arrangement for the hexomino and heptomino shown (bottom) proved to be difficult.
For a long time, despite a great deal of effort, all anyone knew was that these two polyominoes could be used to tile a "semi-infinite" strip and that copies of each would fill a finite rectangle, except for a small square hole somewhere inside.
Examples of infinite half-strips (bottom) and rectangles with square holes (top) made from hexominoes and heptominoes.
In 1987, software engineer Karl Dahlke, combining perseverance with clever programming, managed to solve both problems (see Dahlke's "Tiling Rectangles with Polyominoes"). Dahlke, who is blind, originally heard the problem in the audio edition of a magazine. After trying out various possibilities, he decided to program his personal computer to search for an answer systematically. Dahlke's computer was equipped with a speech synthesizer that converted the computer's output into sound.
Only after adding several programming tricks designed to circumvent time-consuming situations, in which the computer was trapped in endlessly repeating patterns, could Dahlke find his solutions.
Dahlke's solutions to the problem of forming a rectangle with either hexominoes (bottom) or heptominoes (top).
It turned out that the size of the solutions was a bit beyond what one could easily do by hand, but well within the range of a personal computer.
Cristopher Moore focused on the problem of determining how many different tilings of rectangles of various widths are possible using a given type of polyomino. In one exploration, he calculated the number of ways a rectangle 4 units wide and n units long can be tiled with right trominoes.
Suppose you have a 4 x n rectangle, where n is a multiple of 3. It turns out that you need to consider only three types of interfaces when fitting right trominoes together into a strip of the required width and length.
For example, two trominoes fit together to form a 2 x 3 rectangle, and two such rectangles can be placed next to each other to produce a 4 x 3 rectangle.
There are four ways to do the stacking. These configurations all result from what Moore described as a "straight" interface for building rectangular strips.
Two other interfaces are also possible: deep jog and shallow jog.
Examples of interfaces with a deep jog (left) and a shallow jog (right).
Because the number of possible interfaces is limited to three, Moore could derive a formula that gives the number of different ways to tile rectangles of different sizes. There are four ways to produce a 4 x 3 rectangle (see above), 18 ways to produce a 4 x 6 rectangle, 88 ways to produce a 4 x 9 rectangle, and 468 ways to produce a 4 x 12 rectangle. By definition, there is only one way to build a 4 x 0 rectangle.
Technical details: The number of ways of tiling an m x n rectangle with any finite collection of shapes, where m is fixed, can be found by calculating the nth power of a matrix whose rows and columns correspond to the various interface shapes a partial tiling may have, Moore said.
For the three interfaces possible for right trominoes, Moore solved a system of linear equations to obtain the formula, or generating function, G(z) = (1 − 6z)/(1 − 10z + 22z² + 4z³). The terms of the Taylor expansion of G give the number of ways to tile rectangles of size 4 x 0, 4 x 3, 4 x 6, and so on: 1, 4, 18, 88, 468, 2672, 16072,….
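A generating function like Moore's is equivalent to a linear recurrence read off from its denominator: writing a(n) for the number of tilings of a 4 x 3n rectangle, the recurrence is a(n) = 10a(n−1) − 22a(n−2) − 4a(n−3) for n ≥ 3, seeded with the first three values above. A quick Python check (the function name is my own):

```python
def tromino_series(terms):
    """Coefficients of G(z) = (1 - 6z)/(1 - 10z + 22z^2 + 4z^3),
    computed from the recurrence its denominator encodes."""
    a = [1, 4, 18]                     # tilings of 4x0, 4x3, 4x6 rectangles
    while len(a) < terms:
        a.append(10 * a[-1] - 22 * a[-2] - 4 * a[-3])
    return a[:terms]

print(tromino_series(7))  # [1, 4, 18, 88, 468, 2672, 16072]
```

The recurrence reproduces every count in the list, a useful sanity check on the transcription of the generating function.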
Moore also worked out formulas giving the number of different ways to tile rectangles of width 5 with right trominoes and rectangles of width 4 with L tetrominoes and T tetrominoes.
You can prove that T tetrominoes cannot tile any rectangles of width 5, 6, or 7. Indeed, a rectangle can be tiled with T tetrominoes only if its length and width are both multiples of 4.
Nonetheless, that still leaves a lot of different patterns you can try out when tiling your bathroom floor with polyominoes.
"Need a zero-volume bottle? Searching for a one-sided surface? Want the ultimate in nonorientability?"
The intriguing subject of these cryptic entreaties is a bizarre mathematical object known as a Klein bottle, discovered in 1882 by German mathematician Felix Klein (1849-1925).
One way to depict a Klein bottle. Computer-generated image by John Sullivan.
An ordinary bottle has an inside and an outside. To walk from the inside to the outside, a fly would have to cross the lip that forms the bottle's mouth. A Klein bottle has no such edge. What appears to be its inside is continuous with its outside.
One way to describe a Klein bottle is in terms of instructions for making one from a rectangular sheet of paper. Glue together the two horizontal sides, and glue together the vertical sides after twisting one side 180 degrees. By itself, the first instruction produces a cylindrical tube. The second instruction by itself produces a Möbius strip, a remarkable mathematical surface that has only one side and one edge (see "Möbius and His Band").
Joining the top and bottom of this rectangle produces a cylinder. Matching the arrows of the remaining two sides produces a Klein bottle.
Combining the two instructions presents a delicate problem. To bring the cylinder's two ends together in their proper orientations, one end has to curl around and plunge through the surface, then meet the other end from the inside. In other words, the surface has to intersect itself.
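Another way to make the gluing rules concrete is to treat the Klein bottle as a quotient of the plane: every point of the plane is identified with a single representative in the unit square, where the horizontal (top and bottom) edges are glued directly and the vertical edges are glued with a flip. A minimal Python sketch, with the function name and choice of generators my own:

```python
import math

def klein_rep(x, y):
    """Reduce a point of the plane to its representative in the unit
    square under the Klein-bottle identifications (x, 0) ~ (x, 1)
    and (0, y) ~ (1, 1 - y)."""
    n = math.floor(x)
    x -= n               # undo n copies of the glide (x, y) -> (x + 1, -y)
    if n % 2:
        y = -y           # an odd number of glides reverses y
    y -= math.floor(y)   # undo the direct gluing (x, y) -> (x, y + 1)
    return (x, y)

# Stepping across the flipped edge sends (1.25, 0.3) to roughly (0.25, 0.7),
# reflecting the 180-degree twist in the second gluing instruction.
print(klein_rep(1.25, 0.3))
```

Points already inside the square are left alone; only crossings of the vertical edges pick up the reversal in y that makes the surface one-sided.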
If you were to make a model of a Klein bottle out of glass, there would have to be a circular hole where the stretched-out, bent end intersects the tube's side. In a true Klein bottle, there is no hole. There's only an intersection of the surfaces. That technical difficulty, however, hasn't stopped glassblowers from taking up the challenge of fashioning glass models of these strange objects.
Astronomer Cliff Stoll first heard of Klein bottles in the 1960s. After reading about them in an article by Martin Gardner in Scientific American, he went to his high school chemistry lab, set up a bunsen burner, and tried his hand at making one. "After burning my fingers and cracking a dozen tubes, I gave up," he recalled.
Three years later while in college, Stoll tried again, this time with gloves and better equipment. Same problem. "Without enough heat, you can't bend the glass. Heat it too much and the glass melts into a glob. And at the right temperature, it's practically impossible to stretch glass around a curve and down inside itself," he said. "So I gave up a second time."
After surviving graduate school and three postdocs, Stoll ended up in Berkeley. One day, a glassblower asked him to help program an oven temperature controller. Stoll spent a few hours on the project, then asked the glassblower some questions about what it would take to make a glass Klein bottle. A week later, thanks to the efforts of glassblowers Tom Adams and Paul Chittenden, who specialize in making intricate, scientific glassware, Stoll finally had his Klein bottle.
One of Stoll's Klein bottles.
"My first Klein bottle was made from a Pyrex 500-milliliter Erlenmeyer flask, with welded glass connections," Stoll reported. He showed it to a math professor, who insisted on buying it.
"So I made another," Stoll said. "Another mathematician asked for it. So I made a few more, and they walked away. I thought, 'Why not make a bunch and try to sell 'em?' So I worked with the glassblowers and made a few dozen. I sold them within two months."
So began Stoll's boutique business: Acme Klein Bottles. Stoll called it the world's only zero-volume enterprise.
"These elegant bottles make great gifts, fantastic classroom displays, and inferior mousetraps," Stoll once noted on his Klein bottle website. "With its circle of singularities, an Acme Klein Bottle can be said to exist inside of itself—especially handy during time reversals."
A Möbius band (or strip) is an intriguing surface with only one side and one edge. You can make one by joining the two ends of a long strip of paper after giving one end a 180-degree twist.
Constructing a Möbius band from a strip of paper.
An ant can crawl from any point on such a surface to any other point without ever crossing an edge.
This curious object is named for the astronomer and mathematician August Ferdinand Möbius (1790-1868), a professor at the University of Leipzig. He had studied theoretical astronomy with Carl Friedrich Gauss (1777-1855) in Göttingen for two semesters early in his life and became director of the Leipzig observatory in 1848. His publications spanned a broad range of topics in both astronomy and mathematics.
In 1858, Möbius was deeply immersed in the development of a geometrical theory of polyhedra. The Académie des Sciences in Paris had called attention to the problem by offering a prize for its solution.
In the eighteenth century, Leonhard Euler (1707-83) had observed that in solids enclosed by plane faces, the number of vertices (V) minus the number of edges (E) plus the number of faces (F) equals 2, or V − E + F = 2. A cube, for example, has 8 vertices, 12 edges, and 6 faces: 8 − 12 + 6 = 2.
However, the formula doesn't work for all polyhedra. For example, a polyhedron that looks like a squared-off bagel with a square hole in the middle (below) has 16 vertices, 32 edges, and 16 faces, so V − E + F = 0.
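Euler's count is easy to check mechanically. This small Python sketch builds the cube's vertices and edges combinatorially (the face count is simply stated) and evaluates V − E + F for both solids mentioned above:

```python
from itertools import combinations, product

# Vertices of the unit cube, and the edges joining corners that
# differ in exactly one coordinate.
verts = list(product((0, 1), repeat=3))
edges = [(a, b) for a, b in combinations(verts, 2)
         if sum(p != q for p, q in zip(a, b)) == 1]
faces = 6

print(len(verts) - len(edges) + faces)   # V - E + F = 8 - 12 + 6 = 2

# The squared-off bagel's counts from the text give zero instead:
print(16 - 32 + 16)                      # 0
```

The discrepancy between 2 and 0 is exactly what forced mathematicians to look more carefully at holes, as the next paragraph describes.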
Mathematicians were forced to consider the question of what constitutes a hole in a solid and what that has to do with Euler's formula for polyhedra. If there were several holes, how should they be counted? What if the holes were a network of tunnels threading the solid's interior?
Möbius identified an additional complication. In the course of his investigations, he had started to think of surfaces in terms of flat polygonal pieces glued together in various ways. If you start with a row of triangles, for instance, the strip can be twisted and joined at its ends to form a one-sided surface. He explained this remarkable one-sidedness in terms of how the triangular pieces fit together.
Möbius's notebooks reveal that he developed this concept in September 1858. His discovery of what we now call the Möbius strip was published in 1865, in a paper titled "On the determination of the volume of a polyhedron." In that paper, Möbius also proved that there are polyhedra to which no volume can be assigned. His essay on the theory of polyhedra, which outlined his investigations and results, was published a few years later, after his death.
The description of the Möbius band as it appears in Ueber die Bestimmung des Inhaltes eines Polyëders, 1865
In one of those strange quirks of mathematical fate, Möbius was not actually the first to discover or describe the object. Credit should go to the German mathematician Johann Benedict Listing (1808-1882). Working independently, Listing first encountered the surface in July 1858. He published his discovery in an 1861 paper devoted to generalizations of Euler's formula for polyhedra.
Nonetheless, it is the name Möbius rather than Listing that we now associate with one-sidedness. I haven't been able to track down exactly how that came about. It's also a bit surprising that no one before Möbius and Listing seems to have noticed the existence of one-sided surfaces.
Interestingly, the first "application" of the idea that I can find outside of mathematics is in the realm of magic.
Martin Gardner noted that the earliest known reference to the Möbius strip in the context of magic is in the 1882 enlarged English edition of Gaston Tissandier's Les Récréations Scientifiques ou l'enseignement par les jeux, first published in Paris in 1880. Magicians have long been among the earliest adopters of new technologies and exploiters of novel scientific and mathematical ideas, all in the name of befuddling, astonishing, and mystifying their audiences.
In this parlor trick, the magician gives a spectator three large bands, each formed by gluing together the ends of a long paper strip. The magician then invites the spectator to cut each strip in half lengthwise, cutting along the strip until he or she returns to the starting point. Cutting the first band produces two separate rings. The second forms a single band twice as long as the original. The third becomes two interlocking rings.
What happens in each case depends on how the ends of the paper strips were joined. In the first case, there is no twist. The second is a Möbius strip, and the third is formed by twisting one end through 360 degrees before joining the ends.
Nowadays, magicians often use the term "Afghan bands" to describe the trick. That name goes back to 1904. "Why this curious name was given to the trick remains a mystery," Gardner noted.
Discovered in a purely mathematical context, the Möbius strip is the best known of the various toys of topology. Since its discovery in the 19th century, it has also achieved a life of its own beyond mathematics—in magic, science, engineering, literature, music, and art, and in the form of the ubiquitous symbol for recycling (see "Recycling Arrows").
In engineering, for example, Lee De Forest obtained a 1923 U.S. patent for a Möbius filmstrip that records sound on both "sides." The same idea was later applied to tape recorders so that a twisted tape would run twice as long as it would otherwise. Several patents have also been granted for Möbius-strip conveyor belts designed to wear equally on both sides.
An object of fascination since its discovery, the Möbius strip shows up in all sorts of unexpected settings, from monumental sculptures, synthetic molecules, and postage stamps to knitting patterns and skiing acrobatics.
Henry Spearman is a prominent economics professor at Harvard University. He is also an amateur detective, applying reason, logic, and economic theory to solve puzzling murders at a Virgin Islands resort and among members of Harvard's promotion and tenure committee.
Spearman is a fictitious character, the hero of a series of murder mysteries written by Marshall Jevons. Short and balding, the economist-detective does, however, share several characteristics with the eminent, real-life economist Milton Friedman.
Clearly a good sport, Friedman himself contributed the following comment for the jacket of The Fatal Equilibrium, the second Spearman mystery, which was published in 1985: "It is hard to conceive of a more pleasant and painless way of imbibing sound economic principles than reading this fascinating, absorbing, and well-written mystery story."
The prominent role of economic principles is certainly evident in all the mysteries. It doesn't take long for Spearman to begin invoking economic theory in Murder at the Margin, which started the series off in 1978. On his way to the island of St. John in the Caribbean, he ponders the apparent paradox of finding less time to enjoy vacations and leisure activities as his income increased substantially.
"But the paradox was not puzzling to an economist who understood the doctrine of 'opportunity cost,'" Jevons writes. "For each evening spent enjoying his stamp collection, Spearman gave up the opportunity to work on a lecture, article, or book that would bring him a large monetary return. On balance he decided to choose work over leisure. As his book sales and fees rose, the cost to him of that leisure time went up accordingly. Consequently vacations were rare, his stamp collection generally went unattended, and many extracurricular books remained neglected."
Like a professor determined to instruct, the author can't resist the temptation to halt the narrative to present minitutorials on various subjects, from the basics of constructing the drums used by a Caribbean steel band to the pricing strategy of Filene's Basement in Boston and the intricacies of operating an ocean liner.
In one aside, we learn that Spearman ranks Alfred Marshall over John Maynard Keynes as the preeminent economic thinker of the twentieth century. Marshall is quoted as saying that economics is "the study of mankind in the ordinary business of life."
The real-life economist William Stanley Jevons, from whom the pen name takes its surname, co-developed marginal utility theory, which explains the value of goods and services in terms of the subjective valuations of consumers. That principle plays a crucial role in unmasking the perpetrator in The Fatal Equilibrium.
As a fan of detective stories in offbeat settings, I enjoyed reading the first two books in the Spearman series. Though entertaining, Murder at the Margin seemed to me somewhat contrived and the writing rather stilted. I found The Fatal Equilibrium much more engrossing and lively, almost as if it were written by a different team of authors. I haven't yet read the third book in the Spearman series, A Deadly Indifference.
In recent years, college students appear to have been the primary audience for the series, which sometimes shows up on supplementary reading lists for introductory economics courses. Online student reviews of the books have not been kind.
The Spearman books haven't convinced me that economic theory, which generally posits a world of total economic rationality, represents an all-purpose, widely applicable explanation of human behavior. Spearman solves mysteries by believing that if someone does something that is apparently irrational, there must be hidden rationality behind it, and he seeks to discover what it is.
"So there are two puzzles in Murder at the Margin," economist Herbert Stein wrote in the foreword to the 1993 Princeton University Press edition of the book. "One is who killed A and B. The other is how much resemblance the world of rational economics in which the story takes place bears to the real world. The second puzzle adds to, rather than subtracts from, the fascination of the first."
Ivars Peterson is a freelance writer and editor. He was Director of Publications at the Mathematical Association of America from 2007 to 2014. As an award-winning mathematics writer, he previously worked at Science News for more than 25 years and served as editor of Science News Online and Science News for Kids. His books include The Mathematical Tourist, Islands of Truth, Newton's Clock, and Fragments of Infinity: A Kaleidoscope of Math and Art.