It happened on a typically foggy morning soon after I had arrived at the Mathematical Sciences Research Institute (MSRI) in Berkeley, Calif. Glancing out my office window at the waves of mist swirling around the building, I caught a glimpse of a herd of goats. Confined within a temporary fence, the goats were busily chomping on thick clumps of yellow grass and the green leaves of scraggly bushes scattered across the steep hillside down below.
Part of an effort to reduce the risk of fire sweeping through the Berkeley hills, the goats served as natural grass eliminators, able to go where no lawn mower dare venture.
The view, on a misty day, from the Mathematical Sciences Research Institute in Berkeley, Calif., down to the Lawrence Hall of Science and beyond to San Francisco Bay.
Housed in a three-story structure clad in weathered dark-brown wood, MSRI occupies a spectacular location—longitude 122 degrees, 14 minutes, 23 seconds west; latitude 37 degrees, 52 minutes, 49 seconds north; elevation 1,260 feet above sea level. In such a setting, it isn't difficult to imagine MSRI as an important center of the mathematical world.
Ivars Peterson at the entrance to the Mathematical Sciences Research Institute, August 1999.
Indeed, the institute's programs, conferences, and workshops attract mathematicians from all over the world. Some stay just for a few days; others spend a semester or more at the center, often focusing on some hot topic in mathematical research. In the fall of 1999, after my stay at MSRI, attention shifted to the specialized fields of noncommutative algebra and Galois groups.
"For the semester that we're running a program, we like to think ... that we are usually the strongest center in the world in the field," noted David Eisenbud, then MSRI director, as quoted in a 1999 article in SIAM News.
The idea is to gather a diverse group of mathematicians representing, when possible, different approaches to and interests in a given topic. In some cases, physicists and other scientists join in the discussions and presentations. The resulting interactions turn such programs into exciting learning experiences for everyone involved, as they did in the spring 1999 session on random matrices, which had links to both number theory and quantum mechanics (see "The Mark of Zeta").
In many ways, MSRI programs represent an effort to overcome the fragmentation of mathematical research into private conversations and highly specialized endeavors accessible only to a handful of experts.
It's not unusual to hear a variety of accents and languages when MSRI members and visitors compare notes and trade tips. Each office has a blackboard (and plenty of chalk). Additional blackboards are strategically located in the atrium, along corridors, and even outdoors in the patio, ever ready to bear the scribblings that inevitably accompany an impromptu seminar.
Ivars Peterson in his MSRI office, August 1999.
Even in an age of instant communication via email, telephone, fax, and the Internet, nothing beats face-to-face encounters—at a blackboard or over a table—to work things out. Afternoon teatime, in particular, draws people out of their offices and away from their solitary pursuits.
Remarkably often during my three-month 1999 sojourn at MSRI, I witnessed a mathematician standing at the ubiquitous blackboard, coffee cup or cookie in one hand and chalk in the other, answering a question or patiently explaining some new mathematical wrinkle to interested bystanders. A book could be written about mathematical advances that came about because of chance encounters at afternoon tea (see "The Return of Zeta").
The skylights, floor-to-ceiling windows, white walls, gray carpeting, and potted bamboo plants create a subdued environment pleasantly conducive to mathematical thought and interchange. "It's a nice place to work," Eisenbud insisted, gently understating the pleasure he took in being at the institute.
The summer of 1999 saw the introduction of an annual two-week program for graduate students interested specifically in the application of mathematics. The subject of that first course was nonlinear dynamics of low-dimensional continua, and students had a chance to try out their understanding of both mathematical model and physical theory in simulation projects ranging from microfluidic mixing to turbulent convection and pattern formation in liquids (see "Row Your Boat"). The summer 2000 course focused on mathematical issues in molecular biology.
Even during a lengthy hike (led by intrepid outdoorsman David Eisenbud) into nearby Tilden Regional Park for a lunchtime picnic and barbecue, the students taking the summer course continued to puzzle over their projects, in between comparing experiences at different universities, exchanging gossip, telling travel tales, and pondering job prospects. Those conversations, too, represent an affirmation of the collaborative nature of contemporary mathematical research.
Mathematical outreach can extend to all sorts of audiences. In July 1999, a class of high school math students visited MSRI to learn a little about what mathematicians do and what makes them tick. From Hugo Rossi, just ending his term as MSRI deputy director, they obtained a glimpse of both the immense appeal and inevitable frustrations of mathematical research at the frontiers of thought. They even got a brief lesson in the curious arithmetic of the ancient Egyptians.
The students also saw an impressively dramatic demonstration of juggling with balls and clubs, performed by Joe Buhler, about to start his stint as MSRI deputy director. They got a feel for the combinatorics of juggling—how numbers can be used to represent different juggling patterns (see "Juggling by Design"). To their delight, the students discovered they could, in addition, tap into some of the mathematical talent on display to glean hints on how to handle homework problems involving slope (rise over run) and linear equations.
MSRI offers more for the mind than mathematics. Special art exhibitions add provocative color and form to the white walls and the space within the high-ceilinged atrium. The summer 1999 display featured the collage-style work of Berkeley artist Mari Marks Fleming—visually rhythmic speculations on "time, nature, and the space between."
A more permanent fixture is an artwork by mathematician and sculptor Helaman Ferguson. Called The Eightfold Way, the sculpture sits in the middle of the patio, framed by a backdrop of hills, pines, and eucalyptus trees.
The Eightfold Way by Helaman Ferguson.
Carved from a block of Vermont white marble, the roughly tetrahedral form rests on a black serpentine column. Covered with mysteriously indented curves and sinuous ridges, the sculpture invites comment and touch.
My final image is of a late-afternoon concert in the atrium—sunlight streaming through the skylights to illuminate the members of the Peregrine Trio, and the sublime music of Mozart, Beethoven, and Haydn sailing throughout the building. It seemed a fitting finale to a stimulating summer spent immersed in a world devoted to the pursuit of mathematics.
It was the middle of a foggy, drizzly night, but the featured attraction was a solar eclipse.
Hosted by the Exploratorium in San Francisco, the eclipse party on August 11, 1999, brought hundreds of people to the science center's first-ever all-night event. Young and old, ranging from the densely tattooed and pierced to the wildly costumed or soberly business-suited, the celebrants camped out for the night, sprawled across the floor among the Exploratorium's exhibits.
Many people laid out sleeping bags, blankets, pads, or air mattresses, staking out prime territory near the Webcast studio or in front of TV monitors. A group unpacked a sumptuous meal of steaming lasagna and other gourmet delights, along with the requisite bottle of vintage wine.
In one area, children created giant, iridescent soap bubbles, which jiggled and glittered brightly in the spotlights as they floated upward, before finally bursting. Roving astronomers answered questions and provided eclipse-related information.
Listening to early music from England, a French accordion band, a classical trio, an exuberant Hungarian quartet, a Romanian choir, traditional Turkish and Iranian music, and Indian (very early) morning ragas, the eclipse worshipers talked, ate, napped, and played. Dazzling performances by a sword swallower, a fire eater, and a bevy of belly dancers heightened the festive mood.
The musical program tracked the path of the moon's shadow as it cut a swath from Cornwall, England, to India. An Exploratorium team in Amasya, Turkey, provided commentary and stunning images of the solar eclipse, which reached totality in that location at roughly 4:30 a.m. Exploratorium time.
Broadcast live on the Exploratorium's website, the eclipse coverage proved a triumph for the technicians and camera operators, who captured amazing images of the sun's face blotted out by the moon, leaving just a beaded ring of brilliant light against a darkened, star-pocked sky.
The mix of ancient tradition and high-tech wizardry that marked the Exploratorium's eclipse party reminded me of the many mathematicians and scholars who had worked over millennia to develop geometric and physical models of the motions in the sky to be able to predict such awe-inspiring events as lunar and solar eclipses. Indeed, precisely pinpointing the timing of these occurrences has long served as a stringent test of physical theory and mathematical model.
"Although we think primarily of the planets orbiting the sun as the fundamental issue for the origin of modern science, it is really the moon that provided the principal ideas as well as the crucial tests of our understanding of the universe," Martin C. Gutzwiller (1925-2014) noted in a 1998 article in Reviews of Modern Physics. "The moon played the role of the indispensable guide without whom we might not have found our way through the maze of possibilities."
Indeed, the daily motion of the moon through the sky has a number of idiosyncrasies that a careful observer can discover even without the help of instruments. More than 3,000 years ago, Mesopotamian astronomers observed and recorded the moon's position on the horizon, in effect measuring important characteristic frequencies of the lunar orbit.
Time-exposure photo of a lunar eclipse, showing the nearby tracks of stars, as seen in Kingston, Ontario, 1976.
"Both the [moon's] varying speed and the spread of moonrises and moonsets on the horizon proceed at their own rhythm, which is most clearly displayed in the schedule of lunar and solar eclipses," Gutzwiller said.
About 1,000 years later, Greek astronomers and mathematicians provided an explanation of those numbers and eclipse cycles in terms of a geometric model involving circular motion.
The second stage "was initiated by early Greek philosophers, who thought of the universe as a large empty space with Earth floating at its center, the sun, the moon, and the planets moving in their various orbits around the center in front of the background of the fixed stars," Gutzwiller remarked. "This grand view may have been the single most significant achievement of the human mind."
"Without the moon, visible both during the night and during the day, it is hard to imagine how the sun could have been conceived as moving through the Zodiac just like the moon and the planets," he added.
Toward the end of the 17th century, Isaac Newton (1642-1727) provided the physical model. His majestic, immortal opus Philosophiae naturalis principia mathematica (Mathematical Principles of Natural Philosophy) represented the first significant endeavor to explain observations both on Earth and in the heavens on the basis of a few physical laws in the form of mathematical relations.
Newton's three laws of motion, as stated in Latin in his Philosophiae naturalis principia mathematica.
Newton showed how the whole clockwork mechanism in the sky operated on the basis of physical law. To demonstrate the power of his methods, he sought to derive from those laws key features of the moon's orbit, including the relationship between three different periods (or frequencies) characterizing the moon's motion.
Unsatisfied with his initial attempt to solve the problem in Principia, Newton returned to the subject of the moon's motion in 1694 for a year of intense labor. He would later comment that "his head never ached but with his studies of the moon."
Newton ultimately failed to achieve his goal, and he expressed his intense frustration in the following words: "The Irregularity of the Moon's Motion hath been all along the just Complaint of Astronomers; and indeed I have always look'd upon it as great Misfortune that a Planet so near to us as the Moon,... should have her orbits so unaccountably various, that it is in a manner vain to depend on any calculation..., though never so accurately made."
In studying the moon's motion, Newton had been forced to confront the inevitable dynamical complexities of what is now known as the three-body problem. The moon feels the gravitational pull of not only Earth but also the sun. Those solar tugs subtly distort the moon's orbit in ways that greatly complicate the moon's movement, making precise prediction difficult.
Nearly a century before Newton, Johannes Kepler (1571-1630) had suspected something similar. When asked why a spring lunar eclipse had occurred one and a half hours later than predicted, Kepler had replied, after some thought, that the sun apparently had a retarding influence on the moon, especially in winter, when the sun is closest to Earth.
In the 18th century, after considerable effort, mathematicians managed to calculate key aspects of the moon's motion, succeeding where Newton had failed. Pierre-Simon de Laplace (1749-1827) eventually provided a successful, though cumbersome, apparatus for such calculations, whose precision finally brought theory into good agreement with observation.
Near the end of the 19th century, George William Hill (1838-1914), an astronomer at the U.S. Naval Almanac Office, discovered a computational trick that considerably simplified the mathematical machinery of lunar theory. Whereas previous mathematicians and astronomers had begun with ellipses and then modified these simple orbits step by step to accommodate the effect of a third body, Hill started with a particular orbit defined by a special, simple solution of the three-body problem.
Hill's technique exploited the fact that although mathematicians could not come up with a general formula describing the motion of all three bodies for all time, they could come up with precise solutions for certain special cases. Hill's starting point was a particular periodic orbit that already included the sun's perturbing influence. He then mathematically superimposed additional wiggles and shifts representing the movements of the lunar perigee and nodes to bring this main, smooth loop closer to the moon's true orbit.
In modified form, Hill's method formed the basis for all subsequent lunar calculations and, in effect, helped land the first men on the moon in 1969.
From solar eclipse to space travel, the moon's persistently puzzling peregrinations have left quite a legacy.
The career of Olga Taussky Todd (1906-1995) served as a worthy model for the participants. Taussky was born in Olmütz, which was then part of the Austro-Hungarian Empire and is now in the Czech Republic (as Olomouc). As a child, she loved writing, especially essays, poems, and music. In high school, her interests turned to science, particularly astronomy, then finally to mathematics.
Taussky studied at the University of Vienna, focusing on number theory in her doctoral dissertation. By 1937, she was working at the University of London, where she met and married numerical analyst John (Jack) Todd.
Though Taussky's main interest was initially number theory, she was to become what she later termed "a torchbearer" for another branch of mathematics known as matrix theory.
A matrix is a rectangular array of symbols, usually numbers, neatly arranged in columns and rows. Matrices play important roles in algebra, differential equations, probability and statistics, and many other fields. Engineers investigating the vibrations of large structures and theoretical physicists probing the intricacies of quantum systems inevitably tangle with matrices.
Taussky made important contributions to matrix theory. "She had an aesthetic sense and taste for topics that served to elevate the subject from a descriptive tool of applied mathematics or a by-product of other parts of mathematics to full status as a branch of mathematics laden with some of the deepest problems and emblematic of the interconnectedness of all of mathematics," matrix analyst Charles Johnson once commented.
"Still, matrix theory reached me only slowly," Taussky noted in a 1988 article in the American Mathematical Monthly. "Since my main subject was number theory, I did not look for matrix theory. It somehow looked for me."
The allure arose, in part, during World War II, when Taussky took a position at the National Physical Laboratory in Teddington, near London. From 1943 to 1946, she worked with a group investigating an aerodynamic phenomenon called flutter.
In flight, interactions between aerodynamic forces and a flexing airframe induce vibrations. When an airplane flies at a speed greater than a certain threshold, those self-excited vibrations become unstable, leading to flutter. Hence, in designing an airplane, it's important to know what the flutter speed is before the aircraft is built and flown.
To estimate that speed, engineers had to find appropriate approximate solutions of certain differential equations (exact solutions were out of reach). In those days, the computations were done by large numbers of young women, drafted into war work, operating hand-cranked calculating machines.
Solving the differential equations to obtain relevant information about an aircraft's vibrations came down to determining the so-called eigenvalues of a square matrix (in which the number of rows equals the number of columns). Although several recipes for computing the eigenvalues of a matrix were available, it was still often a tricky, complicated, time-consuming task.
Taussky found a way to reduce the amount of calculation, significantly easing the computational workload. Her idea was to exploit and refine a method for getting useful information about the eigenvalues without having to go to a great deal of the extra trouble required to compute them exactly. She turned to an elegant theorem named for the Russian mathematician S. Gershgorin (1901-1933).
A complex number has two parts and can be written as a + bi, where a is the "real" part and bi is the so-called "imaginary" part, with i representing the square root of −1. Such numbers can be plotted as points on a graph. Each complex number has a "real" x coordinate and an "imaginary" y coordinate. The complex number 3 + 4i would be plotted as the point (3,4), for example, on what mathematicians term the complex plane.
Here's an example of such a matrix with three rows and three columns. It would have three eigenvalues.
According to the Gershgorin circle theorem, all the eigenvalues of that matrix lie in the union of certain disks, whose centers are the diagonal entries and whose radii are the sums of the absolute values of the off-diagonal entries in the corresponding rows.
For instance, the circle corresponding to the first row would be centered at the point (1,1) and have a radius of 4. The second circle would be centered at the point (3,3) and have a radius of 1. The third circle would have its center at (−2,0) and a radius of 2. Hence, the three eigenvalues would be complex numbers that lie somewhere in the complex plane within the area defined by those circles.
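The theorem is easy to check numerically. Here is a sketch using a hypothetical 3-by-3 complex matrix, chosen (as my own illustration, not necessarily the matrix in the example) so that its Gershgorin disks match the three just described; NumPy then confirms that every eigenvalue lands inside the union of the disks.

```python
import numpy as np

# Hypothetical matrix chosen so its Gershgorin disks match those described:
# centers 1+1i, 3+3i, and -2, with radii 4, 1, and 2.
A = np.array([
    [1 + 1j, 2.0,    2.0],   # off-diagonal absolute values sum to 4
    [0.5,    3 + 3j, 0.5],   # off-diagonal absolute values sum to 1
    [1.0,    1.0,   -2.0],   # off-diagonal absolute values sum to 2
])

def gershgorin_disks(M):
    """Return (center, radius) per row: center is the diagonal entry,
    radius is the sum of absolute values of the off-diagonal entries."""
    n = M.shape[0]
    return [(M[i, i], sum(abs(M[i, j]) for j in range(n) if j != i))
            for i in range(n)]

disks = gershgorin_disks(A)
eigenvalues = np.linalg.eigvals(A)

# Gershgorin's theorem: every eigenvalue lies in at least one disk.
for lam in eigenvalues:
    assert any(abs(lam - center) <= radius + 1e-9 for center, radius in disks)
```

Shrinking the disks, as Taussky did, tightens exactly the bound that the final loop checks.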
In the flutter equations, those disks had a particular pattern. That was a lucky break for Taussky and led her to develop ways to make the circles smaller so that they would not overlap as much and would provide much sharper estimates of the eigenvalues.
Taussky published her results in 1949 in an article in the American Mathematical Monthly. By that time, she and her husband were working for the National Bureau of Standards in Washington, D.C.
Taussky helped popularize the Gershgorin circle theorem, strengthening the method and starting off the mathematical study of its fine points. Matrix theory itself became more than just part of a scientist's toolkit and earned a place as an important field of mathematical research.
Once called "The Great Tantalizer," the puzzle looks innocuous and seems quite simple. It consists of a set of four cubes with one of four colors on each of their six faces. Your goal is to arrange the four cubes in a row so that all four colors appear on each of the row's four long sides. The order of the cubes doesn't matter.
That simplicity is deceptive. There are 41,472 different ways of arranging the four cubes in a row. A trial-and-error approach to solving the puzzle would be hopelessly impractical.
Indeed, the puzzle's current incarnation bears the trade name "Instant Insanity." Marketed under various aliases, this tantalizer has been around for more than a century.
Here are the layout plans for the four cubes, colored red (R), green (G), blue (B), and yellow (Y).
It turns out that representing the colored faces of the cubes in terms of a mathematical construct called a graph allows you to solve the puzzle quite efficiently.
In general, a graph consists of an array of points, or nodes, joined by line segments, which are often called edges. Such an array can be very useful for visualizing relationships among various objects and attributes of those objects.
You can start by representing each cube by a graph of the colors that appear on opposite pairs of faces. Four nodes stand for the four colors of the puzzle, and edges link nodes corresponding to two colors on opposite faces. If a pair of opposite sides has the same color, you draw a loop connecting the node to itself.
Because a cube has three pairs of opposite faces, the graph representation of each cube has three edges linking four nodes, one for each color. Each edge has a numerical tag corresponding to the number of the cube on which that pair of colors resides.
In the first cube, for example, one edge would link G and Y, another edge would link G and B, and the third edge would be a loop beginning and ending at R. Each edge would be labeled 1.
The four graphs can then be combined into one representation, which shows the color relationship of the 12 pairs of opposite sides of the puzzle's four cubes. Because the puzzle's solution requires that the cubes be arranged in a row, eight of the 12 numbered edges give you the colors of each of the row's four sides.
To solve the puzzle, you need to find in this combined graph two separate subgraphs, each of which uses all four nodes and contains exactly one edge from each cube (that is, one edge bearing each of the labels 1 to 4). Moreover, each node must have exactly two edge ends (counting both ends of a loop) emanating from it. One subgraph would represent the four front-back pairings, and the other would represent the four top-bottom pairings.
It turns out there is only one way of selecting two such systems without using any edge twice. (A given edge cannot represent both front-back and top-bottom at the same time.)
Look at the combined graph. Suppose you pick as your starting point the loop tagged 1, which is at R. You then need to pick edges tagged 2, 3, and 4, linking nodes Y, G, and B. That can't be done without violating the requirements.
If, instead, you start with the edge tagged 1 and joining B and G, you end up having to select edge 4 between R and B, edge 2 between R and Y, and edge 3 between G and Y. As required, all four colors and all four cubes are represented in the subgraph. It's easy then to work out the second subgraph.
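This search is small enough to mechanize. The sketch below encodes each cube as its three opposite-face color pairs and looks for two pair-disjoint selections, one pair per cube, in which every color has degree exactly 2 (a loop counting twice). Only cube 1's pairs are fully specified in the text, so cubes 2 through 4 here are invented for illustration and chosen so that a solution exists.

```python
from itertools import product
from collections import Counter

COLORS = "RGBY"

def two_regular(pairs):
    """True if every color has degree exactly 2 (a loop contributes 2)."""
    degree = Counter()
    for a, b in pairs:
        degree[a] += 1
        degree[b] += 1
    return all(degree[c] == 2 for c in COLORS)

def solve(cubes):
    """Find two disjoint selections (one opposite-face pair per cube):
    one for the front-back faces, one for the top-bottom faces."""
    n = len(cubes)
    for fb in product(range(3), repeat=n):        # front-back pair per cube
        if not two_regular([cubes[i][fb[i]] for i in range(n)]):
            continue
        for tb in product(range(3), repeat=n):    # top-bottom pair per cube
            if any(tb[i] == fb[i] for i in range(n)):
                continue                          # a pair can play only one role
            if two_regular([cubes[i][tb[i]] for i in range(n)]):
                return fb, tb
    return None

# Hypothetical cube set: cube 1 matches the text; cubes 2-4 are invented.
cubes = [
    [("G", "Y"), ("G", "B"), ("R", "R")],
    [("R", "Y"), ("R", "B"), ("G", "G")],
    [("G", "Y"), ("R", "B"), ("Y", "B")],
    [("R", "B"), ("G", "Y"), ("R", "G")],
]
solution = solve(cubes)
```

The two index tuples returned by `solve` name, for each cube, which opposite-face pair ends up front-back and which ends up top-bottom; any orientation of each cube realizing those pairings solves the puzzle.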
All this may look somewhat cumbersome, but the graph approach is surely more efficient than trying thousands of combinations with no guarantee of success. Those familiar with graph theory can typically work out the solution in minutes. Indeed, the puzzle serves as a neat lesson in logical thinking.
"Puzzles are problems done for fun. They are a form of entertainment, but also a form of exercise—a way to get your mind into shape," puzzle collector Stan Isaacs wrote in the preface of Exploring Math Through Puzzles. "They can excite students, stimulate thought, point to research, and involve students in their own educational process."
The earliest known depiction of juggling is on the wall of an Egyptian tomb nearly 4,000 years old. The painting shows a woman keeping three balls aloft. It's only in the last few decades, however, that juggling has become the subject of serious mathematical analysis.
The surprise may be that it took so long for mathematicians to get into the act. A large number of the roughly 3,000 members of the International Jugglers' Association are involved with math or computers. Attracted by juggling's demand for a combination of dexterity, precision, invention, and experiment, they find it an immensely appealing pastime.
"Like music-making, it is a common ground between abstract form and physical dexterity; like mathematics, it is a form of pure play," mathematicians Joe Buhler and Ron Graham remarked in a 1984 article in The Sciences.
Since then, mathematicians have developed a notation for and mathematical model of juggling that has allowed performers to gain a better understanding of their tricks and to develop new juggling routines to amaze their audiences.
A juggling pattern is usually periodic. The juggler repeats a sequence of movements at regular intervals, with the balls (or other objects) moving along precise trajectories to create a pleasing pattern.
One common pattern is known as the shower. A ball is thrown upward in a high arc by the right hand, caught by the left, then quickly passed in a low arc to the right. In effect, three or more balls chase each other along a (more or less) circular path.
The cascade pattern requires an odd number of balls. The left and right hands alternate throwing balls to each other, and the balls follow a looping path that resembles a figure 8 on its side (or the mathematical symbol for infinity). The world record for a sustained cascade is nine balls for 60 consecutive catches. On a good day, Buhler or Graham can handle seven.
In the fountain (or waterfall) pattern, a juggler uses an even number of balls and the balls never change hands. Early in the 20th century, the famed juggler Enrico Rastelli (1896-1931) managed 20 consecutive catches of a 10-ball fountain.
The initial step in the mathematical study of juggling was the development by several mathematicians around 1985 of a special sort of notation to convert juggling patterns into numbers.
The so-called siteswap notation represents the order in which balls are thrown and caught in each cycle of a juggle, assuming that the throws happen on beats that are equally spaced in time. In essence, only one ball is thrown at any instant and every ball is thrown repeatedly.
Let's look at a three-ball cascade. Ball 1 is thrown at time 0, again at time 3, then at time 6, and so on. Ball 2 follows the same pattern, thrown at times 1, 4, 7, and so on. Ball 3 is thrown at times 2, 5, 8, and so on.
The pattern can be characterized by using the intervals between the throws. In a three-ball cascade, the time between throws of any ball is three beats, so its siteswap is 3333..., or 3 for short.
The siteswap notation offers a snapshot of a juggling pattern. A "1" throw, for instance, goes from hand to hand in one beat; a "4" returns a ball to the same hand in four beats. A "0" represents a rest when no catch or toss is made.
Given a siteswap sequence, it's possible to figure out what a juggler has to do to perform that pattern.
Suppose the sequence is 531. Write down a row of integers, starting at 0, to represent consecutive beats. Beneath those integers, write the corresponding siteswap digits, repeating the sequence 5 3 1 as needed.
Even integers (and 0) in the top row correspond to throws from the right hand, and odd integers to throws from the left. The throw height must increase as the interval between tosses of a ball gets longer.
In the 531 pattern, the first ball at time 0 is tossed high (5 beats, as seen in the second row below 0) by the right hand and caught by the left hand at time 5. The 1 in the second row beneath 5 means that the ball is then tossed low to the right hand, which catches it at time 6. The right hand then tosses it high again (5 beats) and the ball is caught by the left hand at time 11.
The second ball starts off at time 1, is tossed by the left hand in a moderately high arc (3 beats) and is caught by the right hand at time 4, by the left hand at time 7, and so on.
The third ball is thrown at time 2, travels in a low arc for 1 beat (going to 3), then in a high arc for 5 beats (going to 8).
In effect, the first and third balls move in a shower pattern, but in opposite directions. The second ball weaves between the two showers in a relatively slow cascade rhythm.
Not all possible sequences lead to legitimate juggling patterns. The sequence 21, for example, has two balls landing simultaneously in the same hand. Other illegal sequences require a juggler to toss two balls at once.
Several computer programs are now available to identify legitimate juggling patterns and animate them. An avid juggler can see what a particular pattern looks like before trying it out and even check out juggling feats that are humanly impossible.
The siteswap sequences 234, 504, 345, 5551, 40141, 561, 633, 55514, 7562, 7531, 566151, 663, 771, 744, 753, 426, 459, 9559, and 831 all represent legitimate patterns in the siteswap characterization of juggling. Indeed, the mathematical model indicates that infinitely many potential juggling patterns exist—though it might take a multi-armed, superdextrous robot to perform most of them.
It turns out that the strings of numbers that correspond to legitimate juggling patterns have unexpected mathematical properties. Buhler and Graham, along with David Eisenbud and Colin Wright, discovered those results in the course of developing a mathematical theory of juggling, based on the numerical sequences resulting from the siteswap notation developed by others.
The number of balls needed for a pattern, for example, equals the average of the digits in the siteswap sequence. Thus, the pattern 45141 would require (4 + 5 + 1 + 4 + 1)/5, or 3, balls.
You can also determine whether a sequence is legitimate from the digits of its siteswap designation. For example, suppose the sequence is 566151, which consists of six digits. Add the six digits of the sequence, in order, to the values 0, 1, 2, 3, 4, and 5 to get 5 + 0, 6 + 1, 6 + 2, 1 + 3, 5 + 4, and 1 + 5, or 578496. If any resulting value is 6 or greater, subtract 6. The sequence 578496 becomes 512430. If that sequence is a permutation of 012345 (all six digits in any order), it is, in principle, possible to juggle the given pattern. A similar analysis would show that the sequence 561651 is not a permissible juggling pattern.
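In code, the validity test and the ball-count rule come to a few lines each (a sketch; the function names are my own). A sequence of length n is juggleable exactly when the landing beats, taken modulo n, form a permutation of 0 through n − 1.

```python
def is_valid_siteswap(digits):
    """A length-n sequence is juggleable exactly when the landing beats
    (i + digit) mod n form a permutation of 0, 1, ..., n-1."""
    n = len(digits)
    landings = sorted((i + d) % n for i, d in enumerate(digits))
    return landings == list(range(n))

def ball_count(digits):
    """The number of balls equals the average of the digits."""
    return sum(digits) / len(digits)

assert is_valid_siteswap([5, 6, 6, 1, 5, 1])      # 566151 is juggleable
assert not is_valid_siteswap([5, 6, 1, 6, 5, 1])  # 561651 is not
assert not is_valid_siteswap([2, 1])              # both balls land together
assert ball_count([4, 5, 1, 4, 1]) == 3           # the 45141 pattern
```

The modular sort does exactly the add-then-subtract-6 arithmetic described above, just for an arbitrary period instead of six.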
Buhler also worked out a remarkably simple formula for counting the number of different juggling patterns. The number of legitimate siteswaps of n digits using fewer than b balls is exactly b raised to the nth power.
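That count is easy to verify by brute force for small cases. The sketch below enumerates candidate sequences of n digits (a valid sequence with fewer than b balls can have no digit larger than n(b − 1), since its digits are nonnegative and average at most b − 1) and counts the juggleable ones; the total comes out to exactly b to the power n.

```python
from itertools import product

def is_valid_siteswap(digits):
    """Juggleable when landing beats (i + digit) mod n form a permutation."""
    n = len(digits)
    return sorted((i + d) % n for i, d in enumerate(digits)) == list(range(n))

def count_patterns(n, b):
    """Count juggleable sequences of n digits using fewer than b balls."""
    total = 0
    for digits in product(range(n * (b - 1) + 1), repeat=n):
        if sum(digits) < n * b and is_valid_siteswap(digits):
            total += 1
    return total

assert count_patterns(3, 2) == 2 ** 3   # 8 sequences with fewer than 2 balls
assert count_patterns(3, 3) == 3 ** 3   # 27 sequences with fewer than 3 balls
```

The exhaustive check is feasible only for tiny n and b, of course; the point of the formula is that no such enumeration is needed.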
Siteswap juggling theory actually captures only a subset of all possible juggling feats. It concerns only the order in which balls are tossed and caught and ignores such features as the location and style of throws and catches (behind your back, under your leg, and so forth), which contribute greatly to juggling showmanship.
Nonetheless, mathematical theory has suggested novel juggling patterns, and some have started to gain popularity. Next time you see a juggling performance, watch out for 441!
Designed for speed, a racing shell has a distinctive shape. The boat's slim, needle-like profile allows it to skim the water at a rapid rate, propelled by oar.
In varsity and Olympic competition, races may involve boats with one, two, four, or eight rowers. Interestingly, although a shell with eight rowers is much larger than one with a single rower, all the boats have roughly the same proportions (at least for the wetted surface, the area over which the hull makes contact with the water).
Data from 2000-meter world and Olympic championship races show that the larger boats go faster than the smaller ones. In the late 1960s, that fact caught the eye of Thomas A. McMahon (1943-1999), a professor of applied mechanics and an expert in animal locomotion at Harvard University. He wondered why that might be true. How does the speed depend on the number of rowers?
McMahon recognized that because racing shells seating one, two, four, or eight rowers happen to be built with roughly the same proportions, it may be possible to use a simple mathematical model to predict the speed as a function of boat size, even though the physics, in all its gory detail, is quite complex.
The total mass of the rowers plus that of the boat equals the mass of the water displaced by the boat. Because the boats are geometrically similar, this displacement is proportional to the volume, or the cube of the boat's length. The length is, in turn, proportional to the number of rowers.
Two main effects produce the drag experienced by a boat moving through water: wave generation and friction between hull and water. The long, thin shapes of the boats minimize the part of the drag due to wave-making, so that component can be neglected. Skin-friction drag is proportional to the product of the wetted area and the square of the speed. The wetted area itself is proportional to the square of the boat's length. So the drag force is proportional to speed squared times length squared.
McMahon then made the additional assumption that the power available to drive a boat is proportional to the number of rowers. That power is used to overcome the drag force and provide speed. Hence, the power is proportional to the drag force times the speed, or the speed cubed times the length squared.
Putting these proportionalities together algebraically allows you to deduce a relationship between speed and the number of rowers. This model predicts that the speed should be proportional to the number of rowers raised to the one-ninth power. McMahon found that plotted data from various races fit that theoretical relationship quite nicely. It would be interesting to see if it still holds for more recent sculling events.
McMahon's study of rowing is a striking example of how a relatively simple mathematical model can capture essential features of a complex physical phenomenon and yield insights into what is going on. The trick is to come up with an appropriate model. That task requires not only a firm grasp of the relevant areas of mathematics but also an understanding of physical law and behavior.
The summer program in "dynamics of low-dimensional continua," held in 1999 at the Mathematical Sciences Research Institute (MSRI) in Berkeley, Calif., introduced students to tools and concepts for developing and analyzing mathematical models applicable to fluid flow, crystal growth, and many other phenomena encountered in materials science, chemistry, biology, engineering, and physics.
Conducted by L. Mahadevan and Anette Hosoi, the two-week course gave the students some insights into the art of building mathematical models relevant to the real world. The students ended up doing computational projects on such topics as fluid mixing, vortex formation, turbulent convection, and dendritic crystal growth.
McMahon's rowing study was just one of a number of examples Mahadevan cited to illustrate how scaling arguments and dimensional analysis can provide a good starting point. You get good guesses for extremely complicated problems, he said. [L. Mahadevan lecture video: Scaling and Dimensional Analysis]
Once the basic framework is in place, you can then focus on details and study deviations from an initially derived theoretical relationship. In the rowing example, boats with more rowers actually perform a little better than the one-ninth power relationship suggests. It's possible, for instance, that the longer, more heavily laden boats are better at overcoming wave-making drag, which was neglected in the initial model.
In a 1915 paper published in Nature, Lord Rayleigh (1842-1919) extolled the value of the principle of similitude (now called dimensional analysis) in providing physical insight and deducing physical laws. "It happens not infrequently that results in the form of 'laws' are put forward as novelties on the basis of elaborate experiments, which might have been produced a priori after a few minutes' consideration," he wrote.
In 1999, at the age of 82, Irving Kaplansky (1917-2006) remained actively engaged in mathematical research.
Then Director Emeritus of the Mathematical Sciences Research Institute (MSRI) in Berkeley, Calif., Kaplansky spent much of his time in the library, poking into various nooks and crannies of mathematical history. Tidying up loose ends and filling in unaccountable gaps in the mathematical literature, he patiently worked through mathematical arguments, proved theorems, and prepared papers for publication. His remarkably wide-ranging efforts belied the oft-repeated notion that mathematicians are most productive when they are young.
A distinguished mathematician who made major contributions to algebra and other fields, Kaplansky was born in Toronto, Ontario, several years after his parents had emigrated from Poland. In the beginning, his parents thought he was going to become a concert pianist. By the time he was five years old, he was taking piano lessons. That lasted for about 11 years, until he finally realized that he was never going to be a pianist of distinction.
Nonetheless, Kaplansky loved playing the piano, and music remained one of his passions. "I sometimes say that God intended me to be the perfect accompanist—the perfect rehearsal pianist might be a better way of saying it," he said. "I play loud, I play in time, but I don't play very well."
While in high school (Harbord Collegiate in Toronto), Kaplansky started to play in dance bands. During his graduate studies at Harvard University, he was a member of a small combo that performed in local night clubs. For a while, he hosted a regular radio program, where he played imitations of popular artists of the day and commented on their music. A little later, when Kaplansky became a math instructor at Harvard, one of his students was Tom Lehrer, later to become famous for his witty ditties about science and math.
In 1945, Kaplansky moved to the University of Chicago, where he remained until 1984, when he retired, then became MSRI director.
Songs had always interested him, particularly those of the period from 1920 to 1950. These songs tended to have a particular structure: the form AABA, where the A theme is repeated, followed by a contrasting B theme, then a return to the original A theme.
Early on, Kaplansky noticed that certain songs have a more subtle, complex structure. This alternative form can be described as AA'BAA'(B/2)A", where A is a four-bar phrase, A' and A" are variants, and B is a contrasting eight-bar phrase. "I don't think anyone had noticed that before," he remarked. Kaplansky's discovery is noted in a book about the American musical by University of Chicago film scholar Gerald Mast (1940-1988).
Kaplansky argues that the second structure is really a superior form for songs. To demonstrate his point, he once used it to turn an unpromising source of thematic material—the first 14 decimal digits of pi—into a passable tune. In essence, each note of the song's chorus corresponds to a particular decimal digit. When Chicago colleague Enid Rieser heard the melody at Kaplansky's debut lecture on the subject in 1971, she was inspired to write lyrics for the chorus.
A SONG ABOUT PI
Through all the bygone ages,
Philosophers and sages
Have meditated on the circle's mysteries.
From Euclid to Pythagoras,
From Gauss to Anaxag'ras,
Their thoughts have filled the libr'ies bulging histories.
And yet there was elation
Throughout the whole Greek nation
When Archimedes did his mighty computation!
He said:
3 1 4 1 Oh (5) my (9), here's (2) a (6) song (5) to (3) sing (5) about (8,9) pi (7). Not a sigma or mu but a well-known Greek letter too. You can have your alphas and your great phi-bates, and omegas for a friend, But that's just what a circle doesn't have—a beginning or an end. 3 1 4 1 5 9 is a ratio we don't define; Two pi times radii gives circumf'rence you can rely; If you square the radius times the pi, you will get the circle's space. Here's a song about pi, fit for a mathematician's embrace.
The chorus is in the key of C major, and the musical note C corresponds to 1, D to 2, and so on, in the decimal digits of pi.
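Under that mapping, the tune falls out mechanically from the digits. A small sketch (the notation for digits 8 and 9, carried into the next octave, is my assumption, consistent with the "(8,9)" annotation in the chorus; these 14 digits contain no zeros):

```python
# Map decimal digits onto the C-major scale: 1 -> C, 2 -> D, ..., 7 -> B,
# with 8 and 9 assumed to continue into the next octave (C', D').
SCALE = ["C", "D", "E", "F", "G", "A", "B", "C'", "D'"]
digits = "31415926535897"   # the first 14 digits of pi
melody = [SCALE[int(d) - 1] for d in digits]
print(" ".join(melody))     # the chorus opens E C F C G ...
```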
The music and lyrics are unpublished. However, singer-songwriter Lucy Kaplansky (Irving Kaplansky's daughter) sometimes includes a rendition of "A Song About Pi" in her programs. Although she has her own distinctive style, she doesn't mind occasionally showcasing her father's old-fashioned tunesmanship. Video.
In 1993, Irving Kaplansky wrote new lyrics for the venerable song "That's Entertainment" (video) to celebrate his enthusiasm for mathematics. He dedicated the verses to Tom Lehrer.
The fun when two parallels meet
Or a group with an action discrete
Or the thrill when some decimals repeat,
That's mathematics.
A nova, incredibly bright,
Or the speed of a photon of light,
Andrew Wiles, proving Fermat was right,
That's mathematics.
The odds of a bet when you're rolling two dice,
The marvelous fact that four colors suffice,
Slick software setting a price,
And the square on the hypotenuse
Will bring us a lot o' news.
In genes a double helix we see
And we cheer when an algebra's free
And in fact life's a big PDE.
We'll be on the go
When we learn to grow with mathematics.
With Lagrange everyone of us swears
That all things are the sums of four squares,
Like as not, three will do but who cares.
That's mathematics.
Sporadic groups are the ultimate bricks,
Finding them took some devilish tricks.
Now we know—there are just 26.
That's mathematics.
The function of Riemann is looking just fine,
It may have its zeros on one special line.
This thought is yours and it's mine.
We may soon learn about it
But somehow I doubt it.
Don't waste time asking whether or why
A good theorem is worth a real try,
Go ahead—prove transcendence of pi;
Of science the queen
We're all of us keen on mathematics.
The playing cards slip instantly, precisely, and soundlessly into place. No smudges or creases ever mar the crisp, bright faces heading the unnaturally tidy columns and rows. With no deck of cards to handle, shuffle, deal, sort, or position, there's only the point and click of a mouse pushed across a pad beside the computer's keyboard.
This is solitaire on the computer screen. Every day, countless people sneak in a game or two (or more) as a break from various chores or to cure a case of writer's block. For a game played with such regularity, it's natural to ask what your chances are of winning.
Called Klondike, or Canfield, this form of solitaire has resisted mathematical analysis. Partly because Klondike involves player choice, no one has yet been able to construct a mathematical model of the game from which it is possible to deduce theoretical odds of winning.
Even computer simulations fail to capture the game's nuances. In one attempt at a probabilistic analysis of solitaire, Harvard student Amy Rabb developed a program that could win about 8 percent of its games. When she played the same cards herself, she won 15 percent of the time.
In general, the computer versions typically available do no more than let you play solitaire, showing you the cards and letting you move them around.
When confronted with a difficult problem, it's often useful to tackle a simpler problem that has some of the ingredients of the more complex situation, statistician Persi Diaconis has remarked.
Diaconis has studied a simple (practically mindless) version of solitaire called Floyd's game, named for computer scientist Bob Floyd, who invented it in 1964. Starting with a shuffled deck, the player turns up cards one by one, placing them in piles according to the rule that only a lower card can be played on top of a card already on the table (aces count as 1). When the cards have the same face value, clubs beat hearts, which beat spades, which beat diamonds. The object is to end up with as few piles as possible.
Suppose the first card is a 6. The next card is a 5, so it goes on top of the 6. An 8 follows; it starts a new pile to the right of the first pile. A jack starts off a third pile to the right of the existing piles. A 9 goes on top of the jack, a 2 goes on top of the 5, a queen starts off a fourth pile, and so on, until the deck is exhausted.
The best strategy is to place each turned-up card on a pile as far to the left as possible. Once an ace appears atop a pile, no more cards can be added to that pile.
How many piles would you expect to get in a typical game? Here are the results from 10,000 games played by a computer.
The average number of piles is 11.6. You would get only one pile if the cards were perfectly arranged to begin with.
Playing this game of solitaire also serves as a quick way to sort a deck of cards. It doesn't take long to lay out the cards in piles, each of which is strictly ordered. It's then easy to pick them up in sequence, starting with the aces. Indeed, the game is sometimes called patience sorting, where patience is the British term for solitaire.
"Whether this is the fastest practical method for sorting real cards (or alphabetizing final exams) is an interesting topic for coffee-room conversation," Diaconis and David Aldous commented in a paper published in the Bulletin of the American Mathematical Society.
Mathematically, playing the game involves constructing increasing subsequences. For example, suppose you have just 9 cards, numbered from 1 to 9. One possible shuffling, or permutation, of those cards is 5 2 1 3 6 9 7 4 8. This sequence includes various increasing subsequences, such as 5 6 9 or 1 3 6 7 8. The longest increasing subsequence consists of 5 cards. It turns out that the number of piles you get in the solitaire game equals the length of the longest increasing subsequence in the card sequence of the original shuffled deck.
Suppose you start with a shuffled deck. It contains a longest increasing subsequence of some length. Eventually, you play the first card of this subsequence. Whatever that card is, any cards played on top of it must have lower values, so when the next card of the subsequence turns up, it must start or join a pile farther to the right. The same reasoning applies at each later card of the subsequence. Hence, the cards in the longest increasing subsequence must all go into different piles.
Mathematical analysis indicates that the most likely number of piles for a shuffled deck of n cards is roughly two times the square root of n. For n equal to 52, that gives 14.4 piles, which is reasonably close to the computer simulation results. Additional analysis shows that the formula works better when it incorporates an extra correction term proportional to n^(1/6), with a negative coefficient.
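These claims are easy to test numerically. Here is a minimal sketch of Floyd's game under the greedy leftmost-pile strategy (the function name is mine; distinct card values are assumed, which the suit-order tiebreaker guarantees for a real deck):

```python
import bisect
import random
from math import sqrt

def patience_piles(deck):
    """Play Floyd's game greedily: put each card on the leftmost pile
    whose top card is higher; otherwise start a new pile. The pile tops
    always stay in increasing order, so binary search finds the pile.
    Returns the number of piles, which equals the length of the longest
    increasing subsequence of the deck."""
    tops = []
    for card in deck:
        i = bisect.bisect_left(tops, card)  # leftmost pile with top >= card
        if i == len(tops):
            tops.append(card)               # no legal pile: start a new one
        else:
            tops[i] = card                  # card becomes that pile's new top
    return len(tops)

# The 9-card example from the text: the LIS of 5 2 1 3 6 9 7 4 8 has length 5.
print(patience_piles([5, 2, 1, 3, 6, 9, 7, 4, 8]))

# Simulate shuffled 52-card decks and compare with 2 * sqrt(52).
rng = random.Random(1)
deck = list(range(52))
total = 0
for _ in range(2000):
    rng.shuffle(deck)
    total += patience_piles(deck)
print(round(total / 2000, 1), round(2 * sqrt(52), 1))
```

In typical runs the simulated average lands between 11 and 12 piles, in line with the simulation figures quoted above and noticeably below 2 times the square root of 52, which is where the negative n^(1/6) correction earns its keep.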
Finding the longest increasing subsequence plays an important role in a variety of scientific sorting tasks, including determination of matches between DNA strands. In such cases, the fastest way to identify that subsequence is, in effect, to play the solitaire game, Diaconis noted.
Jinho Baik, Percy Deift, and Kurt Johansson explored the possible lengths of the longest increasing subsequences associated with random permutations, describing their results in a paper published in the Journal of the American Mathematical Society.
For example, suppose you start with five cards, numbered from 1 to 5. One possible permutation of those cards is the sequence 5 1 3 2 4. In this case, the longest increasing subsequences are 1 2 4 and 1 3 4. The length of that subsequence is 3. Other permutations of the five cards can give different values for the longest increasing subsequence.
Baik and his colleagues proved that the resulting distribution of lengths fits a relationship that can be derived from so-called random matrix theory to describe the quantum behavior of large atoms. Moreover, the formula for the average length of the longest increasing subsequence (or average number of piles in simplified solitaire) emerges naturally from their analysis.
Why there should be a connection between playing solitaire and random matrix theory, however, remains a mystery, Diaconis noted.
In the end, an effort to solve a "wimpy" card-game problem—a highly simplified version of standard solitaire—led into all sorts of deep mathematics, from complex analysis to random matrix theory.
"The mathematics that came out in analyzing solitaire is just beautiful," Diaconis remarked. "It allowed me to come in contact with mathematical tools and results ... that I never would have touched if I had not been interested in the original problem."
In the late 1990s, a cautious optimism took hold among mathematicians tangling with the Riemann hypothesis. New approaches showed promise, potentially bringing a proof within reach.
The Riemann hypothesis was first proposed in 1859 by the German mathematician Georg Friedrich Bernhard Riemann (1826-1866). It concerns the so-called zeta function, which encodes a great deal of information about the seemingly haphazard distribution of prime numbers among the integers.
Gaussian integers are complex numbers of the form a + bi, where a and b are integers. The Gaussian primes are Gaussian integers that cannot be factored in a nontrivial way into a product of other Gaussian integers. In this weaving, the dark blue squares represent Gaussian primes. Department of Mathematics, University of Nebraska, Lincoln, Nebraska.
Mathematicians know that the Riemann zeta function, zeta(s) = 1 + 1/2^s + 1/3^s + ... + 1/n^s + ..., is zero when the complex number s is a negative even integer. Riemann conjectured that all other solutions of the equation zeta(s) = 0 have the form 1/2 + bi. In other words, the so-called zeros of the Riemann zeta function lie along a line in the complex plane at positions specified by the number b. Knowing precisely where those zeros lie (the values of b) would give accurate estimates of the distribution of primes.
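As a quick numerical illustration of the series itself (a sketch only: the sum converges just where the real part of s exceeds 1, so the famous zeros lie beyond its reach and require the function's analytic continuation), one can check Euler's classic value zeta(2) = pi squared over 6:

```python
from math import pi

def zeta_partial(s, terms):
    """Partial sum of the series 1 + 1/2^s + 1/3^s + ...,
    a valid approximation of zeta(s) only for Re(s) > 1."""
    return sum(1 / n ** s for n in range(1, terms + 1))

approx = zeta_partial(2, 100_000)
print(approx, pi ** 2 / 6)  # the two agree to about four decimal places
```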
The Riemann hypothesis is "one of the finest examples of the extraction of order from chaos in the whole of mathematics," Philip J. Davis (1923-2018) and Reuben Hersh wrote in The Mathematical Experience.
In a sense, if the zeros of the zeta function are like musical notes, then prime numbers are chords, and theorems about these entities are symphonies, quantum chaologist Michael Berry has remarked. The Riemann hypothesis imposes a pleasing harmony on the zeta-zero notes, he added.
One of the first inklings of a connection between number theory and quantum mechanics came in 1972. Hugh L. Montgomery had discovered a formula that describes the average spacing between consecutive zeros (values of b) of the zeta function. During a visit to the Institute for Advanced Study in Princeton, New Jersey, he showed his result to quantum physicist Freeman Dyson, when they happened to meet at afternoon tea.
Dyson immediately recognized it as identical to the result obtained for so-called random matrix models, which are used to describe the energy levels of large atoms or heavy nuclei. By an amazing coincidence, Dyson was one of just a handful of physicists in the world who had done such calculations and could appreciate Montgomery's work.
The fundamental equation of quantum mechanics is known as the Schrödinger equation. It describes the behavior of microscopic entities such as atoms, electrons, and nuclei. Solving the equation for a given quantum system means determining its so-called eigenvalues, numbers that correspond to the system's energy levels. Such a set of eigenvalues is termed the system's spectrum.
Equivalently, physicists can represent a given quantum system by a matrix, a rectangular array of numbers. Computing the eigenvalues of that matrix gives the system's spectrum.
Calculating the eigenvalues of a simple system, such as a lone electron trapped in a rectangular box or an electron orbiting a single proton (as in a hydrogen atom), can be done by any beginning student of quantum mechanics. Determining the energy levels of a heavy nucleus or an atom with many electrons is immensely more difficult.
"One way to understand a complicated object is to study properties of a random thing which matches it in a few salient features," mathematician and statistician Persi Diaconis noted.
In the 1970s, theoretical physicists started to use sets of random matrices (with certain specified properties) as mathematical models of quantum systems. In a random matrix, the rows and columns are filled with numbers randomly selected from a normal, or Gaussian, distribution. (Imagine randomly choosing numbered balls from an urn, where certain numbers are represented more often than others.)
By solving a sufficiently large random matrix, physicists can determine a system's "typical" eigenvalues. Combining the results of many such calculations, they can come up with statistical laws about those eigenvalues.
If the random matrices belong to a class of matrices known as the Gaussian Unitary Ensemble (GUE), physicists obtain good estimates of the average spacing between consecutive energy levels of heavy atomic nuclei and other complex quantum systems.
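That recipe can be sketched in a few lines (using numpy; the matrix size is an arbitrary choice of mine): fill a matrix with complex Gaussian entries, symmetrize it into a GUE-style Hermitian matrix, and examine the spacings between consecutive eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Complex Gaussian entries; H = (A + A*)/2 is Hermitian, so its
# eigenvalues are guaranteed to be real.
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (a + a.conj().T) / 2

eigenvalues = np.linalg.eigvalsh(h)  # returned in ascending order
spacings = np.diff(eigenvalues)

# Normalize so the mean spacing is 1. GUE statistics predict "level
# repulsion": very small gaps between neighbors are rare.
s = spacings / spacings.mean()
print(len(eigenvalues), float(s.min()))
```

A caveat on this sketch: dividing by the global mean spacing is a crude normalization, since the eigenvalue density is not uniform across the spectrum; careful comparisons with the GUE spacing law unfold the spectrum locally first.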
It turns out that the spacings between consecutive zeros of the zeta function also appear to behave statistically like the spacings between consecutive eigenvalues of these large, random matrices. Indeed, this observation also suggests that the infinitely many zeros specified in the Riemann hypothesis are irregularly distributed in a particular way along the line 1/2 + bi.
These computational results gave mathematicians a hint that random matrix theory could provide an avenue to a proof of the Riemann hypothesis.
There was also new hope for physicists, who generally have a hard time computing the eigenvalues and experimentally measuring the energy levels of complex quantum systems. The Riemann zeta function itself could provide a new way to do such calculations in quantum mechanics.
In particular, the zeros of the zeta function can be interpreted as energy levels in the quantum version of a conventional, or classical, system that behaves chaotically. Such a simple mathematical model would allow physicists to test ideas about how to bridge the apparently incompatible chaotic and quantum-mechanical descriptions of the microscopic world.
Why the Riemann zeta function so convincingly mimics a quantum system without being one remains a matter of conjecture and mystery. Whatever the reason, physicists and mathematicians have opened a new window on the fascinating interplay of order and chaos—in nature and among integers.
Welcome to an occasional series devoted to "cool stuff" that I encounter while browsing the world of mathematics and computer science. I'll peek at new developments in math and its applications, and I'll revisit old puzzles, famous problems, and historic events—anything mathematical that happens to catch my eye. I hope you'll find something of value in these brief, informal forays into the world of math.
Ivars Peterson is a freelance writer and editor. He was Director of Publications at the Mathematical Association of America from 2007 to 2014. As an award-winning mathematics writer, he previously worked at Science News for more than 25 years and served as editor of Science News Online and Science News for Kids. His books include The Mathematical Tourist, Islands of Truth, Newton's Clock, and Fragments of Infinity: A Kaleidoscope of Math and Art.