The trouble with sitting down at a computer keyboard to enter a password is that someone may be looking over your shoulder. Because your password could be stolen as you type it, the computer system isn't completely secure.
But if you could somehow provide the computer with information that persuades the computer you know the password without actually giving away the password itself, you would be on your way to solving the security problem.
Furthermore, if no onlooker could reconstruct the password from the information you gave the computer, no one could break into the system—at least by using a purloined password.
The mathematical basis for such a scheme is available, and it depends on something called a "zero-knowledge" proof.
The idea is that a "prover" has found a proof for a theorem and wants to let a "verifier" know that he knows the proof without revealing the proof itself. The verifier can ask a special question that requires the equivalent of a yes-or-no answer. If the prover really knows the proof, he can answer the question correctly every time it is asked.
If the prover doesn't know the proof, he has only a 50 percent chance of being right each time. After, say, a dozen tries, the chances of fooling the verifier get very small.
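That shrinking chance is easy to quantify: a bluffing prover survives n independent rounds with probability (1/2)^n. A minimal sketch:

```python
# Chance that a prover who doesn't actually know the proof guesses
# correctly in every one of n independent yes-or-no rounds.
def fooling_probability(rounds: int) -> float:
    return 0.5 ** rounds

print(fooling_probability(12))  # 0.000244140625, about 1 chance in 4,096
```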
Neither the question nor the possible answers give away even a hint of the proof itself—hence, the term zero-knowledge proof.
The three researchers demonstrated the procedure for a mathematical coloring problem, which involves ensuring that no two linked points in certain networks of connected points have the same color.
Manuel Blum then showed how to give an efficient zero-knowledge proof for any mathematical theorem.
One of the more difficult steps, Blum noted, is finding the right question for the verifier to ask. Once this is done, a zero-knowledge scheme can handle any theorem posed within any logic system.
All a verifier finds out is that a theorem is provable and that the prover actually knows the proof. And because the prover has to use at least as many words as the proof itself contains, he gives away an upper limit for the proof's length.
To show how the scheme works, Blum chose an example from graph theory. Any network of points (or nodes) connected by lines (or edges) is called a graph. In Blum's example, the graph consists of a star-shaped pattern of lines linking 11 points.
The prover has found a continuous path among the links that passes only once through each of the 11 points on the graph and returns to where it started. This special type of path is called a Hamiltonian cycle.
The prover's aim is to persuade a verifier that such a path is known without giving the verifier the slightest idea of how to construct the path.
To do this in Blum's example, the prover privately marks 11 nodes along the circumference of a circle and labels them randomly from 1 to 11. The nodes on the circle are then connected in the same way as the points in the original graph. That is, lines would join nodes 1 and 5, 2 and 6, and so on.
Deciding whether a particular graph—a network of points and connecting lines—has a Hamiltonian cycle is often difficult. This involves finding a continuous path along the links that passes only once through each of the graph's points and then returns to where it started. In this example, such a path exists through the points in the order 1, 5, 6, 2, 8, 4, 10, 11, 9, 3, 7, and then back to 1. For a zero-knowledge proof, the star-shaped graph (above) can be redrawn so that all the points fall on the circumference of an imaginary circle (below) yet the connecting lines still join the same points as in the original graph.
The resulting diagram is covered up by, say, an erasable opaque film like that used on some lottery or contest tickets.
The verifier can ask the prover to uncover the complete graph, which shows that all the points are properly linked, or she can ask to see the Hamiltonian cycle.
In the latter case, the prover erases enough of the film to reveal just the lines that make up the cycle. He can do this only if he knows the right path. However, because the nodes are still covered up, the verifier doesn't know the actual path from point to point. All she can verify is that a Hamiltonian cycle exists. This process can be repeated as many times as the verifier wishes.
Because the prover doesn't know whether the verifier will ask for the graph or the cycle, he has to be ready for either choice and therefore must know the cycle. Failure to produce either the correct graph or the cycle during any turn is equivalent to a wrong answer, and the verifier then knows that either the prover is lying or he doesn't actually have the proof.
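The whole exchange can be mimicked in a few lines of code. The sketch below is illustrative only: it uses a hypothetical six-node graph with a known Hamiltonian cycle, and ordinary private variables stand in for the opaque film (a real protocol would use cryptographic commitments instead).

```python
import random

# Hypothetical 6-node graph with Hamiltonian cycle 0-1-2-3-4-5-0.
EDGES = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5),
                                (5, 0), (0, 2), (1, 4)]}
CYCLE = [0, 1, 2, 3, 4, 5]
N = 6

def one_round() -> bool:
    # Prover: secretly relabel the nodes and "commit" to the shuffled graph.
    perm = list(range(N))
    random.shuffle(perm)
    committed = {frozenset((perm[a], perm[b])) for a, b in map(tuple, EDGES)}

    # Verifier: flip a coin and issue one of the two possible challenges.
    if random.choice(["graph", "cycle"]) == "graph":
        # Prover uncovers everything; verifier checks that the committed
        # graph really is the public graph under the revealed permutation.
        revealed = {frozenset((perm[a], perm[b])) for a, b in map(tuple, EDGES)}
        return revealed == committed
    else:
        # Prover uncovers only the cycle edges; verifier checks that they
        # lie in the committed graph and visit every node exactly once.
        cycle_edges = {frozenset((perm[CYCLE[i]], perm[CYCLE[(i + 1) % N]]))
                       for i in range(N)}
        touched = {v for e in cycle_edges for v in e}
        return (cycle_edges <= committed and len(cycle_edges) == N
                and touched == set(range(N)))

# An honest prover passes every round, no matter which challenge arrives.
print(all(one_round() for _ in range(100)))  # True
```

A cheating prover would have to gamble on which challenge is coming, and each failed gamble is caught, which is exactly the 50-percent-per-round argument above.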
As outlined, this scheme sounds somewhat cumbersome. But the opaque film used in the example can be replaced by encryption schemes to hide information. Thus, proof checking can be done electronically when the whole procedure is encoded as strings of binary digits. This makes it possible to use this concept for password protocols and in cryptological games like tossing a coin by telephone or exchanging secret keys.
And in the sometimes turbulent world of mathematics research, it gives a wary mathematician a way to claim credit for being the first to find a particular proof without the necessity of giving away the proof's details. All that someone else can find out, until the proof itself is finally revealed, is that a particular theorem is provable.
Nestled beside a national wildlife refuge, the Noyes Museum of Art in Oceanville, N.J., seemed an unlikely place for an exhibit featuring art rooted in mathematical concepts. Nonetheless, in 2006, its galleries (now closed at this location) featured works by four contemporary artists whose art had a strong mathematical element.
The Noyes Museum of Art in Oceanville, N.J. Photo by I. Peterson.
"In conceptual art the idea or concept is the most important aspect of the work," conceptual artist Sol LeWitt wrote in 1967. "When an artist uses a conceptual form of art, it means that all the planning and decisions are made beforehand and the execution is a perfunctory affair. The idea becomes a machine that makes the art."
LeWitt's series of vast wall drawings exemplified this approach. In effect, the artist conceptualized the work and provided the instructions—an algorithm—for what should appear on a wall. Assistants and volunteers then implemented the instructions.
For the Noyes exhibit, LeWitt contributed three wall drawings. Here are the instructions for these creations, each drawn in thick pencil on white walls.
Wall drawing #142: A twelve-inch grid inside an eight-foot square an increasing number of horizontal not straight lines from the left side.
Wall drawing #143: A twelve-inch grid inside an eight-foot square an increasing number of horizontal straight lines from the left side.
Wall drawing #167: A line from the center of the wall to the midpoint of top side, and a line from the midpoint to the right side.
Catalog cover, "Form + Function" exhibit, Noyes Museum of Art.
The instructions were curiously ambiguous and somewhat puzzling. Given just the instructions, I tried to imagine what each artwork would look like. (Try it yourself.) However, when I compared my imaginings with the actual works drawn on the walls, my images didn't quite match the reality. And it wasn't clear to me whether the instructions had been misinterpreted or were incomplete or whether the artist had provided additional guidance.
Steven Gwon's pieces typically consisted of colored pencil lines, hand drawn on sheets of finely ruled graph paper. The resulting sheets revealed spectral forests of subtly shaded lines, spaced at regular intervals and of precisely defined lengths. Seen from afar, these gently rendered patterns shimmered in the ambient light.
Gwon said that his drawings encapsulated days of the year, as recorded through numbers that measure time and space and through the colors of the spectrum—sunrises or sunsets; minutes, hours or days, months or years. The example on display, called Year, had 36 panels.
Pencil on Paper U by Steven Gwon. Courtesy of Steven Gwon.
Mark Pomilio's art wasn't explicitly tied to a grid. Instead, it explored, through layering and translucency, the geometry of growth and form. Pomilio likened his approach to the generation of a complex form from a single cell (or seed). His forms multiplied, grew, and overlapped to produce subtly complex, translucent landscapes of polygons, lines, and shadows, whether rendered in charcoal or deep, rich colors.
Family Circle VII by Mark Pomilio. Courtesy of Mark Pomilio.
Pomilio used his abstract works as ways to articulate, in a visual sense, profound recent developments in the life sciences, as seen in developmental processes and modeled by simple geometrical expressions.
Several large quilts provided an engaging introduction to John Sims and his visual ruminations on the digits of pi (see "Quilting Pi"). These square patterns mapped the decimal digits of pi to colors, starting from 3 at the center and spiraling outward from that point. Different choices of color led to intriguing variants—the same digits and arrangement in each case, but strikingly different visual effects.
Seeing Pi by John Sims. Courtesy of John Sims.
My favorite, however, was Sims's signature piece, which paired a representation of a tree with a branched, fractal structure (see "Fractal Roots and Artful Math"). It highlighted the connections that Sims saw among mathematics, art, and nature (and presented his own vision of growth and form). In different orientations, this dual image encoded a variety of relationships and concepts.
Mathematical Art Philosophy 101: With Titles (Square Roots of Tree, Mathematical Art Brain, and Tree Root of Fractal) by John Sims. Courtesy of John Sims.
"What is exciting about all this is that these individuals are making work using systems with innumerable applications," curator A. M. Weaver wrote in the catalog accompanying the exhibit.
Each artist, in his own way, alerted viewers to concepts and processes that bring together mathematics, art, and nature. Mathematics itself offers a multifaceted, solid frame on which to hang provocative creations of artistic imagination.
A view of the groundbreaking 2002 MathArt/ArtMath show at the Selby Gallery in Sarasota, Fla.
The realm of mathematical art is far wider and more diverse than many people realize, however. A groundbreaking 2002 exhibition at the Ringling College of Art + Design's Selby Gallery (now closed) in Sarasota, Fla., dramatically illustrated the broad range and depth of the burgeoning interaction between mathematics and art.
Titled MathArt/ArtMath, the exhibition was assembled by Kevin Dean, then director of the Selby Gallery, and John Sims, a mathematician and artist who taught at the Ringling School. Sims encouraged the linking of mathematics and art—in his own work, in the classroom, and by calling attention to the endeavors of others devoted to bringing about such interactions.
"My role as an artist is to encode mathematics in what I do," Sims remarked. "The mathematical art that I seek to develop combines mathematical language and analysis with the expressiveness and creativity of the process to make expressive visual theorems."
In constructing his signature piece, Square Roots of a Tree, Sims paired a representation of a tree with a branched fractal structure to highlight the tree-root relationship and interdependency that he sees between mathematics and art. In other orientations, his artwork becomes Tree Root of a Fractal (rotated 180 degrees) and Math Art Brain (rotated 90 degrees).
Square Roots of a Tree. Courtesy of John Sims.
Tree Root of a Fractal, Sims noted, "shows how the latent geometry of nature can inspire and support abstraction." See also "Form Plus Function."
Sims brought the same sort of sensibility to the classroom. Mathematical ideas inspire artistic creations. Conversely, "to see mathematically, one draws from creativity and intuition, as in the case with the art process itself," he said.
One sequence of artworks arose out of a study of the Pythagorean theorem (given a right triangle with sides a, b, and c, a² + b² = c²), particularly the theorem's manifestations in different guises at different times in history and in different cultures.
Holly's Rose: Artwork by Holly Brafflet as part of a course offered by John Sims at the Ringling School of Art and Design. Courtesy of John Sims.
It was a classroom journey, Sims said, that blended "the worlds, vocabularies, and strategies of mathematics and art into an interdisciplinary mixture that celebrates the interconnectedness of analysis and creativity, left brain and right brain, theory and practice, structure and expression, and the liberal arts and studio praxis."
Sims has a strong commitment to bringing art—especially mathematical art—to the public. To that end, he promoted the installation of a maze of pillars draped with murals in a Sarasota neighborhood, where visitors could view artworks created by local and internationally known artists and even find space to paint their own artistic impressions.
One ambitious scheme, Time Sculpture, involved installations scattered across the United States—familiar objects (vase, chess set, chair, clock, and so on) that are connected yet dispersed. Sims saw it as another sort of journey—one that maps "orbits from abstract places into diverse geographies, celebrating the search for cycles in both human and natural systems."
Sims's passion for making mathematical art known and accessible to the public led to the original MathArt/ArtMath exhibition at the Selby Gallery. The show represented a significant effort not only to illustrate the diversity of mathematical art but also to illuminate its common threads.
The exhibition presented a broad spectrum of math-related paintings, prints, sculptures, fabrics, digital prints, electronic music, and videos. Euclidean geometry, Fibonacci numbers, the digits of pi, the notion of algorithms, concepts of infinity, fractals, and other ideas furnished the mathematical underpinnings.
The assembled artworks, carefully arranged to highlight similarities in theme, exemplified the diverse ways in which different artists can approach the same fundamental ideas, yet still reflect their mathematical essence.
One contribution from Sims was a visualization of pi's digits, in a digital video format—with music by composer Frank Rothkamm and the participation of Paul D. Miller, who is better known on the New York City scene and elsewhere as DJ Spooky (see "Quilting Pi").
When John Sims contemplates a number, he sees color and shape. And an intriguing, enigmatic number such as pi, the ratio of a circle's circumference to its diameter, conjures up vivid patterns that belong on quilts.
Starting with 3.14159265, the decimal digits of pi run on forever, and there's no discernible pattern to ease the task of compiling (or memorizing) these digits. Computer scientists have so far succeeded in computing 50 trillion decimal digits of pi.
Both a mathematician and an artist, Sims taught for many years at the Ringling College of Art + Design in Sarasota, Fla. He's passionately interested in the collision of mathematical ideas and visual culture.
Pi is one of the few mathematical constants that have successfully entered the pop-culture psyche, Sims noted. Pi has appeared as the title of a movie, for instance, and as the name of a perfume.
A while ago, Sims created a visualization of pi's digits in a digital video format—with music by Frank Rothkamm and the participation of Paul D. Miller, who was better known on the New York City scene and elsewhere as DJ Spooky. In this visualization, each of the digits from 0 to 9 is represented by its own color on a vast grid of squares.
Seeing Pi by John Sims.
Working in base 2 and using the colors black and white, Sims then created Black White Pi. In base 3, using red, white, and blue, he made American Pi.
From left to right, Seeing Pi, American Pi, and Black White Pi. Courtesy of John Sims.
A second pi-based project involved a collaboration with conceptual artist Sol LeWitt (1928-2007). LeWitt's instructions were to put 1,000 straight lines inside a square. Sims achieved that result by dividing each side of the square into 10 parts (like the axes of a graph), labeling the divisions from 0 to 9, and drawing lines from a division on one side to a division on an adjacent side. The lines followed successive digits of pi from side to side, starting at the top and moving in a clockwise direction until the wall drawing had 1,000 lines.
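The side-to-side rule can be sketched in code. The reading below is a guess at the construction (the text does not fully specify which divisions connect to which), so treat the pairing of digits with sides as hypothetical:

```python
# One possible reading of the 1,000-lines construction: successive digits
# of pi pick divisions on successive sides of a square, moving clockwise
# from the top, and each pair of divisions is joined by a straight line.

def point_on_side(side: int, t: float):
    """Return (x, y) at fraction t along side 0..3 (top, right, bottom, left)
    of the unit square, listed clockwise."""
    return [(t, 1.0), (1.0, 1.0 - t), (1.0 - t, 0.0), (0.0, t)][side]

PI_DIGITS = "31415926535897932384"  # extend with more digits for 1,000 lines

lines = []
for i in range(len(PI_DIGITS) - 1):
    a = int(PI_DIGITS[i]) / 10       # division on the current side
    b = int(PI_DIGITS[i + 1]) / 10   # division on the next side, clockwise
    lines.append((point_on_side(i % 4, a), point_on_side((i + 1) % 4, b)))

print(len(lines))  # 19 line segments for these 20 digits
```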
Sims' former student, Brandon Styza, drew the lines. The result formed the basis for a LeWitt wall drawing in the math lounge at Wesleyan University.
In 2006, before heading for New York City, Sims completed a number of pi works, including several quilts that were constructed by an Amish quilting group in Sarasota. These artworks were on display at Sarasota's mack b gallery.
John Sims working with a group of Amish women to create a pi-based quilt. Courtesy of John Sims.
Sims started out with a drawing of pi's decimal digits on a square grid, with successive digits forming a clockwise spiral from the center.
A square spiral of the digits of pi. Photo by Tobey Albright.
In the gallery, this drawing was displayed with a phonograph that played a recording of Sims reciting the digits of pi in order. A second track presented the digits in German.
With each digit from 0 to 9 mapped to a different color (but not black or white), the central portion of the drawing was then converted into a striking, square quilt of colored patches, with a black border. Sims called the creation Pi sans Salt and Pepper.
Pi sans Salt and Pepper by John Sims. The square quilt is 8 feet wide. Photo by Tobey Albright.
In a variation on this pi-based theme, another quilt designed by Sims featured several differently color-coded representations of pi. It was called Civil Pi Movement.
In this pi-based quilt, called Civil Pi Movement, the upper left unit shows the first 36 binary digits of pi (0 = white and 1 = black) and the lower right unit reverses the color scheme. The upper right unit shows the first 36 ternary digits of pi (0 = dark blue, 1 = red, and 2 = white) and the lower left unit uses a different color scheme (0 = green, 1 = red, and 2 = black). The center unit matches the center of Pi sans Salt and Pepper. The square quilt is 8 feet wide. Photo by Tobey Albright.
"The mathematical art that I seek to develop combines mathematical language and analysis with the expressiveness and creativity of the process to make expressive visual theorems," Sims said. "To see mathematically, one draws from creativity and intuition, as in the case with the art process itself."
When we see patterns—whether in the arrangement of stars in the sky (see "Spying Pi in the Sky") or in the distribution of guests at a dinner party (see "Party Games")—we are constantly tempted to think of these patterns as existing for a purpose and being the effect of a cause. Ramsey's theorem (see "Playing Fields of Logic") suggests otherwise. Patterns can, and indeed must, arise out of pure randomness (or chance).
In mathematics, in science, and in life, we constantly face the delicate, tricky task of separating design from happenstance. We must make decisions about the meaning of apparent coincidences, whether what we have before us is medical data showing an unusual cluster of cancers in a community, a sequence of events that suggests a conspiracy, or a peculiar arrangement of stars in the sky.
We detect eight stars in a straight line and think, "That can't be an accident." Is this pattern the result of some cosmic ordering driven by gravity? Is an unknown force involved? Is it a string of deliberately placed beacons for interstellar travel?
In the absence of an obvious explanation, the human tendency has been—and still is—to invoke supernatural forces, extraterrestrial visitors, or other fantastic beings and events to make sense of what we observe.
Humans are model builders, and the human mind is very good at identifying patterns and constructing theories. We are programmed to search for patterns and to invent explanations, and we find it difficult to imagine that patterns emerge from randomness.
As builders, humans can also create edifices on a vast range of scales, from the giant pyramids of ancient times to the intricate microscopic components of a computer's microprocessor. We can manipulate individual atoms to spell out tiny words on a silicon surface, and we can orchestrate the construction of giant skyscrapers and vast malls.
To do so, we design, make plans, create blueprints, organize labor, and marshal resources to create the order that we desire. In nature, however, that order and structure arises from randomness, and we face the puzzle of how the components of life assemble themselves without blueprints as guides.
Mathematical research is generally thought to be a solitary pursuit. You might even imagine a mathematician squirreled away in a dingy garret, an isolated wilderness cabin, or a sparsely appointed cubicle, thinking deeply, scrawling inscrutable equations across scraps of paper, to emerge from a self-imposed exile at last with a proof in hand.
A few mathematicians do spend their professional lives in solitary contemplation of a deep mathematical problem. In general, however, mathematical research is a remarkably social process. Colleagues meet constantly to compare notes, discuss problems, look for hints, and work on proofs together.
The abundance of conferences, symposia, workshops, seminars, and other gatherings devoted to mathematical topics attests to a strong desire for interaction.
Paul Erdős (1913-1996), perhaps more than any other mathematician in modern times, epitomized the strength and breadth of mathematical collaboration. Because he had no permanent home and no particular job, Erdős simply traveled from one mathematical center to another, sometimes seeking new collaborators, sometimes continuing a work in progress. His well-being was the collective responsibility of mathematicians throughout the world.
"My brain is open," Erdős typically declared on stepping into a mathematician's office, and the work would begin. For him, doing mathematics was as natural as breathing, and he did it for more than 65 years.
Driven by the notion that life represents a cosmic struggle to uncover beautiful truths hidden away by a stubborn, contrary God, Erdős applied himself to his pursuit with frenetic zeal.
"A mathematician is a device for turning coffee into theorems," Erdős wryly remarked.
To Erdős, mathematics and its elements were more real than trees and grass, transcending reality and awaiting discovery. At the same time, though he did not like having possessions, Erdős was not an ascetic. He liked to eat well in good restaurants and stay in fine hotels when he got the chance.
A compassionate, generous, gentle man, he was well informed on almost any topic of conversation and deeply aware of the tragedies of world politics.
Erdős wrote hundreds of research papers on a wide range of mathematical topics. Especially astonishing was the extent to which he also worked with other mathematicians to produce joint papers.
Collaboration on such a scale had never been seen before in mathematics, and it has entered the folklore of the mathematical community. Of course, there's a characteristically mathematical way to describe this webbiness—a quantity called the Erdős number.
Mathematicians assign Erdős the number 0. Anyone who has coauthored a paper with him has the cherished Erdős number 1. As of August 2020, there were 512 such coauthors. Another 12,600 mathematicians have the Erdős number 2, because they wrote a paper not with Erdős himself but with someone who wrote a paper with Erdős.
People belonging to these two categories already encompass a significant proportion of all mathematicians in academia worldwide.
The Erdős number 3 goes to anyone who has collaborated with someone who has collaborated with someone who coauthored a paper with Erdős, and so on. Thus, any person not yet assigned an Erdős number who has written a joint mathematical paper with a person having Erdős number n earns the Erdős number n + 1.
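In graph terms, an Erdős number is simply the shortest-path distance to Erdős in the collaboration graph, which a breadth-first search computes directly. The tiny graph below is hypothetical except for Erdős himself:

```python
from collections import deque

# A toy collaboration graph: names A-E are hypothetical coauthors.
COAUTHORS = {
    "Erdős": ["A", "B"],
    "A": ["Erdős", "C"],
    "B": ["Erdős"],
    "C": ["A", "D"],
    "D": ["C"],
    "E": [],  # never collaborated with anyone: Erdős number infinity
}

def erdos_numbers(graph, root="Erdős"):
    """Breadth-first search from Erdős; distance = Erdős number."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        person = queue.popleft()
        for coauthor in graph[person]:
            if coauthor not in dist:
                dist[coauthor] = dist[person] + 1
                queue.append(coauthor)
    return dist  # anyone missing from the result has Erdős number infinity

print(erdos_numbers(COAUTHORS))  # {'Erdős': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 3}
```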
Anyone left out of this assignment process has the Erdős number infinity.
Keeping track of these mathematical links has become a kind of game, punctuated by published, tongue-in-cheek explorations of the properties of Erdős numbers and the extent of mathematical links to the rest of the world.
It's possible to draw a collaboration graph in which every point represents a mathematician, and lines join mathematicians who have collaborated with each other on at least one published paper. The resulting tangle is one of the largest, most elaborate graphs available to mathematicians.
Some scholars have conjectured that this monstrous graph snares nearly every present-day mathematician and has threads into all areas of mathematics, into computer science and physics, and even into the life sciences and social sciences.
This collaboration graph's breadth is astonishing and vividly demonstrates the vital role of collaboration in math and science.
Several decades ago, mathematician Jerrold W. Grossman took on the task of maintaining a comprehensive listing of mathematicians who have earned an Erdős number 1 or 2. "It's fun," Grossman said. "But more seriously, it shows that mathematical research is very webby, with almost everybody talking to people who talk to people."
Indeed, fascinating patterns emerge from the study of Erdős collaboration lists. For example, the average number of authors per research article in the mathematical sciences increased dramatically during Paul Erdős's lifetime.
About 70 years ago, more than 90 percent of all papers were solo works, according to Grossman. In more recent years, barely half of the papers were individual efforts.
In the same period, the fraction of two-author papers rose from less than one-tenth to roughly one-third. In 1940, virtually no papers had three authors. Now, more than 10 percent of all papers have three or more authors.
Erdős himself may have played a role in this relatively recent explosion of collaboration.
Throwing darts at a target may sound like a curiously haphazard way to solve a mathematical problem. Properly applied as a kind of intelligent guessing, however, it can become a highly effective technique for obtaining answers to certain problems in Ramsey theory (see "Playing Fields of Logic") and in other areas of mathematics.
Suppose that instead of the usual rings and diagonals, a dartboard features a target in the shape of a triangle inside a circle. If you were to throw darts at the board without aiming, the darts would land randomly within the circle and the triangle. By counting up the number of darts that land anywhere on the target and those that land only within the triangle, you can estimate the triangle's size relative to that of the circle.
For example, if 10 out of 100 randomly thrown darts fall inside the triangle, the triangle has approximately one-tenth the area of the circle.
For a dartboard consisting of a triangle inside a circle inside a square, the proportion of randomly thrown darts that land inside the triangle provides an estimate of the triangle's area relative to that of the square. You can also estimate the circle's area relative to that of the square or find an approximate value of π.
Such poorly played games of darts can provide remarkably good estimates of a variety of mathematical quantities, including π (pi), the ratio of the circumference of a circle to its diameter. Just make a target in which a circle of a given radius fits snugly inside a square with sides equal to twice the circle's radius.
The number of darts that hit the circle divided by the total number that hit anywhere on the square provides an estimate of π divided by four. The more inaccurate the aim and the larger the number of throws, the better the estimate becomes (see also "Buffon's Needling Ants" and "Targeting Pi").
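Here is a minimal sketch of the dartboard estimate: random points in a 2-by-2 square, with the fraction landing inside the inscribed unit circle approximating π/4.

```python
import random

def estimate_pi(throws: int) -> float:
    """Throw darts at a unit-radius circle inscribed in a 2-by-2 square."""
    hits = 0
    for _ in range(throws):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:  # dart landed inside the circle
            hits += 1
    # (area of circle)/(area of square) = pi/4, so scale up by 4.
    return 4 * hits / throws

print(estimate_pi(100_000))  # roughly 3.14, varying a little from run to run
```

With 100 darts the estimate is crude; with 100,000 it is usually good to two decimal places, which matches the article's point that more throws beat better aim.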
Random choices can also play a prominent role in mathematical proofs. In 1947, Paul Erdős (1913-1996) used a probabilistic method to show, without actually determining the Ramsey number itself, that a given Ramsey number has to be greater than a certain value (see "Puzzling Groups and Ramsey Numbers").
The darts of Erdős's thought experiment were flips of a coin. In effect, he started with a room of mutual strangers, brought each of the pairs together, and introduced them only if a coin flip came up heads. The result was a random mixture of strangers and fresh acquaintances, linked in various ways.
By an ingenious chain of reasoning, Erdős then demonstrated that the probability of getting a party with a desired mix of strangers and acquaintances is practically certain beyond a particular size of group.
However, although Erdős could demonstrate the existence of the right kinds of patterns among the random pairings of his thought experiment, he could give no hint as to how to find such pairings or draw and color the relevant graphs and subgraphs to determine the Ramsey number.
Using mathematical recipes that involve the equivalent of coin flips has proved a useful strategy for solving a wide range of problems in mathematics, from testing whether a number is prime (evenly divisible only by itself and 1) to estimating such mathematical quantities as π.
Such randomized algorithms are also extremely useful in computer science for sorting data, recognizing patterns, generating images, and searching databases on a computer. In many cases, random choices or paths shorten the task at hand and help to avoid cases of computational gridlock.
The same thing can happen when two people walking in opposite directions along a sidewalk both step to the same side to avoid colliding, then do it again and again. They get trapped in an embarrassing dance until the stalemate is somehow finally broken.
A coin toss can solve the problem for both the inadvertent dancers and a computer that is engaged in resolving conflicting instructions or deciding which of two or more equally likely courses to take.
Probabilities also enter into Monte Carlo simulations for modeling the behavior of atoms in a gas and other physical systems (see also "Random Bits"). You can toss coins or throw dice to obtain answers for both mathematical and physical questions, and, with the help of a computer, this can sometimes be done surprisingly quickly and efficiently.
Ironically, Erdős himself never worked with a computer. His insights, however, suggested novel probabilistic methods of tackling tough problems well suited to the powerful, ubiquitous computers of today.
Throughout his long, itinerant life, Paul Erdős (1913-1996) spent most of his waking hours and, apparently, all his sleeping hours doing mathematics. He was a superb problem solver, and his phenomenal memory allowed him to cite exact references to thousands of mathematical papers, including their page numbers.
"If you don't know how to attack a particular problem, ask Erdős" was the constant refrain among his colleagues and collaborators. He was a one-man clearinghouse for information on the status of unsolved problems in the fields of mathematics that attracted his attention (see "Paul Erdos and an Infinity of Problems").
By the time he died in 1996, Erdős had also earned a reputation as the greatest problem solver of all time. He had the knack of asking just the right mathematical question—one that was neither so simple that it could be solved in a minute nor so difficult that a solution was unattainable. Ramsey theory, which offered a wide variety of challenges, was one of his joys.
A typical question involving Ramsey theory concerns how many numbers (see "Pigeonhole Congestion"), points (see "Planes of Budapest"), or specified objects are needed to guarantee the presence of a certain structure or pattern. You look for the smallest "universe" that automatically contains a given object.
For example, it may be the smallest group of arbitrarily positioned stars that would include a convex quadrilateral or the number of people required to ensure the attendance of three strangers or three acquaintances at a gathering (see "Party Games").
Fascinated by such problems, Paul Erdős particularly liked those he called "party puzzles," which involve different combinations of strangers and acquaintances brought together at some sort of gathering. But he didn't usually think of these problems in terms of people—except as a way of describing them informally.
Erdős reasoned in terms of points, lines, and colors—all elements of a branch of mathematics known as graph theory. In general, a graph (in this context) consists of an array of points, or vertices, connected by straight lines, which are often called edges.
To convert a party puzzle into a graph problem, you draw a point for each person present. You then draw lines between the points, with red lines joining two people who know each other and blue lines for two people who are strangers. When all pairs of points are joined, the resulting network of points and lines is known as a complete graph.
Depending on the relationships specified in a given case, such a graph may contain only red lines, only blue lines, or a mixture of red and blue lines joining the points. The problem comes down to proving that no matter how the lines are colored, you can't avoid producing, in the case of six points (or people), either a red triangle (representing three mutual acquaintances) or a blue triangle (three strangers).
Such a triangle is guaranteed for a complete graph of six points, but not necessarily for a graph of five or fewer points. Hence, the minimum number of points that necessarily contains the requisite pattern is six, often designated by the so-called Ramsey number R(3,3), where the first number inside the parentheses gives the number of acquaintances and the second gives the number of strangers.
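For a universe this small, the claim can be verified by brute force. The following Python sketch (an illustration, not part of the original discussion; the function names are mine) enumerates every red/blue coloring of the complete graphs on five and six points and looks for a one-color triangle:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), i < j, to 0 (red) or 1 (blue)."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def every_coloring_forced(n):
    """True if every 2-coloring of the complete graph on n points
    contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product([0, 1], repeat=len(edges))
    )

print(every_coloring_forced(5))  # False: five people are not enough
print(every_coloring_forced(6))  # True: six always suffice, so R(3,3) = 6
```

The six-point check runs through all 32,768 colorings in well under a second; the five-point check stops as soon as it finds a coloring with no one-color triangle.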
What about subgroups of four strangers or four acquaintances? It turns out that an array of 17 points is insufficient to guarantee that some four points will be mutually linked by lines of the same color. Only a graph of 18 or more points will always do the job. Thus, R(4,4) = 18.
In a graph representing seventeen people (left), it's possible to color the lines so that no four points are connected by a network of lines that are completely blue or completely red. In contrast, in a graph representing eighteen people (right), there is always such a network, meaning that a group of four mutual acquaintances or four mutual strangers will always be present in such a gathering.
In other words, at a dinner party of 18 there will always be at least four mutual acquaintances or four mutual strangers, but that need not happen when only 17 people are present.
It may be tempting to consider Ramsey's theorem in drawing up a guest list to ensure an appropriate balance of novelty and familiarity in some random gathering—guests selected, perhaps, by sticking pins in a phone book.
The trouble is that calculating the minimum number of partygoers to have—say, seven acquaintances or nine strangers—can be extraordinarily difficult. Though Ramsey's theorem guarantees the existence of such arrangements, determining their actual size is an onerous task.
The chief problem is that the number of cases that must be checked escalates rapidly with each step up in the size of the group. For example, a group of five represents 1,024 different cases, a group of six has 32,768 possibilities, and a group of seven has 2²¹, or 2,097,152, possibilities.
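These counts follow from the fact that each pair of guests is independently either acquainted or not, giving 2 raised to the number of pairs. A quick check of the arithmetic:

```python
from math import comb

# One binary choice (acquainted or not) per pair of guests,
# so a party of n yields 2 ** C(n, 2) distinct cases.
cases = {n: 2 ** comb(n, 2) for n in (5, 6, 7)}
print(cases)  # {5: 1024, 6: 32768, 7: 2097152}
```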
A prodigious amount of effort involving considerable mathematical ingenuity has gone into producing the rather scant table of Ramsey numbers known in the year 2000 (below). In several cases, mathematicians have been able to find the answer only to within a range of the true value—all for the sake of solving an intriguing puzzle rather than for any particular practical application.
A Ramsey number represents the minimum number of people needed to guarantee the presence of a given number of mutual acquaintances and a given number of mutual strangers. In some cases, mathematicians have not yet pinpointed the actual Ramsey number and can give only the range within which the number must fall.
In the last few decades, computers have played a significant role in determining Ramsey numbers. For example, in 1990, Brendan McKay and Zhang Ke Min relied on computers to help them sift through thousands upon thousands of possibilities to determine R(3,8).
A little later, McKay collaborated with Stanislaw P. Radziszowski in a concerted effort to find R(4,5). Checking all possible graphs was out of the question. Instead, McKay and Radziszowski focused on developing efficient procedures for mathematically "gluing" together small graphs, which could be checked more easily, to make graphs large enough for the problem at hand.
Because their search technique involved using the same computer program many times with different data, the researchers could readily partition their task into small pieces. This meant that they could do the necessary computations on many desktop computers rather than having to rely on a single supercomputer.
Both of the institutions where the researchers were based had a large number of workstations located in staff offices and student laboratories. Many of these machines were often idle during the night or on weekends. By commandeering these resources at odd times, the mathematicians could have as many as 110 computers working simultaneously on the problem.
By early 1993, McKay and Radziszowski had their answer: R(4,5) = 25. At that time, however, the prospect of cracking the next candidate, R(5,5), appeared bleak.
Erdős liked to tell an allegorical tale about the difficulties of determining Ramsey numbers. Suppose an evil spirit were to come to Earth and declare, "You have two years to find R(5,5). If you don't, I will exterminate the human race." In such an eventuality, it would be wise to get all the computers in the world together to solve the problem. "We might find it in two years," Erdős predicted.
But if the evil spirit were to ask for R(6,6), Erdős noted, it would be best to forgo any attempt at the computation and instead launch a preemptive strike against the spirit to destroy him before he destroys us.
On the other hand, "if we could get the right answer just by thinking, we wouldn't have to be afraid of him, because we would be so clever that he couldn't do any harm," Erdős concluded.
Erdős himself was a master of innovative techniques for solving problems, sometimes even exploiting randomness and probability to establish the existence of structures that he was determined to corral in his many party puzzles.
Nearly every Sunday during the winter of 1933 in Budapest, a small group of students would meet somewhere in the city at a park or cafe to discuss mathematics. The gathering typically included Paul Erdős (1913-1996), George Szekeres (1911-2005), and Esther Klein (1910-2005).
In addition to feeding their passion for mathematics, the students enjoyed exchanging personal gossip and talking politics. In a way, they had their own, informal university without walls, and they relished the chance to explore new ideas.
On one particular occasion, Klein challenged the group to solve a curious problem in plane geometry that she had recently encountered. Suppose there are five points positioned anywhere on a flat surface, as long as no three of the points form a straight line. When four points are joined, they form a quadrilateral—a four-sided figure.
Klein had noticed that given five points, four of them always appeared to define a convex quadrilateral. The term convex means that the figure has no indentations. Thus, a square is convex, but a crescent quadrilateral (an angular boomerang or arrowhead shape with four vertices) is not.
Here, ACDE is a convex quadrilateral, but ABCE (shaded) is not because of the indentation at B.
Klein asked if anyone could prove that a convex quadrilateral has to be present in any given set of five points lying in a plane, with no three points in a line. After letting her friends ponder the problem, Klein explained her proof.
She reasoned that there are three different ways in which a convex polygon encloses all five points. You can imagine the points as nails sticking out of a board with an elastic band stretched so that it rings the largest possible number of nails.
The simplest case occurs when a convex polygon results from joining four points to create a quadrilateral, which fences in the remaining point and automatically satisfies the requirement.
In the second case, if the convex polygon is a pentagon incorporating all five points, then any four of these points can be connected to form a quadrilateral.
Finally, if the convex polygon includes just three points to create a triangle, the two points left inside the triangle define a line that splits the triangle so that two of the triangle's points are on one side of the line. These two points plus the two interior points automatically form a convex quadrilateral.
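Klein's observation can also be probed numerically. The sketch below (an illustration, not her proof; the names are mine) tests random five-point sets, using cross products to decide whether four points are in convex position:

```python
from itertools import combinations
import random

def cross(o, a, b):
    # Positive if the turn o -> a -> b is counterclockwise.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex_quadrilateral(quad):
    # Four points (in general position) are in convex position exactly
    # when none of them lies inside the triangle of the other three.
    for i in range(4):
        p = quad[i]
        a, b, c = [quad[j] for j in range(4) if j != i]
        signs = [cross(a, b, p) > 0, cross(b, c, p) > 0, cross(c, a, p) > 0]
        if signs[0] == signs[1] == signs[2]:  # same side of all three edges
            return False  # p is inside triangle abc
    return True

def contains_convex_quadrilateral(points):
    return any(is_convex_quadrilateral(q) for q in combinations(points, 4))

random.seed(1)
trials = [[(random.random(), random.random()) for _ in range(5)]
          for _ in range(1000)]
print(all(contains_convex_quadrilateral(pts) for pts in trials))  # True
```

Random floating-point coordinates are, for all practical purposes, in general position, so every trial should contain a convex quadrilateral, just as Klein's argument predicts.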
Inspired by Klein's argument, Szekeres went on to generalize the result, proving that among points randomly scattered across a flat surface, you can always find a set that forms a particular polygon—if there are enough points. For example, you can always find a convex pentagon in an array of nine points but not among just eight points.
Szekeres and Erdős then proposed that if the number of points that lie in a plane is 1 + 2ᵏ⁻², you can always select k points so that they form a convex polygon with k sides. Thus, for a quadrilateral, k is 4, and the number of points required will be 1 + 2⁴⁻² = 1 + 2² = 5. For a convex pentagon, k is 5, and the number of points required will be 1 + 2³ = 9.
In describing this occasion in a memoir written many years later, Szekeres recalled: "We soon realized that a simple-minded argument would not do and there was a feeling of excitement that a new type of geometrical problem [had] emerged from our circle which we were only too eager to solve."
Interestingly, no one has yet proved the conjecture concerning the precise number of points required to guarantee the presence of a convex polygon. Indeed, the problem is solved only for the values k = 3, 4 (due to Klein), and 5. So, no one has even demonstrated that any set of 17 points in the plane always contains six points that are the vertices of a convex hexagon.
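The conjectured thresholds are easy to tabulate (the helper name below is mine, not the book's):

```python
def conjectured_threshold(k):
    # Erdős–Szekeres conjecture: this many points in general position
    # should always contain a convex polygon with k sides.
    return 1 + 2 ** (k - 2)

print([conjectured_threshold(k) for k in (3, 4, 5, 6)])  # [3, 5, 9, 17]
```

The value 17 for k = 6 is exactly the hexagon case that remained unresolved.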
Erdős himself later played a central role in the transformation of Ramsey theory, which involves the application of Ramsey's theorem and related propositions, from a collection of isolated results into a coherent body of work.
Though pursued mainly for the thought-provoking mathematical puzzles that it suggests, Ramsey theory has begun to play a role in the design of vast computer and telephone networks. Techniques used in the theory can help engineers and computer scientists to take advantage of patterns that inevitably arise in large systems in which data storage, retrieval, and transmission have a significant component.
In the meantime, work has continued on the original conjecture proposed by Erdős and Szekeres. Expressed mathematically, the problem asks: For any integer n greater than or equal to 3, determine the smallest positive integer N(n) such that any set of at least N(n) points in general position in the plane (that is, no three of the points are on a line) contains n points that are the vertices of a convex n-gon.
Despite the efforts of many mathematicians, the Erdős–Szekeres problem is still far from being solved. At the same time, the unsolved problem has suggested a wide variety of related questions worthy of exploration. Some mathematicians have looked for other types of patterns among sets of points, and others have generalized the question to higher dimensions and other situations.
That's the rich legacy of an elegant problem first suggested many decades earlier and still attracting mathematical attention.
Frank Ramsey was born in 1903 and spent nearly his entire life in Cambridge. After he graduated in 1925 as Cambridge University's top math scholar, he remained at the university, quickly developing a reputation as an enthusiastic lecturer who could present ideas with great clarity. Applying his keen intellect, he made significant, wide-ranging contributions to the study of theoretical economics and other fields.
One of Ramsey's contemporaries, philosopher George E. Moore (1873-1958), wrote that Ramsey "combined very exceptional brilliance with very great soundness of judgment in philosophy. He was an extraordinarily clear thinker: no one could avoid more easily than he the sort of confusions of thought to which even the best philosophers are liable, and he was capable of apprehending clearly, and observing consistently, the subtlest distinction."
Ramsey was fascinated by questions of logic. He had been deeply influenced by Bertrand Russell (1872-1970), a philosopher and mathematician who believed that mathematics is synonymous with logic. Russell's ambitious goal was to prove that "all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental logical concepts, and that all propositions are deducible from a very small number of fundamental logical principles."
These words appear in Russell's book The Principles of Mathematics, in which he set forth his landmark thesis.
Russell was keenly interested in how we come to know what we know—how we draw valid conclusions from evidence. For him, such questions required close study of logic puzzles and paradoxes to help expose potential cracks in structures of belief.
"A logical theory may be tested by its capacity for dealing with puzzles, and it is a wholesome plan, in thinking about logic, to stock the mind with as many puzzles as possible, since these serve the same purpose as is served by experiments in physical science," Russell argued.
Russell was not the only one who was concerned about logic and the foundations of mathematics. At the beginning of the 20th century, David Hilbert (1862-1943) had argued that there had to be a clear-cut procedure for deciding whether a given logical proposition follows from a given set of axioms (an axiom is a statement that is assumed to be true without proof).
Ramsey set out to demonstrate that such a decision process exists for at least one particular type of mathematical problem. As one of the steps in his argument, he considered relationships among sets of whole numbers, and he proved that if the number of objects in a set is sufficiently large and each pair of objects has one of a number of relations, there is always a subset containing a certain number of objects where each pair has the same relation.
In other words, there are patterns implicit in any sufficiently large set or structure. We can see Ramsey's theorem applied in cases in which the objects are groups of six people (see "Party Games") and whole numbers in increasing and decreasing sequences (see "Pigeonhole Congestion").
Though Ramsey was probably aware that his theorem could be applied to many different areas of mathematics, he focused on its application to mathematical logic. He showed that, given a set of axioms, certain true relationships must exist within the logical system.
Ramsey died in 1930, at the age of 26, as a result of complications associated with abdominal surgery for a grave illness. Yet he left an astonishing legacy of achievement in several fields. Ironically, the mathematical work for which he is now best known attracted little attention in his day.
Moreover, subsequent developments showed that Ramsey's success in finding a decision procedure for a particular class of mathematical problems was not representative of the general case.
Just a few years after Ramsey's death, Kurt Gödel (1906-1978), followed by Alan Turing (1912-1954) and others, proved that no such decision procedure was possible for any logical system of axioms and propositions sufficiently sophisticated to encompass the kinds of problems that mathematicians work on every day.
Typically, it's possible to make true statements or propose theorems that you can't prove within the given system without adding additional axioms.
Although Ramsey's theorem is accurately attributed to Frank Ramsey, its initial popularization stemmed from its application in geometry and the playful antics of a group of young mathematicians in Hungary in the 1930s.
Sorting the mail that comes into an office generally requires that each piece be slipped into the appropriate slot of an array of pigeonholes—one for each employee. Suppose that a small business needs ten such slots. When 11 pieces of mail arrive, one or more of the slots will have to contain at least two items.
So, if there are more pigeons than holes, some of the pigeons have to double up. Expressed mathematically, the so-called pigeonhole principle is a handy idea for proving theorems, and it often comes up in Ramsey theory to demonstrate the inevitability of certain patterns (see "Party Games").
Perhaps the simplest possible application of the pigeonhole principle concerns groups of men and women. If there are three people present in a gathering, at least two must be men or at least two must be women.
The same principle can be applied to any allocation of a given type of object, whether it is balls dropped into boxes, people slotted into a number of categories, or numbers meeting particular criteria.
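A scaled-down version of the mail example can even be checked exhaustively. The sketch below (names are mine) tries every way of dropping four letters into three pigeonholes:

```python
from itertools import product

# Each of 4 letters goes into one of 3 pigeonholes; with more
# letters than holes, some hole must receive at least two.
doubled_up = all(
    max(assignment.count(hole) for hole in range(3)) >= 2
    for assignment in product(range(3), repeat=4)
)
print(doubled_up)  # True
```

All 3⁴ = 81 assignments are examined, and in every one of them some hole ends up with at least two letters.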
For a more subtle, nontrivial use of the pigeonhole principle, we can look at sequences of numbers. Consider, for example, the set of numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. In this case, the numbers are written in an increasing sequence. Each successive number is larger than its predecessor.
The numbers can also be written as a decreasing sequence: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0.
Now suppose the order of these numbers is randomly scrambled. Mathematicians argue that the resulting sequence of numbers will always contain a shorter sequence (or subsequence) of at least four increasing or at least four decreasing numbers.
For example, in the scrambled set 1, 2, 8, 0, 3, 6, 9, 4, 5, 7, the numbers 1, 2, 8, 9 represent an increasing subsequence. The scrambled sequence 8, 4, 9, 2, 1, 6, 3, 7, 0, 5 happens to contain a decreasing subsequence consisting of 8, 4, 2, 1, 0.
How can we be sure that an arbitrary sequence of ten different numbers will have an increasing or decreasing subsequence containing at least four numbers?
The number of differently ordered sets containing ten members is 10 ✕ 9 ✕ 8 ✕ 7 ✕ 6 ✕ 5 ✕ 4 ✕ 3 ✕ 2 ✕ 1 (or 10!), which equals 3,628,800. As in the party-of-six example, the array of possible arrangements is too large for each one to be checked individually.
The most direct, elegant proof of this claim hinges on the pigeonhole principle. Consider the sequence 3, 0, 6, 2, 1, 8, 7, 9, 5, 4. The idea is to put the numbers in an array of slots so that consecutive numbers in each row increase going to the right and in each column decrease going down (below). The first number goes into the slot in the upper-left corner.
The next number, 0, is smaller than 3, so it goes into the slot below 3. The following number, 6, is larger than 0, so it can't go in the slot beneath 0. It goes in the slot to the right of 3.
The number 2 is larger than 0 but smaller than 6, so it fits into the slot to the right of 0 and below 6. Similarly, 1 is smaller than 2 and 6 and larger than 0, so the only place it fits according to the allocation rule is in the slot below 2.
Each number in turn can be slotted into the array, and from the result it's easy to read off both increasing (3, 6, 8, 9) and decreasing (8, 7, 5, 4) subsequences of length four (below).
Because every number has to fit into its own slot, the diagram suggests that the very best that anyone can do is to cram nine of the numbers into a three-by-three corner, leaving the tenth number to complete either an increasing or decreasing subsequence of length four.
In general, there has to be at least one such subsequence in any scrambled set of numbers.
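One way to mechanize the slotting idea is a patience-sorting variant (my formulation, with hypothetical names, rather than the book's exact grid): drop each number at the bottom of the leftmost column whose last entry is larger; otherwise start a new column.

```python
def slot_into_columns(sequence):
    # Each column, read downward, is then a decreasing subsequence,
    # and the number of columns equals the length of the longest
    # increasing subsequence (the patience-sorting principle).
    columns = []
    for x in sequence:
        for col in columns:
            if col[-1] > x:
                col.append(x)
                break
        else:
            columns.append([x])
    return columns

cols = slot_into_columns([3, 0, 6, 2, 1, 8, 7, 9, 5, 4])
print(cols)                # [[3, 0], [6, 2, 1], [8, 7, 5, 4], [9]]
print(max(cols, key=len))  # [8, 7, 5, 4] -- a decreasing run of length four
```

Ten numbers cannot fit into three columns of three or fewer entries, so either some column reaches length four (a decreasing subsequence, as here) or a fourth column appears, certifying an increasing subsequence of length four.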
For strings of 101 arbitrarily ordered numbers, you will always find increasing or decreasing subsequences of length 11 or more; that is not necessarily true of a sequence of just 100 different, scrambled numbers.
Generally, it's possible to prove that for n² + 1 different numbers in a sequence, you will always get increasing or decreasing subsequences of length at least n + 1.
In the example given above, n = 3, so sequences have 3² + 1, or 10, members and subsequences have a length of at least 3 + 1, or 4.
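The n = 2 case is small enough to check exhaustively: each of the 120 orderings of five numbers must contain an increasing or decreasing subsequence of length three. A sketch, using a standard dynamic-programming subsequence finder (function names are mine):

```python
from itertools import permutations

def lis_length(s):
    # best[i] is the length of the longest increasing subsequence
    # ending at position i.
    best = []
    for i, x in enumerate(s):
        best.append(1 + max((best[j] for j in range(i) if s[j] < x),
                            default=0))
    return max(best)

def longest_monotone(s):
    # Longest increasing OR decreasing subsequence; negating the
    # numbers turns decreasing runs into increasing ones.
    return max(lis_length(s), lis_length([-x for x in s]))

# n = 2: every ordering of n**2 + 1 = 5 numbers has a monotone
# subsequence of length n + 1 = 3.
print(all(longest_monotone(list(p)) >= 3
          for p in permutations(range(5))))  # True
```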
The pigeonhole principle is one tool that mathematicians can use to identify certain patterns that are inevitably present in a random array. In a more general sense, it also applies to everyday situations.
Anyone who frequently plays solitaire card games has undoubtedly noticed that when the cards start falling into place early, the game more often than not proceeds rapidly and open slots get filled, leaving chance a smaller field on which to confound the ultimate result. In some games, there's no need to unveil the last few hidden cards because a successful outcome is already guaranteed.
So, patterns and predictability can arise in a variety of ways even in the realm of pure chance. Ramsey theory involves establishing the guarantees for such unexpectedly regular behavior among sequences of numbers, in geometric arrays of dots and lines, and in mathematical logic.
Ivars Peterson is a freelance writer and editor. He was Director of Publications at the Mathematical Association of America from 2007 to 2014. As an award-winning mathematics writer, he previously worked at Science News for more than 25 years and served as editor of Science News Online and Science News for Kids. His books include The Mathematical Tourist, Islands of Truth, Newton's Clock, and Fragments of Infinity: A Kaleidoscope of Math and Art.