August 26, 2021

Flights of the Albatross

The wandering albatross (Diomedea exulans) flies extraordinarily long distances in search of food. Riding the wind on long, thin, rigidly outstretched wings, it skirts the waves as it glides for hours over the ocean surface. Its white plumage, white beak, black wing tips, and wingspan of eleven feet or more make the wandering albatross a dramatic sight in the sky.


Wandering albatross. JJ Harrison

Truly a world traveler, this seabird regularly circles the globe at southern latitudes, plunging into the sea to scoop up squid and fish along the way and sometimes following cruise ships and other vessels to pick up scraps thrown overboard. In one week, it can travel thousands of kilometers on a single foray to gather food for its chick.

Biologists at the British Antarctic Survey have investigated the role of seabirds and seals as the top predators in the marine food web of the Southern Ocean. Their long-term goal is to assess the impact of these animals on the ecosystem.

As one component of this effort, biologist Peter Prince and his coworkers equipped wandering albatrosses with electronic recorders or radio transmitters for satellite tracking to determine the birds' foraging behavior and identify patterns in the way they search for food.

In one experiment, the researchers attached electronic activity recorders to the legs of five adult birds, which made 19 separate foraging trips. The devices recorded the number of 15-second intervals in each hour during which the bird was wet for 9 seconds or longer. The wet periods indicated interruptions in a bird's flight path when it alighted on the water to eat or rest.

To cope with the large quantity of data generated by such studies, the biologists enlisted the aid of a team of physicists to identify patterns in the way albatrosses search for food. According to the analysis performed by the physicists, the data showed that the flight patterns of wandering albatrosses—as they fly, settle on the sea, then fly off again—fit the type of random motion characteristic of a Lévy flight (see "Galaxies and Coffee Cups"), in which the birds make long journeys interspersed with short foraging flights clustered in a small area.

Ecologists speculated that the flight patterns of the wandering albatross had evolved to exploit the patchy distribution of fish and squid, which may reflect the distribution of plankton in the restless ocean, and this patchwork in turn may arise from ocean turbulence effects.

Such patterns may also occur in other biological systems. Some scientists have suggested that Lévy random walks and Lévy statistics may apply to the behavior of ants and bees, but the evidence remains spotty. Others have studied the possible uses of these models in physiology and medicine, including the characteristics of heartbeat rhythms and the branched structure of the lung's airways.

It's possible that in biological systems there's an evolutionary advantage to behaving according to Lévy statistics. Because the environment appears to be fractal, an organism that behaves fractally can better take advantage of such patchy opportunities. For the wandering albatross, this means wide-ranging, stop-and-go searches for food that may be unpredictably scattered across the ocean.

"Newtonian physics began with an attempt to make precise predictions about natural phenomena, predictions that could be accurately checked by observation and experiment," physicist Michael Shlesinger and two colleagues noted in a 1996 essay titled "Beyond Brownian Motion." The physicists further explained, "The goal was to understand nature as a deterministic, 'clockwork' universe."

The application of probability to physics developed much more slowly, the authors contended. The earliest uses of probability were in characterizing data—how the average represented the most probable value in a series of repeated measurements, and how the various measured values fitted the familiar bell-shaped curve of a Gaussian (or normal) distribution.

In the nineteenth century, such distributions also came to represent the behavior of molecules in a gas. In constant motion, molecules repeatedly collide and change speeds. At a given temperature at any moment, however, the largest number of molecules travel at a well-defined characteristic speed, and the distribution of speeds among all the molecules fits a specific type of curve.

Thinking about random walks ("Walking Wild I") and Brownian motion ("Quivering Particles I"), about Lévy flights and chaos, and about Markov processes and fractals means tangling with the complex interplay of pattern and randomness in nature and in human activity.

It's no simple matter to analyze phenomena that change randomly and uncontrollably from moment to moment, whether those changes are physical, biological, or mathematical. Making such efforts, however, begins to cast light on the essence of life, which seems to teeter far out of equilibrium and perpetually on the brink of randomness.

August 25, 2021

Galaxies and Coffee Cups

One of the most striking features of the universe, as revealed by modern telescopes, is the clustering that's apparent in the sky. Made up of more than a billion stars, our galaxy, the Milky Way, is a member of a cluster of galaxies, and this local cluster, in turn, is a member of a larger aggregation known as a supercluster.

On even larger scales, clusters of clusters of galaxies appear to group into great walls, strings, sheets, and other structures, with unexpectedly large voids completely free of visible matter between these assemblages.

Interestingly, perfectly simple statistical rules can generate random collections of points that show the same self-similar clustering behavior and exhibit the same large voids evident in the distribution of galaxies in the universe. One such approach involves random walks ("Walking Wild I") in which the sizes of the steps are not fixed but vary in particular ways.

In the early part of the twentieth century, mathematician Paul Lévy (1886-1971) explored the possibilities and discovered a class of random walks in which steps vary in size, from infinitesimally small to infinitely large, so no average or characteristic length can be calculated.

The movements are different from Brownian motion ("Quivering Particles I") in that a Lévy walker takes steps of different lengths, with longer steps recurring proportionally less often than shorter steps. A jump ten times longer than another, for example, happens only one-tenth as often. It makes sense to call such excursions "flights."

In two dimensions, Lévy flights correspond roughly to a sequence of long jumps separated by what look like periods of shorter ventures in different directions. Each "stopover," however, is itself made up of extended flights separated by clusters of short flights, and so on.

Magnifying any of the clusters or subclusters reveals a pattern that closely resembles the original, large-scale pattern, which means that Lévy flights have a fractal geometry—the parts on all scales closely resemble the whole.


A Lévy walker takes steps of different lengths, with longer steps occurring proportionally less often than shorter steps. The illustrated flight, in three dimensions, goes for 2,000 segments. The color of a step depends on its length. G.M. Viswanathan.

In two dimensions, the most striking visual difference between Brownian random walks and Lévy flights is the area they cover in a given time. Lévy flights sample a much larger territory than the corresponding Brownian random walks.


A comparison of a Lévy flight (left) with a Brownian random walk (right) reveals that, for the same number of steps, a Lévy flight covers a considerably larger area. The Brownian random walk lacks the long segments characteristic of a Lévy flight. G.M. Viswanathan.
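The visual difference can be reproduced in a small simulation. The sketch below (an illustration, not the researchers' code) compares a fixed-step walk in the plane with a Lévy-style walk whose step lengths follow a power law, measuring how far each ranges from its starting point:

```python
import math
import random

def walk(n, levy=False, seed=0):
    """Take an n-step walk in the plane and return the farthest distance
    reached from the origin. Ordinary walk: unit steps in random directions.
    Levy-style walk: step lengths drawn from a power law, P(l) ~ 1/l^2,
    sampled by the inverse transform l = 1/u with u uniform on (0, 1]."""
    rng = random.Random(seed)
    x = y = 0.0
    reach = 0.0
    for _ in range(n):
        angle = rng.uniform(0.0, 2.0 * math.pi)   # random direction
        step = 1.0 / (1.0 - rng.random()) if levy else 1.0
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        reach = max(reach, math.hypot(x, y))
    return reach

print(walk(2000, levy=False, seed=1))  # typically a few dozen units
print(walk(2000, levy=True, seed=1))   # typically ranges far wider
```

With heavy-tailed step lengths, a single long jump can carry the walker farther than a fixed-step walk usually strays in its entire journey.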

A similar structure governs, in games of chance, the pattern of successive ruins, when the player loses everything yet is given the opportunity to continue (on credit) and lose again. The resulting Lévy distribution of the frequency of successive ruins is very different from the bell-shaped symmetry of a Gaussian distribution, which characterizes ordinary random walks.

In the Lévy case, the distribution is strongly skewed, with a very long tail that drops off to zero very slowly as the size of the events increases.

Mathematician Benoit B. Mandelbrot (1924-2010) originally learned about these different random walks from Lévy himself. Mandelbrot later extended and applied Lévy's ideas in his formulation of fractal geometry.

Mandelbrot found that he could use Lévy flights to create convincing portraits of the distribution of visible matter in the universe. He simply erased the long jumps and made each stopover represent a star, a galaxy, or some other blob of matter. The resulting pattern of clustered spots, each of which in turn is made up of subclusters, resembles the sheets, bubbles, and other aggregations of galaxies evident in astronomical observations.

Of course, Mandelbrot's model doesn't necessarily account for the way galaxies actually formed in the universe, but it does suggest the kind of structure that may be present.

Lévy flights and the statistics associated with them also provide useful models of turbulent diffusion. If you add a drop of cream to your coffee without unduly disturbing the liquid, the random motion of the molecules slowly spreads the cream into the coffee. Stirring, however, adds turbulence, and the liquids mix much more rapidly.

Mathematically, it's possible to think of turbulence as the combined effect of a large number of vortexes—whirlpools of all sizes and strengths. Any particles (or molecules of the constituents of cream) caught in such whirlpools would be rapidly separated from one another and dispersed. A plot of the changes in distance between two initially adjacent particles would look much more like a Lévy flight than a Brownian random walk.

Lévy flights can arise out of chaotic systems, in which a sensitive dependence on initial conditions plays a crucial role, and out of random systems, reflecting the same sort of haphazardness shown in Brownian motion. New statistics based on Lévy flights must be used to characterize these unpredictable phenomena. Such models may be useful for describing, for example, the transport of pollutants and the mixing of gases in Earth's atmosphere.

"In these complex systems, Lévy flights seem to be as prevalent as diffusion is in simpler systems," noted physicist Michael Shlesinger, who pioneered the applications of Lévy statistics to turbulent diffusion and other physical phenomena ("Time to Relax II"). (See An Unbounded Experience in Random Walks with Applications by Michael F. Shlesinger.)

In both the atmosphere and the ocean, fractal patterns associated with turbulence may have a strong influence on ecosystems, affecting, for example, the foraging behavior of certain birds. Weather systems and the distribution of plankton, krill, and other organisms in the ocean may guide the flight patterns of the wandering albatross.

Next: Flights of the Albatross

August 24, 2021

Money Matters II

Louis Bachelier argued, in essence, that charting a stock's past performance as a means of predicting its future price has little value because successive price changes are statistically independent. His approach reflected specific ideas about stock markets and their behavior, namely that the price of a stock reflects all the information available on that stock ("Money Matters I").

In this model, any variations in price would correspond to the arrival of new information. Some of the information is predictable, and some isn't. If the market works effectively, predictable information is already accounted for in the price. In other words, a rise or fall has already taken place on the basis of predictions. When the news finally arrives, it can no longer affect the price.

Hence, price fluctuations occur only in response to truly new pieces of information—the parts that aren't predictable based on available data. It makes sense, then, to model the fluctuations in stock prices as a Markov process, which proceeds in independent increments (see "Walking Wild I").

This picture of a stock market, however, isn't completely realistic, and Brownian motion generally fails as a full measure of stock-price fluctuations. Human ingenuity, fear, intuition, fallibility, and incompetence conspire to complicate the situation and put it outside the realm of pure randomness.


Interestingly, random-walk and Brownian-motion models have made a comeback in recent decades in finance theory, specifically in methods of evaluating stock options.

An option is a contract by which a seller commits himself to selling specified shares within a certain time period at a price determined today. The buyer has the right not to exercise the option if this price turns out to be higher than the price at the date the option falls due.

Such contracts allow holders of a large number of shares to hedge their bets. They protect themselves against a fall in price by buying the corresponding options with the idea of not using them if prices rise or remain stable. The person who sells the option takes a risk and must be compensated.

A technique for determining a fair price for such options was initially established in 1970 by mathematician Fischer Black (1938-1995) and economist Myron Scholes, who worked out a formula that takes into account a stock's tendency to oscillate in price.

According to their model, it's not necessary to know whether a stock will go up or down in price. The direction of the price change doesn't matter. Instead, the only thing that matters is how much the stock price is likely to vary during the period of the option.

According to the model, a company whose stock price fluctuates a great deal over a wide range presents a bigger investment risk than one whose stock price is expected to remain relatively stable.
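For illustration, here is a minimal sketch of the Black-Scholes formula for pricing a European call option. The input numbers are invented for the example; a real application would use market data:

```python
import math

def norm_cdf(x):
    """Cumulative distribution of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: current stock price, K: strike price, T: time to expiry in years,
    r: risk-free interest rate, sigma: annualized volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Two otherwise identical options on stocks of different volatility:
quiet = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.1)
jumpy = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.4)
print(quiet, jumpy)  # the option on the more volatile stock is worth more
```

Note that the expected direction of the stock's movement appears nowhere in the formula; only the volatility sigma and the time remaining matter.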

The Black-Scholes equation and model were so innovative when they were first proposed that the authors had difficulty getting their paper published. Eventually they prevailed, and soon after its publication in 1973 traders were successfully applying their pricing formula to real markets. Nonetheless, the formula represented an idealization of the behavior of markets, and researchers have since considerably refined and extended the basic model.

Finance theory has become one of the most active fields of research in modern applied mathematics. At the heart of these developments is the constant need to evaluate assets by determining fair, rational market prices and predicting how those prices will change in concert with the prices of other goods and financial instruments.

As a result, many of the mathematical tools and computational techniques already familiar to mathematicians, scientists, and engineers are finding new homes in the world of finance.

One concept that may prove of great value is the notion of scaling—the way phenomena look on different scales. The motion of a simple pendulum, for example, has an intrinsic scale because the pendulum oscillates at a particular, fixed frequency. Avalanches, on the other hand, appear to be totally random events, and they have no intrinsic scale. They come in all sizes, from just a few pebbles rolling down a hill to massive rock slides roaring and crashing down a mountainside.

In the case of avalanches, earthquakes, and even stock-market crashes, it's possible to show that the number of events goes down as the size of the events increases. Big events occur less frequently than small events, a relationship that can in many cases be expressed by a simple mathematical formula.

Discovering such relationships in experimental data provides potentially valuable information about apparently random events. Similarly, fractals, which look the same on all scales, also make an appearance as geometric models of phenomena that occur on a wide range of scales.

"We don't predict when an event like an earthquake will happen because it is random; all we predict is the probability that it will happen," said physicist H. Eugene Stanley, a pioneer in this field. "There are a zillion problems like this, ranging from the stock market to lots of other things that are scale-free phenomena."

Next: Galaxies and Coffee Cups

August 23, 2021

Money Matters I

Curiously, the first person to explore the mathematical connection between random walks ("Walking Wild I"), Brownian motion ("Quivering Particles I"), and diffusion was not Norbert Wiener but mathematician Louis Bachelier (1870-1946)—a name not widely known in the realm of physical science.

Bachelier's studies, published in 1900 in a doctoral thesis, had nothing to do with the erratic movements of particles suspended in water. Instead, he focused on the apparent random fluctuations of prices of stocks and bonds on the Paris Stock Exchange.

Because of its context, perhaps it's not surprising that physicists and mathematicians ignored or didn't even notice Bachelier's work. Those who did notice tended to dismiss his "theory of speculation" as unimportant.

Mathematician Henri Poincaré (1854-1912), for one, in reviewing Bachelier's thesis, observed that "M. Bachelier has evidenced an original and precise mind [but] the subject is somewhat remote from those our other candidates are in the habit of treating."

Instead of receiving the highest award, mention très honorable, which would have assured Bachelier a job in the academic community, he merited a mere mention honorable for his thesis.

Nonetheless, Bachelier anticipated many of the mathematical discoveries later made by Wiener and others, and he correctly foretold the importance of such ideas in today's financial markets. Bachelier himself believed in the importance of his theory, insisting that "it is evident that the present theory solves the majority of problems in the study of speculation by the calculus of probability."

In examining the role that Brownian motion may play in finance, one place to start is with the link between one-dimensional random walks and gambling.

Imagine a walker starting at position zero at time zero, with coin flips deciding the direction of each step to the left or right. If the track has barriers at its two ends, which swallow up the walker to end the walk, this random process serves as an illuminating model of a famous betting dilemma called the gambler's ruin.

Suppose that Ernie starts with $8 and Bert with $11. The two players repeatedly flip a coin. For each head, Bert gives Ernie $1 and for each tail Ernie gives Bert $1. The game ends when either player runs out of money.

Working out the probability that Ernie will win and the probability that Bert will win is equivalent to determining the probability of a walk ending at either barrier of a track that extends eight units in one direction from the origin and eleven units in the other.

In the example given, the probabilities of winning are simply 8/19 for Ernie and 11/19 for Bert (the original capital of each player divided by the total number of dollars held by both players).
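A quick Monte Carlo check of this result (a sketch; the number of trials is chosen arbitrarily):

```python
import random

def win_probability(a, b, trials=50_000, seed=1):
    """Estimate by simulation the probability that a player starting
    with a dollars takes all the money from an opponent holding b
    dollars, in a fair coin-flipping game."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        capital = a
        while 0 < capital < a + b:  # play until someone is ruined
            capital += 1 if rng.random() < 0.5 else -1
        if capital == a + b:
            wins += 1
    return wins / trials

# Theory gives 8/19 (about 0.421) for Ernie's $8 against Bert's $11.
print(win_probability(8, 11))
```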

Now, what if one barrier is removed and the track in that direction goes to infinity? If the walk continues long enough, it is certain to end at the remaining barrier. Thus, in betting, if Ernie plays against an opponent with an unlimited supply of capital, he will eventually be ruined.

That is certainly bad news for the compulsive gambler, who, even at fair odds, faces an opponent—the entire gambling world—with virtually unlimited funds.

It's possible to plot a player's cumulative total, showing the wins and losses as a line that fluctuates up and down as the number of coin tosses increases (see "Rolls and Flips" and "The Long Run"). For increasingly long sequences of data, the line looks more and more like one-dimensional Brownian motion.


The cumulative results of honest coin tosses serve as a model of a one-dimensional random walk. Heads represents a gain of 1 and tails a loss of 1. Plotting the running total of a sequence of tosses against the number of tosses generates a fluctuating curve. As the number of tosses increases, the plots become increasingly erratic and the line appears to jiggle, much like the track of a tiny particle suspended in a liquid and undergoing Brownian motion.

To Bachelier, it also resembled the day-to-day variations in bond prices, which suggests that the market has a strongly random character. Probability, he concluded, could diffuse in the same manner as heat.

Next: Money Matters II

August 21, 2021

Walking Wild III

One of the first major uses of probabilistic methods in computers was in calculating the random walks (see "Walking Wild I") of neutrons through different materials—a crucial issue in the design of nuclear weapons and atomic power plants after World War II.

Physicists applied similar techniques to show that a particle of light near the sun's center takes about fifty centuries to stroll in a random walk to the surface before finally escaping the sun and speeding to Earth in about eight minutes.

Mathematicians and scientists also extended the random-walk and Brownian-motion models (see "Quivering Particles II") to encompass other types of phenomena.

The long molecular chain of a polymer floating in a solvent, for example, resembles a miniature, truncated version of the path of a particle undergoing Brownian motion. In other words, you can imagine the chain's small molecular units (called monomers) as points and the chemical bonds between the units as the steps of a three-dimensional random walk.

However, to account for the fact that no two monomers can occupy the same region in space, the random walk has to be modified. A more realistic model is the self-avoiding random walk, which is a path that doesn't intersect itself. Such walks spread out much faster and tend to cover a larger area or volume than their standard counterparts for a certain number of steps.


A short self-avoiding random walk in three dimensions.

Using a self-avoiding random-walk model, polymer scientists can tackle such questions as: How many possible configurations can a long polymer chain adopt? What is the typical distance separating a polymer's ends?

The first question is really the same as asking for the number of different self-avoiding walks that are possible for a given number of steps. That's easy to answer in two dimensions for an ordinary random walk. There are four choices for each step, leading to four one-step walks, 4 ✕ 4 (or 16) two-step walks, and, in general, 4ⁿ n-step walks. Similar formulas can be worked out for other dimensions.

The calculations are a little trickier for a self-avoiding random walk. In two dimensions, there are four one-step walks, 4 ✕ 3 (or 12) two-step walks, and 4 ✕ 3 ✕ 3 (or 36) three-step walks. It turns out that there are 100 four-step walks.
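These small counts can be verified by brute force. The sketch below enumerates self-avoiding walks on the square lattice by trying every extension of every non-intersecting path:

```python
def count_saws(n):
    """Count the self-avoiding walks of n steps on the square lattice,
    starting from the origin, by exhaustive recursive enumeration."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(pos, visited, remaining):
        if remaining == 0:
            return 1
        x, y = pos
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:  # self-avoidance constraint
                total += extend(nxt, visited | {nxt}, remaining - 1)
        return total

    return extend((0, 0), {(0, 0)}, n)

print([count_saws(n) for n in range(1, 5)])  # [4, 12, 36, 100]
```

Because the counts grow roughly exponentially with the number of steps, exhaustive enumeration like this quickly becomes infeasible for longer walks.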

Calculating the number of walks rapidly becomes more difficult as the number of steps grows, and even with computers to help out, no one has ever gone beyond about 39 steps, for which there are 1.13 ✕ 10¹⁷ possibilities.

Many other problems involving self-avoiding walks—including determination, in different dimensions, of the typical end-to-end distance after a certain number of steps—have turned out to be difficult to solve.

"The Brownian frontier and many other examples of random motions and their interaction properties continue to be an active area of research," commented mathematician Gordon Slade, who worked on self-avoiding random walks as models for polymers.

"Many of the remaining problems are appealing not just because of their relevance to applied fields beyond mathematics but also because the simplicity of their statements has an attraction of its own," Slade added. "This has drawn investigators from diverse backgrounds to study these problems, and there is hope that the progress of the recent past will continue in the coming years."

August 20, 2021

Walking Wild II

In the 1920s, Norbert Wiener transformed a random walk (see "Walking Wild I"), which is made up of discrete steps, into a mathematical model suitable to represent Brownian motion (see "Quivering Particles I"). He did it by making the steps or time intervals between steps infinitesimally small.

In Wiener's rigorous approach, once the position of a particle is established at the start, its position at any later time is governed by a Gaussian curve, as it is in Albert Einstein's physical model of Brownian motion (see "Quivering Particles II").

Wiener also proved that although the path of a Brownian particle is continuous, at no point is it smooth. Such a curious, incredibly jagged mathematical curve actually makes physical sense because a particle in Brownian motion can't jump instantaneously from one position to another, so its path must be continuous.

At the same time, as Jean-Baptiste Perrin noted, erratic changes in direction appear to take place constantly, so you might expect the path to consist entirely of sharp corners. In fact, a two-dimensional Brownian trajectory wiggles so much that it ends up filling the entire area over which the motion occurs.


Two-dimensional Brownian motion.

With Brownian movement put on a solid mathematical footing by Wiener and others who followed his lead, such abstract formulations began to play a significant role in the creation of models of random phenomena.

Such models were used to represent the diffusion of heat through a metal, the spread of flu epidemics via the random walks of microbes, the structure of background noise and static affecting radio signals, the transport of perfume molecules from source to nose, and even the spread of rumors.

Wiener himself applied his model of Brownian movement to the problem of electronic interference, which disturbs the transmission of radio signals and causes the type of static heard on AM radio between stations.

Because the static has a strongly random character, Wiener was able to use the mathematics of Brownian motion to design an electronic filter to separate the signal from the background noise. Applied to the development of radar during World War II, his results were long kept a military secret.

It is interesting to note that applications of Brownian motion often began with the study of biological processes. The law of diffusion and models of the spread of heat in a material, for instance, initially arose from studies of heat generation in muscles.

August 19, 2021

Walking Wild I

Though physicists began looking at random processes involving large numbers of molecules near the start of the twentieth century (see "Quivering Particles II"), it wasn't until 1920 that mathematicians began to develop a convincing mathematical model of Brownian motion, starting with the work of Norbert Wiener (1894-1964).

At the heart of the mathematics was the difficult problem of making precise mathematical sense of the notion of a particle moving at random.

A Brownian particle suspended in a liquid knows neither when the next shove will occur nor in which direction and how forcefully it will be propelled. Its displacement at any given moment is independent of its past history. These characteristics put Brownian motion in the category of a Markov process, named for mathematician Andrey A. Markov (see "Climbing and Sliding").

One of the simplest examples of such a process is a one-dimensional random walk, in which a "walker" is confined to a long, narrow path and moves forward or backward according to the results of repeatedly tossing a coin. The walker takes a step in one direction if the outcome is heads and in the opposite direction if the outcome is tails.



This graph shows the results of a one-dimensional random walk. The horizontal axis represents the number of steps taken, and the vertical axis shows how many steps you are away from your starting point at 0. Steps in the forward direction are positive (upward) and steps backward are negative (downward).

For a walk along an infinite track, you can calculate a walker's long-term behavior. The resulting trail wanders back and forth along the track, and the probability of the wanderer's being a certain distance away from the starting point after taking a given number of steps is defined by a bell-shaped curve known as a Gaussian (or normal) distribution. The larger the number of steps, the wider the curve becomes.

Indeed, the expected distance of the walker from the starting point after a certain number of equal steps (measured as a root-mean-square average) is simply the length of the step times the square root of the number of steps. For infinitely many coin tosses, a random walk confined to a line corresponds to one-dimensional Brownian motion.
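This square-root law is easy to check by simulation. In the sketch below, "distance" is taken as the root-mean-square distance over many independent walks:

```python
import math
import random

def rms_distance(n_steps, n_walks=10_000, seed=2):
    """Root-mean-square distance from the start after n_steps coin-flip
    steps, averaged over n_walks independent one-dimensional walks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        position = 0
        for _ in range(n_steps):
            position += 1 if rng.random() < 0.5 else -1
        total += position * position
    return math.sqrt(total / n_walks)

# For unit steps, the rms distance should be close to sqrt(n_steps):
print(rms_distance(100))  # close to 10
```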

One consequence of this type of erratic movement back and forth along a line is that a random walker is certain to return to the origin (or to any particular position on the track)—eventually.

This might sound like a good strategy in the admittedly unlikely situation of someone who's lost on a tightrope: Just take steps at random and you'll end up anywhere you want to be. But it might take longer than a lifetime to get there.

It's straightforward to extend the random-walk model to two dimensions. Take steps to the east, west, north, or south, randomly choosing each direction with equal probability (perhaps by using a tetrahedral die). You can imagine this walk going from vertex to vertex on an infinite checkerboard lattice.

If such a walk continues for an arbitrarily long time, the walker is certain to touch every vertex, including a return visit to its starting point.


In this illustration of a random walk in two dimensions, a walker starts off from a point on the left-hand side of the checkerboard grid, taking random steps to the east, north, west, or south.

The fact that returning to the origin is guaranteed in one and two dimensions suggests that there will be infinitely many returns. Once a walk gets back to the origin, it's like starting from scratch, and there will be a second return, then a third, and so on.

Similarly, such a random walk will visit every point infinitely many times.

Things are a little different in three dimensions. A walker can go up and down as well as in each of the four compass directions, so a standard cubic die serves as a suitable randomizer to determine the movements.


This time, however, even if a walker takes infinitely many steps, the probability of returning to the origin is only about .34. There's so much room available in three dimensions that it becomes considerably harder for a walker to find the way back to the starting point by chance.


In this computer-generated representation of a random walk in three-dimensional space, a walker has an equal probability of moving forward, backward, right, left, up, or down. This particular walk goes for 2,100 steps, beginning with blue, then continuing with magenta, red, yellow, green, cyan, and white segments. Extended indefinitely, the walk has only a 34 percent chance of ever returning to its starting point. G.M. Viswanathan

Indeed, this mathematical result affords an important lesson for anyone who is lost in space. Unless you make it home again within your first few steps, you're likely to end up lost forever (see "Wandering in Space"). No amount of aimless wandering will get you back after such a start. There are simply too many ways to go wrong.
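The roughly 34 percent figure can be estimated by simulation. The sketch below truncates each walk at a fixed number of steps, so it slightly underestimates the true return probability; longer walks could still wander back:

```python
import random

def return_fraction(max_steps=500, n_walks=5_000, seed=3):
    """Fraction of three-dimensional random walks that revisit their
    starting point within max_steps steps. A lower bound on the true
    return probability, since the walks are cut off."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    rng = random.Random(seed)
    returned = 0
    for _ in range(n_walks):
        x = y = z = 0
        for _ in range(max_steps):
            dx, dy, dz = rng.choice(moves)
            x, y, z = x + dx, y + dy, z + dz
            if x == y == z == 0:   # back at the origin
                returned += 1
                break
    return returned / n_walks

print(return_fraction())  # a bit below 0.34
```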

August 18, 2021

Quivering Particles II


In 1905, theoretical physicist Albert Einstein (1879-1955) provided an elegant explanation of how tiny, randomly moving molecules could budge particles large enough to be observable under a microscope.

In his paper "On the Movement of Small Particles Suspended in Stationary Liquids Required by the Molecular-Kinetic Theory of Heat," Einstein used statistical methods to show that a suspended particle would get shoved in different directions by the combined effect of the modest impacts of many molecules.

For particles smaller than about twenty micrometers in diameter, the impact would generally fail to average out equally on all sides, giving the suspended particle a net shove in some direction.


The erratic jiggling of a microscopic particle suspended in water stems from the uneven distribution of impacts by water molecules at any given moment.

Interestingly, Einstein had not been aware of the experimental work on Brownian motion. His paper came about because he had begun to consider what effects might follow from the existence of atoms and molecules moving at high speeds that depended on the temperature.

His main goal, Einstein later wrote, was "to find facts which would guarantee as much as possible the existence of atoms of definite size." But, he continued, "in the midst of this, I discovered that, according to atomistic theory, there would have to be a movement of suspended microscopic particles open to observation, without knowing that observations concerning the Brownian motion were already long familiar."

A few years later, physicist and chemist Jean-Baptiste Perrin (1870-1942) confirmed experimentally some of Einstein's key predictions.

In particular, Perrin and his students were able to track the movements of nearly spherical Brownian particles, which they recorded every thirty seconds and plotted on sheets of paper. Armed with these data, the researchers then used a formula derived by Einstein to determine the number of molecules present in a given volume of fluid.
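Einstein's formula ties a particle's mean squared displacement to temperature, fluid viscosity, particle size, and Avogadro's number, which is what let Perrin work backward from tracked positions to a molecular count. The back-of-the-envelope calculation below uses assumed, illustrative values (water at room temperature, a sphere one micrometer across, a thirty-second observation interval) to show that the predicted wanderings come out at a few micrometers, just the right size to track under a microscope.

```python
import math

# Einstein's 1905 result for the one-dimensional mean squared displacement
# of a Brownian sphere of radius a in a fluid of viscosity eta:
#     <x^2> = R * T * t / (3 * pi * eta * a * N_A)
# Rearranged, a measured <x^2> yields Avogadro's number N_A.
# All numerical values below are illustrative assumptions.
R = 8.314        # gas constant, J/(mol K)
T = 293.0        # room temperature, K
eta = 1.0e-3     # viscosity of water, Pa s
a = 0.5e-6       # particle radius, m (1 micrometer diameter)
t = 30.0         # observation interval, s
N_A = 6.02e23    # Avogadro's number, 1/mol

msd = R * T * t / (3 * math.pi * eta * a * N_A)
print(f"rms displacement in {t:.0f} s: {math.sqrt(msd) * 1e6:.1f} micrometers")
```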

The experiments gave Perrin a sense of the complexity of the path of a Brownian particle. His plots showed a highly irregular track, yet they gave "only a meager idea of the extraordinary discontinuity of the actual trajectory," Perrin noted.

If the researchers could have increased the resolving power of their microscope to detect the effects of bombardment by progressively smaller clusters of molecules, they would have found that parts of a path that originally appeared straight would themselves have had a jagged and irregular structure.


As the number of steps in a computer simulation of Brownian motion increases from 1,000 (left) to 10,000 (middle) to 100,000 (right), the same overall pattern of erratic movements persists, though on increasingly larger scales. G.M. Viswanathan

In fact, Brownian motion isn't the only place where such self-similar patterns occur.

"Consider, for instance, one of the white flakes that are obtained by salting a solution of soap," Perrin wrote in 1906. "At a distance, its contour may appear sharply defined, but as we draw nearer, its sharpness disappears…. The use of a magnifying glass or microscope leaves us just as uncertain, for fresh irregularities appear every time we increase the magnification, and we never succeed in getting a sharp, smooth impression, as given, for example, by a steel ball."

Nowadays such patterns, in which the magnified structure looks similar to and just as complicated as the overall structure, are known as fractals. And the paths of Brownian particles can be modeled mathematically as random walks.
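The statistical self-similarity of such paths shows up in a simple scaling law: the root-mean-square distance a random walker covers grows as the square root of the number of steps, so quadrupling the steps merely doubles the spread. The minimal simulation below (illustrative code, not from the original work) makes the point with a two-dimensional walk of unit steps in random directions.

```python
import math
import random

def rms_displacement(steps, walks=1000):
    """Root-mean-square distance from the start after `steps` unit steps
    of a two-dimensional random walk, averaged over many walks."""
    total = 0.0
    for _ in range(walks):
        x = y = 0.0
        for _ in range(steps):
            angle = random.uniform(0.0, 2.0 * math.pi)  # random direction
            x += math.cos(angle)
            y += math.sin(angle)
        total += x * x + y * y
    return math.sqrt(total / walks)

random.seed(2)
# Diffusive scaling: 4x the steps, roughly 2x the spread.
print(round(rms_displacement(100)))   # close to 10
print(round(rms_displacement(400)))   # close to 20
```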

August 17, 2021

Quivering Particles I

Trails of a Wanderer


Brown had traveled widely, exploring the lands around the Cape of Good Hope in Africa and many parts of Australia before returning to London with specimens of more than four thousand plant species for identification. In the course of establishing a major botanical collection at a national museum, he had looked closely at pollen grains obtained from many of the collected plants.


While examining pollen grains of the plant Clarkia pulchella (pinkfairies) under a microscope, botanist Robert Brown observed tiny particles in constant motion within a grain.

Brown noted that, in some cases, "the membrane of the grain of pollen is so transparent that the motion of the…particles within the grain was distinctly visible."

The suspended granules were only a few micrometers in size, less than one-tenth of a typical pollen grain's width. Observed under a microscope, they appeared to be in continuous, erratic motion.

Physician and plant physiologist Jan Ingenhousz (1730-1799), who is credited with discovering photosynthesis, had reported similar particle activity in 1785 when he looked closely at powdered charcoal sprinkled on an alcohol surface.

Brown's experiments with granules extracted from crushed pollen grains, soot particles, and fragments of other materials suspended in water revealed the same type of unceasing, quivering movement.

Some scientists were quick to attribute the effect to minute, heat-induced currents in the liquid surrounding the particles or to some obscure chemical interaction between the solid particles and the liquid. However, the observation that the movements of two neighboring particles appeared to be quite uncorrelated helped rule out currents in the fluid as a cause.

Other characteristics of so-called Brownian motion were just as intriguing. For example, a given particle appeared equally likely to go in any direction, and its past motion seemed to have no bearing on its future course. Particles also traveled faster at higher temperatures, and smaller particles moved more quickly than larger ones. Most striking of all, the jerky movements never stopped.

To explain such observations, some scientists boldly ascribed the phenomenon to molecular movements within the liquid. They pictured the liquid as being composed of tiny molecules whizzing about at high speeds and colliding with each other. When molecules bumped into particles, they would give the particles random pushes.

Although some researchers hailed Brownian motion as visible proof that matter is made up of atoms and molecules, the question of the true structure of matter was at that time still a contentious issue. At the beginning of the twentieth century, for instance, the prominent physical chemist Wilhelm Ostwald (1853-1932) regarded atoms as merely "a hypothetical conception that affords a very convenient picture" of matter.

August 16, 2021

Trails of a Wanderer

He calmly rode on, leaving it to his horse's discretion to go which way it pleased, firmly believing that therein consisted the very essence of adventures.

A sharp knife bites through the papery skin of a clove of garlic, exposing the bulb's pale interior to the air. Within seconds, a pungent, penetrating odor drifts to the chef's nostrils.


Garlic bulb.

The distinctive tang of freshly chopped or crushed garlic arises from the rupture of cell membranes, which allows the huge protein molecules of an enzyme called alliinase to reach and capture tiny molecules of a chemical compound known as alliin.

The enzyme stretches and twists the trapped molecules, snapping the chemical bonds and rearranging the atoms of alliin to create the unstable compound allicin. Allicin, in turn, readily transforms itself into diallyl disulfide, which is responsible for garlic's distinctive odor.

As the odor-inducing molecules of diallyl disulfide seep out of the mashed garlic into the air, they begin to drift randomly, jostled by oxygen, nitrogen, and other molecular and atomic constituents of air and transported by wavering currents. Some soon reach the chef's nose.

The wandering molecules of diallyl disulfide travel along nasal passages to the back of the nose to reach a delicate sheet of moist, mucus-bathed tissue, where a large number of nerve cells cast their hairy, odor-sensing nets. When a receptor strand snags a molecule, it triggers a set of actions that generates an electric signal, which travels to the brain, and the chef immediately recognizes the smell of garlic.

Our capacity to smell depends on the passage of specific molecules from the source to the sensors in the nose. In contrast, we see and hear because waves of energy, whether electromagnetic radiation or vibrations of the air, carry the signals. No molecules or atoms actually make the journey from firefly and drum to eye and ear.

In the realm of smell, we deal with a statistical process—the effect of thousands upon thousands of random steps as molecules venture into the rough-and-tumble domain of incessant collisions and follow haphazard paths to the nose.

We can imagine such journeys as random walks, with molecules traveling in straight lines until they collide. With each collision, they rebound like billiard balls and start off in a new direction—not very different from the disordered steps that a blindfolded person might take if he or she were walking in an unfamiliar landscape studded with obstacles.
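One consequence of such haphazard travel is that random walks cover ground very slowly: the number of steps a walker needs before first straying a given distance from the start grows as the square of that distance. The sketch below (a one-dimensional walk, with names of my own choosing) estimates this first-passage time numerically.

```python
import random

def steps_to_reach(distance, trials=1000):
    """Average number of +1/-1 steps a 1D random walker takes before
    it first strays `distance` units from its starting point."""
    total = 0
    for _ in range(trials):
        pos = steps = 0
        while abs(pos) < distance:
            pos += random.choice((-1, 1))
            steps += 1
        total += steps
    return total / trials

random.seed(3)
# Doubling the distance roughly quadruples the expected number of steps:
print(round(steps_to_reach(10)))   # near 100
print(round(steps_to_reach(20)))   # near 400
```

This quadratic cost is why odor molecules relying on diffusion alone would take a long time to cross a kitchen; in practice, air currents do much of the transport.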


Random movements play a significant role in a variety of natural phenomena. Hence, the mathematics of random walks is a key element in modeling and understanding not only the jiggling of molecules but also the possible configurations of a polymer's long molecular chain, the flight paths of such foraging birds as the wandering albatross, and even the erratic fluctuations of stock market prices.

August 13, 2021

Woody Mushrooms

 

Black Canyon Trail, Santa Fe National Forest, New Mexico, 2021.

Photo by I. Peterson

August 12, 2021

Time to Relax III

Time to Relax I

Several decades ago, John T. Bendler and his colleagues applied the notion of fractal time and defect mobility to understanding the properties of Lexan, a tough polycarbonate resin used for making bullet-resistant windows for automobiles. Defect diffusion turned out to be a good model for how the material responds to stresses and how it ages.

A chunk of Lexan consists of an irregular, three-dimensional network of long polymer molecules, with a precisely defined, repeating pattern of atoms. Experiments indicate that cooling the polycarbonate results in the freezing in of a small population of high-energy kinks in the molecular chains.


Molecular model of a polycarbonate molecule showing the carbonate linkages that act as kinks. The fractal-time motion of such kinks leads to stress relaxation.

The movement of these kinks in a fractal-time process along the molecular chains leads to relaxation. Kink movements reorganize the molecular backbone and effectively absorb mechanical energy, such as the impact of a bullet or sledgehammer.

They are also responsible for aging. As energy-absorbing kinks reach chain ends, the material gradually becomes more brittle and weak. With this insight, researchers looked into the possibility of slowing down aging by modifying chain ends.

The fractal-time, or defect-diffusion, model also helps to explain the stretching of silk and glass threads.

In 1835, physicist Wilhelm Weber noticed that attaching a weight to a long silk thread causes it to stretch to a certain length immediately. But that instantaneous elongation is unexpectedly followed by a gradual further lengthening that depends on how long the weight is applied.

The reason for such behavior lies in the fractal-time movement of defects within the materials.

Silk is a complicated natural polymer and has a variety of amorphous and crystalline forms. Under an applied load, the material tries to rearrange itself to redistribute and minimize stresses. Under these conditions, silk molecules relax by unwinding and changing the hydrogen bonding along their backbones.

In a glass fiber, the mobile defects correspond to imperfections in the distorted, tetrahedral network of oxygen and silicon atoms. Under a load, materials such as silk and glass mechanically reorganize themselves.

Ceramicists, engineers, and artisans such as glassblowers have long been aware of the peculiar behavior of glasses, polymers, and ceramics and have taken these properties into account when working with the materials. Even so, little progress in understanding relaxation phenomena occurred for a long time because the mathematics initially used to describe such processes seemed complicated and difficult.

The concepts of mobile defects and fractal-time motion appear to offer a more tractable, self-consistent picture of the relaxation behavior of supercooled liquids and glassy solids.

One of the chief merits of the defect-diffusion theory is that it's mathematically simple. Researchers can use fractal-time mathematics—the mathematics of intermittent pausing—to model the kind of behavior displayed by almost all amorphous materials.

The theory also suggests ways of modifying in a useful manner the properties of industrially important materials.

August 11, 2021

Time to Relax II

Time to Relax I

An amorphous material's constituent atoms or molecules lie in random positions rather than at well-defined sites as in an orderly crystal lattice.


In a crystalline material, atoms or molecules sit in an orderly array (left). In amorphous solids, the pattern is more irregular (right).

Moreover, just as crystal structures are rarely perfect and contain dislocations, vacancies, and other imperfections, amorphous materials also contain "defects," in which bonds between atoms or molecules may be strained, distorted, or displaced.

For example, such defects occur during glass formation because molecules find they have too little time during the cooling process to orient themselves into their proper positions to form a closely packed crystal structure. Inevitably, glasses end up containing low-density regions, the analog of vacancies in crystals.

In 1983, physicists Michael Shlesinger and Elliott Montroll proposed that migration, or diffusion, of mobile defects could account for stretched exponential relaxation in the case of an amorphous material relaxing after the application of an electric field. (See An Unbounded Experience in Random Walks with Applications by Michael F. Shlesinger.)

They suggested that defects, in order to move, have to overcome different energy barriers scattered throughout the material. Whereas small barriers are easy to hurdle, larger ones significantly reduce mobility.

In the early stages of relaxation, defects that experience low barriers don't have any trouble. There's enough thermal energy for them to jump such barriers, and relaxation ensues. Others, faced with moderate barriers, take longer to get moving.

Thus, a random distribution of energy barriers implies a wide range of relaxation times, leading to the stretched exponential relaxation observed for amorphous materials. Relaxation stretches over a long period of time.
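A small numerical sketch shows how this works. Assuming thermally activated hopping, in which the waiting time grows exponentially with the barrier height (the Arrhenius law), even a modest spread of random barriers produces waiting times spanning many orders of magnitude. The parameter values here are illustrative, not drawn from any particular material.

```python
import math
import random

def hop_time(kT=1.0, mean_barrier=2.0, attempt_time=1.0):
    """Thermally activated hopping: the waiting time grows exponentially
    with the barrier height, so a spread of barriers translates into a
    vastly wider spread of relaxation times."""
    barrier = random.expovariate(1.0 / mean_barrier)  # random barrier height
    return attempt_time * math.exp(barrier / kT)      # Arrhenius waiting time

random.seed(4)
times = sorted(hop_time() for _ in range(10000))
print(round(times[len(times) // 2], 2))  # median: only a few attempt times
print(times[-1] > 1000 * times[0])       # extremes differ by over 1000x
```

Barriers whose mean is merely twice the thermal energy already yield waiting times whose extremes differ by many powers of ten, which is the wide range of relaxation times the argument requires.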

Mathematically, the situation is closely related to the problem of determining the length of a fractal. Magnifying a fractal by any amount reveals a miniature version of the larger form. Finer and finer scales show more and more detail and lead to greater and greater estimates of total length.

For example, measuring the length of a fractal coastline leads to different answers, depending on the scale used.


Scale matters. Taking long steps carries you past a lot of tiny indentations (top). Taking shorter steps means that you end up traveling a longer distance along such an indented shoreline (bottom).

On a world globe the size of a basketball, the eastern coast of the United States looks like a fairly smooth curve, which, according to the globe's scale, may be roughly 3,000 miles long. The same coast drawn on an atlas page showing only the United States looks much more ragged. Adding in the lengths of capes and bays now evident extends the coast's length to 5,000 or so miles.

Piecing together detailed navigational charts to create a giant coastal map reveals an incredibly complex curve perhaps 12,000 miles long. Each change in scale reveals a new array of features to be included in the measurement.
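The Koch curve, an idealized mathematical coastline, captures this scale dependence exactly: each extra level of detail multiplies the measured length by 4/3, so the total grows without bound. A few lines of arithmetic make the point.

```python
def koch_length(base=1.0, refinements=0):
    """Measured length of a Koch curve after `refinements` levels of
    detail: each level replaces every segment by four segments, each
    one-third as long, multiplying the total length by 4/3."""
    return base * (4.0 / 3.0) ** refinements

for level in (0, 3, 6, 9):
    print(level, round(koch_length(1.0, level), 3))
```

Nine levels of refinement already stretch a unit baseline past thirteen units, and the growth never stops, just as finer maps keep lengthening the coast.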

Just as every distance scale occurs in the coastline problem, every time scale occurs for relaxation in amorphous materials. Each shift in time scale—from days to minutes to seconds—adds new features to be included in a relaxation measurement.

Although it isn't as picturesque to think of infinitely many time scales as it is to think of patterns within patterns on different length scales, the analogy is mathematically exact.

This comparison leads to the concept of fractal time. Instead of occurring in a sequence of regular, equally spaced intervals, events that occur in fractal time are clustered.


Instead of occurring at regular intervals, events that happen in fractal time are clustered in a self-similar pattern that features rapid bursts interspersed with long pauses.

Such clusters consist of events that happen rapidly, one after the other, interspersed with long stretches of nothing happening in between.
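One simple way to generate fractal-time clustering, offered here as an illustration rather than as the model the researchers used, is to draw waiting times between events from a heavy-tailed (Pareto-type) distribution. Compared with ordinary exponential waits, the longest pause then dwarfs the typical one.

```python
import math
import random

def waiting_times(n, heavy_tailed=True):
    """Draw n waiting times: Pareto-type waits (tail exponent 1/2,
    no finite mean) versus ordinary exponential waits."""
    waits = []
    for _ in range(n):
        u = 1.0 - random.random()  # uniform in (0, 1]
        waits.append(u ** -2.0 if heavy_tailed else -math.log(u))
    return waits

random.seed(5)
for heavy in (True, False):
    w = sorted(waiting_times(2000, heavy))
    # Ratio of the longest pause to the median pause: enormous for
    # heavy-tailed (fractal-time) waits, modest for exponential waits.
    print(heavy, round(w[-1] / w[len(w) // 2], 1))
```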

Consistent with this theoretical picture, researchers have found that in polymer relaxation some phenomena occur within picoseconds, whereas other effects aren't apparent for years. Such an astonishing range of time scales shows why experiments investigating the phenomenon are so tricky: it's hard to measure physical characteristics over so many orders of magnitude in time.

Time to Relax III

August 10, 2021

Time to Relax I

The rubber in a pair of boots, retrieved after a long stay in an attic, shows its age in an annoying way. No longer as flexible as it once was, the material (an elastomeric polymer) readily cracks and falls apart. Under the same conditions, many other plastics suffer a similar fate.

One cause of this aging process is chemical. Sunlight or oxygen can initiate chemical reactions that alter the material's properties. But deterioration occurs even when a material is kept in the dark or away from oxygen. The material gradually becomes more dense and brittle, losing its toughness and impact resistance.

The explanation for this behavior lies in the way "defects" within amorphous, or noncrystalline, materials reorganize themselves over long periods of time.


Glass is an example of an amorphous material. PPG Place, Pittsburgh, Pennsylvania.

When expressed in terms of the concept of fractal time, the same mathematical model used to describe polymer aging also applies to the stretching of glass or silk fibers; the recovery, or relaxation, of glassy materials after the removal of a stress; and a wide range of other phenomena in amorphous materials.

In such processes, events occur in self-similar bursts—featuring distinct clusters of activity interspersed with long stretches of inactivity. Some changes in materials occur right away while others take years to show up.

Relaxation is an issue of practical importance. Slow aging processes, both environmental and physical, control the lifetimes of a great many manufactured products, from electronic devices to optical fibers and advanced composite materials. Elucidating how such processes occur can suggest novel techniques for toughening ceramics and for designing polymers having particular characteristics.

Relaxation processes are common in physical systems. For example, pull on a glass fiber, then let go. The glass first stretches, then shrinks. Apply a strong electric field to a polymer, then turn it off. Areas of positive and negative charge in the polymer line up with the field, then drift out of alignment.

In each case, the material endures a stress, then recovers, or relaxes, when the stress is removed.

Relaxation in a crystalline material typically proceeds at an exponential pace. That type of relaxation follows the same pattern as the decay of a radioactive isotope. Such a process is characterized by a certain time, known in the case of radioactive decay as the half-life.

Normally, you find that relaxation is clustered around a certain time. It might take a second, a day, or a week. But an amorphous solid takes a longer time to relax than would be expected if relaxation simply followed an exponential pattern.

In amorphous systems, some parts relax very quickly. If those parts relax in, say, seconds, other pieces might relax on a time scale of minutes, and still others on a scale of days or weeks. If you were to wait long enough—even years—you would still detect changes taking place. No characteristic time can be defined for such an extended relaxation process.

This type of behavior has come to be known as stretched exponential relaxation. It fits a wide range of relaxation processes in disordered systems, including the way many polymers, glasses, and ceramics respond to stresses caused by changes in pressure and temperature and the imposition of electric and magnetic fields.
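In formulas, simple exponential relaxation decays as exp(-t/tau), while the stretched exponential (also known as the Kohlrausch function) decays as exp(-(t/tau)^beta) with beta between 0 and 1. A quick numerical comparison, with illustrative parameter values, shows how much longer the stretched form lingers.

```python
import math

def simple(t, tau=1.0):
    """Ordinary exponential relaxation: exp(-t/tau)."""
    return math.exp(-t / tau)

def stretched(t, tau=1.0, beta=0.5):
    """Stretched exponential (Kohlrausch) relaxation: exp(-(t/tau)**beta)."""
    return math.exp(-((t / tau) ** beta))

# With illustrative values tau = 1 and beta = 0.5: the two curves agree at
# t = tau, but at long times the stretched form is still far from finished.
for t in (1.0, 10.0, 100.0):
    print(t, round(simple(t), 6), round(stretched(t), 6))
```

At a hundred time constants the simple exponential has decayed to effectively nothing, while the stretched version retains a measurable remnant, which is exactly the "no characteristic time" behavior described above.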

Because so many different systems behave in such a strikingly similar fashion, physicists, in their search for an explanation, have concentrated on what these systems have in common. They have found that what's important is not the details of a material's atomic or molecular structure but rather its state of disorder.

August 9, 2021

Butterflies

 

Bear Wallow Trail, Santa Fe National Forest, New Mexico, 2021.

Photo by I. Peterson