The Golden Ratio - Part 7

The intriguing question is: Why did these two outstanding cosmologists decide to get involved in recreational mathematics and quasi-crystals?

I have known Penrose and Steinhardt for many years, being in the same business of theoretical astrophysics and cosmology. In fact, Penrose was an invited speaker in the first large conference that I organized on relativistic astrophysics in 1984, and Steinhardt was an invited speaker in the latest one in 2001. Still, I did not know what motivated them to delve into recreational mathematics, which appears to be rather remote from their professional interests in astrophysics, so I asked them.

Roger Penrose replied: "I am not sure I have a deep answer for that. As you know, mathematics is something most mathematicians do for enjoyment." After some reflection he added: "I used to play with shapes fitting together since I was a child; some of my work on tiles therefore predated my work in cosmology. At the particular time, however, my recreational mathematics work was at least partially motivated by my cosmological research. I was thinking about the large-scale structure of the universe and was looking for toy models with simple basic rules, which could nevertheless generate complicated structures on large scales."

"But," I asked, "what was it that induced you to continue to work on that problem for quite a while?"

Penrose laughed and said, "As you know, I have always been interested in geometry; that problem simply intrigued me. Furthermore, while I had a hunch that such structures could occur in nature, I just couldn't see how nature could assemble them through the normal process of crystal growth, which is local. To some extent I am still puzzled by that."

Paul Steinhardt's immediate reaction on the phone was: "Good question!" After thinking about it for a few minutes he reminisced: "As an undergraduate student I really wasn't sure what I wanted to do. Then, in graduate school, I looked for some mental relief from my strenuous efforts in particle physics, and I found that in the topic of order and symmetry in solids. Once I stumbled on the problem of quasi-periodic crystals, I found it irresistible and I kept coming back to it."

FRACTALS.

The Steinhardt-Jeong model for quasi-crystals has the interesting property that it produces long-range order from neighborly interactions, without resulting in a fully periodic crystal. Amazingly enough, we can also find this general property in the Fibonacci sequence. Consider the following simple algorithm for the creation of a sequence known as the Golden Sequence. Start with the number 1, and then replace 1 by 10. From then on, replace each 1 by 10 and each 0 by 1. You will obtain the following steps:

1
10
101
10110
10110101

and so on. Clearly, we started here with a "short-range" law (the simple transformation of 0 → 1 and 1 → 10) and obtained a nonperiodic long-range order. Note that the numbers of 1s in the successive lines, 1, 1, 2, 3, 5, 8..., form a Fibonacci sequence, and so do the numbers of 0s (starting from the second line). Furthermore, the ratio of the number of 1s to the number of 0s approaches the Golden Ratio as the sequence lengthens. In fact, an examination of Figure 27 reveals that if we take 0 to stand for a baby pair of rabbits and 1 to stand for a mature pair, then the sequence just given mirrors precisely the numbers of rabbit pairs. But there is even more to the Golden Sequence than these surprising properties. By starting with 1 (on the first line), followed by 10 (on the second line), and simply appending to each line the line just preceding it, we can also generate the entire sequence. For example, the fourth line, 10110, is obtained by appending the second line, 10, to the third, 101, and so on.
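Both properties of the substitution rule are easy to verify by machine. Here is a minimal sketch in Python (the function name is my own):

```python
def golden_sequence(steps):
    """Apply the substitution 1 -> 10, 0 -> 1 repeatedly, starting from '1'."""
    s = "1"
    for _ in range(steps):
        s = "".join("10" if ch == "1" else "1" for ch in s)
    return s

for n in range(7):
    line = golden_sequence(n)
    print(line, line.count("1"), line.count("0"))
# The counts of 1s (1, 1, 2, 3, 5, 8, 13) and of 0s (0, 1, 1, 2, 3, 5, 8)
# march through the Fibonacci numbers, and each line equals the previous
# line followed by the one before it.
```

The appending property can be checked directly: `golden_sequence(5)` equals `golden_sequence(4) + golden_sequence(3)`.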

Recall that "self-similarity" means symmetry across size scale. The logarithmic spiral displays self-similarity because it looks precisely the same under any magnification, and so does the series of nested pentagons and pentagrams in Figure 10. Every time you walk into a hair stylist shop, you see an infinite series of self-similar reflections of yourself between two parallel mirrors.

The Golden Sequence is also self-similar on different scales. Take the sequence

101101011011010110101...

and probe it with a magnifying glass in the following sense. Starting from the left, whenever you encounter a 1, mark a group of three symbols, and when you encounter a 0, mark a group of two symbols (with no overlap among the different groups). For example, the first digit is a 1; we therefore mark the group of the first three digits, 101 (see below). The second digit from the left is a zero; therefore we mark the group of two digits, 10, that follow the first 101. The third digit is 1; therefore we mark the three digits 101 that follow the 10; and so on. The marked sequence now looks like this

Now from every group of three symbols retain the first two, and from every group of two retain the first one (the retained symbols are underlined):

If you now look at the retained sequence

you find that it is identical to the Golden Sequence.
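This self-similarity can be checked mechanically. The sketch below (the function names are my own) regenerates the Golden Sequence by substitution, applies the grouping rule just described, and confirms that the retained symbols reproduce the sequence itself:

```python
def golden_sequence(steps):
    """Apply the substitution 1 -> 10, 0 -> 1 repeatedly, starting from '1'."""
    s = "1"
    for _ in range(steps):
        s = "".join("10" if ch == "1" else "1" for ch in s)
    return s

def magnify(seq):
    """The k-th digit of the sequence decides whether the k-th group has
    three symbols (digit 1) or two (digit 0); then keep the first two
    symbols of each 3-group and the first symbol of each 2-group."""
    groups, pos, k = [], 0, 0
    while pos < len(seq):
        size = 3 if seq[k] == "1" else 2
        if pos + size > len(seq):
            break  # discard an incomplete final group
        groups.append(seq[pos:pos + size])
        pos += size
        k += 1
    return "".join(g[:2] if len(g) == 3 else g[:1] for g in groups)

s = golden_sequence(10)
t = magnify(s)
print(t == s[:len(t)])  # the retained symbols reproduce the Golden Sequence
```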

We can do another magnification exercise on the Golden Sequence simply by underlining any pattern or subsequence. For example, suppose we choose "10" as our subsequence, and we underline it whenever it occurs in the Golden Sequence:

If we now treat each 10 as a single symbol and we mark the number of places by which each pattern of 10 needs to be moved to overlap with the next 10, we get the sequence: 2122121... (the first "10" needs to be moved two places to overlap with the second, the third is one place after the second, etc.). If we now replace each 2 by a 1 and each 1 by a 0 in the new sequence, we recover the Golden Sequence. In other words, if we look at any pattern within the Golden Sequence, we discover that the same pattern is found in the sequence on another scale. Objects with this property, like the Russian Matrioshka dolls that fit one into the other, are known as fractals. The name "fractal" (from the Latin fractus, meaning "broken, fragmented") was coined by the famous Polish-French-American mathematician Benoit B. Mandelbrot, and it is a central concept in the geometry of nature and in the theory of highly irregular systems known as chaos.
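This, too, lends itself to a short check in Python. One detail of convention: "moved two places" here counts the symbols lying strictly between the start of one occurrence and the start of the next (I have matched that convention in the code below):

```python
def golden_sequence(steps):
    """Apply the substitution 1 -> 10, 0 -> 1 repeatedly, starting from '1'."""
    s = "1"
    for _ in range(steps):
        s = "".join("10" if ch == "1" else "1" for ch in s)
    return s

s = golden_sequence(12)
# starting positions of every occurrence of the pattern "10"
starts = [i for i in range(len(s) - 1) if s[i:i + 2] == "10"]
# symbols strictly between consecutive starting positions: 2, 1, 2, 2, 1, 2, 1, ...
shifts = [b - a - 1 for a, b in zip(starts, starts[1:])]
# the substitution 2 -> 1, 1 -> 0 recovers the Golden Sequence
recovered = "".join("1" if d == 2 else "0" for d in shifts)
print(recovered == s[:len(recovered)])
```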

Fractal geometry represents a brilliant attempt to describe the shapes and objects of the real world. When we look around us, very few forms can be described in terms of the simple figures of Euclidean geometry, such as straight lines, circles, cubes, and spheres. An old mathematical joke tells of a physicist who thought that he could become rich from betting at horse races by solving the exact equations of motion for the horses. After much work, he indeed managed to solve the equations, for spherical horses. Real horses, unfortunately, are not spherical, and neither are clouds, cauliflowers, or lungs. Similarly, lightning, rivers, and drainage systems do not travel in straight lines, and they all remind us of the branching of trees and of the human circulatory system. Examine, for example, the fantastically intricate branching of the "Dolmen in the Snow" (Figure 111), a painting by the German romantic painter Caspar David Friedrich (1774-1840; currently in the Gemaldegalerie Neue Meister in Dresden).

Figure 111 Mandelbrot's gigantic mental leap in formulating fractal geometry lay primarily in his recognition that all of these complex zigs and zags are not merely a nuisance but often the main mathematical characteristic of the morphology. Mandelbrot's first realization was the importance of self-similarity: the fact that many natural shapes display endless sequences of motifs repeating themselves within motifs, on many scales. The chambered nautilus (Figure 4) exhibits this property magnificently, as does a regular cauliflower: break off smaller and smaller pieces and, up to a point, they continue to look like the whole vegetable. Take a picture of a small piece of rock, and you will have a hard time recognizing that you are not looking at an entire mountain. Even the printed form of the continued fraction that is equal to the Golden Ratio has this property (Figure 112): magnify the barely resolved symbols and you will see the same continued fraction. In all of these objects, zooming in does not smooth out the degree of roughness. Rather, the same irregularities characterize all scales.

At this point, Mandelbrot asked himself, how do you determine the dimensions of something that has such a fractal structure? In the world of Euclidean geometry, all the objects have dimensions that can be expressed as whole numbers. Points have zero dimensions, straight lines are one-dimensional, plane figures like triangles and pentagons are two-dimensional, and objects like spheres and the Platonic solids are three-dimensional. Fractal curves like the path of a bolt of lightning, on the other hand, wiggle so aggressively that they fall somewhere between one and two dimensions. If the path is relatively smooth, then we can imagine that the fractal dimension would be close to one, but if it is very complex, then a dimension closer to two can be expected. These musings turned into the now-famous question: "How long is the coast of Britain?" Mandelbrot's surprising answer is that the length of the coastline actually depends on the length of your ruler. Suppose you start out with a satellite-generated map of Britain that is one foot on a side. You measure the length and convert it to the actual length by multiplying by the known scale of your map. Clearly this method will skip over any twists in the coastline that are too small to be revealed on the map. Equipped with a one-yard stick, you therefore start the long journey of actually walking along Britain's beaches, painstakingly measuring the length yard by yard. There is no doubt that the number you get now will be much larger than the previous one, since you managed to capture much smaller twists and turns. You immediately realize, however, that you would still be skipping over structures on smaller scales than one yard. The point is that every time you decrease the size of your ruler, you get a larger value for the length, because you always discover that there exists substructure on an even smaller scale. 
This fact suggests that even the concept of length as representing size needs to be revisited when dealing with fractals. The contours of the coastline do not become a straight line upon magnification; rather, the crinkles persist on all scales and the length increases ad infinitum (or at least down to atomic scales).

Figure 112 This situation is exemplified beautifully by what could be thought of as the coastline of some imaginary land. The Koch snowflake is a curve first described by the Swedish mathematician Helge von Koch (1870-1924) in 1904 (Figure 113). Start with an equilateral triangle, one inch long on each side. Now in the middle of each side, construct a smaller triangle, with a side of one-third of an inch. This gives the Star of David in the second figure. Note that the original outline of the triangle was three inches long, while now it is composed of twelve segments, one-third of an inch each, so that the total length is four inches. Repeat the same procedure consecutively: on each side of a triangle place a new one, with a side length that is one-third that of the previous one. Each time, the length of the outline increases by a factor of 4/3, growing without bound, in spite of the fact that the curve borders a finite area. (We can show that the area converges to eight-fifths that of the original triangle.)
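Both the perimeter and the area claims are easy to confirm with exact rational arithmetic; a minimal sketch (the function name is my own):

```python
from fractions import Fraction

def koch(n):
    """Perimeter and area of the Koch snowflake after n construction steps,
    starting from an equilateral triangle of side 1. The area is measured
    in units of the area of the original triangle (which scales as side**2)."""
    sides = 3
    side_len = Fraction(1)
    area = Fraction(1)
    for _ in range(n):
        # each existing side sprouts one new triangle of side side_len/3,
        # whose area, in units of the original triangle, is (side_len/3)**2
        area += sides * (side_len / 3) ** 2
        sides *= 4       # every side is replaced by four shorter ones...
        side_len /= 3    # ...each one-third as long
    return sides * side_len, area

for n in (0, 1, 2, 10):
    perimeter, area = koch(n)
    print(n, float(perimeter), float(area))
# the perimeter grows by a factor of 4/3 at every step,
# while the area approaches 8/5 of the original triangle
```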

Figure 113 The realization of the existence of fractals raised the question of the dimensions that should be associated with them. The fractal dimension is really a measure of the wrinkliness of the fractal, or of how fast length, surface, or volume increases if we measure it with respect to ever-decreasing scales. For example, we feel intuitively that the Koch curve (bottom of Figure 113) takes up more space than a one-dimensional line but less space than the two-dimensional square. But how can it have an intermediate dimension? There is, after all, no whole number between 1 and 2. This is where Mandelbrot followed a concept first introduced in 1919 by the German mathematician Felix Hausdorff (1868-1942), a concept that at first appears mind-boggling: fractional dimensions. In spite of the initial shock we may experience from such a notion, fractional dimensions were precisely the tool needed to characterize the degree of irregularity, or fractal complexity, of objects.

In order to obtain a meaningful definition of the self-similarity dimension or fractal dimension, it helps to use the familiar whole-number dimensions 0, 1, 2, 3 as guides. The idea is to examine how many small objects make up a larger object in any number of dimensions. For example, if we bisect a (one-dimensional) line, we obtain two segments (for a reduction factor of f = 1/2). When we divide a (two-dimensional) square into subsquares with half the side length (again a reduction factor f = 1/2), we get 4 = 2^2 squares. For a side length of one-third (f = 1/3), there are 9 = 3^2 subsquares (Figure 114). For a (three-dimensional) cube, a division into cubes of half the edge length (f = 1/2) produces 8 = 2^3 cubes, and one-third the length (f = 1/3) produces 27 = 3^3 cubes (Figure 114). If you examine all of these examples, you find that there is a relation between the number of subobjects, n, the length reduction factor, f, and the dimension, D. The relation is simply n = (1/f)^D. (I give another form of this relation in Appendix 7.) Applying the same relation to the Koch snowflake gives a fractal dimension of about 1.2619. As it turns out, the coastline of Britain also has a fractal dimension of about 1.26. Fractals therefore serve as models for real coastlines.
Indeed, pioneering chaos theorist Mitch Feigenbaum, of Rockefeller University in New York, exploited this fact to help produce in 1992 the revolutionary Hammond Atlas of the World. Using computers to do as much as possible unassisted, Feigenbaum examined fractal satellite data to determine which points along coastlines have the greatest significance. The result: a map of South America, for example, that is better than 98 percent accurate, compared to the more conventional 95 percent scored by older atlases.
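In code, the relation n = (1/f)^D can be inverted to give D = log n / log(1/f), which recovers the whole-number dimensions and the Koch value alike (a quick check; the function name is my own):

```python
from math import log

def fractal_dimension(n, f):
    """Solve n = (1/f)**D for the self-similarity dimension D."""
    return log(n) / log(1 / f)

print(fractal_dimension(2, 1 / 2))   # bisected line: 1.0
print(fractal_dimension(9, 1 / 3))   # square cut into ninths: 2.0
print(fractal_dimension(27, 1 / 3))  # cube cut into 27 pieces: 3.0
# Koch curve: each segment is replaced by n = 4 segments, each f = 1/3 as long
print(fractal_dimension(4, 1 / 3))   # about 1.2619
```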

Figure 114 For many fractals in nature, from trees to the growth of crystals, the main characteristic is branching. Let us examine a highly simplified model for this ubiquitous phenomenon. Start with a stem of unit length, which divides into two branches of length one-half, at 120 degrees to each other (Figure 115). Each branch further divides in a similar fashion, and the process goes on without bound.

Figure 115

Figure 116 If instead of a length reduction factor of one-half we had chosen a somewhat larger number (e.g., 0.6), the spaces among the different branches would have been reduced, and eventually branches would overlap. Clearly, for many systems (e.g., a drainage system or a blood circulatory system), we may be interested in finding out at precisely what reduction factor the branches just touch and start to overlap, as in Figure 116. Surprisingly (or maybe not, by now), this happens for a reduction factor that is equal precisely to one over the Golden Ratio, 1/φ = 0.618.... (A short proof is given in Appendix 8.) This is known as a Golden Tree, and its fractal dimension turns out to be about 1.4404. The Golden Tree and similar fractals composed of simple lines cannot be resolved very easily with the naked eye after several iterations. The problem can be partially resolved by using two-dimensional figures like lunes (Figure 117) instead of lines. At each step, you can use a copying machine equipped with an image reduction feature to produce lunes reduced by a factor 1/φ. The resulting image, a Golden Tree composed of lunes, is shown in Figure 118.
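The quoted dimension follows from the same relation n = (1/f)^D: each branch of the Golden Tree splits into n = 2 branches reduced by f = 1/φ, so D = log 2 / log φ:

```python
from math import log, sqrt

phi = (1 + sqrt(5)) / 2  # the Golden Ratio, about 1.6180339887
# Golden Tree: n = 2 branches per split, reduction factor f = 1/phi,
# so the self-similarity dimension is D = log(n) / log(1/f) = log(2) / log(phi)
D = log(2) / log(phi)
print(round(D, 4))  # 1.4404
```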

Figure 117

Figure 118

Figure 119

Figure 120

Figure 121

Figure 122 Fractals can be constructed not just from lines but also from simple planar figures such as triangles and squares. For example, you can start with an equilateral triangle with a side of unit length and at each corner attach a new triangle whose side is shorter by some reduction factor. At each of the free corners of the second-generation triangles, attach a triangle reduced by the same factor, and so on (Figure 119). Again, you may wonder at what reduction factor the three boughs start to touch, as in Figure 120, and again the answer turns out to be 1/φ. Precisely the same situation occurs if you build a similar fractal using a square (Figure 121): overlapping occurs when the reduction factor is 1/φ = 0.618... (Figure 122).

Furthermore, all the unfilled white rectangles in the last figure are Golden Rectangles. We therefore find that while in Euclidean geometry the Golden Ratio originated from the pentagon, in fractal geometry it is associated even with simpler figures like squares and equilateral triangles.

Once you get used to the concept, you realize that the world around us is full of fractals. Objects as diverse as the profiles of the tops of forests on the horizon and the circulatory system in a kidney can be described in terms of fractal geometry. If a particular model of the universe as a whole known as eternal inflation is correct, then even the entire universe is characterized by a fractal pattern. Let me explain this concept very briefly, giving only the broad-brush picture. The inflationary theory, originally advanced by Alan Guth, suggests that when our universe was only a tiny fraction of a second old, an unbridled expansion stretched our region of space to a size that is actually much larger than the reach of our telescopes. The driving force behind this stupendous expansion is a very peculiar state of matter called a false vacuum. A ball on top of a flat hill, as in Figure 123, can symbolically describe the situation. For as long as the universe remained in the false vacuum state (the ball was on the hilltop), it expanded extremely rapidly, doubling in size every tiny fraction of a second. Only when the ball rolled down the hill and into the surrounding, lower-energy "ditch" (representing symbolically the fact that the false vacuum decayed) did the tremendous expansion stop. According to the inflationary model, what we call our universe was caught in the false vacuum state for a very brief period, during which it expanded at a fantastic rate.
Eventually the false vacuum decayed, and our universe resumed the much more leisurely expansion we observe today. All the energy and subatomic particles of our universe were generated during oscillations that followed the decay (represented schematically in the third drawing in Figure 123). However, the inflationary model also predicts that the rate of expansion while in the false vacuum state is much faster than the rate of decay. Consequently, the fate of a region of false vacuum can be illustrated schematically as in Figure 124. The universe started with some region of false vacuum. As time progressed, some part (a third in the figure) of the region decayed to produce a "pocket universe" like our own. At the same time, the regions that stayed in the false vacuum state continued to expand, and by the time represented schematically by the second bar in Figure 124, each one of them was actually the size of the whole first bar. (This is not shown in the figure because of space constraints.) Moving in time from the second bar to the third, the central pocket universe continued to evolve slowly as in the standard big bang model of our universe. Each of the remaining two regions of false vacuum, however, evolved in precisely the same way as the original region of false vacuum: some part of them decayed, producing a pocket universe, while the rest kept expanding (again to the size of the whole first bar, not shown in the figure because of space constraints). An infinite number of pocket universes thus were produced, and a fractal pattern was generated: the same sequence of false vacua and pocket universes is replicated on ever-decreasing scales. If this model truly represents the evolution of the universe as a whole, then our pocket universe is but one out of an infinite number of pocket universes that exist.

Figure 123

Figure 124 In 1990, North Carolina State University professor Jasper Memory published a poem entitled "Blake and Fractals" in the Mathematics Magazine. Referring to the mystic poet William Blake's line "To see a World in a Grain of Sand," Memory wrote:

William Blake said he could see
Vistas of infinity
In the smallest speck of sand
Held in the hollow of his hand.

Models for this claim we've got
In the work of Mandelbrot:
Fractal diagrams partake
Of the essence sensed by Blake.
Basic forms will still prevail
Independent of the Scale;
Viewed from far or viewed from near
Special signatures are clear.

When you magnify a spot,
What you had before, you've got.

Smaller, smaller, smaller, yet,
Still the same details are set;
Finer than the finest hair
Blake's infinity is there,
Rich in structure all the way-
Just as the mystic poets say.

Some of the modern applications of the Golden Ratio, Fibonacci numbers, and fractals reach into areas that are much more down to earth than the inflationary model of the universe. In fact, some say that the applications can reach even all the way into our pockets.

A GOLDEN TOUR OF WALL STREET.

One of the best-known attempts to use the Fibonacci sequence and the Golden Ratio in the analysis of stock prices is associated with the name of Ralph Nelson Elliott (1871-1948). An accountant by profession, Elliott held various executive positions with railroad companies, primarily in Central America. A serious alimentary tract illness that left him bedridden forced him into retirement in 1929. To occupy his mind, Elliott started to analyze in great detail the rallies and plunges of the Dow Jones Industrial Average. During his lifetime, Elliott witnessed the roaring bull market of the 1920s followed by the Great Depression. His detailed analyses led him to conclude that market fluctuations were not random. In particular, he noted: "the stock market is a creation of man and therefore reflects human idiosyncrasy." Elliott's main observation was that, ultimately, stock market patterns reflect cycles of human optimism and pessimism.

On February 19, 1935, Elliott mailed a treatise entitled The Wave Principle to a stock market publication in Detroit. In it he claimed to have identified characteristics which "furnish a principle that determines the trend and gives clear warning of reversal." The treatise eventually developed into a book with the same title, which was published in 1938.

Figure 125

Figure 126 Elliott's basic idea was relatively simple. He claimed that market variations can be characterized by a fundamental pattern consisting of five waves during an upward ("optimistic") trend (marked by numbers in Figure 125) and three waves during a downward ("pessimistic") trend (marked by letters in Figure 125). Note that 5, 3, 8 (the total number of waves) are all Fibonacci numbers. Elliott further asserted that an examination of the fluctuations on shorter and shorter time scales reveals that the same pattern repeats itself (Figure 126), with all the numbers of the constituent wavelets corresponding to higher Fibonacci numbers. Identifying 144 as "the highest number of practical value," Elliott suggested that the breakdown of a complete market cycle might look as follows. A generally upward trend consisting of five major waves, twenty-one intermediate waves, and eighty-nine minor waves (Figure 126) is followed by a generally downward phase with three major, thirteen intermediate, and fifty-five minor waves (Figure 126).

Figure 127 Some recent books that attempt to apply Elliott's general ideas to actual trading strategies go even further. They use the Golden Ratio to calculate the extreme points of maximum and minimum that can be expected (although not necessarily reached) in market prices at the end of upward or downward trends (Figure 127). Even more sophisticated algorithms include a logarithmic spiral plotted on top of the daily market fluctuations, in an attempt to represent a relationship between price and time. All of these forecasting efforts assume that the Fibonacci sequence and the Golden Ratio somehow provide the keys to the operation of mass psychology. However, this "wave" approach does suffer from some shortcomings. The Elliott "wave" usually is subjected to various (sometimes arbitrary) stretchings, squeezings, and other alterations by hand to make it "forecast" the real-world market. Investors know, however, that even with the application of all the bells and whistles of modern portfolio theory, which is supposed to maximize the returns for a decided-on level of risk, fortunes can be made or lost in a heartbeat.

You may have noticed that Elliott's wave interpretation has as one of its ingredients the concept that each part of the curve is a reduced-scale version of the whole, a concept central to fractal geometry. Indeed, in 1997, Benoit Mandelbrot published a book entitled Fractals and Scaling in Finance: Discontinuity, Concentration, Risk, which introduced well-defined fractal models into market economics. Mandelbrot built on the known fact that fluctuations in the stock market look the same when charts are enlarged or reduced to fit the same price and time scales. If you look at such a chart from a distance that does not allow you to read the scales, you will not be able to tell if it represents daily, weekly, or hourly variations. The main innovation in Mandelbrot's theory, as compared to standard portfolio theory, is in its ability to reproduce tumultuous trading as well as placid markets. Portfolio theory, on the other hand, is able to characterize only relatively tranquil activity. Mandelbrot never claimed that his theory could predict a price drop or rise on a specific day but rather that the model could be used to estimate probabilities of potential outcomes. After Mandelbrot published a simplified description of his model in Scientific American in February 1999, a myriad of responses from readers ensued.
Robert Ihnot of Chicago probably expressed the bewilderment of many when he wrote: "If we know that a stock will go from $10 to $15 in a given amount of time, it doesn't matter how we interpose the fractals, or whether the graph looks authentic or not. The important thing is that we could buy at $10 and sell at $15. Everyone should now be rich, so why are they not?"

Elliott's original wave principle represented a bold if somewhat naive attempt to identify a pattern in what appears otherwise to be a rather random process. More recently, however, Fibonacci numbers and randomness have had an even more intriguing encounter.

RABBITS AND COIN TOSSES.

The defining property of the Fibonacci sequence, that each new number is the sum of the previous two numbers, was obtained from an unrealistic description of the breeding of rabbits. Nothing in this definition hinted that this imaginary rabbit sequence would find its way into so many natural and cultural phenomena. There was even less, however, to suggest that experimentation with the basic properties of the sequence themselves could provide a gateway to understanding the mathematics of disordered systems. Yet this was precisely what happened in 1999. Computer scientist Divakar Viswanath, then a postdoctoral fellow at the Mathematical Sciences Research Institute in Berkeley, California, was bold enough to ask a "what if?" question that led unexpectedly to the discovery of a new special number: 1.13198824.... The beauty of Viswanath's discovery lies primarily in the simplicity of its central idea. Viswanath merely asked himself: Suppose you start with the two numbers 1, 1, as in the original Fibonacci sequence, but now instead of adding the two numbers to get the third, you flip a coin to decide whether to add them or to subtract the last number from the previous one. You can decide, for example, that "heads" means to add (giving 2 as the third number) and "tails" means to subtract (giving 0 as the third number). You can continue with the same procedure, each time flipping a coin to decide whether to add the last two numbers or to subtract the last number from the one before it. For example, the series of tosses HTTHHTHTTH will produce the sequence 1, 1, 2, 1, 3, 2, 5, 3, 2, 5, 7, 2. On the other hand, the (rather unlikely) series of tosses HHHHHHHHHHHH... will produce the original Fibonacci sequence.
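A short Python sketch makes the rule concrete (the function name is my own; note that the intermediate terms carry minus signs internally, and the printed example drops them, as in the text):

```python
def random_fibonacci(tosses):
    """Viswanath's coin-toss variant of the Fibonacci rule: start with 1, 1;
    'H' adds the last two terms, 'T' subtracts the last term from the one
    before it."""
    seq = [1, 1]
    for t in tosses:
        seq.append(seq[-2] + seq[-1] if t == "H" else seq[-2] - seq[-1])
    return seq

print([abs(x) for x in random_fibonacci("HTTHHTHTTH")])
# [1, 1, 2, 1, 3, 2, 5, 3, 2, 5, 7, 2], matching the text once signs are dropped
print(random_fibonacci("H" * 10))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144], the ordinary Fibonacci sequence
```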

In the Fibonacci sequence, terms increase rapidly, like a power of the Golden Ratio. Recall that we can calculate the seventeenth number in the sequence, for example, by raising the Golden Ratio to the seventeenth power, dividing by the square root of 5, and rounding off the result to the nearest whole number (which gives 1597). Since Viswanath's sequences were generated by a totally random series of coin tosses, however, it was not at all obvious that a smooth growth pattern would be obtained, even if we ignore the minus signs and take only the absolute value of the numbers. To his own surprise, however, Viswanath found that if he ignored the minus signs, the values of the numbers in his random sequences still increased at a clearly defined and predictable rate. Specifically, with essentially 100 percent probability, the one hundredth number in any of the sequences generated in this way was always close to the one hundredth power of the peculiar number 1.13198824..., and the higher the term was in the sequence, the closer it came to the corresponding power of 1.13198824.... To actually calculate this strange number, Viswanath had to use fractals and to rely on a powerful mathematical theorem that was formulated in the early 1960s by mathematicians Hillel Furstenberg of the Hebrew University in Jerusalem and Harry Kesten of Cornell University. These two mathematicians proved that for an entire class of randomly generated sequences, the absolute value of a number high in the sequence gets closer and closer to the appropriate power of some fixed number. However, Furstenberg and Kesten did not know how to calculate this fixed number; Viswanath discovered how to do just that.
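Both claims in this paragraph, the rounding formula for the seventeenth Fibonacci number and the growth rate of a random sequence, can be checked numerically. The sketch below is mine, not Viswanath's calculation; the growth estimate comes from a single random run, so it only approximates 1.13198824...:

```python
import math
import random

phi = (1 + math.sqrt(5)) / 2  # the Golden Ratio

# The rounding formula from the text: the 17th Fibonacci number
print(round(phi ** 17 / math.sqrt(5)))   # 1597

# Growth rate of one random Fibonacci run: |a_n|**(1/n) should
# approach Viswanath's constant 1.13198824... as n grows.
random.seed(7)
a, b = 1, 1
n = 10000
for _ in range(n):
    new = a + b if random.random() < 0.5 else a - b
    a, b = b, new

# max(..., 1) guards against the vanishingly unlikely case b == 0
print(math.exp(math.log(max(abs(b), 1)) / n))
```

With ten thousand steps the estimate typically lands within about one percent of Viswanath's constant; different seeds give slightly different values, as the theorem only promises convergence with probability 1.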

The importance of Viswanath's work lies not only in the discovery of a new mathematical constant, a significant feat in itself, but also in the fact that it illustrates beautifully how what appears to be an entirely random process can lead to a fully deterministic result. Problems of this type are encountered in a variety of natural phenomena and electronic devices. For example, stars like our own Sun produce their energy in nuclear "furnaces" at their centers. However, for us actually to see the stars shining, bundles of radiation, known as photons, have to make their way from the stellar depths to the surface. Photons do not simply fly through the star at the speed of light. Rather, they bounce around, being scattered and absorbed and reemitted by all the electrons and atoms of gas in their way, in a seemingly random fashion. Yet the net result is that after a random walk, which in the case of the Sun takes some 10 million years, the radiation escapes the star. The power emitted by the Sun's surface determined (and continues to determine) the temperature on Earth's surface and allowed life to emerge. Viswanath's work and the research on random Fibonaccis that followed provide additional tools for the mathematical machinery that explains disordered systems.

There is another important lesson to be learned from Viswanath's discovery-even an eight-hundred-year-old, seemingly trivial mathematical problem can still surprise you.

I should attempt to treat human vice and folly geometrically... the passions of hatred, anger, envy, and so on, considered in themselves, follow from the necessity and efficacy of nature. ... I shall, therefore, treat the nature and strength of the emotions in exactly the same manner, as though I were concerned with lines, planes and solids.-BARUCH SPINOZA (1632-1677)

Two and two the mathematician continues to make four, in spite of the whine of the amateur for three, or the cry of the critic for five.-JAMES MCNEILL WHISTLER (1834-1903)

Euclid defined the Golden Ratio because he was interested in using this simple proportion for the construction of the pentagon and the pentagram. Had this remained the Golden Ratio's only application, the present book would never have been written. The delight we derive from this concept today is based primarily on the element of surprise. The Golden Ratio turned out to be, on one hand, the simplest of the continued fractions (but also the "most irrational" of all irrational numbers) and, on the other, the heart of an endless number of complex natural phenomena. Somehow the Golden Ratio always makes an unexpected appearance at the juxtaposition of the simple and the complex, at the intersection of Euclidean geometry and fractal geometry.

The sense of gratification provided by the Golden Ratio's surprising emergences probably comes as close as we could expect to the sensuous visual pleasure we obtain from a work of art. This fact raises the question of what type of aesthetic judgment can be applied to mathematics or, even more specifically, what did the famous British mathematician Godfrey Harold Hardy (1877-1947) actually mean when he said: "The mathematician's patterns, like the painter's or the poet's, must be beautiful."

This is not an easy question. When I discussed the psychological experiments that tested the visual appeal of the Golden Rectangle, I deliberately avoided the term "beautiful." I will adopt the same strategy here, because of the ambiguity associated with the definition of beauty. The extent to which beauty is in the eye of the beholder when referring to mathematics is exemplified magnificently by a story presented in the excellent 1981 book The Mathematical Experience by Philip J. Davis and Reuben Hersh.

In 1976, a delegation of distinguished mathematicians from the United States was invited to the People's Republic of China for a series of talks and informal meetings with Chinese mathematicians. The delegation subsequently issued a report entitled "Pure and Applied Mathematics in the People's Republic of China." By "pure," mathematicians usually refer to the type of mathematics that at least on the face of it has absolutely no direct relevance to the world outside the mind. At the same time, we should realize that Penrose tilings and random Fibonaccis, for example, provide two of the numerous examples of "pure" mathematics turning into "applied." One of the dialogues in the delegation's report, between Princeton mathematician Joseph J. Kohn and one of his Chinese hosts, is particularly illuminating. The dialogue was on the topic of the "beauty of mathematics," and it took place at the Shanghai Hua-Tung University.

Since, as this dialogue starkly indicates, there is hardly any formal, accepted description of aesthetic judgment in mathematics and how it should be applied, I prefer to discuss only one particular element in mathematics that invariably gives pleasure to nonexperts and experts alike-the element of surprise.

MATHEMATICS SHOULD SURPRISE.

In a letter written on February 27, 1818, the English Romantic poet John Keats (1795-1821) wrote: "Poetry should surprise by a fine excess and not by Singularity-it should strike the Reader as a wording of his own highest thoughts, and appear almost a Remembrance." Unlike poetry, however, mathematics more often tends to delight when it exhibits an unanticipated result rather than when conforming to the reader's own expectations. In addition, the pleasure derived from mathematics is related in many cases to the surprise felt upon perception of totally unexpected relationships and unities. A mathematical relation known as Benford's law provides a wonderful case study for how all of these elements combine to produce a great sense of satisfaction.

Take a look, for example, in the World Almanac, at the table of "U.S. Farm Marketings by State" for 1999. There is a column for "Crops" and one for "Livestock and Products." The numbers are given in U.S. dollars. You would have thought that the numbers from 1 to 9 should occur with the same frequency among the first digits of all the listed marketings. Specifically, the numbers starting with 1 should constitute about one-ninth of all the listed numbers, as would numbers starting with 9. Yet, if you count them, you will find that the number 1 appears as the first digit in 32 percent of the numbers (instead of the expected 11 percent if all digits occurred equally often). The number 2 also appears more frequently than its fair share-appearing in 19 percent of the numbers. The number 9, on the other hand, appears only in 5 percent of the numbers-less than expected. You may think that finding this result in one table is surprising, but hardly shocking, until you examine a few more pages in the Almanac (the numbers above were taken from the 2001 edition). For example, if you look at the table of the death toll of "Some Major Earthquakes," you will find that the numbers starting with 1 constitute about 38 percent of all the numbers, and those starting with 2 are 18 percent. If you choose a totally different table, such as the one for the population in Massachusetts in places of 5,000 or more, the numbers start with 1 about 36 percent of the time and with 2 about 16.5 percent of the time. At the other end, in all of these tables the number 9 appears first only in about 5 percent of the numbers, far less than the expected 11 percent. How is it possible that tables describing such diverse and apparently random data all have the property that the number 1 appears as the first digit 30-some percent of the time and the number 2 around 18 percent of the time? The situation becomes even more puzzling when you examine still larger databases.
For example, accounting professor Mark Nigrini of the Cox School of Business at Southern Methodist University, Dallas, examined the populations of 3,141 counties in the 1990 U.S. Census. He found that the number 1 appeared as the first digit in about 32 percent of the numbers, 2 appeared in about 17 percent, 3 in 14 percent, and 9 in less than 5 percent. Analyst Eduardo Ley of Resources for the Future in Washington, D.C., found very similar numbers for the Dow Jones Industrial Average in the years 1990 to 1993. And if all of this is not dumbfounding enough, here is another amazing fact. If you examine the list of, say, the first two thousand Fibonacci numbers, you will find that the number 1 appears as the first digit 30 percent of the time, the number 2 appears 17.65 percent, 3 appears 12.5 percent, and the values continue to decrease, with 9 appearing 4.6 percent of the time as first digit. In fact, Fibonacci numbers are more likely to start with 1, with the other numbers decreasing in popularity in precisely the same manner as the just-described random selections of numbers!
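The Fibonacci first-digit percentages quoted above are easy to check directly. Here is a short, self-contained Python sketch (the variable names are mine) that tallies the leading digits of the first two thousand Fibonacci numbers and compares them with the logarithmic frequencies discussed later in the chapter:

```python
import math
from collections import Counter

# Tally the first digit of each of the first 2000 Fibonacci numbers
counts = Counter()
a, b = 1, 1
for _ in range(2000):
    counts[str(a)[0]] += 1
    a, b = b, a + b

for d in range(1, 10):
    observed = counts[str(d)] / 2000
    predicted = math.log10(1 + 1 / d)   # Benford's predicted frequency
    print(d, f"{observed:.3f}", f"{predicted:.3f}")
```

The observed frequency of a leading 1 comes out very close to 30 percent, and that of a leading 9 to under 5 percent, just as the text reports.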

Astronomer and mathematician Simon Newcomb (1835-1909) first discovered this "first-digit phenomenon" in 1881. He noticed that books of logarithms in the library, which were used for calculations, were considerably dirtier at the beginning (where numbers starting with 1 and 2 were printed) and progressively cleaner throughout. While this might be expected with bad novels abandoned by bored readers, in the case of mathematical tables the worn pages simply indicated a more frequent appearance of numbers starting with 1 and 2. Newcomb, however, went much further than merely noting this fact; he came up with an actual formula that was supposed to give the probability that a random number begins with a particular digit. That formula (presented in Appendix 9) gives for 1 a probability of 30 percent; for 2, about 17.6 percent; for 3, about 12.5 percent; for 4, about 9.7 percent; for 5, about 8 percent; for 6, about 6.7 percent; for 7, about 5.8 percent; for 8, about 5 percent; and for 9, about 4.6 percent.

Newcomb's 1881 article in the American Journal of Mathematics and the "law" he discovered went entirely unnoticed, until fifty-seven years later, when physicist Frank Benford of General Electric rediscovered the law (apparently independently) and tested it with extensive data on river basin areas, baseball statistics, and even numbers appearing in Reader's Digest articles. All the data fit the postulated formula amazingly well, and hence this formula is now known as Benford's law.
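The formula behind these percentages (the one presented in Appendix 9) is the logarithmic rule that the probability of first digit d is log10(1 + 1/d). A few lines of Python reproduce the figures just quoted; this is my own sketch, not the book's appendix:

```python
import math

# Newcomb/Benford: P(first digit = d) = log10(1 + 1/d), for d = 1..9
for d in range(1, 10):
    print(d, f"{100 * math.log10(1 + 1 / d):.1f} percent")
```

Note that the nine probabilities necessarily sum to exactly 1, since the product (2/1)(3/2)...(10/9) telescopes to 10.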

Not all lists of numbers obey Benford's law. Numbers in telephone books, for example, tend to begin with the same few digits in any given region. Even tables of square roots of numbers do not obey the law. On the other hand, chances are that if you collect all the numbers appearing on the front pages of several of your local newspapers for a week, you will obtain a pretty good fit. But why should it be this way? What do the populations of towns in Massachusetts have to do with death tolls from earthquakes around the globe or with numbers appearing in the Reader's Digest? Why do the Fibonacci numbers also obey the same law?

Attempts to put Benford's law on a firm mathematical basis have proven to be much more difficult than expected. One of the key obstacles has been precisely the fact that not all lists of numbers obey the law (even the preceding examples from the Almanac do not obey the law precisely). In his Scientific American article describing the law in 1969, University of Rochester mathematician Ralph A. Raimi concluded that "the answer remains obscure."

The explanation finally emerged in 1995-1996, in the work of Georgia Institute of Technology mathematician Ted Hill. Hill first became interested in Benford's law while preparing a talk on surprises in probability in the early 1990s. When describing his experience to me, Hill said: "I started working on this problem as a recreational experiment, but a few people warned me to be careful, because Benford's law can become addictive." After a few years of work it finally dawned on him that rather than looking at numbers from one given source, the mixture of data was the key. Hill formulated the law statistically, in a new form: "If distributions are selected at random (in any unbiased way) and random samples are taken from each of these distributions, then the significant-digit frequencies of the combined sample will converge to Benford's distribution, even if some of the individual distributions selected do not follow the law." In other words, suppose you assemble random collections of numbers from a hodgepodge of distributions, such as a table of square roots, a table of the death toll in notable aircraft disasters, the populations of counties, and a table of air distances between selected world cities. Some of these distributions do not obey Benford's law by themselves. What Hill proved, however, is that as you collect ever more of such numbers, the digits of these numbers will yield frequencies that conform ever closer to the law's predictions. Now, why do Fibonacci numbers also follow Benford's law? After all, they are fully determined by a recursive relation and are not random samples from random distributions.

Well, in this case it turns out that this conformity with Benford's law is not a unique property of the Fibonacci numbers. If you examine a large number of powers of 2 (2^1 = 2, 2^2 = 4, 2^3 = 8, etc.), you'll see that they also obey Benford's law. This should not be so surprising, given that the Fibonacci numbers themselves are obtained as powers of the Golden Ratio (recall that the nth Fibonacci number is close to φ^n/√5). In fact, we can prove that sequences defined by a large class of recursive relations follow Benford's law.
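The claim about powers of 2 is easy to verify numerically. This short sketch (mine, for illustration) tallies the leading digits of the first two thousand powers of 2:

```python
import math
from collections import Counter

# Leading digits of 2**1 through 2**2000
counts = Counter(str(2 ** k)[0] for k in range(1, 2001))

# Fraction of powers that start with 1, versus Benford's prediction
print(counts["1"] / 2000, round(math.log10(2), 5))
```

Both numbers come out near 0.301; the underlying reason is that the fractional parts of k·log10(2) spread evenly over the unit interval, the same mechanism that applies to powers of the Golden Ratio.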

Benford's law provides yet another fascinating example of pure mathematics transformed into applied. One interesting application is in the detection of fraud or fabrication of data in accounting and tax evasion. In a broad range of financial documents, data conform very closely to Benford's law. Fabricated data, on the other hand, very rarely do. Hill demonstrates how such fraud detection works with another simple example, using probability theory. On the first day of class in his course on probability, he asks students to do an experiment. If their mother's maiden name begins with A through L, they are to flip a coin 200 times and record the results. The rest of the class is asked to fake a sequence of 200 heads and tails. Hill collects the results the following day, and within a short time he is able to separate the genuine from the fake with 95 percent accuracy. How does he do that? Any sequence of 200 genuine coin tosses contains a run of six consecutive heads or six consecutive tails with a very high probability. On the other hand, people trying to fake a sequence of coin tosses very rarely believe that they should record such a run.
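Hill's run-of-six criterion can be tested with a quick simulation. The text does not give the exact probability, so the figure printed by this sketch (names mine) is only a Monte Carlo estimate:

```python
import random

def longest_run(tosses):
    """Length of the longest block of identical symbols."""
    best = run = 1
    for prev, cur in zip(tosses, tosses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Estimate how often 200 fair tosses contain a run of six or more
random.seed(0)
trials = 2000
hits = sum(longest_run(random.choices("HT", k=200)) >= 6
           for _ in range(trials))
print(hits / trials)   # a very high fraction, consistent with Hill's test
```

The estimate comes out well above 90 percent, which is why a faked sequence with no long runs stands out so sharply.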

A recent case in which Benford's law was used to uncover fraud involved an American leisure and travel company. The company's audit director discovered something that looked odd in claims made by the supervisor of the company's healthcare department. The first two digits of the healthcare payments showed a suspicious spike in numbers starting with 65 when checked for conformity to Benford's law. (A more detailed version of the law also predicts the frequencies of the second and higher digits; see Appendix 9.) A careful audit revealed thirteen fraudulent checks for amounts between $6,500 and $6,599. The District Attorney's office in Brooklyn, New York, also used tests based on Benford's law to detect accounting fraud in seven New York companies.

Benford's law contains precisely some of the ingredients of surprise that most mathematicians find attractive. It reflects a simple but astonishing fact-that the distribution of first digits is extremely peculiar. In addition, that fact turned out to be difficult to explain. Numbers, with the Golden Ratio as an outstanding example, sometimes provide a more instantaneous gratification. For example, many professional and amateur mathematicians are fascinated by primes. Why are primes so important? Because the "Fundamental Theorem of Arithmetic" states that every whole number larger than 1 can be expressed as a product of prime numbers. (Note that 1 is not considered a prime.) For example, 28 = 2 × 2 × 7; 66 = 2 × 3 × 11; and so on. Primes are so rooted in the human comprehension of mathematics that in his book Cosmos, when Carl Sagan (1934-1996) had to describe what type of signal an intelligent civilization would transmit into space, he chose as an example the sequence of primes. Sagan wrote: "It is extremely unlikely that any natural physical process could transmit radio messages containing prime numbers only. If we received such a message we would deduce a civilization out there that was at least fond of prime numbers." The great Euclid proved more than two thousand years ago that infinitely many primes exist. (The elegant proof is presented in Appendix 10.) Yet most people will agree that some primes are more attractive than others. Some mathematicians, such as the French François Le Lionnais and the American Chris Caldwell, maintain lists of "remarkable" or "titanic" numbers. Here are just a few intriguing examples from the great treasury of primes: The number 1,234,567,891, which cycles through all the digits, is a prime.
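Both the factorization examples and the primality of 1,234,567,891 are easy to verify by trial division; the sketch below (function name mine) does exactly that:

```python
def is_prime(n):
    """Simple trial division; plenty fast for numbers of this size."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

print(is_prime(1234567891))        # True: the digit-cycling number is prime
print(28 == 2 * 2 * 7, 66 == 2 * 3 * 11)   # True True
```

Trial division only needs to test divisors up to the square root of n, about 35,000 here, so the check is instantaneous.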

The 230th largest prime, which has 6,400 digits, is composed of 6,399 9s and only one 8.

The number composed of 317 iterations of the digit 1 is a prime.

The 713th largest prime can be written as (10^1951) × (10^1975 + 1991991991991991991991991) + 1, and it was discovered in-you guessed it-1991.

From the perspective of this book, the connection between primes and Fibonacci numbers is of special interest. With the exception of the number 3, every Fibonacci number that is a prime also has a prime subscript (its order in the sequence). For example, the Fibonacci number 233 is a prime, and it is the thirteenth (also a prime) number in the sequence. The converse, however, is not true: The fact that the subscript is a prime does not necessarily mean that the number is also a prime. For example, the nineteenth number (19 is a prime) is 4181, and 4181 is not a prime-it is equal to 113 × 37.
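The relationship between prime Fibonacci numbers and prime subscripts can be checked in a few lines, indexing from 1 as the text does (the names here are mine):

```python
def is_prime(n):
    """Trial division; adequate for small Fibonacci numbers."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# fib[k] holds the kth Fibonacci number, with fib[1] = fib[2] = 1
fib = [0, 1, 1]
while len(fib) < 31:
    fib.append(fib[-1] + fib[-2])

print(fib[13], is_prime(fib[13]), is_prime(13))  # 233 True True
print(fib[19], is_prime(fib[19]))                # 4181 False (= 113 * 37)
```

The first line confirms the example of 233 at the prime index 13; the second shows that a prime index is no guarantee, since the nineteenth Fibonacci number factors as 113 × 37.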