The Singularity Is Near: When Humans Transcend Biology - Part 3

Part 3

The future GNR (Genetics, Nanotechnology, Robotics) age (see chapter 5) will come about not from the exponential explosion of computation alone but rather from the interplay and myriad synergies that will result from multiple intertwined technological advances. As every point on the exponential-growth curves underlying this panoply of technologies represents an intense human drama of innovation and competition, we must consider it remarkable that these chaotic processes result in such smooth and predictable exponential trends. This is not a coincidence but is an inherent feature of evolutionary processes.

When the human-genome scan got under way in 1990 critics pointed out that given the speed with which the genome could then be scanned, it would take thousands of years to finish the project. Yet the fifteen-year project was completed slightly ahead of schedule, with a first draft in 2003.43 The cost of DNA sequencing came down from about ten dollars per base pair in 1990 to a couple of pennies in 2004 and is rapidly continuing to fall (see the figure below).44 There has been smooth exponential growth in the amount of DNA sequence data that has been collected (see the figure below).45 A dramatic recent example of this improving capacity was the sequencing of the SARS virus, which took only thirty-one days from the identification of the virus, compared to more than fifteen years for HIV.46 Of course, we expect to see exponential growth in electronic memories such as RAM. But note how the trend on this logarithmic graph (below) proceeds smoothly through different technology paradigms: vacuum tube to discrete transistor to integrated circuit.47 However, growth in the price-performance of magnetic (disk-drive) memory is not a result of Moore's Law. 
This exponential trend reflects the squeezing of data onto a magnetic substrate, rather than transistors onto an integrated circuit, a completely different technical challenge pursued by different engineers and different companies.48 Exponential growth in communications technology (measures for communicating information; see the figure below) has for many years been even more explosive than in processing or memory measures of computation and is no less significant in its implications. Again, this progression involves far more than just shrinking transistors on an integrated circuit but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and other factors.49 We are currently moving away from the tangle of wires in our cities and in our daily lives through wireless communication, the power of which is doubling every ten to eleven months (see the figure below).

The figures below show the overall growth of the Internet based on the number of hosts (Web-server computers). These two charts plot the same data, but one is on a logarithmic axis and the other is linear. As has been discussed, while technology progresses exponentially, we experience it in the linear domain. From the perspective of most observers, nothing was happening in this area until the mid-1990s, when seemingly out of nowhere the World Wide Web and e-mail exploded into view. But the emergence of the Internet into a worldwide phenomenon was readily predictable by examining exponential trend data in the early 1980s from the ARPANET, predecessor to the Internet.50 This figure shows the same data on a linear scale.51 In addition to servers, the actual data traffic on the Internet has also doubled every year.52 To accommodate this exponential growth, the data transmission speed of the Internet backbone (as represented by the fastest announced backbone communication channels actually used for the Internet) has itself grown exponentially. Note that in the figure "Internet Backbone Bandwidth" below, we can actually see the progression of S-curves: the acceleration fostered by a new paradigm, followed by a leveling off as the paradigm runs out of steam, followed by renewed acceleration through paradigm shift.53 Another trend that will have profound implications for the twenty-first century is the pervasive movement toward miniaturization. The key feature sizes of a broad range of technologies, both electronic and mechanical, are decreasing, and at an exponential rate. At present, we are shrinking technology by a factor of about four per linear dimension per decade. This miniaturization is a driving force behind Moore's Law, but it's also reflected in the size of all electronic systems-for example, magnetic storage. 
We also see this decrease in the size of mechanical devices, as the figure on the size of mechanical devices illustrates.54 As the salient feature size of a wide range of technologies moves inexorably closer to the multi-nanometer range (less than one hundred nanometers-billionths of a meter), it has been accompanied by a rapidly growing interest in nanotechnology. Nanotechnology science citations have been increasing significantly over the past decade, as noted in the figure below.55 We see the same phenomenon in nanotechnology-related patents (below).56 As we will explore in chapter 5, the genetics (or biotechnology) revolution is bringing the information revolution, with its exponentially increasing capacity and price-performance, to the field of biology. Similarly, the nanotechnology revolution will bring the rapidly increasing mastery of information to materials and mechanical systems. The robotics (or "strong AI") revolution involves the reverse engineering of the human brain, which means coming to understand human intelligence in information terms and then combining the resulting insights with increasingly powerful computational platforms. Thus, all three of the overlapping transformations-genetics, nanotechnology, and robotics-that will dominate the first half of this century represent different facets of the information revolution.

Information, Order, and Evolution: The Insights from Wolfram and Fredkin's Cellular Automata

As I've described in this chapter, every aspect of information and information technology is growing at an exponential pace. Inherent in our expectation of a Singularity taking place in human history is the pervasive importance of information to the future of human experience. We see information at every level of existence. Every form of human knowledge and artistic expression-scientific and engineering ideas and designs, literature, music, pictures, movies-can be expressed as digital information.

Our brains also operate digitally, through discrete firings of our neurons. The wiring of our interneuronal connections can be digitally described, and the design of our brains is specified by a surprisingly small digital genetic code.57 Indeed, all of biology operates through linear sequences of 2-bit DNA base pairs, which in turn control the sequencing of only twenty amino acids in proteins. Molecules form discrete arrangements of atoms. The carbon atom, with its four positions establishing molecular connections, is particularly adept at creating a variety of three-dimensional shapes, which accounts for its central role in both biology and technology. Within the atom, electrons take on discrete energy levels. Other subatomic particles, such as protons, comprise discrete numbers of valence quarks.
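The two-bits-per-base-pair observation can be made concrete in a few lines of code. The sketch below is only an illustration of the information content involved; the base-to-bits mapping and the function names are my own choices, not any standard bioinformatics encoding:

```python
# Each DNA base carries exactly two bits, so four bases fit in one byte.
BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode(seq):
    """Pack a DNA string into a single integer, two bits per base."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | BASE_TO_BITS[base]
    return bits

def decode(bits, length):
    """Recover the DNA string from its packed two-bit representation."""
    bases = "ACGT"
    return "".join(
        bases[(bits >> (2 * (length - 1 - i))) & 0b11] for i in range(length)
    )

# Round-trip: the sequence survives packing and unpacking intact.
assert decode(encode("GATTACA"), 7) == "GATTACA"
print(encode("ACGT"))  # → 27 (binary 00 01 10 11)
```

A codon of three such bases carries six bits (64 combinations), which is more than enough to select among the twenty amino acids, consistent with the text's point that biology is at bottom a digital code.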

Although the formulas of quantum mechanics are expressed in terms of both continuous fields and discrete levels, we do know that continuous levels can be expressed to any desired degree of accuracy using binary data.58 In fact, quantum mechanics, as the word "quantum" implies, is based on discrete values.

Physicist-mathematician Stephen Wolfram provides extensive evidence to show how increasing complexity can originate from a universe that is at its core a deterministic, algorithmic system (a system based on fixed rules with predetermined outcomes). In his book A New Kind of Science, Wolfram offers a comprehensive analysis of how the processes underlying a mathematical construction called "a cellular automaton" have the potential to describe every level of our natural world.59 (A cellular automaton is a simple computational mechanism that, for example, changes the color of each cell on a grid based on the color of adjacent nearby cells according to a transformation rule.) In his view, it is feasible to express all information processes in terms of operations on cellular automata, so Wolfram's insights bear on several key issues related to information and its pervasiveness. Wolfram postulates that the universe itself is a giant cellular-automaton computer. In his hypothesis there is a digital basis for apparently analog phenomena (such as motion and time) and for formulas in physics, and we can model our understanding of physics as the simple transformation of a cellular automaton.

Others have proposed this possibility. Richard Feynman wondered about it in considering the relationship of information to matter and energy. Norbert Wiener heralded a fundamental change in focus from energy to information in his 1948 book Cybernetics and suggested that the transformation of information, not energy, was the fundamental building block of the universe.60 Perhaps the first to postulate that the universe is being computed on a digital computer was Konrad Zuse in 1967.61 Zuse is best known as the inventor of the first working programmable computer, which he developed from 1935 to 1941. An enthusiastic proponent of an information-based theory of physics was Edward Fredkin, who in the early 1980s proposed a "new theory of physics" founded on the idea that the universe is ultimately composed of software. We should not think of reality as consisting of particles and forces, according to Fredkin, but rather as bits of data modified according to computation rules.

Fredkin was quoted by Robert Wright in the 1980s as saying, There are three great philosophical questions. What is life? What is consciousness and thinking and memory and all of that? And how does the universe work? ... [The] "information viewpoint" encompasses all three....What I'm saying is that at the most basic level of complexity an information process runs what we think of as physics. At the much higher level of complexity, life, DNA-you know, the biochemical function-are controlled by a digital information process. Then, at another level, our thought processes are basically information processing....I find the supporting evidence for my beliefs in ten thousand different places....And to me it's just totally overwhelming. It's like there's an animal I want to find. I've found his footprints. I've found his droppings. I've found the half-chewed food. I find pieces of his fur, and so on. In every case it fits one kind of animal, and it's not like any animal anyone's ever seen. People say, Where is this animal? I say, Well, he was here, he's about this big, this that, and the other. And I know a thousand things about him. I don't have him in hand, but I know he's there....What I see is so compelling that it can't be a creature of my imagination.62 In commenting on Fredkin's theory of digital physics, Wright writes, Fredkin ... is talking about an interesting characteristic of some computer programs, including many cellular automata: there is no shortcut to finding out what they will lead to. This, indeed, is a basic difference between the "analytical" approach associated with traditional mathematics, including differential equations, and the "computational" approach associated with algorithms. 
You can predict a future state of a system susceptible to the analytic approach without figuring out what states it will occupy between now and then, but in the case of many cellular automata, you must go through all the intermediate states to find out what the end will be like: there is no way to know the future except to watch it unfold....Fredkin explains: "There is no way to know the answer to some question any faster than what's going on."... Fredkin believes that the universe is very literally a computer and that it is being used by someone, or something, to solve a problem. It sounds like a good-news/bad-news joke: the good news is that our lives have purpose; the bad news is that their purpose is to help some remote hacker estimate pi to nine jillion decimal places.63 Fredkin went on to show that although energy is needed for information storage and retrieval, we can arbitrarily reduce the energy required to perform any particular example of information processing, and that this operation has no lower limit.64 That implies that information rather than matter and energy may be regarded as the more fundamental reality.65 I will return to Fredkin's insight regarding the extreme lower limit of energy required for computation and communication in chapter 3, since it pertains to the ultimate power of intelligence in the universe.

Wolfram builds his theory primarily on a single, unified insight. The discovery that has so excited Wolfram is a simple rule he calls cellular automata rule 110 and its behavior. (There are some other interesting automata rules, but rule 110 makes the point well enough.) Most of Wolfram's analyses deal with the simplest possible cellular automata, specifically those that involve just a one-dimensional line of cells, two possible colors (black and white), and rules based only on the two immediately adjacent cells. For each transformation, the color of a cell depends only on its own previous color and that of the cell on the left and the cell on the right. Thus, there are eight possible input situations (that is, three combinations of two colors). Each rule maps all combinations of these eight input situations to an output (black or white). So there are 2^8 (256) possible rules for such a one-dimensional, two-color, adjacent-cell automaton. Half the 256 possible rules map onto the other half because of left-right symmetry. We can map half of them again because of black-white equivalence, so we are left with 64 rule types. Wolfram illustrates the action of these automata with two-dimensional patterns in which each line (along the y-axis) represents a subsequent generation of applying the rule to each cell in that line.

Most of the rules are degenerate, meaning they create repetitive patterns of no interest, such as cells of a single color, or a checkerboard pattern. Wolfram calls these rules class 1 automata. Some rules produce arbitrarily spaced streaks that remain stable, and Wolfram classifies these as belonging to class 2. Class 3 rules are a bit more interesting, in that recognizable features (such as triangles) appear in the resulting pattern in an essentially random order.

However, it was class 4 automata that gave rise to the "aha" experience that resulted in Wolfram's devoting a decade to the topic. The class 4 automata, of which rule 110 is the quintessential example, produce surprisingly complex patterns that do not repeat themselves. We see in them artifacts such as lines at various angles, aggregations of triangles, and other interesting configurations. The resulting pattern, however, is neither regular nor completely random; it appears to have some order but is never predictable.

Why is this important or interesting? Keep in mind that we began with the simplest possible starting point: a single black cell. The process involves repetitive application of a very simple rule.66 From such a repetitive and deterministic process, one would expect repetitive and predictable behavior. There are two surprising results here. One is that the results produce apparent randomness. However, the results are more interesting than pure randomness, which itself would become boring very quickly. There are discernible and interesting features in the designs produced, so that the pattern has some order and apparent intelligence. Wolfram includes a number of examples of these images, many of which are rather lovely to look at.

Wolfram makes the following point repeatedly: "Whenever a phenomenon is encountered that seems complex it is taken almost for granted that the phenomenon must be the result of some underlying mechanism that is itself complex. But my discovery that simple programs can produce great complexity makes it clear that this is not in fact correct."67 I do find the behavior of rule 110 rather delightful. Furthermore, the idea that a completely deterministic process can produce results that are completely unpredictable is of great importance, as it provides an explanation for how the world can be inherently unpredictable while still based on fully deterministic rules.68 However, I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We've seen this phenomenon in fractals, chaos and complexity theory, and self-organizing systems (such as neural nets and Markov models), which start with simple networks but organize themselves to produce apparently intelligent behavior.

At a different level, we see it in the human brain itself, which starts with only about thirty to one hundred million bytes of specification in the compressed genome yet ends up with a complexity that is about a billion times greater.69 It is also not surprising that a deterministic process can produce apparently random results. We have had random-number generators (for example, the "randomize" function in Wolfram's program Mathematica) that use deterministic processes to produce sequences that pass statistical tests for randomness. These programs date back to the earliest days of computer software, such as the first version of Fortran. However, Wolfram does provide a thorough theoretical foundation for this observation.
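A deterministic process producing statistically random-looking output is easy to exhibit. The sketch below uses a classic linear congruential generator with the well-known Park-Miller ("MINSTD") constants; it stands in for, but is not, the actual algorithm behind Mathematica's randomize function:

```python
def lcg(seed, count):
    """Return `count` pseudo-random floats in [0, 1) from a fixed seed,
    using the Park-Miller linear congruential generator."""
    state = seed
    values = []
    for _ in range(count):
        state = (16807 * state) % 2147483647   # 7^5 mod (2^31 - 1)
        values.append(state / 2147483647)
    return values

a = lcg(seed=42, count=10000)
b = lcg(seed=42, count=10000)
assert a == b                      # fully deterministic: same seed, same sequence
mean = sum(a) / len(a)
assert 0.45 < mean < 0.55          # yet statistically it looks uniform
```

The same tension the text describes is visible here: the procedure is as predictable as arithmetic, but its output passes casual statistical inspection as random.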

Wolfram goes on to describe how simple computational mechanisms can exist in nature at different levels, and he shows that these simple and deterministic mechanisms can produce all of the complexity that we see and experience. He provides myriad examples, such as the pleasing designs of pigmentation on animals, the shape and markings on shells, and patterns of turbulence (such as the behavior of smoke in the air). He makes the point that computation is essentially simple and ubiquitous. The repetitive application of simple computational transformations, according to Wolfram, is the true source of complexity in the world.

My own view is that this is only partly correct. I agree with Wolfram that computation is all around us, and that some of the patterns we see are created by the equivalent of cellular automata. But a key issue to ask is this: Just how complex are the results of class 4 automata?

Wolfram effectively sidesteps the issue of degrees of complexity. I agree that a degenerate pattern such as a chessboard has no complexity. Wolfram also acknowledges that mere randomness does not represent complexity either, because pure randomness becomes predictable in its pure lack of predictability. It is true that the interesting features of class 4 automata are neither repeating nor purely random, so I would agree that they are more complex than the results produced by other classes of automata.

However, there is nonetheless a distinct limit to the complexity produced by class 4 automata. The many images of such automata in Wolfram's book all have a similar look to them, and although they are nonrepeating, they are interesting (and intelligent) only to a degree. Moreover, they do not continue to evolve into anything more complex, nor do they develop new types of features. One could run these for trillions or even trillions of trillions of iterations and the image would remain at the same limited level of complexity. They do not evolve into, say, insects or humans or Chopin preludes or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles displayed in these images.

Complexity is a continuum. Here I define "order" as "information that fits a purpose."70 A completely predictable process has zero order. A high level of information alone does not necessarily imply a high level of order either. A phone book has a lot of information, but the level of order of that information is quite low. A random sequence is essentially pure information (since it is not predictable) but has no order. The output of class 4 automata does possess a certain level of order, and it does survive like other persisting patterns. But the patterns represented by a human being have a far higher level of order, and of complexity.

Human beings fulfill a highly demanding purpose: they survive in a challenging ecological niche. Human beings represent an extremely intricate and elaborate hierarchy of other patterns. Wolfram regards any patterns that combine some recognizable features and unpredictable elements to be effectively equivalent to one another. But he does not show how a class 4 automaton can ever increase its complexity, let alone become a pattern as complex as a human being.

There is a missing link here, one that would account for how one gets from the interesting but ultimately routine patterns of a cellular automaton to the complexity of persisting structures that demonstrate higher levels of intelligence. For example, these class 4 patterns are not capable of solving interesting problems, and no amount of iteration moves them closer to doing so. Wolfram would counter that a rule 110 automaton could be used as a "universal computer."71 However, by itself, a universal computer is not capable of solving intelligent problems without what I would call "software." It is the complexity of the software that runs on a universal computer that is precisely the issue.

One might point out that class 4 patterns result from the simplest possible automata (one-dimensional, two-color, two-neighbor rules). What happens if we increase the dimensionality-for example, go to multiple colors or even generalize these discrete cellular automata to continuous functions? Wolfram addresses all of this quite thoroughly. The results produced from more complex automata are essentially the same as those of the very simple ones. We get the same sorts of interesting but ultimately quite limited patterns. Wolfram makes the intriguing point that we do not need to use more complex rules to get complexity in the end result. But I would make the converse point that we are unable to increase the complexity of the end results through either more complex rules or further iteration. So cellular automata get us only so far.

Can We Evolve Artificial Intelligence from Simple Rules?

So how do we get from these interesting but limited patterns to those of insects or Chopin preludes? One concept we need to take into consideration is conflict-that is, evolution. If we add another simple concept-an evolutionary algorithm-to that of Wolfram's simple cellular automata, we start to get far more exciting and more intelligent results. Wolfram says that the class 4 automata and an evolutionary algorithm are "computationally equivalent." But that is true only on what I consider the "hardware" level. On the software level, the order of the patterns produced is clearly different and of a different order of complexity and usefulness.

An evolutionary algorithm can start with randomly generated potential solutions to a problem, which are encoded in a digital genetic code. We then have the solutions compete with one another in a simulated evolutionary battle. The better solutions survive and procreate in a simulated sexual reproduction in which offspring solutions are created, drawing their genetic code (encoded solutions) from two parents. We can also introduce a rate of genetic mutation. Various high-level parameters of this process, such as the rate of mutation, the rate of offspring, and so on, are appropriately called "God parameters," and it is the job of the engineer designing the evolutionary algorithm to set them to reasonably optimal values. The process is run for many thousands of generations of simulated evolution, and at the end of the process one is likely to find solutions that are of a distinctly higher order than the starting ones.
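The procedure described above can be sketched in a few dozen lines. The fitness function here (maximizing the number of 1s in a bitstring, the textbook "OneMax" problem) and the particular "God parameter" values are illustrative choices of mine, not anything from the text:

```python
import random

def evolve(bits=32, population=60, generations=120, mutation_rate=0.01, seed=0):
    """Toy evolutionary algorithm: evolve a bitstring toward all 1s."""
    rng = random.Random(seed)
    # Randomly generated potential solutions, encoded as digital "genomes".
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(population)]
    fitness = lambda genome: sum(genome)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]          # the better solutions survive
        children = []
        while len(survivors) + len(children) < population:
            mom, dad = rng.sample(survivors, 2)     # simulated sexual reproduction
            cut = rng.randrange(1, bits)
            child = mom[:cut] + dad[cut:]           # genetic code drawn from two parents
            # "God parameter": per-bit chance of a genetic mutation.
            child = [b ^ (rng.random() < mutation_rate) for b in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))   # after many generations, close to the maximum of 32
```

Even this toy version shows the qualitative behavior discussed below: fitness climbs quickly at first, then flattens toward an asymptote once the easy improvements are exhausted.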

The results of these evolutionary (sometimes called genetic) algorithms can be elegant, beautiful, and intelligent solutions to complex problems. They have been used, for example, to create artistic designs and designs for artificial life-forms, as well as to execute a wide range of practical assignments such as designing jet engines. Genetic algorithms are one approach to "narrow" artificial intelligence-that is, creating systems that can perform particular functions that used to require the application of human intelligence.

But something is still missing. Although genetic algorithms are a useful tool in solving specific problems, they have never achieved anything resembling "strong AI"-that is, aptitude resembling the broad, deep, and subtle features of human intelligence, particularly its powers of pattern recognition and command of language. Is the problem that we are not running the evolutionary algorithms long enough? After all, humans evolved through a process that took billions of years. Perhaps we cannot re-create that process with just a few days or weeks of computer simulation. This won't work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won't help.

A third level (beyond the ability of cellular processes to produce apparent randomness and genetic algorithms to produce focused intelligent solutions) is to perform evolution on multiple levels. Conventional genetic algorithms allow evolution only within the confines of a narrow problem and a single means of evolution. The genetic code itself needs to evolve; the rules of evolution need to evolve. Nature did not stay with a single chromosome, for example. There have been many levels of indirection incorporated in the natural evolutionary process. And we require a complex environment in which the evolution takes place.

To build strong AI we will have the opportunity to short-circuit this process, however, by reverse-engineering the human brain, a project well under way, thereby benefiting from the evolutionary process that has already taken place. We will be applying evolutionary algorithms within these solutions just as the human brain does. For example, the fetal wiring is initially random within constraints specified in the genome in at least some regions. Recent research shows that areas having to do with learning undergo more change, whereas structures having to do with sensory processing experience less change after birth.72 Wolfram makes the valid point that certain (indeed, most) computational processes are not predictable. In other words, we cannot predict future states without running the entire process. I agree with him that we can know the answer in advance only if somehow we can simulate a process at a faster speed. Given that the universe runs at the fastest speed it can run, there is usually no way to short-circuit the process. However, we have the benefits of the billions of years of evolution that have already taken place, which are responsible for the greatly increased order of complexity in the natural world. We can now benefit from it by using our evolved tools to reverse engineer the products of biological evolution (most importantly, the human brain).

Yes, it is true that some phenomena in nature that may appear complex at some level are merely the results of simple underlying computational mechanisms that are essentially cellular automata at work. The interesting pattern of triangles on a "tent olive" shell (cited extensively by Wolfram) or the intricate and varied patterns of a snowflake are good examples. I don't think this is a new observation, in that we've always regarded the design of snowflakes to derive from a simple molecular computation-like building process. However, Wolfram does provide us with a compelling theoretical foundation for expressing these processes and their resulting patterns. But there is more to biology than class 4 patterns.

Another important thesis of Wolfram's lies in his thorough treatment of computation as a simple and ubiquitous phenomenon. Of course, we've known for more than a century that computation is inherently simple: we can build any possible level of complexity from a foundation of the simplest possible manipulations of information.

For example, Charles Babbage's nineteenth-century mechanical computer (which never ran) offered only a handful of operation codes, yet provided (within its memory capacity and speed) the same kinds of transformations that modern computers do. The complexity of Babbage's invention stemmed only from the details of its design, which indeed proved too difficult for Babbage to implement using the technology available to him.

The Turing machine, Alan Turing's theoretical conception of a universal computer in 1936, provides only seven very basic commands, yet can be organized to perform any possible computation.73 The existence of a "universal Turing machine," which can simulate any possible Turing machine that is described on its tape memory, is a further demonstration of the universality and simplicity of information.74 In The Age of Intelligent Machines, I showed how any computer could be constructed from "a suitable number of [a] very simple device," namely, the "nor" gate.75 This is not exactly the same demonstration as a universal Turing machine, but it does demonstrate that any computation can be performed by a cascade of this very simple device (which is simpler than rule 110), given the right software (which would include the connection description of the nor gates).76 Although we need additional concepts to describe an evolutionary process that creates intelligent solutions to problems, Wolfram's demonstration of the simplicity and ubiquity of computation is an important contribution to our understanding of the fundamental significance of information in the world.
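The NOR-gate claim is easy to verify directly. The sketch below, using Python functions as stand-in "gates," builds NOT, OR, AND, and XOR from cascades of NOR alone and checks every truth table; it illustrates the universality argument rather than reproducing the book's original construction.

```python
def nor(a, b):
    """The NOR gate: outputs 1 only when both inputs are 0."""
    return int(not (a or b))

# Every other logic gate can be expressed as a cascade of NOR gates.
def not_(a):    return nor(a, a)                      # NOT x = x NOR x
def or_(a, b):  return nor(nor(a, b), nor(a, b))      # OR = NOT(NOR)
def and_(a, b): return nor(nor(a, a), nor(b, b))      # AND = NOR of the NOTs
def xor(a, b):  # XOR = (a OR b) AND NOT(a AND b), all reduced to NORs
    return and_(or_(a, b), nor(and_(a, b), and_(a, b)))

# Exhaustively confirm all four derived gates against their truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == int(not a)
        assert or_(a, b) == int(a or b)
        assert and_(a, b) == int(a and b)
        assert xor(a, b) == a ^ b
print("all gates reproduced from NOR alone")
```

Since any Boolean circuit, and hence any computation over finite memory, can be composed from these gates, the cascade-of-a-single-simple-device argument follows.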

MOLLY 2004: You've got machines evolving at an accelerating pace. What about humans?

RAY: You mean biological humans?

MOLLY 2004: Yes.

CHARLES DARWIN: Biological evolution is presumably continuing, is it not?

RAY: Well, biology at this level is evolving so slowly that it hardly counts. I mentioned that evolution works through indirection. It turns out that the older paradigms such as biological evolution do continue but at their old speed, so they are eclipsed by the new paradigms. Biological evolution for animals as complex as humans takes tens of thousands of years to make noticeable, albeit still small, differences. The entire history of human cultural and technological evolution has taken place on that timescale. Yet we are now poised to ascend beyond the fragile and slow creations of biological evolution in a mere several decades. Current progress is on a scale that is a thousand to a million times faster than biological evolution.

NED LUDD: What if not everyone wants to go along with this?

RAY: I wouldn't expect they would. There are always early and late adopters. There's always a leading edge and a trailing edge to technology or to any evolutionary change. We still have people pushing plows, but that hasn't slowed down the adoption of cell phones, telecommunications, the Internet, biotechnology, and so on. However, the lagging edge does ultimately catch up. We have societies in Asia that jumped from agrarian economies to information economies, without going through industrialization.

NED: That may be so, but the digital divide is getting worse.

RAY: I know that people keep saying that, but how can that possibly be true? The number of humans is growing only very slowly. The number of digitally connected humans, no matter how you measure it, is growing rapidly. A larger and larger fraction of the world's population is getting electronic communicators and leapfrogging our primitive phone-wiring system by hooking up to the Internet wirelessly, so the digital divide is rapidly diminishing, not growing.

MOLLY 2004: I still feel that the have/have not issue doesn't get enough attention. There's more we can do.

RAY: Indeed, but the overriding, impersonal forces of the law of accelerating returns are nonetheless moving in the right direction. Consider that technology in a particular area starts out unaffordable and not working very well. Then it becomes merely expensive and works a little better. The next step is the product becomes inexpensive and works really well. Finally, the technology becomes virtually free and works great. It wasn't long ago that when you saw someone using a portable phone in a movie, he or she was a member of the power elite, because only the wealthy could afford portable phones. Or as a more poignant example, consider drugs for AIDS. They started out not working very well and costing more than ten thousand dollars per year per patient. Now they work a lot better and are down to several hundred dollars per year in poor countries. Unfortunately with regard to AIDS, we're not yet at the working great and costing almost nothing stage. The world is beginning to take somewhat more effective action on AIDS, but it has been tragic that more has not been done. Millions of lives, most in Africa, have been lost as a result. But the effect of the law of accelerating returns is nonetheless moving in the right direction. And the time gap between leading and lagging edge is itself contracting. Right now I estimate this lag at about a decade. In a decade, it will be down to about half a decade.

The Singularity as Economic Imperative

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.-GEORGE BERNARD SHAW, "MAXIMS FOR REVOLUTIONISTS," MAN AND SUPERMAN, 1903

All progress is based upon a universal innate desire on the part of every organism to live beyond its income.-SAMUEL BUTLER, NOTEBOOKS, 1912

If I were just setting out today to make that drive to the West Coast to start a new business, I would be looking at biotechnology and nanotechnology.-JEFF BEZOS, FOUNDER AND CEO OF AMAZON.COM

Get Eighty Trillion Dollars-Limited Time Only

You will get eighty trillion dollars just by reading this section and understanding what it says. For complete details, see below. (It's true that an author will do just about anything to keep your attention, but I'm serious about this statement. Until I return to a further explanation, however, do read the first sentence of this paragraph carefully.)

The law of accelerating returns is fundamentally an economic theory. Contemporary economic theory and policy are based on outdated models that emphasize energy costs, commodity prices, and capital investment in plant and equipment as key driving factors, while largely overlooking computational capacity, memory, bandwidth, the size of technology, intellectual property, knowledge, and other increasingly vital (and increasingly increasing) constituents that are driving the economy.

It's the economic imperative of a competitive marketplace that is the primary force driving technology forward and fueling the law of accelerating returns. In turn, the law of accelerating returns is transforming economic relationships. Economic imperative is the equivalent of survival in biological evolution. We are moving toward more intelligent and smaller machines as the result of myriad small advances, each with its own particular economic justification. Machines that can more precisely carry out their missions have increased value, which explains why they are being built. There are tens of thousands of projects that are advancing the various aspects of the law of accelerating returns in diverse incremental ways.

Regardless of near-term business cycles, support for "high tech" in the business community, and in particular for software development, has grown enormously. When I started my optical character recognition (OCR) and speech-synthesis company (Kurzweil Computer Products) in 1974, high-tech venture deals in the United States totaled less than thirty million dollars (in 1974 dollars). Even during the recent high-tech recession (2000-2003), the figure was almost one hundred times greater.79 We would have to repeal capitalism and every vestige of economic competition to stop this progression.

It is important to point out that we are progressing toward the "new" knowledge-based economy exponentially but nonetheless gradually.80 When the so-called new economy did not transform business models overnight, many observers were quick to dismiss the idea as inherently flawed. It will be another couple of decades before knowledge dominates the economy, but it will represent a profound transformation when it happens.

We saw the same phenomenon in the Internet and telecommunications boom-and-bust cycles. The booms were fueled by the valid insight that the Internet and distributed electronic communication represented fundamental transformations. But when these transformations did not occur in what were unrealistic time frames, more than two trillion dollars of market capitalization vanished. As I point out below, the actual adoption of these technologies progressed smoothly with no indication of boom or bust.

Virtually all of the economic models taught in economics classes and used by the Federal Reserve Board to set monetary policy, by government agencies to set economic policy, and by economic forecasters of all kinds are fundamentally flawed in their view of long-term trends. That's because they are based on the "intuitive linear" view of history (the assumption that the pace of change will continue at the current rate) rather than the historically based exponential view. The reason that these linear models appear to work for a while is the same reason most people adopt the intuitive linear view in the first place: exponential trends appear to be linear when viewed and experienced for a brief period of time, particularly in the early stages of an exponential trend, when not much is happening. But once the "knee of the curve" is achieved and the exponential growth explodes, the linear models break down.
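The divergence between the two views is easy to quantify. This sketch compares the two forecasting assumptions for a hypothetical capability index, currently at 100, that has lately been doubling every two years (about 41.4 percent per year); all numbers are illustrative, not drawn from any actual economic series.

```python
def linear_forecast(value, annual_gain, years):
    """Intuitive linear view: assume the current absolute pace continues."""
    return value + annual_gain * years

def exponential_forecast(value, growth_rate, years):
    """Historical exponential view: assume the current *rate* continues."""
    return value * (1 + growth_rate) ** years

# Index at 100, recently gaining ~41.4 units/year (a doubling every 2 years).
for years in (1, 5, 10, 20):
    lin = linear_forecast(100, 41.4, years)
    exp = exponential_forecast(100, 0.414, years)
    print(f"{years:2d} yrs: linear {lin:8.0f}   exponential {exp:12.0f}")
```

At one year the two forecasts are nearly identical, which is why the linear view feels right; at twenty years the exponential forecast is roughly a hundred times larger, which is where policy models built on the linear view break down.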

As this book is being written, the country is debating changing the Social Security program based on projections that go out to 2042, approximately the time frame I've estimated for the Singularity (see the next chapter). This economic policy review is unusual in the very long time frames involved. The predictions are based on linear models of longevity increases and economic growth that are highly unrealistic. On the one hand, longevity increases will vastly outstrip the government's modest expectations. On the other hand, people won't be seeking to retire at sixty-five when they have the bodies and brains of thirty-year-olds. Most important, the economic growth from the "GNR" technologies (see chapter 5) will greatly outstrip the 1.7 percent per year estimates being used (which understate by half even our experience over the past fifteen years).

The exponential trends underlying productivity growth are just beginning this explosive phase. The U.S. real gross domestic product has grown exponentially, fostered by improving productivity from technology, as seen in the figure below.81 Some critics credit population growth with the exponential growth in GDP, but we see the same trend on a per-capita basis (see the figure below).82 Note that the underlying exponential growth in the economy is a far more powerful force than periodic recessions. Most important, recessions, including depressions, represent only temporary deviations from the underlying curve. Even the Great Depression represents only a minor blip in the context of the underlying pattern of growth. In each case, the economy ends up exactly where it would have been had the recession/depression never occurred.

The world economy is continuing to accelerate. The World Bank released a report in late 2004 indicating that the past year had been more prosperous than any year in history, with worldwide economic growth of 4 percent.83 Moreover, the highest rates were in the developing countries: more than 6 percent. Even omitting China and India, the rate was over 5 percent. In the East Asian and Pacific region, the number of people living in extreme poverty went from 470 million in 1990 to 270 million in 2001, and is projected by the World Bank to be under 20 million by 2015. Other regions are showing similar, although somewhat less dramatic, economic growth.

Productivity (economic output per worker) has also been growing exponentially. These statistics are in fact greatly understated because they do not fully reflect significant improvements in the quality and features of products and services. It is not the case that "a car is a car"; there have been major upgrades in safety, reliability, and features. Certainly, one thousand dollars of computation today is far more powerful than one thousand dollars of computation ten years ago (by a factor of more than one thousand). There are many other such examples. Pharmaceutical drugs are increasingly effective because they are now being designed to precisely carry out modifications to the exact metabolic pathways underlying disease and aging processes with minimal side effects (note that the vast majority of drugs on the market today still reflect the old paradigm; see chapter 5). Products ordered in five minutes on the Web and delivered to your door are worth more than products that you have to fetch yourself. Clothes custom-manufactured for your unique body are worth more than clothes you happen to find on a store rack. These sorts of improvements are taking place in most product categories, and none of them is reflected in the productivity statistics.

The statistical methods underlying productivity measurements tend to factor out gains by essentially concluding that we still get only one dollar of products and services for a dollar, despite the fact that we get much more for that dollar. (Computers are an extreme example of this phenomenon, but it is pervasive.) University of Chicago professor Pete Klenow and University of Rochester professor Mark Bils estimate that the value in constant dollars of existing goods has been increasing at 1.5 percent per year for the past twenty years because of qualitative improvements.84 This still does not account for the introduction of entirely new products and product categories (for example, cell phones, pagers, pocket computers, downloaded songs, and software programs). It does not consider the burgeoning value of the Web itself. How do we value the availability of free resources such as online encyclopedias and search engines that increasingly provide effective gateways to human knowledge?

The Bureau of Labor Statistics, which is responsible for the inflation statistics, uses a model that incorporates an estimate of quality growth of only 0.5 percent per year.85 If we use Klenow and Bils's conservative estimate, this reflects a systematic underestimate of quality improvement and a resulting overestimate of inflation by at least 1 percent per year. And that still does not account for new product categories.
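The arithmetic behind that 1 percent gap compounds meaningfully over time; here is the calculation, using only the two quality-growth figures cited above.

```python
# The BLS model assumes 0.5%/yr quality improvement; Klenow and Bils
# estimate 1.5%/yr. The difference is the annual inflation overstatement.
bls_quality = 0.005
klenow_bils_quality = 0.015
gap = klenow_bils_quality - bls_quality          # 0.01, i.e. 1%/yr

# Compounded over 20 years, a 1%/yr mismeasurement means reported price
# levels drift about 22% above quality-adjusted reality.
years = 20
cumulative_overstatement = (1 + gap) ** years - 1
print(f"{cumulative_overstatement:.1%}")  # about 22.0%
```

A seemingly small annual measurement error thus becomes a large distortion in any long-horizon comparison of real prices or real growth.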

Despite these weaknesses in the productivity statistical methods, gains in productivity are now actually reaching the steep part of the exponential curve. Labor productivity grew at 1.6 percent per year until 1994, then rose at 2.4 percent per year, and is now growing even more rapidly. Manufacturing productivity in output per hour grew at 4.4 percent annually from 1995 to 1999, durables manufacturing at 6.5 percent per year. In the first quarter of 2004, the seasonally adjusted annual rate of productivity change was 4.6 percent in the business sector and 5.9 percent in durable goods manufacturing.86 We see smooth exponential growth in the value produced by an hour of labor over the last half century (see the figure below). Again, this trend does not take into account the vastly greater value of a dollar's power in purchasing information technologies (which has been doubling about once a year in overall price-performance).87

Deflation ... a Bad Thing?

In 1846 we believe there was not a single garment in our country sewed by machinery; in that year the first American patent of a sewing machine was issued. At the present moment thousands are wearing clothes which have been stitched by iron fingers, with a delicacy rivaling that of a Cashmere maiden.-SCIENTIFIC AMERICAN, 1853

As this book is being written, a worry of many mainstream economists on both the political right and the left is deflation. On the face of it, having your money go further would appear to be a good thing. The economists' concern is that if consumers can buy what they need and want with fewer dollars, the economy will shrink (as measured in dollars). This ignores, however, the inherently insatiable needs and desires of human consumers. The revenues of the semiconductor industry, which "suffers" 40 to 50 percent deflation per year, have nonetheless grown by 17 percent each year over the past half century.88 Since the economy is in fact expanding, this theoretical implication of deflation should not cause concern.
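The semiconductor figures imply something striking about unit volumes, which a line of arithmetic makes explicit (taking 45 percent as an illustrative midpoint of the 40 to 50 percent range).

```python
# revenue = price * units, so revenue growth alongside steep price
# deflation forces unit output to grow even faster.
price_change = -0.45        # prices fall 45%/yr (midpoint of 40-50%)
revenue_growth = 0.17       # yet revenue grows 17%/yr

unit_growth = (1 + revenue_growth) / (1 + price_change) - 1
print(f"implied unit growth: {unit_growth:.0%} per year")  # about 113%
```

In other words, consumption of computation has been more than doubling every year: the insatiable-demand point made above, stated as arithmetic.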

The 1990s and early 2000s have seen the most powerful deflationary forces in history, which explains why we are not seeing significant rates of inflation. Yes, it's true that historically low unemployment, high asset values, economic growth, and other such factors are inflationary, but these factors are offset by the exponential trends in the price-performance of all information-based technologies: computation, memory, communications, biotechnology, miniaturization, and even the overall rate of technical progress. These technologies deeply affect all industries. We are also undergoing massive disintermediation in the channels of distribution through the Web and other new communication technologies, as well as escalating efficiencies in operations and administration.

Since the information industry is becoming increasingly influential in all sectors of the economy, we are seeing the increasing impact of the IT industry's extraordinary deflation rates. Deflation during the Great Depression in the 1930s was due to a collapse of consumer confidence and a collapse of the money supply. Today's deflation is a completely different phenomenon, caused by rapidly increasing productivity and the increasing pervasiveness of information in all its forms.

All of the technology trend charts in this chapter represent ma.s.sive deflation. There are many examples of the impact of these escalating efficiencies. BP Amoco's cost for finding oil in 2000 was less than one dollar per barrel, down from nearly ten dollars in 1991. Processing an Internet transaction costs a bank one penny, compared to more than one dollar using a teller.

It is important to point out that a key implication of nanotechnology is that it will bring the economics of software to hardware-that is, to physical products. Software prices are deflating even more quickly than those of hardware (see the figure below).

The impact of distributed and intelligent communications has been felt perhaps most intensely in the world of business. Despite dramatic mood swings on Wall Street, the extraordinary values ascribed to so-called e-companies during the 1990s boom era reflected a valid perception: the business models that have sustained businesses for decades are in the early phases of a radical transformation. New models based on direct personalized communication with the customer will transform every industry, resulting in massive disintermediation of the middle layers that have traditionally separated the customer from the ultimate source of products and services. There is, however, a pace to all revolutions, and the investments and stock market valuations in this area expanded way beyond the early phases of this economic S-curve.

The boom-and-bust cycle in these information technologies was strictly a capital-markets (stock-value) phenomenon. Neither boom nor bust is apparent in the actual business-to-consumer (B2C) and business-to-business (B2B) data (see the figure on the next page). Actual B2C revenues grew smoothly from $1.8 billion in 1997 to $70 billion in 2002. B2B had similarly smooth growth from $56 billion in 1999 to $482 billion in 2002.90 In 2004 it is approaching $1 trillion. We certainly do not see any evidence of business cycles in the actual price-performance of the underlying technologies, as I discussed extensively above.
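Those revenue figures correspond to remarkably steady compound growth, which can be checked directly:

```python
# Compound annual growth rate (CAGR) implied by the e-commerce figures above.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

b2c = cagr(1.8, 70.0, 5)     # $1.8B (1997) -> $70B (2002)
b2b = cagr(56.0, 482.0, 3)   # $56B (1999) -> $482B (2002)
print(f"B2C {b2c:.0%}/yr, B2B {b2b:.0%}/yr")  # roughly 108%/yr and 105%/yr
```

Both series were roughly doubling every year straight through the stock market's boom and bust, which is the sense in which the cycle was a capital-markets phenomenon rather than a technology-adoption one.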

Expanding access to knowledge is also changing power relationships. Patients increasingly approach visits to their physician armed with a sophisticated understanding of their medical condition and their options. Consumers of virtually everything from toasters, cars, and homes to banking and insurance are now using automated software agents to quickly identify the right choices with the optimal features and prices. Web services such as eBay are rapidly connecting buyers and sellers in unprecedented ways.

The wishes and desires of customers, often unknown even to themselves, are rapidly becoming the driving force in business relationships. Well-connected clothes shoppers, for example, are not going to be satisfied for much longer with settling for whatever items happen to be left hanging on the rack of their local store. Instead, they will select just the right materials and styles by viewing how many possible combinations look on a three-dimensional image of their own body (based on a detailed body scan), and then having the choices custom-manufactured.

The current disadvantages of Web-based commerce (for example, limitations in the ability to directly interact with products and the frequent frustrations of interacting with inflexible menus and forms instead of human personnel) will gradually dissolve as the trends move robustly in favor of the electronic world. By the end of this decade, computers will disappear as distinct physical objects, with displays built in our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality. Thus, "going to a Web site" will mean entering a virtual-reality environment-at least for the visual and auditory senses-where we can directly interact with products and people, both real and simulated. Although the simulated people will not be up to human standards-at least not by 2009-they will be quite satisfactory as sales agents, reservation clerks, and research assistants. Haptic (tactile) interfaces will enable us to touch products and people. It is difficult to identify any lasting advantage of the old brick-and-mortar world that will not ultimately be overcome by the rich interactive interfaces that are soon to come.

These developments will have significant implications for the real-estate industry. The need to congregate workers in offices will gradually diminish. From the experience of my own companies, we are already able to effectively organize geographically disparate teams, something that was far more difficult a decade ago. The full-immersion visual-auditory virtual-reality environments, which will be ubiquitous during the second decade of this century, will hasten the trend toward people living and working wherever they wish. Once we have full-immersion virtual-reality environments incorporating all of the senses, which will be feasible by the late 2020s, there will be no reason to utilize real offices. Real estate will become virtual.

As Sun Tzu pointed out, "knowledge is power," and another ramification of the law of accelerating returns is the exponential growth of human knowledge, including intellectual property.

[Here is a closeup of the upper-right section of the above figure. -ed.]

None of this means that cycles of recession will disappear immediately. Recently, the country experienced an economic slowdown and technology-sector recession and then a gradual recovery. The economy is still burdened with some of the underlying dynamics that historically have caused cycles of recession: excessive commitments such as overinvestment in capital-intensive projects and the overstocking of inventories. However, because the rapid dissemination of information, sophisticated forms of online procurement, and increasingly transparent markets in all industries have diminished the impact of this cycle, "recessions" are likely to have less direct impact on our standard of living. That appears to have been the case in the mini-recession that we experienced in 1991-1993 and was even more evident in the most recent recession in the early 2000s. The underlying long-term growth rate will continue at an exponential rate.

Moreover, innovation and the rate of paradigm shift are not noticeably affected by the minor deviations caused by economic cycles. All of the technologies exhibiting exponential growth shown in the above charts are continuing without losing a beat through recent economic slowdowns. Market acceptance also shows no evidence of boom and bust. The overall growth of the economy reflects completely new forms and layers of wealth and value that did not previously exist, or at least that did not previously constitute a significant portion of the economy, such as new forms of nanoparticle-based materials, genetic information, intellectual property, communication portals, Web sites, bandwidth, software, databases, and many other new technology-based categories.

The overall information-technology sector is rapidly increasing its share of the economy and is increasingly influential on all other sectors, as noted in the figure below.92

Another implication of the law of accelerating returns is exponential growth in education and learning. Over the past 120 years, we have increased our investment in K-12 education (per student and in constant dollars) by a factor of ten. There has been a hundredfold increase in the number of college students. Automation started by amplifying the power of our muscles and in recent times has been amplifying the power of our minds. So for the past two centuries, automation has been eliminating jobs at the bottom of the skill ladder while creating new (and better-paying) jobs at the top of the skill ladder. The ladder has been moving up, and thus we have been exponentially increasing investments in education at all levels (see the figure below).93

Oh, and about that "offer" at the beginning of this precis, consider that present stock values are based on future expectations. Given that the (literally) shortsighted linear intuitive view represents the ubiquitous outlook, the common wisdom in economic expectations is dramatically understated. Since stock prices reflect the consensus of a buyer-seller market, the prices reflect the underlying linear assumption that most people share regarding future economic growth. But the law of accelerating returns clearly implies that the growth rate will continue to grow exponentially, because the rate of progress will continue to accelerate.

MOLLY 2004: But wait a second, you said that I would get eighty trillion dollars if I read and understood this section of the chapter.

RAY: That's right.