The Singularity Is Near: When Humans Transcend Biology - Part 18

Another refinement of the Dyson concept is that the heat radiated by one shell could be captured and used by a parallel shell that is placed at a position farther from the sun. Computer scientist Robert Bradbury points out that there could be any number of such layers and proposes a computer aptly called a "Matrioshka brain," organized as a series of nested shells around the sun or another star. One such conceptual design analyzed by Sandberg is called Uranos, which is designed to use 1 percent of the nonhydrogen, nonhelium mass in the solar system (not including the sun), or about 10^24 kilograms, a bit smaller than Zeus.[77] Uranos provides about 10^39 computational nodes, an estimated 10^51 cps of computation, and about 10^52 bits of storage.
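
A quick sanity check on these exponents (the Uranos totals are from the text above; the per-node figures are my own derived illustration):

```python
# Sandberg's Uranos design, per the figures quoted above.
nodes = 1e39          # computational nodes
total_cps = 1e51      # calculations per second across the whole design
storage_bits = 1e52   # total storage in bits

# Derived per-node figures (illustrative division, not Sandberg's numbers).
cps_per_node = total_cps / nodes      # ~1e12 cps per node
bits_per_node = storage_bits / nodes  # ~1e13 bits per node

print(f"{cps_per_node:.0e} cps and {bits_per_node:.0e} bits per node")
```

Each of the 10^39 nodes would thus run at roughly 10^12 cps with about 10^13 bits of local storage, modest per-node figures by design, with the vast totals coming from sheer node count.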

Computation is already a widely distributed resource rather than a centralized one, and my expectation is that the trend toward greater decentralization will continue. However, as our civilization approaches the densities of computation envisioned above, the distribution of the vast number of processors is likely to take on characteristics of these conceptual designs. For example, the idea of Matrioshka shells would take maximal advantage of solar power and heat dissipation. Note that the computational powers of these solar system-scale computers will be achieved, according to my projections in chapter 2, around the end of this century.

Bigger or Smaller. Given that the computational capacity of our solar system is in the range of 10^70 to 10^80 cps, we will reach these limits early in the twenty-second century, according to my projections. The history of computation tells us that the power of computation expands both inward and outward. Over the last several decades we have been able to place twice as many computational elements (transistors) on each integrated circuit chip about every two years, which represents inward growth (toward greater densities of computation per kilogram of matter). But we are also expanding outward, in that the number of chips is expanding (currently) at a rate of about 8.3 percent per year.[78] It is reasonable to expect both types of growth to continue, and for the outward growth rate to increase significantly once we approach the limits of inward growth (with three-dimensional circuits).
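
The two growth modes compound together; a minimal numerical sketch, assuming the doubling and 8.3 percent rates above hold steady (the normalized starting values are hypothetical):

```python
# Inward growth: twice as many elements per chip about every two years,
# i.e. a factor of sqrt(2) per year.
# Outward growth: the number of chips grows about 8.3 percent per year.
# Total computation compounds both factors.
density = 1.0  # computational elements per chip (normalized)
chips = 1.0    # number of chips (normalized)

for year in range(10):
    density *= 2 ** 0.5
    chips *= 1.083

total = density * chips
print(f"After 10 years: {total:.1f}x as much computation")
# prints "After 10 years: 71.0x as much computation"
```

Inward doubling dominates on this timescale (a factor of 32 in a decade versus about 2.2 from outward growth), which is why the outward rate only becomes the primary driver once inward shrinking approaches its limits.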

Moreover, once we bump up against the limits of matter and energy in our solar system to support the expansion of computation, we will have no choice but to expand outward as the primary form of growth. We discussed earlier the speculation that finer scales of computation might be feasible: on the scale of subatomic particles. Such pico- or femtotechnology would permit continued growth of computation by continued shrinking of feature sizes. Even if this is feasible, however, there are likely to be major technical challenges in mastering subnanoscale computation, so the pressure to expand outward will remain.

Expanding Beyond the Solar System. Once we do expand our intelligence beyond the solar system, at what rate will this take place? The expansion will not start out at the maximum speed, but it will quickly achieve a speed within a vanishingly small margin of the maximum speed (the speed of light or greater). Some critics have objected to this notion, insisting that it would be very difficult to send people (or advanced organisms from any other ETI civilization) and equipment at near the speed of light without crushing them. Of course, we could avoid this problem by accelerating slowly, but another problem would be collisions with interstellar material. But again, this objection entirely misses the point of the nature of intelligence at this stage of development.
Early ideas about the spread of ETI through the galaxy and universe were based on the migration and colonization patterns from our human history and basically involved sending settlements of humans (or, in the case of other ETI civilizations, intelligent organisms) to other star systems. This would allow them to multiply through normal biological reproduction and then continue to spread in like manner from there.

But as we have seen, by late in this century nonbiological intelligence on the Earth will be many trillions of times more powerful than biological intelligence, so sending biological humans on such a mission would not make sense. The same would be true for any other ETI civilization. This is not simply a matter of biological humans sending robotic probes. Human civilization by that time will be nonbiological for all practical purposes.

These nonbiological sentries would not need to be very large and in fact would primarily comprise information. It is true, however, that just sending information would not be sufficient, for some material-based device that can have a physical impact on other star and planetary systems must be present. However, it would be sufficient for the probes to be self-replicating nanobots (note that a nanobot has nanoscale features but that the overall size of a nanobot is measured in microns).[79] We could send swarms of many trillions of them, with some of these "seeds" taking root in another planetary system and then replicating by finding the appropriate materials, such as carbon and other needed elements, and building copies of themselves.

Once established, the nanobot colony could obtain the additional information it needs to optimize its intelligence from pure information transmissions that involve only energy, not matter, and that are sent at the speed of light. Unlike large organisms such as humans, these nanobots, being extremely small, could travel at close to the speed of light. Another scenario would be to dispense with the information transmissions and embed the information needed in the nanobots' own memory. That's an engineering decision we can leave to these future superengineers.

The software files could be spread out among billions of devices. Once one or a few of them get a "foothold" by self-replicating at a destination, the now much larger system could gather up the nanobots traveling in the vicinity so that from that time on, the bulk of the nanobots sent in that direction do not simply fly by. In this way, the now established colony can gather up the information, as well as the distributed computational resources, it needs to optimize its intelligence.

The Speed of Light Revisited. In this way the maximum speed of expansion of a solar system-size intelligence (that is, a type II civilization) into the rest of the universe would be very close to the speed of light. We currently understand the maximum speed to transmit information and material objects to be the speed of light, but there are at least suggestions that this may not be an absolute limit.

We have to regard the possibility of circumventing the speed of light as speculative, and my projections of the profound changes that our civilization will undergo in this century make no such assumption. However, the potential to engineer around this limit has important implications for the speed with which we will be able to colonize the rest of the universe with our intelligence.

Recent experiments have measured the flight time of photons at nearly twice the speed of light, a result of quantum uncertainty on their position.[80] However, this result is really not useful for this analysis, because it does not actually allow information to be communicated faster than the speed of light, and we are fundamentally interested in communication speed.

Another intriguing suggestion of an action at a distance that appears to occur at speeds far greater than the speed of light is quantum disentanglement. Two particles created together may be "quantum entangled," meaning that while a given property (such as the phase of its spin) is not determined in either particle, the resolution of this ambiguity of the two particles will occur at the same moment. In other words, if the undetermined property is measured in one of the particles, it will also be determined as the exact same value at the same instant in the other particle, even if the two have traveled far apart. There is an appearance of some sort of communication link between the particles.

This quantum disentanglement has been measured at many times the speed of light, meaning that resolution of the state of one particle appears to resolve the state of the other particle in an amount of time that is a small fraction of the time it would take if the information were transmitted from one particle to the other at the speed of light (in theory, the time lapse is zero). For example, Dr. Nicolas Gisin of the University of Geneva sent quantum-entangled photons in opposite directions through optical fibers across Geneva. When the photons were seven miles apart, they each encountered a glass plate. Each photon had to "decide" whether to pass through or bounce off the plate (which previous experiments with non-quantum-entangled photons have shown to be a random choice). Yet because the two photons were quantum entangled, they made the same decision at the same moment. Many repetitions provided the identical result.[81]

The experiments have not absolutely ruled out the explanation of a hidden variable, that is, an unmeasurable state of each particle that is in phase (set to the same point in a cycle), so that when one particle is measured (for example, has to decide its path through or off a glass plate), the other has the same value of this internal variable. So the "choice" is generated by an identical setting of this hidden variable, rather than being the result of actual communication between the two particles. However, most quantum physicists reject this interpretation.
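
The hidden-variable interpretation is easy to caricature in code: if both photons of a pair carry the same concealed setting fixed at creation, their "choices" at the distant glass plates will always agree with no communication at all (a toy illustration, not a model of real quantum optics; Bell-test statistics rule out the simplest models of this kind):

```python
import random

def create_pair():
    # Both photons share one hidden setting, fixed at creation.
    hidden = random.random()
    return hidden, hidden

def decide(hidden):
    # The pass/reflect "choice" at the glass plate is fully
    # determined by the photon's hidden variable.
    return "pass" if hidden < 0.5 else "reflect"

# Many repetitions give identical results at both distant plates,
# with no communication between them at measurement time.
results_agree = all(decide(a) == decide(b)
                    for a, b in (create_pair() for _ in range(1000)))
print(results_agree)  # True
```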

Yet even if we accept the interpretation of these experiments as indicating a quantum link between the two particles, the apparent communication is transmitting only randomness (profound quantum randomness) at speeds far greater than the speed of light, not predetermined information, such as the bits in a file. This communication of quantum random decisions to different points in space could have value, however, in applications such as providing encryption codes. Two different locations could receive the same random sequence, which could then be used by one location to encrypt a message and by the other to decipher it. It would not be possible for anyone else to eavesdrop on the encryption code without destroying the quantum entanglement and thereby being detected. There are already commercial encryption products incorporating this principle. This is a fortuitous application of quantum mechanics because of the possibility that another application of quantum mechanics, quantum computing, may put an end to the standard method of encryption based on factoring large numbers (which quantum computing, with a large number of entangled qubits, would be good at).

Yet another faster-than-the-speed-of-light phenomenon is the speed with which galaxies can recede from each other as a result of the expansion of the universe. If the distance between two galaxies is greater than what is called the Hubble distance, then these galaxies are receding from one another at faster than the speed of light.[82] This does not violate Einstein's special theory of relativity, because this velocity is caused by space itself expanding rather than the galaxies moving through space. However, it also doesn't help us transmit information at speeds faster than the speed of light.

Wormholes. There are two exploratory conjectures that suggest ways to circumvent the apparent limitation of the speed of light. The first is to use wormholes: folds of the universe in dimensions beyond the three visible ones. This does not really involve traveling at speeds faster than the speed of light but merely means that the topology of the universe is not the simple three-dimensional space that naive physics implies. However, if wormholes or folds in the universe are ubiquitous, perhaps these shortcuts would allow us to get everywhere quickly. Or perhaps we can even engineer them.

In 1935 Einstein and physicist Nathan Rosen formulated "Einstein-Rosen" bridges as a way of describing electrons and other particles in terms of tiny space-time tunnels.[83] In 1955 physicist John Wheeler described these tunnels as "wormholes," introducing the term for the first time.[84] His analysis of wormholes showed them to be fully consistent with the theory of general relativity, which describes space as essentially curved in another dimension.

In 1988 California Institute of Technology physicists Michael Morris, Kip Thorne, and Uri Yurtsever explained in some detail how such wormholes could be engineered.[85] Responding to a question from Carl Sagan, they described the energy requirements to keep wormholes of varying sizes open. They also pointed out that, based on quantum fluctuation, so-called empty space is continually generating tiny wormholes the size of subatomic particles. By adding energy and following other requirements of both quantum physics and general relativity (two fields that have been notoriously difficult to unify), these wormholes could be expanded to allow objects larger than subatomic particles to travel through them. Sending humans through them would not be impossible but extremely difficult. However, as I pointed out above, we really only need to send nanobots plus information, which could pass through wormholes measured in microns rather than meters.

Thorne and his Ph.D. students Morris and Yurtsever also described a method consistent with general relativity and quantum mechanics that could establish wormholes between the Earth and faraway locations. Their proposed technique involves expanding a spontaneously generated, subatomic-size wormhole to a larger size by adding energy, then stabilizing it using superconducting spheres in the two connected "wormhole mouths." After the wormhole is expanded and stabilized, one of its mouths (entrances) is transported to another location, while keeping its connection to the other entrance, which remains on Earth.

Thorne offered the example of moving the remote entrance via a small rocket ship to the star Vega, which is twenty-five light-years away. By traveling at very close to the speed of light, the journey, as measured by clocks on the ship, would be relatively brief. For example, if the ship traveled at 99.995 percent of the speed of light, the clocks on the ship would move ahead by only three months. Although the time for the voyage, as measured on Earth, would be around twenty-five years, the stretched wormhole would maintain the direct link between the locations as well as the points in time of the two locations. Thus, even as experienced on Earth, it would take only three months to establish the link between Earth and Vega, because the two ends of the wormhole would maintain their time relationship. Suitable engineering improvements could allow such links to be established anywhere in the universe. By traveling arbitrarily close to the speed of light, the time required to establish a link-for both communications and transportation-to other locations in the universe, even those millions of billions of light years away, could be relatively brief.
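
Thorne's figures follow directly from the standard special-relativistic time-dilation formula; a short check of the arithmetic:

```python
import math

v = 0.99995         # ship speed as a fraction of the speed of light
distance_ly = 25.0  # Earth-to-Vega distance in light-years

gamma = 1 / math.sqrt(1 - v ** 2)       # Lorentz factor, about 100 here
earth_years = distance_ly / v           # ~25 years as measured on Earth
ship_months = earth_years / gamma * 12  # proper time aboard the ship

print(f"gamma = {gamma:.1f}, ship time = {ship_months:.2f} months")
# prints "gamma = 100.0, ship time = 3.00 months"
```

At 99.995 percent of light speed the Lorentz factor is almost exactly 100, so the twenty-five-year Earth-frame voyage compresses to three months of ship time, just as described.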

Matt Visser of Washington University in St. Louis has suggested refinements to the Morris-Thorne-Yurtsever concept that provide a more stable environment, which might even allow humans to travel through wormholes.[86] In my view, however, this is unnecessary. By the time engineering projects of this scale might be feasible, human intelligence will long since have been dominated by its nonbiological component. Sending molecular-scale self-replicating devices along with software will be sufficient and much easier. Anders Sandberg estimates that a one-nanometer wormhole could transmit a formidable 10^69 bits per second.[87] Physicist David Hochberg and Vanderbilt University's Thomas Kephart point out that shortly after the Big Bang, gravity was strong enough to have provided the energy required to spontaneously create massive numbers of self-stabilizing wormholes.[88] A significant portion of these wormholes is likely to still be around and may be pervasive, providing a vast network of corridors that reach far and wide throughout the universe. It might be easier to discover and use these natural wormholes than to create new ones.
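
To give Sandberg's 10^69 bits per second some scale, one can divide it into the Uranos storage figure quoted earlier (my own juxtaposition, purely illustrative):

```python
bandwidth_bps = 1e69  # Sandberg's estimate for a one-nanometer wormhole
uranos_bits = 1e52    # total storage of the Uranos design discussed above

seconds = uranos_bits / bandwidth_bps
print(f"{seconds:.0e} seconds to transmit all of Uranos")  # 1e-17 seconds
```

At that rate the entire memory of a solar system-scale computer would pass through in about 10^-17 seconds, roughly the time light takes to travel a few nanometers.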

Changing the Speed of Light. The second conjecture is to change the speed of light itself. In chapter 3, I mentioned the finding that appears to indicate that the speed of light has differed by 4.5 parts out of 10^8 over the past two billion years.

In 2001 astronomer John Webb discovered that the so-called fine-structure constant varied when he examined light from sixty-eight quasars (very bright young galaxies).[89] The speed of light is one of four constants that the fine-structure constant comprises, so the result is another suggestion that varying conditions in the universe may cause the speed of light to change. Cambridge University physicist John Barrow and his colleagues are in the process of running a two-year tabletop experiment that will test the ability to engineer a small change in the speed of light.[90] Suggestions that the speed of light can vary are consistent with recent theories that it was significantly higher during the inflationary period of the universe (an early phase in its history, when it underwent very rapid expansion). These experiments showing possible variation in the speed of light clearly need corroboration and are showing only small changes. But if confirmed, the findings would be profound, because it is the role of engineering to take a subtle effect and greatly amplify it. Again, the mental experiment we should perform now is not whether contemporary human scientists, such as we are, can perform these engineering feats but whether or not a human civilization that has expanded its intelligence by trillions of trillions will be able to do so.
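
For reference, the four constants bundled into the fine-structure constant are the electron charge, the permittivity of free space, the reduced Planck constant, and the speed of light; its standard definition is:

```latex
\alpha = \frac{e^{2}}{4\pi\varepsilon_{0}\hbar c} \approx \frac{1}{137}
```

A measured change in alpha could therefore reflect a change in c, though it could equally arise from the other constants, which is why Webb's quasar result is suggestive rather than conclusive.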

For now we can say that ultrahigh levels of intelligence will expand outward at the speed of light, while recognizing that our contemporary understanding of physics suggests that this may not be the actual limit of the speed of expansion or, even if the speed of light proves to be immutable, that this limit may not restrict reaching other locations quickly through wormholes.

The Fermi Paradox Revisited. Recall that biological evolution is measured in millions and billions of years. So if there are other civilizations out there, they would be spread out in terms of development by huge spans of time. The SETI assumption implies that there should be billions of ETIs (among all the galaxies), so there should be billions that lie far ahead of us in their technological progress. Yet it takes only a few centuries at most from the advent of computation for such civilizations to expand outward at at least light speed. Given this, how can it be that we have not noticed them? The conclusion I reach is that it is likely (although not certain) that there are no such other civilizations. In other words, we are in the lead. That's right, our humble civilization with its pickup trucks, fast food, and persistent conflicts (and computation!) is in the lead in terms of the creation of complexity and order in the universe.

Now how can that be? Isn't this extremely unlikely, given the sheer number of likely inhabited planets? Indeed it is very unlikely. But equally unlikely is the existence of our universe, with its set of laws of physics and related physical constants, so exquisitely, precisely what is needed for the evolution of life to be possible. But by the anthropic principle, if the universe didn't allow the evolution of life we wouldn't be here to notice it. Yet here we are. So by a similar anthropic principle, we're here in the lead in the universe. Again, if we weren't here, we would not be noticing it.

Let's consider some arguments against this perspective.

Perhaps there are extremely advanced technological civilizations out there, but we are outside their light sphere of intelligence. That is, they haven't gotten here yet. Okay, in this case, SETI will still fail to find ETIs because we won't be able to see (or hear) them, at least not unless and until we find a way to break out of our light sphere (or the ETI does so) by manipulating the speed of light or finding shortcuts, as I discussed above.

Perhaps they are among us, but have decided to remain invisible to us. If they have made that decision, they are likely to succeed in avoiding being noticed. Again, it is hard to believe that every single ETI has made the same decision.

John Smart has suggested in what he calls the "transcension" scenario that once civilizations saturate their local region of space with their intelligence, they create a new universe (one that will allow continued exponential growth of complexity and intelligence) and essentially leave this universe.[91] Smart suggests that this option may be so attractive that it is the consistent and inevitable outcome of an ETI's having reached an advanced stage of its development, and it thereby explains the Fermi Paradox.

Incidentally, I have always considered the science-fiction notion of large spaceships piloted by huge, squishy creatures similar to us to be very unlikely. Seth Shostak comments that "the reasonable probability is that any extraterrestrial intelligence we will detect will be machine intelligence, not biological intelligence like us." In my view this is not simply a matter of biological beings sending out machines (as we do today) but rather that any civilization sophisticated enough to make the trip here would have long since passed the point of merging with its technology and would not need to send physically bulky organisms and equipment.

If they exist, why would they come here? One mission would be observation: to gather knowledge (just as we observe other species on Earth today). Another would be to seek matter and energy to provide additional substrate for their expanding intelligence. The intelligence and equipment needed for such exploration and expansion (by an ETI, or by us when we get to that stage of development) would be extremely small, basically nanobots and information transmissions.

It appears that our solar system has not yet been turned into someone else's computer. And if this other civilization is only observing us for knowledge's sake and has decided to remain silent, SETI will fail to find it, because if an advanced civilization does not want us to notice it, it would succeed in that desire. Keep in mind that such a civilization would be vastly more intelligent than we are today. Perhaps it will reveal itself to us when we achieve the next level of our evolution, specifically merging our biological brains with our technology, which is to say, after the Singularity. However, given that the SETI assumption implies that there are billions of such highly developed civilizations, it seems unlikely that all of them have made the same decision to stay out of our way.

The Anthropic Principle Revisited. We are struck with two possible applications of an anthropic principle, one for the remarkable biofriendly laws of our universe, and one for the actual biology of our planet.

Let's first consider the anthropic principle as applied to the universe in more detail. The question concerning the universe arises because we notice that the constants in nature are precisely what are required for the universe to have grown in complexity. If the cosmological constant, the Planck constant, and the many other constants of physics were set to just slightly different values, atoms, molecules, stars, planets, organisms, and humans would have been impossible. The universe appears to have exactly the right rules and constants. (The situation is reminiscent of Stephen Wolfram's observation that certain cellular-automata rules [see the sidebar on p. 85] allow for the creation of remarkably complex and unpredictable patterns, whereas other rules lead to very uninteresting patterns, such as alternating lines or simple triangles in a repeating or random configuration.) How do we account for the remarkable design of the laws and constants of matter and energy in our universe that have allowed for the increasing complexity we see in biological and technological evolution? Freeman Dyson once commented that "the universe in some sense knew we were coming." Complexity theorist James Gardner describes the question in this way:

Physicists feel that the task of physics is to predict what happens in the lab, and they are convinced that string theory, or M theory can do this....But they have no idea why the universe should ... have the standard model, with the values of its 40+ parameters that we observe. How can anyone believe that something so messy is the unique prediction of string theory? It amazes me that people can have such blinkered vision, that they can concentrate just on the final state of the universe, and not ask how and why it got there.92

The perplexity of how it is that the universe is so "friendly" to biology has led to various formulations of the anthropic principle. The "weak" version of the anthropic principle points out simply that if it were not the case, we wouldn't be here to wonder about it. So only in a universe that allowed for increasing complexity could the question even be asked. Stronger versions of the anthropic principle state that there must be more to it; advocates of these versions are not satisfied with a mere lucky coincidence. This has opened the door for advocates of intelligent design to claim that this is the proof of God's existence that scientists have been asking for.

The Multiverse. Recently a more Darwinian approach to the strong anthropic principle has been proposed. Consider that it is possible for mathematical equations to have multiple solutions. For example, if we solve for x in the equation x^2 = 4, x may be 2 or -2. Some equations allow for an infinite number of solutions. In the equation (a - b) × x = 0, x can take on any one of an infinite number of values if a = b (since any number multiplied by zero equals zero). It turns out that the equations for recent string theories allow in principle for an infinite number of solutions. To be more precise, since the spatial and temporal resolution of the universe is limited to the very small Planck scale, the number of solutions is not literally infinite but merely vast. String theory implies, therefore, that many different sets of natural constants are possible.
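The two algebraic examples above can be checked mechanically. The snippet below is an illustration of my own, not part of the original argument: it confirms that x^2 = 4 has exactly two solutions, and that when a = b, every value of x satisfies (a - b) × x = 0.

```python
# x^2 = 4: search the small integers for solutions; exactly two exist.
roots = sorted(r for r in range(-10, 11) if r * r == 4)
assert roots == [-2, 2]

# (a - b) * x = 0: when a equals b, any x is a solution,
# because any number multiplied by zero equals zero.
a = b = 3
for x in (-1.5, 0, 2, 1e6):
    assert (a - b) * x == 0
```

String theory's equations are, of course, vastly more complicated, but the structural point is the same: one set of equations, many admissible solutions.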

This has led to the idea of the multiverse: that there exist a vast number of universes, of which our humble universe is only one. Consistent with string theory, each of these universes can have a different set of physical constants.

Evolving Universes. Leonard Susskind, one of the developers of string theory, and Lee Smolin, a theoretical physicist and expert on quantum gravity, have suggested that universes give rise to other universes in a natural, evolutionary process that gradually refines the natural constants. In other words, it is not by accident that the rules and constants of our universe are ideal for evolving intelligent life but rather that they themselves evolved to be that way.

In Smolin's theory the mechanism that gives rise to new universes is the creation of black holes, so those universes best able to produce black holes are the ones that are most likely to reproduce. According to Smolin a universe best able to create increasing complexity-that is, biological life-is also most likely to create new universe-generating black holes. As he explains, "Reproduction through black holes leads to a multiverse in which the conditions for life are common-essentially because some of the conditions life requires, such as plentiful carbon, also boost the formation of stars massive enough to become black holes."93 Susskind's proposal differs in detail from Smolin's but is also based on black holes, as well as the nature of "inflation," the force that caused the very early universe to expand rapidly.

Intelligence as the Destiny of the Universe. In The Age of Spiritual Machines, I introduced a related idea-namely, that intelligence would ultimately permeate the universe and would decide the destiny of the cosmos:

How relevant is intelligence to the universe? ... The common wisdom is not very. Stars are born and die; galaxies go through their cycles of creation and destruction; the universe itself was born in a big bang and will end with a crunch or a whimper, we're not yet sure which. But intelligence has little to do with it. Intelligence is just a bit of froth, an ebullition of little creatures darting in and out of inexorable universal forces. The mindless mechanism of the universe is winding up or down to a distant future, and there's nothing intelligence can do about it. That's the common wisdom. But I don't agree with it. My conjecture is that intelligence will ultimately prove more powerful than these big impersonal forces....So will the universe end in a big crunch, or in an infinite expansion of dead stars, or in some other manner? In my view, the primary issue is not the mass of the universe, or the possible existence of antigravity, or of Einstein's so-called cosmological constant. Rather, the fate of the universe is a decision yet to be made, one which we will intelligently consider when the time is right.94

Complexity theorist James Gardner combined my suggestion on the evolution of intelligence throughout the universe with Smolin's and Susskind's concepts of evolving universes. Gardner conjectures that it is specifically the evolution of intelligent life that enables offspring universes.95 Gardner builds on British astronomer Martin Rees's observation that "what we call the fundamental constants-the numbers that matter to physicists-may be secondary consequences of the final theory, rather than direct manifestations of its deepest and most fundamental level." To Smolin it is merely coincidence that black holes and biological life both need similar conditions (such as large amounts of carbon), so in his conception there is no explicit role for intelligence, other than that it happens to be the by-product of certain biofriendly circumstances. In Gardner's conception it is intelligent life that creates its successors.

Gardner writes that "we and other living creatures throughout the cosmos are part of a vast, still undiscovered transterrestrial community of lives and intelligences spread across billions of galaxies and countless parsecs who are collectively engaged in a portentous mission of truly cosmic importance. Under the Biocosm vision, we share a common fate with that community-to help shape the future of the universe and transform it from a collection of lifeless atoms into a vast, transcendent mind." To Gardner the laws of nature, and the precisely balanced constants, "function as the cosmic counterpart of DNA: they furnish the 'recipe' by which the evolving cosmos acquires the capacity to generate life and ever more capable intelligence."

My own view is consistent with Gardner's belief in intelligence as the most important phenomenon in the universe. I do have a disagreement with Gardner on his suggestion of a "vast ... transterrestrial community of lives and intelligences spread across billions of galaxies." We don't yet see evidence that such a community beyond Earth exists. The community that matters may be just our own unassuming civilization here. As I pointed out above, although we can fashion all kinds of reasons why each particular intelligent civilization may remain hidden from us (for example, they destroyed themselves, or they have decided to remain invisible or stealthy, or they've switched all of their communications away from electromagnetic transmissions, and so on), it is not credible to believe that every single civilization out of the billions that should be there (according to the SETI assumption) has some reason to be invisible.

The Ultimate Utility Function. We can fashion a conceptual bridge between Susskind's and Smolin's idea of black holes being the "utility function" (the property being optimized in an evolutionary process) of each universe in the multiverse and the conception of intelligence as the utility function that I share with Gardner. As I discussed in chapter 3, the computational power of a computer is a function of its mass and its computational efficiency. Recall that a rock has significant mass but extremely low computational efficiency (that is, virtually all of the transactions of its particles are effectively random). Most of the particle interactions in a human are random also, but on a logarithmic scale humans are roughly halfway between a rock and the ultimate small computer.

A computer in the range of the ultimate computer has a very high computational efficiency. Once we achieve an optimal computational efficiency, the only way to increase the computational power of a computer would be to increase its mass. If we increase the mass enough, its gravitational force becomes strong enough to cause it to collapse into a black hole. So a black hole can be regarded as the ultimate computer.

Of course, not any black hole will do. Most black holes, like most rocks, are performing lots of random transactions but no useful computation. But a well-organized black hole would be the most powerful conceivable computer in terms of cps per liter.

Hawking Radiation. There has been a long-standing debate about whether or not we can transmit information into a black hole, have it usefully transformed, and then retrieve it. Stephen Hawking's conception of transmissions from a black hole involves particle-antiparticle pairs that are created near the event horizon (the point of no return near a black hole, beyond which matter and energy are unable to escape). When this spontaneous creation occurs, as it does everywhere in space, the particle and antiparticle travel in opposite directions. If one member of the pair travels into the event horizon (never to be seen again), the other will fly away from the black hole.

Some of these particles will have sufficient energy to escape its gravitation and result in what has been called Hawking radiation.96 Prior to Hawking's analysis it was thought that black holes were, well, black; with his insight we realized that they actually give off a continual shower of energetic particles. But according to Hawking this radiation is random, since it originates from random quantum events near the event boundary. So a black hole may contain an ultimate computer, according to Hawking, but according to his original conception, no information can escape a black hole, so this computer could never transmit its results.

In 1997 Hawking and fellow physicist Kip Thorne (the wormhole scientist) made a bet with California Institute of Technology's John Preskill. Hawking and Thorne maintained that the information that entered a black hole was lost, and any computation that might occur inside the black hole, useful or otherwise, could never be transmitted outside of it, whereas Preskill maintained that the information could be recovered.97 The loser was to give the winner some useful information in the form of an encyclopedia.

In the intervening years the consensus in the physics community steadily moved away from Hawking, and on July 21, 2004, Hawking admitted defeat and acknowledged that Preskill had been correct after all: that information sent into a black hole is not lost. It could be transformed inside the black hole and then transmitted outside it. According to this understanding, what happens is that the particle that flies away from the black hole remains quantum entangled with its antiparticle that disappeared into the black hole. If that antiparticle inside the black hole becomes involved in a useful computation, then these results will be encoded in the state of its entangled partner particle outside of the black hole.

Accordingly Hawking sent Preskill an encyclopedia on the game of cricket, but Preskill rejected it, insisting on a baseball encyclopedia, which Hawking had flown over for a ceremonial presentation.

Assuming that Hawking's new position is indeed correct, the ultimate computers that we can create would be black holes. Therefore a universe that is well designed to create black holes would be one that is well designed to optimize its intelligence. Susskind and Smolin argued merely that biology and black holes both require the same kind of materials, so a universe that was optimized for black holes would also be optimized for biology. Recognizing that black holes are the ultimate repository of intelligent computation, however, we can conclude that the utility function of optimizing black-hole production and that of optimizing intelligence are one and the same.

Why Intelligence Is More Powerful than Physics. There is another reason to apply an anthropic principle. It may seem remarkably unlikely that our planet is in the lead in terms of technological development, but as I pointed out above, by a weak anthropic principle, if we had not evolved, we would not be here discussing this issue.

As intelligence saturates the matter and energy available to it, it turns dumb matter into smart matter. Although smart matter still nominally follows the laws of physics, it is so extraordinarily intelligent that it can harness the most subtle aspects of the laws to manipulate matter and energy to its will. So it would at least appear that intelligence is more powerful than physics. What I should say is that intelligence is more powerful than cosmology. That is, once matter evolves into smart matter (matter fully saturated with intelligent processes), it can manipulate other matter and energy to do its bidding (through suitably powerful engineering). This perspective is not generally considered in discussions of future cosmology. It is assumed that intelligence is irrelevant to events and processes on a cosmological scale.

Once a planet yields a technology-creating species and that species creates computation (as has happened here), it is only a matter of a few centuries before its intelligence saturates the matter and energy in its vicinity, and it begins to expand outward at at least the speed of light (with some suggestions of circumventing this limit). Such a civilization will then overcome gravity (through exquisite and vast technology) and other cosmological forces-or, to be fully accurate, it will maneuver and control these forces-and engineer the universe it wants. This is the goal of the Singularity.

A Universe-Scale Computer. How long will it take for our civilization to saturate the universe with our vastly expanded intelligence? Seth Lloyd estimates there are about 10^80 particles in the universe, with a theoretical maximum capacity of about 10^90 cps. In other words a universe-scale computer would be able to compute at 10^90 cps.98 To arrive at those estimates, Lloyd took the observed density of matter-about one hydrogen atom per cubic meter-and from this figure computed the total energy in the universe. Dividing this energy figure by the Planck constant, he got about 10^90 cps. The universe is about 10^17 seconds old, so in round numbers there have been a maximum of about 10^107 calculations in it thus far. With each particle able to store about 10^10 bits in all of its degrees of freedom (including its position, trajectory, spin, and so on), the state of the universe represents about 10^90 bits of information at each point in time.

We do not need to contemplate devoting all of the mass and energy of the universe to computation. If we were to apply 0.01 percent, that would still leave 99.99 percent of the mass and energy unmodified, but would still result in a potential of about 10^86 cps. Based on our current understanding, we can only approximate these orders of magnitude. Intelligence at anything close to these levels will be so vast that it will be able to perform these engineering feats with enough care so as not to disrupt whatever natural processes it considers important to preserve.
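The bookkeeping behind these figures is simple multiplication of exponents, and can be verified in a few lines. The sketch below uses the rough round values quoted above; only the orders of magnitude are meaningful, not the mantissas.

```python
import math

MAX_CPS = 1e90          # Lloyd's theoretical maximum computation rate
AGE_SECONDS = 1e17      # rough age of the universe in seconds
PARTICLES = 1e80        # estimated particles in the universe
BITS_PER_PARTICLE = 1e10  # bits per particle across all degrees of freedom

# Total operations so far: rate times age, about 10^107.
assert round(math.log10(MAX_CPS * AGE_SECONDS)) == 107

# State of the universe: about 10^90 bits at each point in time.
assert round(math.log10(PARTICLES * BITS_PER_PARTICLE)) == 90

# Devoting only 0.01 percent of capacity still yields about 10^86 cps.
assert round(math.log10(MAX_CPS * 1e-4)) == 86
```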

The Holographic Universe. Another perspective on the maximum information storage and processing capability of the universe comes from a speculative recent theory of the nature of information. According to the "holographic universe" theory the universe is actually a two-dimensional array of information written on its surface, so its conventional three-dimensional appearance is an illusion.99 In essence, the universe, according to this theory, is a giant hologram.

The information is written at a very fine scale, governed by the Planck length. So the maximum amount of information in the universe is its surface area divided by the square of the Planck length, which comes to about 10^120 bits. There does not appear to be enough matter in the universe to encode this much information, so the limits of the holographic universe may be higher than what is actually feasible. In any event these various estimates fall within the same range of orders of magnitude. The number of bits that a universe reorganized for useful computation will be able to store is 10 raised to a power somewhere between 80 and 120.
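As a rough check on the holographic bound, one can divide the surface area of the observable universe by the square of the Planck length. The radius below is an assumed round value (roughly the speed of light times the age of the universe); depending on the radius and conventions chosen, published estimates range from about 10^120 to 10^123 bits, so the sketch only confirms the neighborhood of the figure.

```python
import math

PLANCK_LENGTH = 1.6e-35   # meters (assumed round value)
RADIUS = 1.3e26           # meters; roughly c times the age of the universe

# Holographic bound: one bit per Planck area on the bounding surface.
area = 4 * math.pi * RADIUS ** 2
bits = area / PLANCK_LENGTH ** 2
exponent = math.log10(bits)

# Lands near 10^123 with these inputs; within a few orders of
# magnitude of the 10^120 figure quoted above.
assert 119 < exponent < 125
```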

Again, our engineering, even that of our vastly evolved future selves, will probably fall short of these maximums. In chapter 2 I showed how we progressed from 10^-5 to 10^8 cps per thousand dollars during the twentieth century. Based on a continuation of the smooth, doubly exponential growth that we saw in the twentieth century, I projected that we would achieve about 10^60 cps per thousand dollars by 2100. If we estimate a modest trillion dollars devoted to computation, that's a total of about 10^69 cps by the end of this century. This can be achieved with the matter and energy in our solar system.
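One simple way to model a doubly exponential trend is to let the number of orders of magnitude gained per century itself grow. The quadratic below is a toy illustration pinned to three anchor figures (10^-5 cps per thousand dollars in 1900, 10^8 in 2000, 10^60 projected for 2100), not the actual fitted curve from chapter 2.

```python
def projected_log10_cps_per_thousand_dollars(year):
    """Toy quadratic in centuries since 1900, solved to pass through the
    anchors: 1900 -> 10^-5, 2000 -> 10^8, 2100 -> 10^60 (cps per $1,000)."""
    t = (year - 1900) / 100.0
    a, b, c = 19.5, -6.5, -5.0   # coefficients solved from the three anchors
    return a * t * t + b * t + c

# The fit reproduces the three anchor figures exactly.
assert projected_log10_cps_per_thousand_dollars(1900) == -5.0
assert projected_log10_cps_per_thousand_dollars(2000) == 8.0
assert projected_log10_cps_per_thousand_dollars(2100) == 60.0

# A trillion dollars at the 2100 price-performance level:
# 10^60 per $1,000 times 10^9 thousand-dollar units = 10^69 cps.
assert projected_log10_cps_per_thousand_dollars(2100) + 9 == 69
```

The growing quadratic term is what makes each century add more orders of magnitude than the last: 13 orders in the twentieth century, 52 in the twenty-first.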

To get to around 10^90 cps requires expanding through the rest of the universe. Continuing the double-exponential growth curve shows that we can saturate the universe with our intelligence well before the end of the twenty-second century, provided that we are not limited by the speed of light. Even if the up-to-thirty additional powers of ten suggested by the holographic-universe theory are borne out, we still reach saturation by the end of the twenty-second century.

Again, if it is at all possible to circumvent the speed-of-light limitation, the vast intelligence of our solar system-scale civilization will be able to design and implement the requisite engineering to do so. If I had to place a bet, I would put my money on the conjecture that circumventing the speed of light is possible and that we will be able to do this within the next couple of hundred years. But that is speculation on my part, as we do not yet understand these issues sufficiently to make a more definitive statement. If the speed of light is an immutable barrier, and no shortcuts through wormholes exist that can be exploited, it will take billions of years, not hundreds, to saturate the universe with our intelligence, and we will be limited to our light cone within the universe. In either event the exponential growth of computation will hit a wall during the twenty-second century. (But what a wall!) This large difference in timespans-hundreds of years versus billions of years (to saturate the universe with our intelligence)-demonstrates why the issue of circumventing the speed of light will become so important. It will become a primary preoccupation of the vast intelligence of our civilization in the twenty-second century. That is why I believe that if wormholes or other circumventing means are feasible, we will be highly motivated to find and exploit them.