The Singularity Is Near: When Humans Transcend Biology - Part 23

More recently, the organization representing the world's organ transplantation surgeons has adopted a moratorium on the transplantation of vascularized animal organs into humans. This was done out of fear of the spread of long-dormant HIV-type xenoviruses from animals such as pigs or baboons into the human population. Unfortunately, such a moratorium can also slow down the availability of lifesaving xenografts (genetically modified animal organs that are accepted by the human immune system) to the millions of people who die each year from heart, kidney, and liver disease. Geoethicist Martine Rothblatt has proposed replacing this moratorium with a new set of ethical guidelines and regulations.44 In the case of nanotechnology, the ethics debate started a couple of decades prior to the availability of the particularly dangerous applications. The most important provisions of the Foresight Institute guidelines include:

"Artificial replicators must not be capable of replication in a natural, uncontrolled environment." "Artificial replicators must not be capable of replication in a natural, uncontrolled environment." "Evolution within the context of a self-replicating manufacturing system is discouraged." "Evolution within the context of a self-replicating manufacturing system is discouraged." "MNT device designs should specifically limit proliferation and provide traceability of any replicating systems." "MNT device designs should specifically limit proliferation and provide traceability of any replicating systems." "Distribution of molecular manufacturing "Distribution of molecular manufacturing development development capability should be restricted whenever possible, to responsible actors that have agreed to use the Guidelines. No such restriction need apply to end products of the development process." capability should be restricted whenever possible, to responsible actors that have agreed to use the Guidelines. No such restriction need apply to end products of the development process."

Other strategies that the Foresight Institute has proposed include:

Replication should require materials not found in the natural environment.

Manufacturing (replication) should be separated from the functionality of end products. Manufacturing devices can create end products but cannot replicate themselves, and end products should have no replication capabilities.

Replication should require replication codes that are encrypted and time limited. The broadcast architecture mentioned earlier is an example of this recommendation.

These guidelines and strategies are likely to be effective for preventing accidental release of dangerous self-replicating nanotechnology entities. But dealing with the intentional design and release of such entities is a more complex and challenging problem. A sufficiently determined and destructive opponent could possibly defeat each of these layers of protections. Take, for example, the broadcast architecture. When properly designed, each entity is unable to replicate without first obtaining replication codes, which are not repeated from one replication generation to the next. However, a modification to such a design could bypass the destruction of the replication codes and thereby pass them on to the next generation. To counteract that possibility it has been recommended that the memory for the replication codes be limited to only a subset of the full code. However, this guideline could be defeated by expanding the size of the memory.

Another protection that has been suggested is to encrypt the codes and build in protections in the decryption systems, such as time-expiration limitations. However, we can see how easy it has been to defeat protections against unauthorized replications of intellectual property such as music files. Once replication codes and protective layers are stripped away, the information can be replicated without these restrictions.
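The logic of time-limited, single-use replication codes can be sketched in software. The following toy model is my own illustration, not anything from the Foresight guidelines; the names, the HMAC scheme, and the 60-second lifetime are all assumptions chosen for concreteness. It shows a central authority minting signed codes that expire and cannot be replayed, so a code cannot be "passed on" to a second replication generation:

```python
import hmac, hashlib, time, secrets

SECRET = secrets.token_bytes(32)   # held only by the broadcast authority
CODE_LIFETIME = 60                 # seconds before a code expires (assumed)

def issue_code():
    """Authority side: mint a single-use code bound to the current time."""
    nonce = secrets.token_hex(8)
    ts = str(int(time.time()))
    sig = hmac.new(SECRET, f"{nonce}:{ts}".encode(), hashlib.sha256).hexdigest()
    return {"nonce": nonce, "ts": ts, "sig": sig}

used_nonces = set()  # the authority tracks spent codes so none can be reused

def authorize_replication(code):
    """Verifier side: allow replication only if the code is authentic, fresh, and unused."""
    expected = hmac.new(SECRET, f"{code['nonce']}:{code['ts']}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, code["sig"]):
        return False                               # forged code
    if time.time() - int(code["ts"]) > CODE_LIFETIME:
        return False                               # expired code
    if code["nonce"] in used_nonces:
        return False                               # replayed code
    used_nonces.add(code["nonce"])                 # burn it: one generation only
    return True

code = issue_code()
print(authorize_replication(code))   # True: first, timely use
print(authorize_replication(code))   # False: the code does not carry to a next generation
```

The sketch also makes the text's point about stripping protections concrete: the scheme holds only so long as the secret and the verifier logic cannot be modified, which is exactly the attack surface the paragraph above describes.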

This doesn't mean that protection is impossible. Rather, each level of protection will work only to a certain level of sophistication. The meta-lesson here is that we will need to place twenty-first-century society's highest priority on the continuing advance of defensive technologies, keeping them one or more steps ahead of the destructive technologies (or at least no more than a quick step behind).

Protection from "Unfriendly" Strong AI. Even as effective a mechanism as the broadcast architecture, however, won't serve as protection against abuses of strong AI. The barriers provided by the broadcast architecture rely on the lack of intelligence in nanoengineered entities. By definition, however, intelligent entities have the cleverness to easily overcome such barriers.

Eliezer Yudkowsky has extensively analyzed paradigms, architectures, and ethical rules that may help assure that once strong AI has the means of accessing and modifying its own design it remains friendly to biological humanity and supportive of its values. Given that self-improving strong AI cannot be recalled, Yudkowsky points out that we need to "get it right the first time," and that its initial design must have "zero nonrecoverable errors."45 Inherently there will be no absolute protection against strong AI. Although the argument is subtle, I believe that maintaining an open free-market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values. As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization's infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us. Attempts to control these technologies via secretive government programs, along with inevitable underground development, would only foster an unstable environment in which the dangerous applications would be likely to become dominant.

Decentralization. One profound trend already well under way that will provide greater stability is the movement from centralized technologies to distributed ones and from the real world to the virtual world discussed above. Centralized technologies involve an aggregation of resources such as people (for example, cities, buildings), energy (such as nuclear-power plants, liquid-natural-gas and oil tankers, energy pipelines), transportation (airplanes, trains), and other items. Centralized technologies are subject to disruption and disaster. They also tend to be inefficient, wasteful, and harmful to the environment.

Distributed technologies, on the other hand, tend to be flexible, efficient, and relatively benign in their environmental effects. The quintessential distributed technology is the Internet. The Internet has not been substantially disrupted to date, and as it continues to grow, its robustness and resilience continue to strengthen. If any hub or channel does go down, information simply routes around it.
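The "routes around it" behavior can be made concrete with a toy mesh network. This is a hypothetical five-node topology of my own, not anything from the text; a breadth-first search still finds a path after a hub fails:

```python
from collections import deque

# A small mesh: each node lists its directly connected neighbors.
links = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for any path from src to dst, skipping failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - down - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # destination unreachable

print(route("A", "E"))               # a 4-hop path, e.g. ['A', 'B', 'D', 'E']
print(route("A", "E", down={"B"}))   # traffic routes around B: ['A', 'C', 'D', 'E']
```

Only when every path to a node fails (here, removing "D" isolates "E") does delivery stop, which is why meshes degrade gracefully where centralized hubs fail all at once.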

Distributed Energy. In energy, we need to move away from the extremely concentrated and centralized installations on which we now depend. For example, one company is pioneering fuel cells that are microscopic, using MEMS technology.46 They are manufactured like electronic chips but are actually energy-storage devices with an energy-to-size ratio significantly exceeding that of conventional technology. As I discussed earlier, nanoengineered solar panels will be able to meet our energy needs in a distributed, renewable, and clean fashion. Ultimately technology along these lines could power everything from our cell phones to our cars and homes. These types of decentralized energy technologies would not be subject to disaster or disruption.

As these technologies develop, our need for aggregating people in large buildings and cities will diminish, and people will spread out, living where they want and gathering together in virtual reality.

Civil Liberties in an Age of Asymmetric Warfare. The nature of terrorist attacks and the philosophies of the organizations behind them highlight how civil liberties can be at odds with legitimate state interests in surveillance and control. Our law-enforcement system-and indeed, much of our thinking about security-is based on the assumption that people are motivated to preserve their own lives and well-being. That logic underlies all our strategies, from protection at the local level to mutual assured destruction on the world stage. But a foe that values the destruction of both its enemy and itself is not amenable to this line of reasoning.

The implications of dealing with an enemy that does not value its own survival are deeply troublesome and have led to controversy that will only intensify as the stakes continue to escalate. For example, when the FBI identifies a likely terrorist cell, it will arrest the participants, even though there may be insufficient evidence to convict them of a crime and they may not yet even have committed a crime. Under the rules of engagement in our war on terrorism, the government continues to hold these individuals.

In a lead editorial, the New York Times objected to this policy, which it described as a "troubling provision."47 The paper argued that the government should release these detainees because they have not yet committed a crime and should rearrest them only after they have done so. Of course by that time suspected terrorists might well be dead along with a large number of their victims. How can the authorities possibly break up a vast network of decentralized cells of suicide terrorists if they have to wait for each one to commit a crime?

On the other hand this very logic has been routinely used by tyrannical regimes to justify the waiving of the judicial protections we have come to cherish. It is likewise fair to argue that curtailing civil liberties in this way is exactly the aim of the terrorists, who despise our notions of freedoms and pluralism. However, I do not see the prospect of any technology "magic bullet" that would essentially change this dilemma.

The encryption trapdoor may be considered a technical innovation that the government has been proposing in an attempt to balance legitimate individual needs for privacy with the government's need for surveillance. Along with this type of technology we also need the requisite political innovation to provide for effective oversight, by both the judicial and legislative branches, of the executive branch's use of these trapdoors, to avoid the potential for abuse of power. The secretive nature of our opponents and their lack of respect for human life including their own will deeply test the foundations of our democratic traditions.

A Program for GNR Defense

We come from goldfish, essentially, but that [doesn't] mean we turned around and killed all the goldfish. Maybe [the AIs] will feed us once a week....If you had a machine with a 10 to the 18th power IQ over humans, wouldn't you want it to govern, or at least control your economy?-SETH SHOSTAK

How can we secure the profound benefits of GNR while ameliorating its perils? Here's a review of a suggested program for containing the GNR risks: The most urgent recommendation is to greatly increase our investment in defensive technologies. Since we are already in the G era, the bulk of this investment today should be in (biological) antiviral medications and treatments. We have new tools that are well suited to this task. RNA interference, for example, can be used to block gene expression. Virtually all infections (as well as cancer) rely on gene expression at some point during their life cycles.

Efforts to anticipate the defensive technologies needed to safely guide N and R should also be supported, and these should be substantially increased as we get closer to the feasibility of molecular manufacturing and strong AI, respectively. A significant side benefit would be to accelerate effective treatments for infectious disease and cancer. I've testified before Congress on this issue, advocating the investment of tens of billions of dollars per year (less than 1 percent of the GDP) to address this new and under-recognized existential threat to humanity.

We need to streamline the regulatory process for genetic and medical technologies. The regulations do not impede the malevolent use of technology but significantly delay the needed defenses. As mentioned, we need to better balance the risks of new technology (for example, new medications) against the known harm of delay.

A global program of confidential, random serum monitoring for unknown or evolving biological pathogens should be funded. Diagnostic tools exist to rapidly identify the existence of unknown protein or nucleic acid sequences. Intelligence is key to defense, and such a program could provide invaluable early warning of an impending epidemic. Such a "pathogen sentinel" program has been proposed for many years by public health authorities but has never received adequate funding.

Well-defined and targeted temporary moratoriums, such as the one that occurred in the genetics field in 1975, may be needed from time to time. But such moratoriums are unlikely to be necessary with nanotechnology. Broad efforts at relinquishing major areas of technology serve only to continue vast human suffering by delaying the beneficial aspects of new technologies, and actually make the dangers worse.

Efforts to define safety and ethical guidelines for nanotechnology should continue. Such guidelines will inevitably become more detailed and refined as we get closer to molecular manufacturing.

To create the political support to fund the efforts suggested above, it is necessary to raise public awareness of these dangers. Because, of course, there exists the downside of raising alarm and generating uninformed backing for broad antitechnology mandates, we also need to create a public understanding of the profound benefits of continuing advances in technology.

These risks cut across international boundaries-which is, of course, nothing new; biological viruses, software viruses, and missiles already cross such boundaries with impunity. International cooperation was vital to containing the SARS virus and will become increasingly vital in confronting future challenges. Worldwide organizations such as the World Health Organization, which helped coordinate the SARS response, need to be strengthened.

A contentious contemporary political issue is the need for preemptive action to combat threats, such as terrorists with access to weapons of mass destruction or rogue nations that support such terrorists. Such measures will always be controversial, but the potential need for them is clear. A nuclear explosion can destroy a city in seconds. A self-replicating pathogen, whether biological or nanotechnology based, could destroy our civilization in a matter of days or weeks. We cannot always afford to wait for the massing of armies or other overt indications of ill intent before taking protective action.

Intelligence agencies and policing authorities will have a vital role in forestalling the vast majority of potentially dangerous incidents. Their efforts need to involve the most powerful technologies available. For example, before this decade is over, devices the size of dust particles will be able to carry out reconnaissance missions. When we reach the 2020s and have software running in our bodies and brains, government authorities will have a legitimate need on occasion to monitor these software streams. The potential for abuse of such powers is obvious. We will need to achieve a middle road of preventing catastrophic events while preserving our privacy and liberty.

The above approaches will be inadequate to deal with the danger from pathological R (strong AI).
Our primary strategy in this area should be to optimize the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward. If this sounds vague, it is. But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the product of a lesser intelligence. The nonbiological intelligence we are creating is and will be embedded in our societies and will reflect our values. The transbiological phase will involve nonbiological intelligence deeply integrated with biological intelligence. This will amplify our abilities, and our application of these greater intellectual powers will be governed by the values of its creators. The transbiological era will ultimately give way to the postbiological era, but it is to be hoped that our values will remain influential. This strategy is certainly not foolproof, but it is the primary means we have today to influence the future course of strong AI.

Technology will remain a double-edged sword. It represents vast power to be used for all humankind's purposes. GNR will provide the means to overcome age-old problems such as illness and poverty, but it will also empower destructive ideologies. We have no choice but to strengthen our defenses while we apply these quickening technologies to advance our human values, despite an apparent lack of consensus on what those values should be.

MOLLY 2004: Okay, now run that stealthy scenario by me again-you know, the one where the bad nanobots spread quietly through the biomass to get themselves into position but don't actually expand to noticeably destroy anything until they're spread around the globe.

RAY: Well, the nanobots would spread at very low concentrations, say one carbon atom per 10^15 in the biomass, so they would be seeded throughout the biomass. Thus, the speed of physical spread of the destructive nanobots would not be a limiting factor when they subsequently replicate in place. If they skipped the stealth phase and expanded instead from a single point, the spreading nanodisease would be noticed, and the spread around the world would be relatively slow.

MOLLY 2004: So how are we going to protect ourselves from that? By the time they start phase two, we've got only about ninety minutes, or much less if you want to avoid enormous damage.

RAY: Because of the nature of exponential growth, the bulk of the damage gets done in the last few minutes, but your point is well taken. Under any scenario, we won't have a chance without a nanotechnology immune system. Obviously, we can't wait until the beginning of a ninety-minute cycle of destruction to begin thinking about creating one. Such a system would be very comparable to our human immune system. How long would a biological human circa 2004 last without one?
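Ray's claim that most of the damage arrives in the final minutes follows directly from the arithmetic of doubling. A short sketch of that arithmetic (the three-minute doubling time is an assumed figure, chosen only to fit the ninety-minute cycle under discussion):

```python
# With a fixed doubling time, the final few doublings account for
# almost all of the total growth: each doubling equals the sum of
# everything that came before it.

DOUBLING_MINUTES = 3          # assumed doubling time of the replicators
TOTAL_MINUTES = 90            # the ninety-minute destruction cycle discussed

doublings = TOTAL_MINUTES // DOUBLING_MINUTES        # 30 doublings
final_population = 2 ** doublings                    # relative to one seed

# Population three doublings (nine minutes) before the end:
at_minute_81 = 2 ** (doublings - 3)
fraction_in_last_9_minutes = 1 - at_minute_81 / final_population

print(f"{fraction_in_last_9_minutes:.1%} of growth occurs in the last 9 minutes")
# → 87.5% of growth occurs in the last 9 minutes
```

The fraction depends only on how many doublings fall in the final window (1 - 2^-3 = 87.5% for three of them), which is why a defense that reacts only once damage is visible has almost no time left.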

MOLLY 2004: Not long, I suppose. How does this nano-immune system pick up these bad nanobots if they're only one in a thousand trillion?

RAY: We have the same issue with our biological immune system. Detection of even a single foreign protein triggers rapid action by biological antibody factories, so the immune system is there in force by the time a pathogen achieves a near critical level. We'll need a similar capability for the nanoimmune system.

CHARLES DARWIN: Now tell me, do the immune-system nanobots have the ability to replicate?

RAY: They would need to be able to do this; otherwise they would not be able to keep pace with the replicating pathogenic nanobots. There have been proposals to seed the biomass with protective immune-system nanobots at a particular concentration, but as soon as the bad nanobots significantly exceeded this fixed concentration the immune system would lose. Robert Freitas proposes nonreplicating nanofactories able to turn out additional protective nanorobots when needed. I think this is likely to deal with threats for a while, but ultimately the defensive system will need the ability to replicate its immune capabilities in place to keep pace with emerging threats.

CHARLES: So aren't the immune-system nanobots entirely equivalent to the phase one malevolent nanobots? I mean seeding the biomass is the first phase of the stealth scenario.

RAY: But the immune-system nanobots are programmed to protect us, not destroy us.

CHARLES: I understand that software can be modified.

RAY: Hacked, you mean?

CHARLES: Yes, exactly. So if the immune-system software is modified by a hacker to simply turn on its self-replication ability without end-

RAY: -yes, well, we'll have to be careful about that, won't we?

MOLLY 2004: I'll say.

RAY: We have the same problem with our biological immune system. Our immune system is comparably powerful, and if it turns on us that's an autoimmune disease, which can be insidious. But there's still no alternative to having an immune system.

MOLLY 2004: So a software virus could turn the nanobot immune system into a stealth destroyer?

RAY: That's possible. It's fair to conclude that software security is going to be the decisive issue for many levels of the human-machine civilization. With everything becoming information, maintaining the software integrity of our defensive technologies will be critical to our survival. Even on an economic level, maintaining the business model that creates information will be critical to our well-being.

MOLLY 2004: This makes me feel rather helpless. I mean, with all these good and bad nanobots battling it out, I'll just be a hapless bystander.

RAY: That's hardly a new phenomenon. How much influence do you have in 2004 on the disposition of the tens of thousands of nuclear weapons in the world?

MOLLY 2004: At least I have a voice and a vote in elections that affect foreign-policy issues.

RAY: There's no reason for that to change. Providing for a reliable nanotechnology immune system will be one of the great political issues of the 2020s and 2030s.

MOLLY 2004: Then what about strong AI?

RAY: The good news is that it will protect us from malevolent nanotechnology because it will be smart enough to assist us in keeping our defensive technologies ahead of the destructive ones.

NED LUDD: Assuming it's on our side.

RAY: Indeed.

CHAPTER NINE.

Response to Critics

The human mind likes a strange idea as little as the body likes a strange protein and resists it with a similar energy.-W. I. BEVERIDGE

If a ... scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.-ARTHUR C. CLARKE

A Panoply of Criticisms

In The Age of Spiritual Machines, I began to examine some of the accelerating trends that I have sought to explore in greater depth in this book. ASM inspired a broad variety of reactions, including extensive discussions of the profound, imminent changes it considered (for example, the promise-versus-peril debate prompted by Bill Joy's Wired story, "Why the Future Doesn't Need Us," as I reviewed in the previous chapter). The response also included attempts to argue on many levels why such transformative changes would not, could not, or should not happen. Here is a summary of the critiques I will be responding to in this chapter:

The "criticism from Malthus": It's a mistake to extrapolate exponential trends indefinitely, since they inevitably run out of resources to maintain the exponential growth. Moreover, we won't have enough energy to power the extraordinarily dense computational platforms forecast, and even if we did they would be as hot as the sun. Exponential trends do reach an asymptote, but the matter and energy resources needed for computation and communication are so small per compute and per bit that these trends can continue to the point where nonbiological intelligence is trillions of trillions of times more powerful than biological intelligence. Reversible computing can reduce energy requirements, as well as heat dissipation, by many orders of magnitude. Even restricting computation to "cold" computers will achieve nonbiological computing platforms that vastly outperform biological intelligence.

The "criticism from software": We're making exponential gains in hardware, but software is stuck in the mud. Although the doubling time for progress in software is longer than that for computational hardware, software is also accelerating in effectiveness, efficiency, and complexity. Many software applications, ranging from search engines to games, routinely use AI techniques that were only research projects a decade ago. Substantial gains have also been made in the overall complexity of software, in software productivity, and in the efficiency of software in solving key algorithmic problems.
Moreover, we have an effective game plan to achieve the capabilities of human intelligence in a machine: reverse engineering the brain to capture its principles of operation and then implementing those principles in brain-capable computing platforms. Every aspect of brain reverse engineering is accelerating: the spatial and temporal resolution of brain scanning, knowledge about every level of the brain's operation, and efforts to realistically model and simulate neurons and brain regions.

The "criticism from analog processing": Digital computation is too rigid because digital bits are either on or off. Biological intelligence is mostly analog, so subtle gradations can be considered. It's true that the human brain uses digital-controlled analog methods, but we can also use such methods in our machines. Moreover, digital computation can simulate analog transactions to any desired level of accuracy, whereas the converse statement is not true.

The "criticism from the complexity of neural processing": The information processes in the interneuronal connections (axons, dendrites, synapses) are far more complex than the simplistic models used in neural nets. True, but brain-region simulations don't use such simplified models. We have achieved realistic mathematical models and computer simulations of neurons and interneuronal connections that do capture the nonlinearities and intricacies of their biological counterparts. Moreover, we have found that the complexity of processing brain regions is often simpler than the neurons they comprise. We already have effective models and simulations for several dozen regions of the human brain.
The genome contains only about thirty to one hundred million bytes of design information when redundancy is considered, so the design information for the brain is of a manageable level.

The "criticism from microtubules and quantum computing": The microtubules in neurons are capable of quantum computing, and such quantum computing is a prerequisite for consciousness. To "upload" a personality, one would have to capture its precise quantum state. No evidence exists to support either of these statements. Even if true, there is nothing that bars quantum computing from being carried out in nonbiological systems. We routinely use quantum effects in semiconductors (tunneling in transistors, for example), and machine-based quantum computing is also progressing. As for capturing a precise quantum state, I'm in a very different quantum state than I was before writing this sentence. So am I already a different person? Perhaps I am, but if one captured my state a minute ago, an upload based on that information would still successfully pass a "Ray Kurzweil" Turing test.

The "criticism from the Church-Turing thesis": We can show that there are broad classes of problems that cannot be solved by any Turing machine. It can also be shown that Turing machines can emulate any possible computer (that is, there exists a Turing machine that can solve any problem that any computer can solve), so this demonstrates a clear limitation on the problems that a computer can solve. Yet humans are capable of solving these problems, so machines will never emulate human intelligence. Humans are no more capable of universally solving such "unsolvable" problems than machines. Humans can make educated guesses at solutions in certain instances, but machines can do the same thing and can often do so more quickly.

The "criticism from failure rates": Computer systems are showing alarming rates of catastrophic failure as their complexity increases. Thomas Ray writes that we are "pushing the limits of what we can effectively design and build through conventional approaches." We have developed increasingly complex systems to manage a broad variety of mission-critical tasks, and failure rates in these systems are very low. However, imperfection is an inherent feature of any complex process, and that certainly includes human intelligence.

The "criticism from 'lock-in'": The pervasive and complex support systems (and the huge investments in these systems) required by such fields as energy and transportation are blocking innovation, so this will prevent the kind of rapid change envisioned for the technologies underlying the Singularity. It is specifically information processes that are growing exponentially in capability and price-performance.
We have already seen rapid paradigm shifts in every aspect of information technology, unimpeded by any lock-in phenomenon (despite large infrastructure investments in such areas as the Internet and telecommunications). Even the energy and transportation sectors will witness revolutionary changes from new nanotechnology-based innovations.

The "criticism from ontology": John Searle describes several versions of his Chinese Room analogy. In one formulation a man follows a written program to answer questions in Chinese. The man appears to be answering questions competently in Chinese, but since he is "just mechanically following a written program," he has no real understanding of Chinese and no real awareness of what he is doing. The "man" in the room doesn't understand anything, because, after all, "he is just a computer," according to Searle. So clearly, computers cannot understand what they are doing, since they are just following rules. Searle's Chinese Room arguments are fundamentally tautological, as they just assume his conclusion that computers cannot possibly have any real understanding. Part of the philosophical sleight of hand in Searle's simple analogies is a matter of scale. He purports to describe a simple system and then asks the reader to consider how such a system could possibly have any real understanding. But the characterization itself is misleading.
To be consistent with Searle's own assumptions, the Chinese Room system that Searle describes would have to be as complex as a human brain and would, therefore, have as much understanding as a human brain. The man in the analogy would be acting as the central processing unit, only a small part of the system. While the man may not see it, the understanding is distributed across the entire pattern of the program itself and the billions of notes he would have to make to follow the program. Consider that I understand English, but none of my neurons do. My understanding is represented in vast patterns of neurotransmitter strengths, synaptic clefts, and interneuronal connections.

The "criticism from the rich-poor divide": It's likely that through these technologies the rich may obtain certain opportunities that the rest of humankind does not have access to. This, of course, would be nothing new, but I would point out that because of the ongoing exponential growth of price-performance, all of these technologies quickly become so inexpensive as to become almost free.

The "criticism from the likelihood of government regulation": Governmental regulation will slow down and stop the acceleration of technology. Although the obstructive potential of regulation is an important concern, it has so far had little measurable effect on the trends discussed in this book. Absent a worldwide totalitarian state, the economic and other forces underlying technical progress will only grow with ongoing advances. Even controversial issues such as stem-cell research end up being like stones in a stream, the flow of progress rushing around them.

The "criticism from theism": According to William A. Dembski, "contemporary materialists such as Ray Kurzweil ...
see the motions and modifications of matter as sufficient to account for human mentality." But materialism is predictable, whereas reality is not. "Predictability [is] materialism's main virtue ... and hollowness [is] its main fault." Complex systems of matter and energy are not predictable, since they are based on a vast number of unpredictable quantum events. Even if we accept a "hidden variables" interpretation of quantum mechanics (which says that quantum events only appear to be unpredictable but are based on undetectable hidden variables), the behavior of a complex system would still be unpredictable in practice. All of the trends show that we are clearly headed for nonbiological systems that are as complex as their biological counterparts. Such future systems will be no more "hollow" than humans and in many cases will be based on the reverse engineering of human intelligence. We don't need to go beyond the capabilities of patterns of matter and energy to account for the capabilities of human intelligence.

The "criticism from holism": To quote Michael Denton, organisms are "self-organizing, ... self-referential, ... self-replicating, ... reciprocal, ... self-formative, and ... holistic." Such organic forms can be created only through biological processes, and such forms are "immutable, ... impenetrable, and ... fundamental realities of existence."1 It's true that biological design represents a profound set of principles. However, machines can use-and already are using-these same principles, and there is nothing that restricts nonbiological systems from harnessing the emergent properties of the patterns found in the biological world.
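One quantitative claim in the list above, that the genome holds only about thirty to one hundred million bytes of design information, can be reproduced with simple arithmetic. The base-pair count and compression figures below follow the book's own rough reasoning and are estimates, not measurements:

```python
# Reproducing the genome information estimate cited in the
# neural-processing response above. All inputs are rough figures.
base_pairs = 3e9          # approximate size of the human genome
bits_per_base = 2         # four possible bases -> 2 bits each
raw_bytes = base_pairs * bits_per_base / 8
print(f"raw genome: ~{raw_bytes / 1e6:.0f} million bytes")  # ~750 million bytes

# Most of the genome is repetitive; the 30-100 million byte figure
# assumes lossless compression removes that redundancy.
compressed_low, compressed_high = 30e6, 100e6
print(f"implied compression: {raw_bytes / compressed_high:.0f}x to {raw_bytes / compressed_low:.0f}x")
```

In other words, the claim amounts to assuming the genome compresses by a factor of roughly eight to twenty-five, which is the "redundancy" the passage refers to.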

I've engaged in countless debates and dialogues responding to these challenges in a diverse variety of forums. One of my goals for this book is to provide a comprehensive response to the most important criticisms I have encountered. Most of my rejoinders to these critiques on feasibility and inevitability have been discussed throughout this book, but in this chapter I want to offer a detailed reply to several of the more interesting ones.

The Criticism from Incredulity

Perhaps the most candid criticism of the future I have envisioned here is simple disbelief that such profound changes could possibly occur. Chemist Richard Smalley, for example, dismisses the idea of nanobots being capable of performing missions in the human bloodstream as just "silly." But scientists' ethics call for caution in assessing the prospects for current work, and such reasonable prudence unfortunately often leads scientists to shy away from considering the power of generations of science and technology far beyond today's frontier. With the rate of paradigm shift occurring ever more quickly, this ingrained pessimism does not serve society's needs in assessing scientific capabilities in the decades ahead. Consider how incredible today's technology would seem to people even a century ago.

A related criticism is based on the notion that it is difficult to predict the future, and any number of bad predictions from other futurists in earlier eras can be cited to support this. Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. (For example, how will the wireless-communication protocols WiMAX, CDMA, and 3G fare over the next several years?) However, as this book has extensively argued, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured by price-performance, bandwidth, and other measures of capability) of information technologies. For example, the smooth exponential growth of the price-performance of computing dates back over a century. Given that the minimum amount of matter and energy required to compute or transmit a bit of information is known to be vanishingly small, we can confidently predict the continuation of these information-technology trends at least through this next century. Moreover, we can reliably predict the capabilities of these technologies at future points in time.

Consider that predicting the path of a single molecule in a gas is essentially impossible, but certain properties of the entire gas (composed of a great many chaotically interacting molecules) can reliably be predicted through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology (comprised of many chaotic activities) can nonetheless be dependably anticipated through the law of accelerating returns.
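The gas analogy can be made concrete with a toy simulation. The random-walk model below is purely illustrative; it shows only that an individual trajectory is erratic while the ensemble average is stable:

```python
import random

# One particle's path is unpredictable, but the average over many
# particles is highly predictable, as with a gas's bulk properties.
random.seed(1)

def walk(steps):
    """Net displacement of a single one-dimensional random walker."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

single = walk(1000)
ensemble = sum(walk(1000) for _ in range(5000)) / 5000
print(f"one particle: {single:+d}")
print(f"ensemble mean: {ensemble:+.2f}")
```

A single 1,000-step walker typically ends up tens of units from the origin in an unpredictable direction, while the mean over thousands of walkers stays close to zero, just as thermodynamic quantities are stable even though no individual molecule is.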

Many of the furious attempts to argue why machines-nonbiological systems-cannot ever possibly compare to humans appear to be fueled by this basic reaction of incredulity. The history of human thought is marked by many attempts to refuse to accept ideas that seem to threaten the accepted view that our species is special. Copernicus's insight that the Earth was not at the center of the universe was resisted, as was Darwin's that we were only slightly evolved from other primates. The notion that machines could match and even exceed human intelligence appears to challenge human status once again.

In my view there is something essentially special, after all, about human beings. We were the first species on Earth to combine a cognitive function and an effective opposable appendage (the thumb), so we were able to create technology that would extend our own horizons. No other species on Earth has accomplished this. (To be precise, we're the only surviving species in this ecological niche-others, such as the Neanderthals, did not survive.) And as I discussed in chapter 6, we have yet to discover any other such civilization in the universe.

The Criticism from Malthus

Exponential Trends Don't Last Forever. The classical metaphorical example of exponential trends hitting a wall is known as "rabbits in Australia." A species happening upon a hospitable new habitat will expand its numbers exponentially until its growth hits the limits of the ability of that environment to support it. Approaching this limit to exponential growth may even cause an overall reduction in numbers-for example, humans noticing a spreading pest may seek to eradicate it. Another common example is a microbe that may grow exponentially in an animal body until a limit is reached: the ability of that body to support it, the response of its immune system, or the death of the host.
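The "rabbits in Australia" pattern is the classic logistic curve: growth is exponential while resources are abundant and flattens near the environment's carrying capacity. A minimal simulation, with purely illustrative parameters:

```python
# Minimal logistic-growth simulation: exponential at first, then
# saturating as the population nears the habitat's carrying capacity.
# All parameters are illustrative, not empirical.
r = 0.5            # intrinsic growth rate per time step
K = 1_000_000.0    # carrying capacity of the habitat
pop = 100.0        # initial population

for step in range(40):
    pop += r * pop * (1 - pop / K)   # logistic update
    if step % 10 == 9:
        print(f"step {step + 1}: population ~{pop:,.0f}")
```

With these numbers the population grows roughly fiftyfold in the first ten steps, then levels off near the carrying capacity instead of continuing exponentially.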

Even the human population is now approaching a limit. Families in the more developed nations have mastered means of birth control and have set relatively high standards for the resources they wish to provide their children. As a result population expansion in the developed world has largely stopped. Meanwhile people in some (but not all) underdeveloped countries have continued to seek large families as a means of social security, hoping that at least one child will survive long enough to support them in old age. However, with the law of accelerating returns providing more widespread economic gains, the overall growth in human population is slowing.

So isn't there a comparable limit to the exponential trends that we are witnessing for information technologies?

The answer is yes, but not before the profound transformations described throughout this book take place. As I discussed in chapter 3, the amount of matter and energy required to compute or transmit one bit is vanishingly small. By using reversible logic gates, the input of energy is required only to transmit results and to correct errors. Otherwise, the heat released from each computation is immediately recycled to fuel the next computation.
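For a sense of scale, the thermodynamic floor for conventional, irreversible computation is Landauer's limit of kT ln 2 joules per erased bit; reversible gates avoid that cost except where results must be output or errors corrected. A quick calculation (standard constants; room temperature is an assumption for illustration):

```python
import math

# Landauer's limit: minimum energy dissipated per *erased* bit in
# irreversible computation. Reversible logic sidesteps this floor.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, kelvin
e_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {e_per_bit:.2e} J per erased bit")  # ~2.87e-21 J
```

At under three zeptojoules per erased bit, the floor is more than a billion times below what today's logic dissipates per operation, which is why the energy argument does not cap these trends anytime soon.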

As I discussed in chapter 5, nanotechnology-based designs for virtually all applications-computation, communication, manufacturing, and transportation-will require substantially less energy than they do today. Nanotechnology will also facilitate capturing renewable energy sources such as sunlight. We could meet all of our projected energy needs of thirty trillion watts in 2030 with solar power if we captured only 0.03 percent (three ten-thousandths) of the sun's energy as it hits the Earth. This will be feasible with extremely inexpensive, lightweight, and efficient nanoengineered solar panels together with nano-fuel cells to store and distribute the captured energy.
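The 0.03 percent figure can be sanity-checked against the total sunlight the Earth intercepts. The solar-constant and radius values below are standard round numbers, and the calculation ignores atmospheric absorption and conversion losses:

```python
import math

# Rough check of the solar-capture claim above.
solar_constant = 1361.0    # W per square meter at the top of the atmosphere
earth_radius = 6.371e6     # meters
cross_section = math.pi * earth_radius ** 2         # disk intercepting sunlight
total_intercepted = solar_constant * cross_section  # ~1.7e17 W

captured = 3e-4 * total_intercepted  # 0.03 percent of incident sunlight
demand_2030 = 30e12                  # thirty trillion watts (the book's projection)

print(f"total sunlight intercepted: {total_intercepted:.1e} W")
print(f"0.03% capture: {captured:.1e} W")
print(f"covers projected demand? {captured >= demand_2030}")
```

About 1.7 x 10^17 watts strikes the Earth, so 0.03 percent comes to roughly 50 trillion watts, comfortably above the 30-trillion-watt projection even allowing for substantial losses.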

A Virtually Unlimited Limit. As I discussed in chapter 3, an optimally organized 2.2-pound computer using reversible logic gates has about 10^25 atoms and can store about 10^27 bits. Just considering electromagnetic interactions between the particles, there are at least 10^15 state changes per bit per second that can be harnessed for computation, resulting in about 10^42 calculations per second in the ultimate "cold" 2.2-pound computer. This is about 10^16 times more powerful than all biological brains today. If we allow our ultimate computer to get hot, we can increase this further by as much as 10^8-fold. And we obviously won't restrict our computational resources to one kilogram of matter but will ultimately deploy a significant fraction of the matter and energy on the Earth and in the solar system and then spread out from there.
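The arithmetic behind these figures is straightforward to check; all inputs are the book's chapter 3 estimates, not measurements:

```python
# Back-of-the-envelope check of the "ultimate cold laptop" figures above.
bits = 1e27                   # storage of an optimally organized 2.2-lb computer
state_changes_per_bit = 1e15  # usable state changes per bit per second
cps_cold = bits * state_changes_per_bit
print(f"cold computer: {cps_cold:.0e} calculations per second")  # ~1e42

human_brain_cps = 1e16        # book's estimate for one human brain
population = 1e10             # ~10 billion brains, rounded up
all_brains_cps = human_brain_cps * population
print(f"advantage over all biological brains: {cps_cold / all_brains_cps:.0e}x")  # ~1e16
```

So the 10^16 advantage quoted above is simply 10^42 calculations per second divided by roughly 10^26 for the sum of all human brains.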

Specific paradigms do reach limits. We expect that Moore's Law (concerning the shrinking of the size of transistors on a flat integrated circuit) will hit a limit over the next two decades. The date for the demise of Moore's Law keeps getting pushed back. The first estimates predicted 2002, but now Intel says it won't take place until 2022. But as I discussed in chapter 2, every time a specific computing paradigm was seen to approach its limit, research interest and pressure increased to create the next paradigm. This has already happened four times in the century-long history of exponential growth in computation (from electromechanical calculators to relay-based computers to vacuum tubes to discrete transistors to integrated circuits). We have already achieved many important milestones toward the next (sixth) paradigm of computing: three-dimensional self-organizing circuits at the molecular level. So the impending end of a given paradigm does not represent a true limit.