Where Good Ideas Come From - Part 1


Where good ideas come from : the natural history of innovation.

by Steven Johnson.

Introduction.

REEF, CITY, WEB.

. . . as imagination bodies forth The forms of things unknown, the poet's pen Turns them to shapes and gives to airy nothing A local habitation and a name.

-SHAKESPEARE, A Midsummer Night's Dream, V.i.14-17

Darwin's Paradox.

April 4, 1836. Over the eastern expanse of the Indian Ocean, the reliable northeast winds of monsoon season have begun to give way to the serene days of summer. On the Keeling Islands, two small atolls composed of twenty-seven coral islands six hundred miles west of Sumatra, the emerald waters are invitingly placid and warm, their hue enhanced by the brilliant white sand of disintegrated coral. On one stretch of shore usually guarded by stronger surf, the water is so calm that Charles Darwin wades out, under the vast blue sky of the tropics, to the edge of the live coral reef that rings the island.

For hours he stands and paddles among the crowded pageantry of the reef. Twenty-seven years old, seven thousand miles from London, Darwin is on the precipice, standing on an underwater peak ascending over an unfathomable sea. He is on the edge of an idea about the forces that built that peak, an idea that will prove to be the first great scientific insight of his career. And he has just begun exploring another hunch, still hazy and unformed, that will eventually lead to the intellectual summit of the nineteenth century.

Around him, the crowds of the coral ecosystem dart and shimmer. The sheer variety dazzles: butterflyfish, damselfish, parrotfish, Napoleon fish, angelfish; golden anthias feeding on plankton above the cauliflower blooms of the coral; the spikes and tentacles of sea urchins and anemones. The tableau delights Darwin's eye, but already his mind is reaching behind the surface display to a more profound mystery. In his account of the Beagle's voyage, published four years later, Darwin would write: "It is excusable to grow enthusiastic over the infinite numbers of organic beings with which the sea of the tropics, so prodigal of life, teems; yet I must confess I think those naturalists who have described, in well-known words, the submarine grottoes decked with a thousand beauties, have indulged in rather exuberant language."

What lingers in the back of Darwin's mind, in the days and weeks to come, is not the beauty of the submarine grotto but rather the "infinite numbers" of organic beings. On land, the flora and fauna of the Keeling Islands are paltry at best. Among the plants, there is little but "cocoa-nut" trees, lichen, and weeds. "The list of land animals," he writes, "is even poorer than that of the plants": a handful of lizards, almost no true land birds, and those recent immigrants from European ships, rats. "The island has no domestic quadruped excepting the pig," Darwin notes with disdain.

Yet just a few feet away from this desolate habitat, in the coral reef waters, an epic diversity, rivaled only by that of the rain forests, thrives. This is a true mystery. Why should the waters at the edge of an atoll support so many different livelihoods? Extract ten thousand cubic feet of water from just about anywhere in the Indian Ocean and do a full inventory on the life you find there: the list would be about as "poor" as Darwin's account of the land animals of the Keelings. You might find a dozen fish if you were lucky. On the reef, you would be guaranteed a thousand. In Darwin's own words, stumbling across the ecosystem of a coral reef in the middle of an ocean was like encountering a swarming oasis in the middle of a desert. We now call this phenomenon Darwin's Paradox: so many different life forms, occupying such a vast array of ecological niches, inhabiting waters that are otherwise remarkably nutrient-poor. Coral reefs make up about one-tenth of one percent of the earth's surface, and yet roughly a quarter of the known species of marine life make their homes there. Darwin doesn't have those statistics available to him, standing in the lagoon in 1836, but he has seen enough of the world over the preceding four years on the Beagle to know there is something peculiar in the crowded waters of the reef.

The next day, Darwin ventures to the windward side of the atoll with the Beagle's captain, Robert FitzRoy, and there they watch massive waves crash against the coral's white barrier. An ordinary European spectator, accustomed to the calmer waters of the English Channel or the Mediterranean, would be naturally drawn to the impressive crest of the surf. (The breakers, Darwin observes, are almost "equal in force [to] those during a gale of wind in the temperate regions, and never cease to rage.") But Darwin has his eye on something else-not the violent surge of water but the force that resists it: the tiny organisms that have built the reef itself.

The ocean throwing its waters over the broad reef appears an invincible, all-powerful enemy; yet we see it resisted, and even conquered, by means which at first seem most weak and inefficient. It is not that the ocean spares the rock of coral; the great fragments scattered over the reef, and heaped on the beach, whence the tall cocoa-nut springs, plainly bespeak the unrelenting power of the waves . . . Yet these low, insignificant coral-islets stand and are victorious: for here another power, as an antagonist, takes part in the contest. The organic forces separate the atoms of carbonate of lime, one by one, from the foaming breakers, and unite them into a symmetrical structure. Let the hurricane tear up its thousand huge fragments; yet what will that tell against the accumulated labour of myriads of architects at work night and day, month after month?

Darwin is drawn to those minuscule architects because he believes they are the key to solving the mystery that has brought the Beagle to the Keeling Islands. In the Admiralty's memorandum authorizing the ship's five-year journey, one of the principal scientific directives is the investigation of atoll formation. Darwin's mentor, the brilliant geologist Charles Lyell, had recently proposed that atolls are created by undersea volcanoes that have been driven upward by powerful movements in the earth's crust. In Lyell's theory, the distinctive circular shape of an atoll emerges as coral colonies construct reefs along the circumference of the volcanic crater.

Darwin's mind had been profoundly shaped by Lyell's understanding of the deep time of geological transformation, but standing on the beach, watching the breakers crash against the coral, he knows that his mentor is wrong about the origin of the atolls. It is not a story of simple geology, he realizes. It is a story about the innovative persistence of life. And as he mulls the thought, there is a hint of something else in his mind, a larger, more encompassing theory that might account for the vast scope of life's innovations. The forms of things unknown are turning, slowly, into shapes.

Days later, back on the Beagle, Darwin pulls out his journal and reflects on that mesmerizing clash between surf and coral. Presaging a line he would publish thirty years later in the most famous passage from On the Origin of Species, Darwin writes, "I can hardly explain the reason, but there is to my mind much grandeur in the view of the outer shores of these lagoon-islands." In time, the reason would come to him.

The Superlinear City.

From an early age, the Swiss scientist Max Kleiber had a knack for testing the edges of convention. As an undergraduate in Zurich in the 1910s, he roamed the streets dressed in sandals and an open collar, shocking attire for the day. During his tenure in the Swiss army, he discovered that his superiors had been trading information with the Germans, despite the official Swiss position of neutrality in World War I. Appalled, he simply failed to appear at his next call-up, and was ultimately jailed for several months. By the time he had settled on a career in agricultural science, he had had enough of the restrictions of Zurich society. And so Max Kleiber charted a path that would be followed by countless sandal-wearing, nonconformist war protesters in the decades to come. He moved to California.

Kleiber set up shop at the agricultural college run by the University of California at Davis, in the heart of the fertile Central Valley. His research initially focused on cattle, measuring the impact body size had on their metabolic rates, the speed with which an organism burns through energy. Estimating metabolic rates had great practical value for the cattle industry, because it enabled farmers to predict with reasonable accuracy both how much food their livestock would require, and how much meat they would ultimately produce after slaughter. Shortly after his arrival at Davis, Kleiber stumbled across a mysterious pattern in his research, a mathematical oddity that soon brought a much more diverse array of creatures to be measured in his lab: rats, ring doves, pigeons, dogs, even humans.

Scientists and animal lovers had long observed that as life gets bigger, it slows down. Flies live for hours or days; elephants live for half-centuries. The hearts of birds and small mammals pump blood much faster than those of giraffes and blue whales. But the relationship between size and speed didn't seem to be a linear one. A horse might be five hundred times heavier than a rabbit, yet its pulse certainly wasn't five hundred times slower than the rabbit's. After a formidable series of measurements in his Davis lab, Kleiber discovered that this scaling phenomenon stuck to an unvarying mathematical script called "negative quarter-power scaling." If you plotted mass versus metabolism on a logarithmic grid, the result was a perfectly straight line that led from rats and pigeons all the way up to bulls and hippopotami.

Physicists were used to discovering beautiful equations like this lurking in the phenomena they studied, but mathematical elegance was a rarity in the comparatively messy world of biology. The more species Kleiber and his peers analyzed, the clearer the equation became: metabolism scales to mass to the negative quarter power. The math is simple enough: you take the square root of 1,000, which is (approximately) 31, and then take the square root of 31, which is (again, approximately) 5.5. This means that a cow, which is roughly a thousand times heavier than a woodchuck, will, on average, live 5.5 times longer, and have a heart rate that is 5.5 times slower than the woodchuck's. As the science writer George Johnson once observed, one lovely consequence of Kleiber's law is that the number of heartbeats per lifetime tends to be stable from species to species. Bigger animals just take longer to use up their quota.
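Kleiber's arithmetic can be checked directly: negative quarter-power scaling means the pace of life varies as mass raised to the negative one-quarter power, so a thousandfold difference in mass yields a factor of 1,000 to the one-quarter power, the square root of a square root, roughly 5.6. A minimal sketch (the function name is mine, for illustration only):

```python
# Kleiber's law: the pace of life (heart rate, inverse lifespan) scales as
# mass^(-1/4). This sketch checks the cow-vs-woodchuck arithmetic from the text.

def pace_ratio(mass_ratio: float) -> float:
    """Factor by which the pace of life slows for an animal mass_ratio times heavier."""
    return mass_ratio ** 0.25  # the square root of a square root

# A cow is roughly 1,000 times heavier than a woodchuck:
factor = pace_ratio(1000)
print(round(factor, 1))  # ~5.6: it lives ~5.6x longer, with a heart ~5.6x slower
```

The text's rounded figures (31, then 5.5) are the two successive square roots of 1,000; the exact value is about 5.62.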

Over the ensuing decades, Kleiber's law was extended down to the microscopic scale of bacteria and cell metabolism; even plants were found to obey negative quarter-power scaling in their patterns of growth. Wherever life appeared, whenever an organism had to figure out a way to consume and distribute energy through a body, negative quarter-power scaling governed the patterns of its development.

Several years ago, the theoretical physicist Geoffrey West decided to investigate whether Kleiber's law applied to one of life's largest creations: the superorganisms of human-built cities. Did the "metabolism" of urban life slow down as cities grew in size? Was there an underlying pattern to the growth and pace of life of metropolitan systems? Working out of the legendary Santa Fe Institute, where he served as president until 2009, West assembled an international team of researchers and advisers to collect data on dozens of cities around the world, measuring everything from crime to household electrical consumption, from new patents to gasoline sales.

When they finally crunched the numbers, West and his team were delighted to discover that Kleiber's negative quarter-power scaling governed the energy and transportation growth of city living. The number of gasoline stations, gasoline sales, road surface area, the length of electrical cables: all these factors follow the exact same power law that governs the speed with which energy is expended in biological organisms. If an elephant was just a scaled-up mouse, then, from an energy perspective, a city was just a scaled-up elephant.

But the most fascinating discovery in West's research came from the data that didn't turn out to obey Kleiber's law. West and his team discovered another power law lurking in their immense database of urban statistics. Every datapoint that involved creativity and innovation-patents, R&D budgets, "supercreative" professions, inventors-also followed a quarter-power law, in a way that was every bit as predictable as Kleiber's law. But there was one fundamental difference: the quarter-power law governing innovation was positive, not negative. A city that was ten times larger than its neighbor wasn't ten times more innovative; it was seventeen times more innovative. A metropolis fifty times bigger than a town was 130 times more innovative.

Kleiber's law proved that as life gets bigger, it slows down. But West's model demonstrated one crucial way in which human-built cities broke from the patterns of biological life: as cities get bigger, they generate ideas at a faster clip. This is what we call "superlinear scaling": if creativity scaled with size in a straight, linear fashion, you would of course find more patents and inventions in a larger city, but the number of patents and inventions per capita would be stable. West's power laws suggested something far more provocative: that despite all the noise and crowding and distraction, the average resident of a metropolis with a population of five million people was almost three times more creative than the average resident of a town of a hundred thousand. "Great cities are not like towns only larger," Jane Jacobs wrote nearly fifty years ago. West's positive quarter-power law gave that insight a mathematical foundation. Something about the environment of a big city was making its residents significantly more innovative than residents of smaller towns. But what was it?
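A positive quarter-power law amounts to saying that a city's total innovation output grows as its population raised to roughly the five-quarters power. The exponent of 1.25 in the sketch below is an assumption inferred from the figures in the text, not a number quoted from West's published data; the sketch simply shows that it reproduces those figures:

```python
# Superlinear scaling: total innovation ~ population^1.25 (a positive
# quarter-power law). The exponent 1.25 is inferred from the examples in the
# text, not taken from West's papers.

def innovation_ratio(size_ratio: float, exponent: float = 1.25) -> float:
    """Total-innovation multiplier for a city size_ratio times more populous."""
    return size_ratio ** exponent

print(round(innovation_ratio(10)))  # ~18x for a 10x larger city (text: "seventeen")
print(round(innovation_ratio(50)))  # ~133x for a 50x larger city (text: "130")

# Per-capita creativity for 5,000,000 vs. 100,000 residents (a 50x size ratio):
print(round(innovation_ratio(50) / 50, 1))  # ~2.7x per resident ("almost three times")
```

Dividing the total multiplier by the size ratio gives the per-capita gain, which is why a fiftyfold difference in population yields residents who are almost, but not quite, three times as creative.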

The 10/10 Rule.

The first national broadcast of a color television program took place on January 1, 1954, when NBC aired an hour-long telecast of the Tournament of Roses parade, and distributed it to twenty-two cities across the country. For those lucky enough to see the program, the effect of a moving color image on a small screen seems to have been mesmerizing. The New York Times, in typical language, called it a "veritable bevy of hues and depth." "To concentrate so much color information within the frame of a small screen," the Times wrote, "would be difficult for even the most gifted artist doing a 'still' painting. To do it with constantly moving pictures seemed pure wizardry." Alas, the Rose Parade "broadcast" turned out to be not all that broad, given that it was visible only on prototype televisions in RCA showrooms. Color programming would not become standard on prime-time shows until the late 1960s. After the advent of color, the basic conventions that defined the television image would go unchanged for decades. The delivery mechanisms began to diversify with the introduction of VCRs and cable in the late 1970s. But the image remained the same.

In the mid-1980s, a number of influential media and technology executives, along with a few visionary politicians, had the eminently good idea that it was time to upgrade the video quality of broadcast television. Speeches were delivered, committees formed, experimental prototypes built, but it wasn't until July 23, 1996, that a Raleigh, North Carolina, CBS affiliate initiated the first public transmission of an HDTV signal. Like the Tournament of Roses footage, though, there were no ordinary consumers with sets capable of displaying its "wizardry."1 A handful of broadcasters began transmitting HDTV signals in 1999, but HD television didn't become a mainstream consumer phenomenon for another five years. Even after the FCC mandated that all television stations cease broadcasting the old analog standard on June 12, 2009, more than 10 percent of U.S. households had televisions that went dark that day.

It is one of the great truisms of our time that we live in an age of technological acceleration; the new paradigms keep rolling in, and the intervals between them keep shortening. This acceleration reflects not only the flood of new products, but also our growing willingness to embrace these strange new devices, and put them to use. The waves roll in at ever-increasing frequencies, and more and more of us are becoming trained surfers, paddling out to meet them the second they start to crest. But the HDTV story suggests that this acceleration is hardly a universal law. If you measure how quickly a new technology progresses from an original idea to mass adoption, then it turns out that HDTV was traveling at the exact same speed that color television had traveled four decades earlier. It took ten years for color TV to go from the fringes to the mainstream; two generations later, it took HDTV just as long to achieve mass success.

In fact, if you look at the entirety of the twentieth century, the most important developments in mass, one-to-many communications clock in at the same social innovation rate with an eerie regularity. Call it the 10/10 rule: a decade to build the new platform, and a decade for it to find a mass audience. The technology standard of amplitude-modulated radio-what we now call AM radio-evolved in the first decade of the twentieth century. The first commercial AM station began broadcasting in 1920, but it wasn't until the late 1920s that radios became a fixture in American households. Sony inaugurated research into the first consumer videocassette recorder in 1969, but didn't ship its first Betamax for another seven years, and VCRs didn't become a household necessity until the mid-eighties. The DVD player didn't statistically replace the VCR in American households until 2006, nine years after the first players went on the market. Cell phones, personal computers, GPS navigation devices-all took a similar time frame to go from innovation to mass adoption.

Consider, as an alternate scenario, the story of Chad Hurley, Steve Chen, and Jawed Karim, three former employees of the online payment site PayPal, who decided in early 2005 that the Web was ripe for an upgrade in the way it handled video and sound. Video, of course, was not native to the Web, which had begun its life fifteen years before as a platform for academics to share hypertext documents. But over the years, video clips had begun to trickle their way online, thanks to new video standards that emerged, such as QuickTime, Flash, or Windows Media Player. But the mechanisms that allowed people to upload and share their own videos were too challenging for most ordinary users. So Hurley, Chen, and Karim cobbled together a rough beta for a service that would correct these deficiencies, raised less than $10 million in venture capital, hired about two dozen people, and launched YouTube, a website that utterly transformed the way video information is shared online. Within sixteen months of the company's founding, the service was streaming more than 30 million videos a day. Within two years, YouTube was one of the top-ten most visited sites on the Web. Before Hurley, Chen, and Karim hit upon their idea for a start-up, video on the Web was as common as subtitles on television. The Web was about doing things with text, and uploading the occasional photo. YouTube brought Web video into the mainstream.

Now compare the way these two ideas-HDTV and YouTube- changed the basic rules of engagement for their respective platforms. Going from analog television to HDTV is a change in degree, not in kind: there are more pixels; the sound is more immersive; the colors are sharper. But consumers watch HDTV the exact same way they watched old-fashioned analog TV. They choose a channel, and sit back and watch. YouTube, on the other hand, radically altered the basic rules of the medium. For starters, it made watching video on the Web a mass phenomenon. But with YouTube you weren't limited to sitting and watching a show, television-style; you could also upload your own clips, recommend or rate other clips, get into a conversation about them. With just a few easy keystrokes, you could take a clip running on someone else's site, and drop a copy of it onto your own site. The technology allowed ordinary enthusiasts to effectively program their own private television networks, stitching together video clips from all across the planet.

Some will say that this is merely a matter of software, which is intrinsically more adaptable than hardware like televisions or cellular phones. But before the Web became mainstream in the mid-1990s, the pace of software innovation followed the exact same 10/10 pattern of development that we saw in the spread of other twentieth-century technologies. The graphical user interface, for instance, dates back to a famous technology demo given by pioneering computer scientist Doug Engelbart in 1968. During the 1970s, many of its core elements-like the now ubiquitous desktop metaphor-were developed by researchers at Xerox PARC. But the first commercial product with a fully realized graphical user interface didn't ship until 1981, in the form of the Xerox Star workstation, followed by the Macintosh in 1984, the first graphical user interface to reach a mainstream, if niche, audience. But it wasn't until the release of Windows 3.0 in 1990-almost exactly ten years after the Xerox Star hit the market-that graphical user interfaces became the norm. The same pattern occurs in the developmental history of other software genres, such as word processors, spreadsheets, or e-mail clients. They were all built out of bits, not atoms, but they took just as long to go from idea to mass success as HDTV did.

There are many ways to measure innovation, but perhaps the most elemental yardstick, at least where technology is concerned, revolves around the job that the technology in question lets you do. All other things being equal, a breakthrough that lets you execute two jobs that were impossible before is twice as innovative as a breakthrough that lets you do only one new thing. By that measure, YouTube was significantly more innovative than HDTV, despite the fact that HDTV was a more complicated technical problem. YouTube let you publish, share, rate, discuss, and watch video more efficiently than ever before. HDTV let you watch more pixels than ever before. But even with all those extra layers of innovation, YouTube went from idea to mass adoption in less than two years. Something about the Web environment had enabled Hurley, Chen, and Karim to unleash a good idea on the world with astonishing speed. They took the 10/10 rule and made it 1/1.

This is a book about the space of innovation. Some environments squelch new ideas; some environments seem to breed them effortlessly. The city and the Web have been such engines of innovation because, for complicated historical reasons, they are both environments that are powerfully suited for the creation, diffusion, and adoption of good ideas. Neither environment is perfect, by any means. (Think of crime rates in big cities, or the explosion of spam online.) But both the city and the Web possess an undeniable track record at generating innovation.2 In the same way, the "myriad tiny architects" of Darwin's coral reef create an environment where biological innovation can flourish. If we want to understand where good ideas come from, we have to put them in context. Darwin's world-changing idea unfolded inside his brain, but think of all the environments and tools he needed to piece it together: a ship, an archipelago, a notebook, a library, a coral reef. Our thought shapes the spaces we inhabit, and our spaces return the favor.

The argument of this book is that a series of shared properties and patterns recur again and again in unusually fertile environments. I have distilled them down into seven patterns, each one occupying a separate chapter. The more we embrace these patterns-in our private work habits and hobbies, in our office environments, in the design of new software tools-the better we will be at tapping our extraordinary capacity for innovative thinking.3 These patterns turn out to have a long history, much older than most of the systems that we conventionally associate with innovation. This history is particularly rich because it is not exclusively limited to human creations like the Internet or the metropolis. The amplification and adoption of useful innovation exist throughout natural history as well. Coral reefs are sometimes called the "cities of the sea," and part of the argument of this book is that we need to take the metaphor seriously: the reef ecosystem is so innovative in its exploitation of those nutrient-poor waters because it shares some defining characteristics with actual cities.

In the language of complexity theory, these patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you're looking at the original innovations of carbon-based life, or the explosion of new software tools on the Web, the same shapes keep turning up. When life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are emergent and self-organizing, or whether they are deliberately crafted by human agents.

It may seem odd to talk about such different regions of experience as though they were interchangeable. But in fact, we are constantly making equivalent conceptual leaps from biology to culture without blinking. It is not a figure of speech to say that the pattern of "competition"-a term often associated with innovation-plays a critical role in the behavior of marketplaces, in the interaction between a swarm of sperm cells and an egg, and in the ecosystem-scale battle between organisms for finite energy sources. We are not using a metaphor of economic competition to describe the struggles of those sperm cells: the meaning of the word "competition" is wide (or perhaps deep) enough to encompass sperm cells and corporations. The same principle applies to the seven patterns I have assembled here.

Traveling across these different environments and scales is not merely intellectual tourism. Science long ago realized that we can understand something better by studying its behavior in different contexts. When we want to answer a question like "Why has the Web been so innovative?" we naturally invoke thoughts of its creators, and the workspaces, organizations, and information networks they used in building it. But it turns out that we can answer the question more comprehensively if we draw analogies to patterns of innovation that we see in ecosystems like Darwin's coral reef, or in the structure of the human brain. We have no shortage of theories to instruct us how to make our organizations more creative, or explain why tropical rain forests engineer so much molecular diversity. What we lack is a unified theory that describes the common attributes shared by all those innovation systems. Why is a coral reef such an engine of biological innovation? Why do cities have such an extensive history of idea creation? Why was Darwin able to hit upon a theory that so many brilliant contemporaries of his missed? No doubt there are partial answers to these questions that are unique to each situation, and each scale: the ecological history of the reef; the sociology of urban life; the intellectual biography of a scientist. But the argument of this book is that there are other, more interesting answers that are applicable to all three situations, and that by approaching the problem in this fractal, cross-disciplinary way, new insights become visible. Watching the ideas spark on these different scales reveals patterns that single-scale observations easily miss or undervalue.

I call that vantage point the long zoom. It can be imagined as a kind of hourglass:

As you descend toward the center of the glass, the biological scales contract: from the global, deep time of evolution to the microscopic exchanges of neurons or DNA. At the center of the glass, the perspective shifts from nature to culture, and the scales widen: from individual thoughts and private workspaces to immense cities and global information networks. When we look at the history of innovation from the vantage point of the long zoom, what we find is that unusually generative environments display similar patterns of creativity at multiple scales simultaneously. You can't explain the biodiversity of the coral reef by simply studying the genetics of the coral itself. The reef generates and sustains so many different forms of life because of patterns that recur on the scales of cells, organisms, and the wider ecosystem itself. The sources of innovation in the city and the Web are equally fractal. In this sense, seeing the problem of innovation from the long-zoom perspective does not just give us new metaphors. It gives us new facts.

The pattern of "competition" is an excellent case in point. Every economics textbook will tell you that competition between rival firms leads to innovation in their products and services. But when you look at innovation from the long-zoom perspective, competition turns out to be less central to the history of good ideas than we generally think. Analyzing innovation on the scale of individuals and organizations-as the standard textbooks do-distorts our view. It creates a picture of innovation that overstates the role of proprietary research and "survival of the fittest" competition. The long-zoom approach lets us see that openness and connectivity may, in the end, be more valuable to innovation than purely competitive mechanisms. Those patterns of innovation deserve recognition-in part because it's intrinsically important to understand why good ideas emerge historically, and in part because by embracing these patterns we can build environments that do a better job of nurturing good ideas, whether those environments are schools, governments, software platforms, poetry seminars, or social movements. We can think more creatively if we open our minds to the many connected environments that make creativity possible.

The academic literature on innovation and creativity is rich with subtle distinctions between innovations and inventions, between different modes of creativity: artistic, scientific, technological. I have deliberately chosen the broadest possible phrasing-good ideas-to suggest the cross-disciplinary vantage point I am trying to occupy. The good ideas in this survey range from software platforms to musical genres to scientific paradigms to new models for government. My premise is that there is as much value to be found in seeking the common properties across all these varied forms of innovation and creativity as there is value to be found in documenting the differences between them. The poet and the engineer (and the coral reef) may seem a million miles apart in their particular forms of expertise, but when they bring good ideas into the world, similar patterns of development and collaboration shape that process.

If there is a single maxim that runs through this book's arguments, it is that we are often better served by connecting ideas than we are by protecting them. Like the free market itself, the case for restricting the flow of innovation has long been buttressed by appeals to the "natural" order of things. But the truth is, when one looks at innovation in nature and in culture, environments that build walls around good ideas tend to be less innovative in the long run than more open-ended environments. Good ideas may not want to be free, but they do want to connect, fuse, recombine. They want to reinvent themselves by crossing conceptual borders. They want to complete each other as much as they want to compete.

I.

THE ADJACENT POSSIBLE.

Sometime in the late 1870s, a Parisian obstetrician named Stephane Tarnier took a day off from his work at Maternite de Paris, the lying-in hospital for the city's poor women, and paid a visit to the nearby Paris Zoo. Wandering past the elephants and reptiles and classical gardens of the zoo's home inside the Jardin des Plantes, Tarnier stumbled across an exhibit of chicken incubators. Seeing the hatchlings totter about in the incubator's warm enclosure triggered an association in his head, and before long he had hired Odile Martin, the zoo's poultry raiser, to construct a device that would perform a similar function for human newborns. By modern standards, infant mortality was staggeringly high in the late nineteenth century, even in a city as sophisticated as Paris. One in five babies died before learning to crawl, and the odds were far worse for premature babies born with low birth weights. Tarnier knew that temperature regulation was critical for keeping these infants alive, and he knew that the French medical establishment had a deep-seated obsession with statistics. And so as soon as his newborn incubator had been installed at Maternite, the fragile infants warmed by hot water bottles below the wooden boxes, Tarnier embarked on a quick study of five hundred babies. The results shocked the Parisian medical establishment: while 66 percent of low-weight babies died within weeks of birth, only 38 percent died if they were housed in Tarnier's incubating box. You could effectively halve the mortality rate for premature babies simply by treating them like hatchlings in a zoo.

Tarnier's incubator was not the first device employed for warming newborns, and the contraption he built with Martin would be improved upon significantly in the subsequent decades. But Tarnier's statistical analysis gave newborn incubation the push that it needed: within a few years, the Paris municipal board required that incubators be installed in all the city's maternity hospitals. In 1896, an enterprising physician named Alexandre Lion set up a display of incubators-with live newborns-at the Berlin Exposition. Dubbed the Kinderbrutenstalt, or "child hatchery," Lion's exhibit turned out to be the sleeper hit of the exposition, and launched a bizarre tradition of incubator sideshows that persisted well into the twentieth century. (Coney Island had a permanent baby incubator show until the early 1940s.) Modern incubators, supplemented with high-oxygen therapy and other advances, became standard equipment in all American hospitals after the end of World War II, triggering a spectacular 75 percent decline in infant mortality rates between 1950 and 1998. Because incubators focus exclusively on the beginning of life, their benefit to public health-measured by the sheer number of extra years they provide-rivals any medical advance of the twentieth century. Radiation therapy or a double bypass might give you another decade or two, but an incubator gives you an entire lifetime.

In the developing world, however, the infant mortality story remains bleak. Whereas infant deaths are below ten per thousand births throughout Europe and the United States, over a hundred infants die per thousand in countries like Liberia and Ethiopia, many of them premature babies that would have survived with access to incubators. But modern incubators are complex, expensive things. A standard incubator in an American hospital might cost more than $40,000. But the expense is arguably the smaller hurdle to overcome. Complex equipment breaks, and when it breaks you need the technical expertise to fix it, and you need replacement parts. In the year that followed the 2004 Indian Ocean tsunami, the Indonesian city of Meulaboh received eight incubators from a range of international relief organizations. By late 2008, when an MIT professor named Timothy Prestero visited the hospital, all eight were out of order, the victims of power surges and tropical humidity, along with the hospital staff's inability to read the English repair manual. The Meulaboh incubators were a representative sample: some studies suggest that as much as 95 percent of medical technology donated to developing countries breaks within the first five years of use.

Prestero had a vested interest in those broken incubators, because the organization he founded, Design that Matters, had been working for several years on a new scheme for a more reliable, and less expensive, incubator, one that recognized complex medical technology was likely to have a very different tenure in a developing world context than it would in an American or European hospital. Designing an incubator for a developing country wasn't just a matter of creating something that worked; it was also a matter of designing something that would break in a non-catastrophic way. You couldn't guarantee a steady supply of spare parts, or trained repair technicians. So instead, Prestero and his team decided to build an incubator out of parts that were already abundant in the developing world. The idea had originated with a Boston doctor named Jonathan Rosen, who had observed that even the smaller towns of the developing world seemed to be able to keep automobiles in working order. The towns might have lacked air conditioning and laptops and cable television, but they managed to keep their Toyota 4Runners on the road. So Rosen approached Prestero with an idea: What if you made an incubator out of automobile parts?

Three years after Rosen suggested the idea, the Design that Matters team introduced a prototype device called the NeoNurture. From the outside, it looked like a streamlined modern incubator, but its guts were automotive. Sealed-beam headlights supplied the crucial warmth; dashboard fans provided filtered air circulation; door chimes sounded alarms. You could power the device via an adapted cigarette lighter, or a standard-issue motorcycle battery. Building the NeoNurture out of car parts was doubly efficient, because it tapped both the local supply of parts themselves and the local knowledge of automobile repair. These were both abundant resources in the developing world context, as Rosen liked to say. You didn't have to be a trained medical technician to fix the NeoNurture; you didn't even have to read the manual. You just needed to know how to replace a broken headlight.

Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them. We have a natural tendency to romanticize breakthrough innovations, imagining momentous ideas transcending their surroundings, a gifted mind somehow seeing over the detritus of old ideas and ossified tradition. But ideas are works of bricolage; they're built out of that detritus. We take the ideas we've inherited or that we've stumbled across, and we jigger them together into some new shape. We like to think of our ideas as $40,000 incubators, shipped direct from the factory, but in reality they've been cobbled together with spare parts that happened to be sitting in the garage.

Before his untimely death in 2002, the evolutionary biologist Stephen Jay Gould maintained an odd collection of footwear that he had purchased during his travels through the developing world, in open-air markets in Quito, Nairobi, and Delhi. They were sandals made from recycled automobile tires. As a fashion statement, they may not have amounted to much, but Gould treasured his tire sandals as a testimony to "human ingenuity." But he also saw them as a metaphor for the patterns of innovation in the biological world. Nature's innovations, too, rely on spare parts. Evolution advances by taking available resources and cobbling them together to create new uses. The evolutionary theorist Francois Jacob captured this in his concept of evolution as a "tinkerer," not an engineer; our bodies are also works of bricolage, old parts strung together to form something radically new. "The tires-to-sandals principle works at all scales and times," Gould wrote, "permitting odd and unpredictable initiatives at any moment-to make nature as inventive as the cleverest person who ever pondered the potential of a junkyard in Nairobi."

You can see this process at work in the primordial innovation of life itself. We do not yet have scientific consensus on the specifics of life's origins. Some believe life originated in the boiling, metallic vents of undersea volcanoes; others suspect the open oceans; others point to the tidal ponds where Darwin believed life first took hold. Many respected scientists think that life may have arrived from outer space, embedded in a meteor. But we have a much clearer picture of the composition of earth's atmosphere before life emerged, thanks to a field known as prebiotic chemistry. The lifeless earth was dominated by a handful of basic molecules: ammonia, methane, water, carbon dioxide, a smattering of amino acids, and other simple organic compounds. Each of these molecules was capable of a finite series of transformations and exchanges with other molecules in the primordial soup: methane and oxygen recombining to form formaldehyde and water, for instance.

Think of all those initial molecules, and then imagine all the potential new combinations that they could form spontaneously, simply by colliding with each other (or perhaps prodded along by the extra energy of a propitious lightning strike). If you could play God and trigger all those combinations, you would end up with most of the building blocks of life: the proteins that form the boundaries of cells; sugar molecules crucial to the nucleic acids of our DNA. But you would not be able to trigger chemical reactions that would build a mosquito, or a sunflower, or a human brain. Formaldehyde is a first-order combination: you can create it directly from the molecules in the primordial soup. The atomic elements that make up a sunflower are the very same ones available on earth before the emergence of life, but you can't spontaneously create a sunflower in that environment, because it relies on a whole series of subsequent innovations that wouldn't evolve on earth for billions of years: chloroplasts to capture the sun's energy, vascular tissues to circulate resources through the plant, DNA molecules to pass on sunflower-building instructions to the next generation.

The scientist Stuart Kauffman has a suggestive name for the set of all those first-order combinations: "the adjacent possible." The phrase captures both the limits and the creative potential of change and innovation. In the case of prebiotic chemistry, the adjacent possible defines all those molecular reactions that were directly achievable in the primordial soup. Sunflowers and mosquitoes and brains exist outside that circle of possibility. The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself. Yet it is not an infinite space, or a totally open playing field. The number of potential first-order reactions is vast, but it is a finite number, and it excludes most of the forms that now populate the biosphere. What the adjacent possible tells us is that at any moment the world is capable of extraordinary change, but only certain changes can happen.

The strange and beautiful truth about the adjacent possible is that its boundaries grow as you explore those boundaries. Each new combination ushers new combinations into the adjacent possible. Think of it as a house that magically expands with each door you open. You begin in a room with four doors, each leading to a new room that you haven't visited yet. Those four rooms are the adjacent possible. But once you open one of those doors and stroll into that room, three new doors appear, each leading to a brand-new room that you couldn't have reached from your original starting point. Keep opening new doors and eventually you'll have built a palace.
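The expanding-house image lends itself to a toy simulation. The sketch below is purely illustrative (it is not from the book, and the names `adjacent_possible`, `built`, and `frontier_sizes` are invented for it): treat each existing element as a "room," define the adjacent possible as every pairwise combination of existing elements not yet built, and watch the frontier of reachable combinations grow each time one door is opened.

```python
from itertools import combinations

def adjacent_possible(built):
    """First-order combinations: every unordered pair of existing
    elements, minus the combinations that already exist."""
    return {f"({a}+{b})" for a, b in combinations(sorted(built), 2)} - built

# Start with four primordial "molecules".
built = {"ammonia", "methane", "water", "CO2"}

# Each door we open (each combination we actually make) enlarges
# the set of doors available at the next step.
frontier_sizes = []
for _ in range(3):
    frontier = adjacent_possible(built)
    frontier_sizes.append(len(frontier))
    built.add(min(frontier))  # open one door, deterministically

print(frontier_sizes)  # → [6, 9, 13]: the frontier grows as it is explored
```

The point of the sketch is the shape of the numbers, not their values: every element added to the set creates new pairings with everything already present, so exploring the adjacent possible enlarges it, just as opening a room reveals doors that were unreachable from the starting point.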

Basic fatty acids will naturally self-organize into spheres lined with a dual layer of molecules, very similar to the membranes that define the boundaries of modern cells. Once the fatty acids combine to form those bounded spheres, a new wing of the adjacent possible opens up, because those molecules implicitly create a fundamental division between the inside and outside of the sphere. This division is the very essence of a cell. Once you have an "inside," you can put things there: food, organelles, genetic code. Small molecules can pass through the membrane and then combine with other molecules to form larger entities too big to escape back through the boundaries of the proto-cell. When the first fatty acids spontaneously formed those dual-layered membranes, they opened a door into the adjacent possible that would ultimately lead to nucleotide-based genetic code, and the power plants of the chloroplasts and mitochondria-the primary "inhabitants" of all modern cells.

The same pattern appears again and again throughout the evolution of life. Indeed, one way to think about the path of evolution is as a continual exploration of the adjacent possible. When dinosaurs such as the velociraptor evolved a new bone called the semilunate carpal (the name comes from its half-moon shape), it enabled them to swivel their wrists with far more flexibility. In the short term, this gave them more dexterity as predators, but it also opened a door in the adjacent possible that would eventually lead, many millions of years later, to the evolution of wings and flight. When our ancestors evolved opposable thumbs, they opened up a whole new cultural branch of the adjacent possible: the creation and use of finely crafted tools and weapons.

One of the things that I find so inspiring in Kauffman's notion of the adjacent possible is the continuum it suggests between natural and man-made systems. He introduced the concept in part to illustrate a fascinating secular trend shared by both natural and human history: this relentless pushing back against the barricades of the adjacent possible. "Something has obviously happened in the past 4.8 billion years," he writes. "The biosphere has expanded, indeed, more or less persistently exploded, into the ever-expanding adjacent possible. . . . It is more than slightly interesting that this fact is clearly true, that it is rarely remarked upon, and that we have no particular theory for this expansion." Four billion years ago, if you were a carbon atom, there were a few hundred molecular configurations you could stumble into. Today that same carbon atom, whose atomic properties haven't changed one single nanogram, can help build a sperm whale or a giant redwood or an H1N1 virus, along with a near-infinite list of other carbon-based life forms that were not part of the adjacent possible of prebiotic earth. Add to that an equally formidable list of human concoctions that rely on carbon-every single object on the planet made of plastic, for instance-and you can see how far the kingdom of the adjacent possible has expanded since those fatty acids self-assembled into the first membrane.

The history of life and human culture, then, can be told as the story of a gradual but relentless probing of the adjacent possible, each new innovation opening up new paths to explore. But some systems are more adept than others at exploring those possibility spaces. The mystery of Darwin's paradox that we began with ultimately revolves around the question of why a coral reef ecosystem should be so adventurous in its exploration of the adjacent possible-so many different life forms sharing such a small space-while the surrounding waters of the ocean lack that same marvelous diversity. Similarly, the environments of big cities allow far more commercial exploration of the adjacent possible than towns or villages, allowing tradesmen and entrepreneurs to specialize in fields that would be unsustainable in smaller population centers. The Web has explored the adjacent possible of its medium far faster than any other communications technology in history. In early 1994, the Web was a text-only medium, pages of words connected by hyperlinks. But within a few years, the possibility space began to expand. It became a medium that let you do financial transactions, which turned it into a shopping mall and an auction house and a casino. Shortly afterward, it became a true two-way medium where it was as easy to publish your own writing as it was to read other people's, which engendered forms that the world had never seen before: user-authored encyclopedias, the blogosphere, social network sites. YouTube made the Web one of the most influential video delivery mechanisms on the planet. And now digital maps are unleashing their own cartographic revolutions.

You can see the fingerprints of the adjacent possible in one of the most remarkable patterns in all of intellectual history, what scholars now call "the multiple": A brilliant idea occurs to a scientist or inventor somewhere in the world, and he goes public with his remarkable finding, only to discover that three other minds had independently come up with the same idea in the past year. Sunspots were simultaneously discovered in 1611 by four scientists living in four different countries. The first electrical battery was invented separately by Dean Von Kleist and Cuneus of Leyden in 1745 and 1746. Joseph Priestley and Carl Wilhelm Scheele independently isolated oxygen between 1772 and 1774. The law of the conservation of energy was formulated separately four times in the late 1840s. The evolutionary importance of genetic mutation was proposed by S. Korschinsky in 1899 and then by Hugo de Vries in 1901, while the impact of X-rays on mutation rates was independently uncovered by two scholars in 1927. The telephone, telegraph, steam engine, photograph, vacuum tube, radio-just about every essential technological advance of modern life has a multiple lurking somewhere in its origin story.

In the early 1920s, two Columbia University scholars named William Ogburn and Dorothy Thomas decided to track down as many multiples as they could find, eventually publishing their survey in an influential essay with the delightful title "Are Inventions Inevitable?" Ogburn and Thomas found 148 instances of independent innovation, most of them occurring within the same decade. Reading the list now, one is struck not just by the sheer number of cases, but by how indistinguishable the list is from an unfiltered history of big ideas. Multiples have been invoked to support hazy theories about the "zeitgeist," but they have a much more grounded explanation. Good ideas are not conjured out of thin air; they are built out of a collection of existing parts, the composition of which expands (and, occasionally, contracts) over time. Some of those parts are conceptual: ways of solving problems, or new definitions of what constitutes a problem in the first place. Some of them are, literally, mechanical parts. To go looking for oxygen, Priestley and Scheele needed the conceptual framework that the air was itself something worth studying and that it was made up of distinct gases; neither of these ideas became widely accepted until the second half of the eighteenth century. But they also needed the advanced scales that enabled them to measure the minuscule changes in weight triggered by oxidation, technology that was itself only a few decades old in 1774. When those parts became available, the discovery of oxygen entered the realm of the adjacent possible. Isolating oxygen was, as the saying goes, "in the air," but only because a specific set of prior discoveries and inventions had made that experiment thinkable.

The adjacent possible is as much about limits as it is about openings. At every moment in the timeline of an expanding biosphere, there are doors that cannot be unlocked yet. In human culture, we like to think of breakthrough ideas as sudden accelerations on the timeline, where a genius jumps ahead fifty years and invents something that normal minds, trapped in the present moment, couldn't possibly have come up with. But the truth is that technological (and scientific) advances rarely break out of the adjacent possible; the history of cultural progress is, almost without exception, a story of one door leading to another door, exploring the palace one room at a time. But of course, human minds are not bound by the finite laws of molecule formation, and so every now and then an idea does occur to someone that teleports us forward a few rooms, skipping some exploratory steps in the adjacent possible. But those ideas almost always end up being short-term failures, precisely because they have skipped ahead. We have a phrase for those ideas: we call them "ahead of their time."

Consider the legendary Analytical Engine designed by nineteenth-century British inventor Charles Babbage, who is considered by most technology historians to be the father of modern computing, though he should probably be called the great-grandfather of modern computing, because it took several generations for the world to catch up to his idea. Babbage is actually in the pantheon for two inventions, neither of which he managed to build during his lifetime. The first was his Difference Engine, a fantastically complex fifteen-ton contraption, with over 25,000 mechanical parts, designed to calculate polynomial functions that were essential to creating the trigonometric tables crucial to navigation. Had Babbage actually completed his project, the Difference Engine would have been the world's most advanced mechanical calculator. When the London Science Museum constructed one from Babbage's plans to commemorate the bicentennial of his birth, the machine returned accurate results to thirty-one places in a matter of seconds. Both the speed and precision of the device would have exceeded anything else possible in Babbage's time by several orders of magnitude.

For all its complexity, however, the Difference Engine was well within the adjacent possible of Victorian technology. The second half of the nineteenth century saw a steady stream of improvements to mechanical calculation, many of them building on Babbage's architecture. The Swedish inventor Per Georg Scheutz constructed a working Difference Engine that debuted at the Exposition Universelle of 1855; within two decades the piano-sized Scheutz design had been reduced to the size of a sewing machine. In 1884, an American inventor named William S. Burroughs founded the American Arithmometer Company to sell mass-produced calculators to businesses around the country. (The fortune generated by those machines would help fund his namesake grandson's writing career, not to mention his drug habit, almost a century later.) Babbage's design for the Difference Engine was a work of genius, no doubt, but it did not transcend the adjacent possible of its day.

The same cannot be said of Babbage's other brilliant idea: the Analytical Engine, the great unfulfilled project of Babbage's career, which he toiled on for the last thirty years of his life. The machine was so complicated that it never got past the blueprint stage, save a small portion that Babbage built shortly before his death in 1871. The Analytical Engine was-on paper, at least-the world's first programmable computer. Being programmable meant that the machine was fundamentally open-ended; it wasn't designed for a specific set of tasks, the way the Difference Engine had been optimized for polynomial equations. The Analytical Engine was, like all modern computers, a shape-shifter, capable of reinventing itself based on the instructions conjured by its programmers. (The brilliant mathematician Ada Lovelace, the only daughter of Lord Byron, wrote several sets of instructions for Babbage's still-vaporware Analytical Engine, earning her the title of the world's first programmer.) Babbage's design for the engine anticipated the basic structure of all contemporary computers: "programs" were to be inputted via punch cards, which had been invented decades before to control textile looms; instructions and data were captured in a "store," the equivalent of what we now call random access memory, or RAM; and calculations were executed via a system that Babbage called "the mill," using industrial-era language to describe what we now call the central processing unit, or CPU.

Babbage had most of this system sketched out by 1837, but the first true computer to use this programmable architecture didn't appear for more than a hundred years. While the Difference Engine engendered an immediate series of refinements and practical applications, the Analytical Engine effectively disappeared from the map. Many of the pioneering insights that Babbage had hit upon in the 1830s had to be independently rediscovered by the visionaries of World War II-era computer science.

Why did the Analytical Engine prove to be such a short-term dead end, given the brilliance of Babbage's ideas? The fancy way to say it is that his ideas had escaped the bounds of the adjacent possible. But it is perhaps better put in more prosaic terms: Babbage simply didn't have the right spare parts. Even if Babbage had built a machine to his specs, it is unclear whether it would have worked, because Babbage was effectively sketching out a machine for the electronic age during the middle of the steam-powered mechanical revolution. Unlike all modern computers, Babbage's machine was to be composed entirely of mechanical gears and switches, staggering in their number and in the intricacy of their design. Information flowed through the system as a constant ballet of metal objects shifting positions in carefully choreographed movements. It was a maintenance nightmare, but more than that, it was bound to be hopelessly slow. Babbage bragged to Ada Lovelace that he believed the machine would be able to multiply two twenty-digit numbers in three minutes. Even if he was right-Babbage wouldn't have been the first tech entrepreneur to exaggerate his product's performance-that kind of processing time would have