The Future: Six Drivers of Global Change

Part 3

Improvements to the printing press led to lower costs and the proliferation of printers looking for material to publish. Entry barriers were very low, both for obtaining the printed works of others and for contributing one's own thoughts. Soon the demand for knowledge led to modern works-from Cervantes and Shakespeare to journals and then newspapers. Ideas that found resonance with large numbers of people attracted a larger audience still-in the manner of a Google search today.

In the Age of Enlightenment that ensued, knowledge and reason became a source of political power that rivaled wealth and force of arms. The possibility of self-governance within a framework of representative democracy was itself an outgrowth of this new public square created within the information ecosystem of the printing press. Individuals with the freedom to read and communicate with others could make decisions collectively and shape their own destiny.

At the beginning of January in 1776, Thomas Paine-who had migrated from England to Philadelphia with no money, no family connections, and no source of influence other than an ability to express himself clearly in the printed word-published Common Sense, the pamphlet that helped to ignite the American War of Independence that July. The theory of modern free market capitalism, codified by Adam Smith in the same year, operated according to the same underlying principles. Individuals with free access to information about markets could freely choose to buy or sell-and the aggregate of all their decisions would constitute an "invisible hand" to allocate resources, balance supply with demand, and set prices at an optimal level to maximize economic efficiency. It is fitting that the first volume of Gibbon's Decline and Fall of the Roman Empire was also published in the same year. Its runaway popularity was a counterpoint to the prevailing exhilaration about the future. The old order was truly gone; those of the present generation were busy making the world new again, with new ways of thinking and new institutions shaped by the print revolution.

It should not surprise us, then, that the Digital Revolution, which is sweeping the world much faster and more powerfully than the Print Revolution did in its time, is ushering in another wave of new societal, cultural, political, and commercial patterns that are beginning to make our world new yet again. As dramatic as the changes wrought by the Print Revolution were (and as were those wrought earlier by the introduction of complex speech, writing, and phonetic alphabets), none of these previous waves of change remotely compares with what we are now beginning to experience as a result of today's emergent combination of nearly ubiquitous computing and access to the Internet. Computers have been roughly doubling in processing power (per dollar spent) every eighteen to twenty-four months for the last half-century. This remarkable pattern-which follows Moore's Law-has continued in spite of periodic predictions that it would soon run its course. Though some experts believe that Moore's Law may now finally be expiring over the next decade, others believe that new advances such as quantum computing will lead to continued rapid increases in computing power.
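To make the compounding concrete, here is a back-of-the-envelope sketch (in Python, purely illustrative) of what a steady doubling every eighteen to twenty-four months implies over the half-century just described; the fifty-year horizon comes from the text, everything else is simple arithmetic.

```python
# Rough arithmetic behind Moore's Law as described above: processing power
# per dollar doubles roughly every eighteen to twenty-four months.
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Return the multiplicative increase after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for period in (1.5, 2.0):  # 18 and 24 months, expressed in years
        factor = growth_factor(50, period)
        print(f"Doubling every {period:g} years for 50 years -> about {factor:,.0f}x")
```

At the slower two-year pace the gain over fifty years is roughly thirty-million-fold; at the faster eighteen-month pace it exceeds ten-billion-fold.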

Our societies, culture, politics, commerce, educational systems, ways of relating to one another-and our ways of thinking-are all being profoundly reorganized with the emergence of the Global Mind and the growth of digital information at exponential rates. The annual production and storage of digital data by companies and individuals is 60,000 times more than the total amount of information contained in the Library of Congress. By 2011, the amount of information created and replicated had grown by a factor of nine in just five years. (The amount of digital storage capacity did not surpass analog storage until 2002, but within only five years the percentage of information stored digitally grew to 94 percent of all stored information.) Two years earlier, the volume of data transmitted from mobile devices had already exceeded the total volume of all voice data transmitted. Not coincidentally, from 2003 to 2010, the average telephone call grew shorter by almost half, from three minutes to one minute and forty-seven seconds.

The number of people worldwide connected to the Internet doubled between 2005 and 2010 and in 2012 reached 2.4 billion users globally. By 2015, there will be as many mobile devices as there are people in the world. The number of mobile-only Internet users is expected to increase 56-fold over the next five years. Aggregate information flow using smartphones is projected to increase 47-fold over the same period. Smartphones already have captured more than half of the mobile phone market in the United States and many other developed countries.

But this is not just a phenomenon in wealthy countries. Although computers and tablets are still more concentrated in advanced nations, the reduction in the cost of computing power and the proliferation of smaller, more mobile computing devices is spreading access to the Global Mind throughout the world. More than 5 billion of the 7 billion people in the world now have access to mobile phones. In 2012, there were 1.1 billion active smartphone users worldwide-still under one fifth of the global market. While smartphones capable of connecting to the Internet are still priced beyond the reach of the majority of people in developing countries, the same relentless cost reductions that have characterized the digital age since its inception are now driving the migration of smart features and Internet connectivity into affordable versions of low-end smartphones that will soon be nearly ubiquitous.

Already, the perceived value of being able to connect to the Internet has led to the labeling of Internet access as a new "human right" in a United Nations report. Nicholas Negroponte has led one of two competing global initiatives to provide an inexpensive ($100 to $140) computer or tablet to every child in the world who does not have one. This effort to close the "information gap" also follows a pattern that began in wealthy countries. For example, the United States dealt with concerns in the 1990s about a gap between "information haves" and "information have-nots" by passing a new law that subsidized the connection of every school and library to the Internet.

The behavioral changes driven by the digital revolution in developed countries also have at least some predictive value for the changes now in store for the world as a whole. According to a survey by Ericsson, 40 percent of smartphone owners connect to the Internet immediately upon awakening-even before they get out of bed. And that kick-starts a behavioral pattern that extends throughout their waking hours. While they are driving to work in the morning, for example, they encounter one of the new hazards to public health and safety: the use of mobile communications devices by people who email, text, play games, and talk on the phone while simultaneously trying to operate their cars and trucks.

In one extreme example of this phenomenon, a commercial airliner flew ninety minutes past its scheduled destination because both the pilot and copilot were absorbed with their personal laptops in the cockpit, oblivious as more than twelve air traffic controllers in three different cities tried to get their attention-and as the Strategic Air Command readied fighter jets to intercept the plane-before the distracted pilots finally disengaged from their computers.

The popularity of the iPhone and the amount of time people communicate over its videoconferencing feature, FaceTime, have caused a few to actually modify the appearance of their faces in order to adapt to the new technology. Plastic surgeon Robert K. Sigal reported that "patients come in with their iPhones and show me how they look on FaceTime. The angle at which the phone is held, with the caller looking downward into the camera, really captures any heaviness, fullness and sagging of the face and neck. People say, 'I never knew I looked like that! I need to do something!' I've started calling it the 'FaceTime Facelift' effect. And we've developed procedures to specifically address it."

THE RISE OF "BIG DATA"

Just as we have extended our consciousness into the Global Mind, we are now extending our peripheral nervous system into the Internet of Things, which operates almost entirely below the level of consciousness and controls functions important to maintaining the efficiency of Earth Inc. It is this part of the global Internet that is proliferating most rapidly, generating far more data than people themselves produce, and evolving toward what some call the "Internet of Everything."

The emerging field labeled "Big Data," one of the exciting new frontiers of information science, is based on the development of new algorithms for supercomputers to sift through voluminous new quantities of data that have not previously been seen as manageable. More than 90 percent of the information collected by Landsat satellites has been sent directly to electronic storage without ever firing a single neuron in a human brain, and without being processed by computers for patterns and meaning. This and other troves of unutilized data may now finally be analyzed.

Similarly, most of the data now being collected during the operation of industrial processes by embedded systems, sensors, and tiny devices such as actuators has been disposed of soon after it is collected. With the plummeting cost of data storage and the growing sophistication of Big Data, some of this information is now being kept and analyzed and is already producing a flood of insights that promote efficiency in industry and business. To take another example, some commercial vehicles mount a small video camera on the windshield that collects data continuously but only saves twenty seconds at a time; in the event of an accident, the information collected during the seconds prior to and during the accident is saved for analysis. The same is true of black boxes on airplanes and most security cameras in buildings. The data collected is constantly erased to make room for newer information. Soon, almost all of this information will be kept, stored, and processed by Big Data algorithms for useful insights.
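The windshield-camera example is, in effect, a ring buffer: data is written continuously, the oldest data is overwritten, and only an event such as an accident freezes the most recent window for later analysis. Below is a minimal sketch of that pattern; the twenty-second window comes from the text, while the frame rate and the trigger are assumptions made for the example.

```python
from collections import deque

class EventRecorder:
    """Keep only the most recent window of frames; preserve it when an event occurs.

    A simplified illustration of the continuous-overwrite pattern described
    above (dashcams, flight recorders, security cameras). The frame rate and
    the notion of an "event" are assumptions made for this sketch.
    """

    def __init__(self, window_seconds: int = 20, frames_per_second: int = 30):
        # A deque with maxlen silently discards the oldest frame once full.
        self.buffer = deque(maxlen=window_seconds * frames_per_second)
        self.saved_clips = []

    def record(self, frame) -> None:
        self.buffer.append(frame)

    def on_event(self) -> None:
        # Freeze the last ~20 seconds for analysis instead of letting it be overwritten.
        self.saved_clips.append(list(self.buffer))

recorder = EventRecorder()
for t in range(10_000):          # continuous recording
    recorder.record(f"frame-{t}")
recorder.on_event()              # only the final 600 frames survive
print(len(recorder.saved_clips[0]))  # 600
```

Keeping all of the data instead, as the paragraph anticipates, simply means never letting the buffer discard anything and moving the retention decision into the analysis layer.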

Plans for gathering-and analyzing-even larger amounts of information are now under way throughout the world. IBM is working with the Netherlands Institute of Radio Astronomy to develop a new generation of computer technology to store and process the data soon to be captured by the Square Kilometre Array, a new radio telescope that will collect each day twice the amount of information presently generated on the entire World Wide Web.

Virtually all human endeavors that routinely produce large amounts of data will soon be profoundly affected by the use of Big Data techniques. To put it another way, just as psychologists and philosophers search for deeper meanings in the operations of the human subconscious, cutting-edge supercomputers are now divining meaningful patterns in the enormous volumes of data collected on a continuous basis not only on the Internet of Things but also by analyzing patterns in the flood of information exchanged among people-including in the billions of messages posted each day on social networks like Twitter and Facebook.

The U.S. Geological Survey has established a Twitter Earthquake Detector to gather information on the impact and location of shaking events more quickly, particularly in populated areas with few seismic instruments. And in 2009, U.N. Secretary General Ban Ki-moon launched the Global Pulse program to analyze digital communications in order to detect and understand economic and social shocks more quickly. The pattern with which people add money to their mobile phone accounts is an early warning of job loss. Online food prices can be surveyed to help predict price spikes and food shortages. Searches for terms like "flu" and "cholera" can give warnings of disease outbreaks.
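Each of these early-warning systems reduces to the same idea: track the volume of a signal (searches, phone top-ups, prices, tweets) and flag departures from its recent baseline. A minimal sketch of that idea follows; the three-sigma threshold and the sample counts are invented for illustration.

```python
from statistics import mean, stdev

def volume_alert(daily_counts: list[int], threshold_sigmas: float = 3.0) -> bool:
    """Flag the most recent day if it sits far above the recent baseline.

    `daily_counts` holds daily query or message counts for a term such as
    "flu"; the 3-sigma threshold is an arbitrary illustrative choice.
    """
    history, today = daily_counts[:-1], daily_counts[-1]
    baseline, spread = mean(history), stdev(history)
    return today > baseline + threshold_sigmas * spread

# Hypothetical two weeks of daily searches for "flu", ending in a sudden spike.
counts = [120, 118, 125, 130, 122, 119, 127, 124, 121, 126, 123, 128, 125, 410]
print(volume_alert(counts))  # True: the final day stands well above the baseline
```

Real systems such as the ones named above layer far more on top (seasonality, geography, language), but the anomaly-detection core is the same.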

The intelligence community is using the techniques of Big Data analysis to search for patterns in vast flows of communication to predict social unrest in countries and regions of particular interest. Some new businesses are now using similar techniques to analyze millions of messages or tweets in order to predict how well Hollywood-and Bollywood-movies will perform at the box office.

DEMOCRACY IN THE BALANCE

As always, the imperatives driving commerce and national security adapt quickly to the emergence of new technologies, but what about democracy in this new age? The rapid and relentless rise of Internet-based communication is surely a hopeful sign for the renewed health of self-governance, largely because the structural characteristics of the Internet are so similar to the world of the printing press: individuals have extremely low entry barriers and ever easier access. As was true in the age of the printing press, the quality of ideas conveyed over the Internet can be at least partially assessed by the number of people with whom they resonate. And as more people find resonance with particular expressions, more still have their attention directed to the expressions whose popularity is rising.

The demand for content on the Internet is also linked to a significant rise in reading-a faint echo of the "big bang" of literacy that accompanied the creation of the Gutenberg Galaxy. In fact, after reading declined following the introduction of television, it has now tripled in just the last thirty years because the overwhelmingly dominant content on the Internet is printed words.

With democracy having fallen on hard times due to the current dominance of wealth and corporate power over the public interest in so many countries-and in others due to the entrenched power of authoritarian dictatorships-many supporters of democratic self-governance are placing their hopes on the revival of robust democratic discourse in the age of the Internet.

Already, revolutionary political movements-from the Tahrir Square protesters in Cairo to Los Indignados in Spain to Occupy Wall Street to the surprisingly massive crowds of election protesters in Moscow-are predominantly shaped by the Internet. Facebook and Twitter have played a particularly important role in several of these movements, along with email, texting, and instant messaging. Google Earth has also been significant in spotlighting the excesses of elites, in Bahrain for example-and in the Libyan revolution, Google Earth was actually used by rebels in Misrata to guide their mortars. (Google Earth also, by the way, triggered a small border dispute and brief armed standoff between Nicaragua and Costa Rica, when it mistakenly attributed a tiny portion of Costa Rica to the national territory of Nicaragua.) Thus far, however, reformist and revolutionary movements that have begun on the Internet have mostly followed the same pattern: enervation and excitement followed by disappointment and stasis. It is still an open question whether these Internet-inspired reform movements will gain a second wind and, after a period of simmering, reemerge and ultimately reach their goals.

One of the first revolutionary movements in which the Internet played a key igniting role was the 2007 Saffron Revolution in Myanmar. Activists took extreme personal risks to spread their messages urging democratic reforms by using the World Wide Web with false names from Internet cafes and by smuggling thumb drives across the border to collaborators in the diaspora living in Thailand. Unfortunately, the authoritarian government in Myanmar was able to smother and shut down the Saffron Revolution, but only at the cost of completely blacking out the Internet inside the country's borders.

Nevertheless, the revolutionary fires lit before the Internet was shut down continued to smolder in Myanmar and continued to burn brightly in other parts of the world where the forces of conscience had been awakened to the abuses and injustices of the Myanmar dictatorship. (Diasporas, particularly educated and wealthy diasporas in Western countries, have been newly empowered by the Internet to play significant roles in fostering and sustaining reform movements in their countries of origin.) A few years later, the government of Myanmar was pressured to loosen its controls on political dialogue and release the leader of the reform movement, Aung San Suu Kyi, from her long house arrest, and in March 2012 she was triumphantly elected to the Parliament amidst many signs that the popular movement that had begun on the Internet was reemerging as a force for change that seemed destined to take control of the government.

In many other authoritarian countries, however, the ferocious resistance to reform has been more effective in snuffing out Internet-based dissent movements. In 2009, Iran's Green Revolution began as a popular protest against the fraudulent presidential election. Although Western sympathizers had the impression that Twitter played a key role in igniting and sustaining the protest movement, in actuality social media played a much smaller role inside than outside Iran because the Iranian government was successful in largely controlling Internet use by the protesters. While it is true that YouTube videos documented government excesses (most famously, the tragic death of Neda Agha-Soltan), the more potent social media sites that would have enabled dissenters to build a larger protest movement were almost completely shut down. Indeed, during the election campaign itself, when the principal opposition candidate, Mir-Hossein Moussavi, began to gain momentum by organizing on Facebook, the government simply blacked it out.

Worse still, the Iranian security forces gave the world a demonstration of what a malignant authoritarian government can do to its citizens by using the knowledge it gains from their Internet connections and social graphs to identify and track down dissenters, read their private communications, and effectively stifle any resistance to the dictatorship's authority. The entire episode was a chilling alarm that underscored the extent to which the lack of privacy on the Internet can potentially increase the power of government over the governed more easily than it can empower reform and revolution.

China, in particular, has introduced by far the most sophisticated measures to censor content on the Internet and exercise control over its potential for fostering reformist or revolutionary fervor. The "Great Firewall of China" is the largest effort at Internet control in the world today. (Iran and the retro-Stalinist dictatorship of Belarus are the other two countries that have attempted such efforts.) China's connection to the global Internet is monopolized by state-run operators that carefully follow a system of protocols that effectively turn the Internet within China into a national intranet. In 2010, even an interview with the then premier of China, Wen Jiabao, in which he advocated reforms, was censored and made unavailable to the people of China.

In 2006 the Chinese plan to control content on the Internet collided with the open values of the world's largest search engine, Google. As one who participated in the company's deliberations at the time, I saw firsthand how limited the options were. After searching for ways to reconcile its commitment to full openness of information with China's determined effort to block any and all content it found objectionable, Google made the principled decision to withdraw from China and instead route its site through Hong Kong, which still maintains a higher level of freedom, albeit within constraints imposed from Beijing. Facebook, by the way, has never been allowed into China. The cofounder of Google, Sergey Brin, said in 2012 that China had been far more effective in controlling the Internet than he had expected. "I thought there was no way to put the genie back in the bottle," Brin noted, "but now it seems in certain areas the genie has been put back in the bottle."

The much admired Chinese artist Ai Weiwei expressed a different view: "[China] can't live with the consequences of that.... It's hopeless to try to control the Internet." China now has the largest number of Internet users of any country in the world-more than 500 million people, 40 percent of its total population. As a result, most observers believe it is only a matter of time before more open debate-even on topics controversial in the eyes of the Communist Party-will become uncontrollable inside China. Already, a number of Chinese leaders have found it necessary to take to the Internet themselves in order to respond to public controversies. In neighboring Russia, former president Dmitri Medvedev also felt the pressure to engage personally on the Internet.

As the role played by the Internet and connected computing devices becomes more prominent and pervasive generally, authoritarian governments may find it increasingly difficult to exert the same degree of control. When the Arab Spring began in Tunisia, it was partly due to the fact that four out of every ten Tunisians were connected to the Internet, with almost 20 percent of them on Facebook (80 percent of the Facebook users were under the age of thirty).

So even though Tunisia was one of the countries cited by Reporters Without Borders as censoring political dissent on the Internet, the largely nonviolent revolution gained momentum with startling speed, and the pervasive access to the Internet within Tunisia made it difficult for the government to control the digital blossoming of public defiance. The man who set himself on fire in protest, Mohamed Bouazizi, was not the first to do so, but he was the first to be video-recorded doing so. It was the downloaded video that ignited the Arab Spring.

In Saudi Arabia, Twitter has facilitated public criticism of the government, and even of the royal family. As the number of tweets grew faster there in 2012 than in any other country, a thirty-one-year-old lawyer, Faisal Abdullah, told The New York Times, "Twitter for us is like a parliament, but not the kind of parliament that exists in this region. It's a true parliament, where people from all political sides meet and talk freely."

But experts in the region argue that it is important to look carefully at the interplay between the Internet and other significant factors in the Arab Spring-including some that were at least as important as the Internet in bringing about this sociopolitical explosion. The combination of population growth, the growing percentage of young people, economic stagnation, and rising food prices created the conditions for unrest. When governments in the region first promised economic and political reforms, then appeared to backtrack, the frustrations reached a boiling point.

The change that many analysts believe was most important in sowing the seeds of the Arab Spring was the introduction in 1996 of the feisty and relatively independent satellite television channel Al Jazeera. Al Jazeera was soon followed by approximately 700 other satellite television channels that were easily accessed with small, cheap satellite dishes-even in countries where they are technically illegal. Several governments attempted to control the proliferation of small dishes, but the result was an incredible outburst of political discussion, including on topics that had not been debated openly before. By the time the Arab Spring erupted in Cairo's Tahrir Square, access to both satellite television and the Internet had spread throughout Egypt and the region. Sociologists and political scientists have had a difficult time parsing the relative influence of these two new electronic media in causing and feeding the Arab Spring, but most believe that Al Jazeera and its many siblings were the more important factor. In 2004, when then Egyptian president Hosni Mubarak paid a visit to Al Jazeera's headquarters in Qatar, he said, "All that trouble from this little matchbox?" Perhaps both were necessary but neither was sufficient.

Like Tunisia, Egypt found it difficult to shut down access to the Internet in the way Myanmar and Iran had. By 2011 it was so pervasive that when the government blocked all of the Internet access points entering the country, the public's reaction was so strong that the fires of revolt grew even hotter. The determination of the protesters ultimately succeeded in forcing Mubarak to step down, but their cohesion faded during the political struggle that followed.

Some analysts, including Malcolm Gladwell, have argued that online connections are inherently weak and often temporary because they do not support the stronger relationships formed when mass movements rely upon in-person gatherings. In Egypt, for example, the crowds of Tahrir Square actually represented a tiny fraction of Egypt's huge population-and those in the rest of the country who sympathized with their complaints against the Mubarak government did not remain aligned with the protesters when the time came to form a new political consensus around what kind of government would follow Mubarak. The Egyptian military soon asserted its control of the government, and in the elections that followed, Islamist forces prevailed in establishing a new regime based on principles far different from those advocated by most of the Internet-inspired reformers who predominated in Tahrir Square.

Indeed, not only in Egypt but also in Libya, Syria, Bahrain, Yemen, and elsewhere-including Iran-the same pattern has unfolded: an emergent reform movement powered by a new collective political consciousness born on the Internet has stimulated change, but failed to consolidate its victory. The forces of counterrevolution have tightened control of the media and have reestablished their dominance.

The unique history of communications technology in the Middle East and North Africa offers one of the reasons for the failure by reformers to consolidate their gains. The emergent political consciousness that accompanied the Print Revolution in Europe, and later North America, bypassed the Middle East and North Africa when the Ottoman Empire banned the printing press for Arabic-speaking peoples. This contributed to the isolation of the Ottoman-ruled lands from the rapid advances (such as the Scientific Revolution) that the printing press triggered in Europe. Two centuries later, when Arab Muslims first asked the historic question "What went wrong?" part of the answer was that they had deprived themselves of the fruits of the Print Revolution.

As a result, the institutions that emerged in the West to embody representative democracy never formed in the Middle East. Centuries later, therefore, the new political consciousness born on the Internet could not easily be embodied in formal structures that could govern according to the principles articulated by the reformers. Yet the forces of authoritarianism could easily embody their desire to control society and the economy in the institutions that were already present-including the military, the national police, and the bureaucracies of autocratic rule.

Other analysts have connected the disappointment in the wake of Tahrir Square to what they regard as yet another example of "techno-optimism," in which an exciting new technology is endowed with unrealistic hopes, while overlooking the simple fact that all technologies can be used for good or ill, depending on how they are used and who uses them to greatest effect. The Internet can be used not only by reformers, but also by opponents of reform. Still, the exciting promise of Internet-based reform-both in the delivery of public goods and, more crucially, in the revitalization of democracy-continues to inspire advocates of freedom, precisely because it enables and fosters the emergence of a new collective political consciousness within which individuals can absorb political ideas, contribute their own, and participate in a rapidly evolving political dialogue.

This optimism is further fueled by the fact that some governments providing services to individuals are making dramatic improvements in their ability to communicate important information on the Internet and engage in genuinely productive two-way communication with citizens. Some nations-most notably, Estonia-have even experimented with Internet voting in elections and referenda. In neighboring Latvia, two laws have already been passed as a result of proposals placed by citizens on a government website open to suggestions from the public. Any idea attaining the support of 10,000 people or more goes directly into a legislative process. In addition, many cities are using computerized statistics and sophisticated visual displays to more accurately target the use of resources and achieve higher levels of quality in the services they deliver. Some activists promoting Internet-based forms of democracy, including NYU professor Clay Shirky, have proposed imaginative ways to use open source programming to link citizens together in productive dialogues and arguments about issues and legislation.

In Western countries, however, the potential for Internet-based reform movements has been blunted. Even in the United States, in spite of the prevailing hopes that the Internet will eventually reinvigorate democracy, it has thus far failed to do so. In order to understand why, it is important to analyze the emerging impact of the Internet on political consciousness in the broader context of the historic relationship between communications media and governance-with particular attention to the displacement of print media by the powerful mass medium of television.

In the politics of many countries-including the United States-we find ourselves temporarily stuck in a surprisingly slow transition from the age of television to the age of the Internet. Television is still far and away the dominant communications medium in the modern world. More people even watch Internet videos on television screens than on computer screens. Eventually, bandwidth limitations on high-quality video will become less of a hindrance and television will, in the words of novelist William Gibson, "be appropriated into the realm of the digital." But until it is, broadcast, cable, and satellite television will continue to dominate the public square. As a result, both candidates and leaders of reform movements will continue to face the requirement of paying a king's ransom for the privilege of communicating effectively with the mass public.

Well before the Internet and computer revolution was launched, the introduction of electronic media had already begun transforming the world that had been shaped by the printing press. In a single generation, television displaced print as the dominant form of mass communication. Even now, while the Internet is still in its early days, Americans spend more time watching television than in any other activity besides sleeping and working. The average American now watches television more than five hours per day. Largely as a result, the average candidate for Congress spends 80 percent of his or her campaign money on thirty-second television advertising.

To understand the implications for democracy that flow from the continuing dominance of television, consider the significant differences between the information ecosystem of the printing press and the information ecosystem of television. First of all, access to the virtual public square that emerged in the wake of the print revolution was extremely cheap; Thomas Paine could walk out of his front door in Philadelphia and easily find several low-cost print shops.

Access to the public square shaped by television, though, is extremely expensive. The small group of corporations that serve as gatekeepers controlling access to the mass television audience is now more consolidated than ever before and continues to charge exorbitant sums for that access. If a modern-day Thomas Paine walked to the nearest television station and attempted to broadcast a televised version of Common Sense, he would be laughed off the premises if he could not pay a small fortune. By contrast, paid pundits whose views reflect the political philosophy of the corporations that own most networks are given many hours each week to promulgate their ideology.

So long as commercial television dominates political discussion, candidates will find it necessary to solicit large and ever growing sums of money from wealthy individuals, corporations, and special interests to gain access to the only public square that matters when the majority of voters spend the majority of their free time staring at television screens. This requirement, in turn, has led to the obscene dominance of decision making in American democracy by these same wealthy contributors-especially corporate lobbies. Because recent Supreme Court decisions-especially the Citizens United case-have overturned long-standing prohibitions against the use of corporate funds to support candidates, this destructive trend is likely to get much worse before it gets better. It is, in a very real sense, a slow-motion corporate coup d'etat that threatens to destroy the integrity and functioning of American democracy.

Although the political systems and legal regimes of countries vary widely, the relative roles of television and the Internet are surprisingly similar. It is notable that in both China and Russia, television is much more tightly controlled than the Internet. In the Potemkin democracy that has been constructed in Vladimir Putin's Russia, the government is choosing to tolerate a much freer, more robust freedom of speech on the Internet than on television. Mikhail Kasyanov, one of the prime ministers who served under Putin (and whose candidacy for president against Putin's handpicked successor, Dmitri Medvedev, was derailed when Putin ordered him removed from the ballot), told me that when he was prime minister under Putin he was given clear instructions that debate on the Internet mattered little so long as the government exercised tight control over what appeared on Russian television.

Four years later, in the spring of 2012, during the Internet-inspired protest movement challenging the obviously fraudulent process used in the first round of the elections (in which Putin was ultimately victorious, as expected), one Russian analyst said, "The old people come and the old people come and the old people come and all vote for one candidate-for Putin. Why are they voting for Putin? Watch TV. There is one face: Putin." And indeed, one of the many reasons for television's dominance in the political media landscape of almost every country is that older people both vote in higher percentages and watch more hours of television per day than any other age group. In the U.S., people aged sixty-five and older watch, on average, almost seven hours per day.

In many nations, institutions important to the rise and survival of democracy, like journalism, have also been profoundly affected by the historic transformation of communications technology. Newspapers have fallen on hard times. They used to be able to bundle together revenue from subscriptions, commercial advertising, and classified advertising to pay not only for the printing and distribution of their papers but also the salaries of professional reporters, editors, and investigative journalists. With the introduction of television-and particularly with the launch of evening television news programs-the afternoon newspapers in most major cities that people used to read upon returning home from work were the first to go bankrupt. The loss of increasing amounts of commercial advertising to television and radio also began to hurt the morning newspapers. Then, when classified advertising migrated en masse to the Internet and the widespread availability of online news sources led many readers to stop their subscriptions to newspapers, the morning newspapers began to go bankrupt as well.

Eventually, Internet-based journalism will begin to thrive. In the U.S., digital news stories already reach more people than either newspapers or radio. As yet, however, a high percentage of quality journalism available on the Internet is still derived from the repurposing of articles originally prepared for print publications. And there are as yet few business models for journalism originating on the Internet that bundle together enough revenue to support the salaries of reporters engaged in the kind of investigative journalism essential to provide accountability in a democracy.

Like the journalism essential to its flourishing, democracy itself is now stuck in this odd and dangerous transition era that falls between the waning age of the printing press and the still nascent maturation of effective democratic discourse on the Internet. Reformers and advocates of the public interest are connecting with one another in ever larger numbers over the Internet and are searching with ever greater intensity for ways to break through the quasi-hypnotic spell cast over the mass television audience-day after day, night after night-by constant, seductive, expensive, and richly produced television programming.

Virtually all of this programming is punctuated many times each hour by slick and appealing corporate messages designed to sell their products and by corporate issue advertising designed to shape the political agenda. During election years, especially in the United States, television viewers are also deluged with political advertisements from candidates who-again because of the economics of the television medium-are under constant and unrelenting pressure from wealthy and powerful donors to adopt the donors' political agendas-agendas that are, unsurprisingly, congruent with those contained in the corporate issue advertising.

Public goods-such as education, health care, environmental protection, public safety, and self-governance-have not yet benefited from the new efficiencies of the digital age to the same extent as have private goods. The power of the profit motive has been more effective at driving the exploitation of new opportunities in the digital universe. By contrast, the ability of publics to insist upon the adoption of new, more efficient, digital models for the delivery of public goods has been severely hampered by the sclerosis of democratic systems during this transition period when digital democracy has yet to take hold.

EDUCATION AND HEALTH CARE IN A NEW WORLD

The crisis in public education is a case in point. Our civilization has barely begun the necessary process of adapting schools to the tectonic shift in our relationship to the world of knowledge. Education is still too frequently based on memorizing significant facts. Yet in a world where all facts are constantly at our fingertips, we can afford to spend more time teaching the skills necessary to not only learn facts but also learn the connections among them, evaluate the quality of information, discern larger patterns, and focus on the deeper meaning inherent in those patterns. Students accustomed to the rich and immersive experience of television, video games, and social media frequently find the experience of sitting at desks staring at chalk on a blackboard to be the least compelling and engaging part of their day.

There is clearly a great potential for the development of a new curriculum, with tablet-based e-books and search-based, immersive, experiential, and collaborative online courses. E. O. Wilson's new, enhanced digital textbook Life on Earth is a terrific example of what the future may hold. In higher education, a new generation of high-quality ventures has emerged-including Coursera, Udacity, Minerva, and edX-that is already beginning to revolutionize and globalize world-class university-level instruction. Most of the courses are open to all, for free!

The hemorrhaging of government revenues at the local, state, and national level-caused in part by the lower wages and persistent high unemployment associated with the outsourcing and robosourcing in Earth Inc., and declining property values in the wake of the global economic crisis triggered in part by computer-generated subprime mortgages-is leading to sharp declines in budgets for public education at the very time when reforms are most needed. In addition, the aging of populations in developed countries and the declining percentage of parents of school-aged children have diminished the political clout wielded by advocates for increasing these budgets.

Even though public funding for education has been declining, many creative teachers and principals have found ways to adapt educational materials and routines to the digital age. The Khan Academy is a particularly exciting and innovative breakthrough that is helping many students. Nevertheless, in education as in journalism, no enduring model has yet emerged with enough appeal to replace the aging and decaying model that is now failing to meet necessary standards. And some online, for-profit ventures-like the University of Phoenix and Argosy University Online-appear to have taken advantage of the hunger for college-level instruction on the Internet without meeting their responsibility to the students who are paying them. One online college, Trinity Southern University, gave an online degree in business administration to a cat named Colby Nolan, which happened to be owned by an attorney general. The school was later prosecuted and shut down.

Health care, like education, is struggling to adapt to the new opportunities inherent in the digital universe. Crisis intervention, payment for procedures, and ridiculously expensive record keeping required by insurance companies and other service providers still dominate the delivery of health care. We have not yet exploited the new ability inherent in smartphones and purpose-built digital health monitors to track health trends in each individual and enable timely, cost-effective interventions to prevent the emergence of chronic disease states that account for most medical problems.

More sophisticated information-based strategies utilizing genomic and proteomic data for each individual could also clearly improve health outcomes dramatically at much lower cost. Epidemiological strategies-such as the monitoring of aggregate Internet searches for flu symptoms-are beginning to improve the allocation and deployment of public health resources. While interesting experiments have begun in these and other areas, however, there has as yet been no effectively focused public pressure or sustained political initiative to implement a comprehensive new Internet-empowered health care strategy. Some insurance companies have begun to use data mining techniques to scour social media and databases aggregated by marketing companies in order to better assess the risk of selling life insurance to particular individuals. At least two U.S. insurance companies have found the approach so fruitful that they even waive medical exams for customers whose data profiles classify them as low-risk.
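As a rough illustration of the kind of data-driven triage just described, the sketch below scores an applicant from a handful of lifestyle features and waives the exam when the score is low; the features, weights, and cutoff are entirely hypothetical and are not drawn from any insurer's actual model.

```python
def predicted_risk_score(profile: dict) -> float:
    """Toy risk score built from marketing-style lifestyle features.

    The features and weights are invented for illustration; a real insurer
    would fit such weights to claims data rather than pick them by hand.
    """
    weights = {
        "smoker": 40.0,
        "sedentary": 15.0,
        "extreme_sports": 10.0,
        "regular_gym_visits": -12.0,
    }
    score = sum(weight for key, weight in weights.items() if profile.get(key))
    score += 0.5 * max(profile.get("age", 0) - 40, 0)  # modest age adjustment
    return score

def waive_medical_exam(profile: dict, cutoff: float = 20.0) -> bool:
    # Low predicted risk -> skip the exam, mirroring the practice described above.
    return predicted_risk_score(profile) < cutoff

applicant = {"age": 35, "regular_gym_visits": True, "smoker": False}
print(waive_medical_exam(applicant))  # True for this (hypothetical) low-risk profile
```

A production system would differ mainly in scale: far more features, and weights learned from data rather than asserted by hand.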

THE SECURITY CONUNDRUM

With all of the exciting potential for the Internet to improve our lives, why have the results been so mixed thus far? Perhaps because of human nature, it is common for us to overemphasize the positive impacts of any important new technology when it is introduced and first used. It is also common, unfortunately, for us to give short shrift to the risks of new technologies and underestimate unintended side effects.

History teaches, of course, that any tool-the mighty Internet included-can and will be used for both good and ill. While the Internet may be changing the way we organize our thinking, and while it is changing the way we organize our relationships with one another, it certainly does not change basic human nature. And thus the age-old struggle between order and chaos-and dare I say good and evil-will play out in new ways.

More than four centuries ago, when the explosion of information created by the printing press was just beginning, the legend of Doctor Faust first appeared. Some historians claim that Faust was based on the financier and business partner of Gutenberg, Johann Fust, who was charged in France with witchcraft because of the seemingly magical process by which thousands of copies of the same text could be replicated perfectly.

In the Faust legend, which has appeared in varying forms over the centuries, the protagonist makes a deal with the devil in which he exchanges his soul for "unlimited knowledge and worldly pleasures." Ever since then, as the scientific and technological revolution accelerated, many new breakthroughs, like nuclear power and stem cell technology, among others, have frequently been described as "Faustian bargains." It is a literary shorthand for the price of power-a price that is often not fully comprehended at the beginning of the bargain.

In our time, when we adapt our thinking processes to use the Internet (and the devices and databases connected to it) as an extension of our own minds, we enter into a kind of "cyber-Faustian bargain"-in which we gain the "unlimited knowledge and worldly pleasures" of the Internet. Unless we improve privacy and security safeguards, however, we may be risking values more precious than worldly wealth.

For individuals, the benefits of this bargain-vastly increased power to access and process information anywhere and anytime, a greatly increased capacity to communicate and collaborate with others-are incredibly compelling. But the price we pay in return for these incalculable benefits is a significant loss of control over the security and privacy of the thoughts and information that we send into this extended nervous system. Two new phrases that have crept into our lexicon-"the death of distance" and "the disappearance of privacy"-are intimately connected, each to the other. Most who use the Internet are tracked by many websites that then sell the information. Private emails can be read by the government without a warrant, without permission, and without notification. And hacking has become easy and widespread.

The same cyber-Faustian bargain has been made by corporations and governments. Like individuals, they are just beginning to recognize the magnitude of the cybersecurity price that apparently has to be paid on an ongoing basis. And to be clear, virtually no one argues with the gains in efficiency, power, productivity, and convenience that accompany this revolutionary change in the architecture of the information economy. What is not yet clear is how the world can resolve-or at least manage-the massive new threats to security and privacy that accompany this shift.

Internet and software companies are themselves also making the same bargain, with a historic and massive shift from software, databases, and services located within computers themselves to "the cloud"-which means, essentially, using the Internet and the remote servers and databases connected to it as extensions of the memory, software, and processing power that used to be primarily contained within each computer. The growing reliance on the cloud creates new potential choke points that may have implications for both data security and reliability of service. In late 2012, several popular Internet companies in the U.S. that rely on Amazon.com's cloud services were all knocked out of commission when problems shut down Amazon's data centers in Virginia.

The world's historic shift onto the Internet confronts us with a set of dilemmas that are inherent in the creation of a planet-wide nervous system connecting all of us to the global brain. Some of these dilemmas have arisen because digital information is now recognized-and valued-as the key strategic resource in the twenty-first century.

Unlike land, iron ore, oil, or money, information is a resource that you can sell or give away and yet still have. The value of information often expands with the number of people who share it, but the commercial value can often be lost when its initial owner loses exclusivity. The essence of patent and copyright law has been to resolve that tension and promote the greatest good for the greatest number, consistent with principles of justice and fairness. The inventor of a new algorithm or the discoverer of a new principle of electromagnetism deserves to be rewarded-partly to provide incentives for others to chase similar breakthroughs-but society as a whole also deserves to benefit from the widespread application of such new discoveries.

This inherent tension has been heightened by the world's shift onto the Internet. Longtime technology thought leader Stewart Brand is often quoted as having said in the early years of the Internet, "Information wants to be free." But what he actually said was, "On the one hand, information wants to be expensive, because it's so valuable," adding, "On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other."

Because digital information has become so strategic in the operations of Earth Inc., we are witnessing a global, multipronged struggle over the future of the Internet, with battlefronts scattered throughout the overlapping worlds of politics and power, commerce and industry, art and culture, science and technology:

* Between those who want information to be free and others who want to control it and exchange it for wealth or power;
* Between those who want people to be free and those who want to control their lives;
* Between individuals who share private information freely on social networks and others who use that information in unanticipated and sometimes harmful ways;
* Between Internet-based companies who indiscriminately collect vast amounts of information about their customers and customers who value their privacy;
* Between legacy centers of power that occupied privileged positions in the old order of information now breaking down and new centers of inchoate power seeking their own place in the new pattern struggling to emerge;
* Between activists (and "hacktivists") who value transparency and nations and corporations that value secrecy;
* Between corporations whose business models depend upon the ability to protect intellectual property contained in computers connected to the Internet and competitors who seek to steal that intellectual property by using other computers also connected to the Internet;
* Between cybercriminals intent on exploiting rich new targets in the flows of wealth and information on the Internet and law enforcement organizations whose strategy for stopping cybercrime sometimes threatens to destroy historic and hard-won boundaries between the spheres occupied by individuals and the episodic desire by their governments to invade those private spheres.

The complexity of the world's transition to the Internet is even more fraught because all of these conflicts are occurring simultaneously on the same common Internet that everyone shares. And, not surprisingly, proposed remedies for problems in one set of conflicts frequently enhance the potential for disrupting efforts to resolve problems in other sets of conflicts.

Proposals to require measures that eliminate anonymity on the Internet in order to protect cybersecurity and fight cybercrime pose a deadly threat to the ability of dissidents in authoritarian countries to propose reforms and connect with others seeking change in their governments. By the same token, the dream of reformers that the global Internet will inevitably drive global change in the direction of more freedom for individuals, regardless of where they live, strikes fear in the hearts of authoritarian rulers.

Even in free countries, activists who expose information that governments have tried to keep secret often trigger intrusive new government measures to expand the information they collect about citizens. When the Wikileaks organization, run by an Australian living in Sweden on servers based in Sweden, Iceland, and possibly other locations, publicized information stolen from the U.S. government, the subsequent crackdown enraged other hacktivists, who then broke into numerous other government and corporate websites around the world.

Because the Internet crosses national boundaries, it diminishes the ability of nation-states to manage such conflicts through laws and regulations that reflect the values in each nation (or at least the values of the governments in power). Independent groups of hacktivists have been able to break into sites controlled by the FBI, CIA, the U.S. Senate, the Pentagon, the International Monetary Fund, the official website of the Vatican, Interpol, 10 Downing Street in London, the British Ministry of Justice, and NASA (even breaking into the software of the space station while it was orbiting the Earth). When the FBI organized a secure conference call to discuss how to respond to such attacks with Scotland Yard, hackers recorded the call and put it on the web. The inmates have clearly taken over a large part of the Internet asylum when Nurse Ratched's private conversations about security are broadcast for all to hear.

The extreme difficulty in protecting cybersecurity was vividly demonstrated when EMC, a technology security company used by the National Security Agency, the Central Intelligence Agency, the Pentagon, the White House, the Department of Homeland Security, and many leading defense contractors, was penetrated by a cyberattack believed to have originated in China. EMC's security system was considered the state of the art in protecting computers connected to the Internet-which, of course, is why it was used by the organizations with the greatest need for protecting their digital data. It remains undisclosed how much sensitive information was stolen, but this attack was a sobering wake-up call.

In 2010, U.S. secretary of defense Robert Gates labeled cyberspace as the "fifth domain" for potential military conflict-alongside land, sea, air, and space. In 2012, Rear Admiral Samuel Cox, the director of intelligence at the U.S. Cyber Command (established in 2009), said that we are now witnessing "a global cyber arms race." Other experts have noted that at this stage in the development of cybersecurity technology, offense has the advantage over defense.

Securing the secrecy of important communications has always been a struggle. It was first mentioned by "the father of history," Herodotus, in his description of the "secret writing" that he said was responsible for the Greek victory over Persia that prevented ancient Greece's conquest. A Greek living in Persia, Demaratus, witnessed the preparations for what the leader of Persia, Xerxes, intended as a surprise invasion and sent an elaborately hidden warning to Sparta. Later during the same war, a Greek leader shaved his messenger's head, wrote what he wished to convey on the messenger's scalp, and then "waited for the hair to regrow." From the use of "invisible ink" in the Middle Ages to Nazi Germany's use of the Enigma machine during World War II, cryptography in its various forms has often been recognized as crucial to the survival of nations.

The speed with which the Internet proliferated made it difficult for its original architects to remedy the lack of truly secure encryption-which they quickly recognized in the Internet's early days as a structural problem. "The system kind of got loose," said Vint Cerf.

It is theoretically possible to develop new and more effective protections for the security of Internet data flows, and many engineers and information scientists are working to solve the problem. However, the rapidity with which Earth Inc. adapted to and coalesced around the Internet has made industry and commerce so dependent on its current architecture that any effort to change its design radically would be fraught with difficulty. And the extent to which billions of people have adapted their daily lives to the constant use of the Internet would also complicate efforts to fundamentally change its architecture.

McKinsey, the global management consulting firm, concluded in a recent report that four trends have converged to make cybersecurity a problem:

* Value continues to migrate online and digital data has become more pervasive;
* Corporations are now expected to be more "open" than ever before;
* Supply chains are increasingly interconnected; and
* Malevolent actors are becoming more sophisticated.

As a result, this radical transformation of the global economy has created what most experts describe as a massive cybersecurity threat to almost all companies that are using the Internet as part of their core business strategy. Particular attention has been focused on what appears to be a highly organized and persistent effort by organizations in China to steal highly sensitive information from corporations, government agencies, and organizations that have links to one or both categories.

U.S. intelligence agencies have long been assumed to conduct surveillance of foreign governments, including through cybertools to take information from computers if they have reason to believe that U.S. security is threatened. What is different about the apparent Chinese effort is that it seems to be driven not only by military and national intelligence concerns, but also by a mercantilist effort to confer advantage on Chinese businesses. "There's a big difference," says Richard Clarke, the former counterterrorism czar. "We don't hack our way into a Chinese computer company like Huawei and provide the secrets of Huawei technology to their American competitor Cisco. We don't do that."

There is no doubt that U.S. companies are being regularly and persistently attacked. Recent research published by the Aspen Institute indicates that the U.S. economy is losing more than 373,000 jobs each year-and $16 billion in lost earnings-from the theft of intellectual property. Shawn Henry, formerly a top official in the FBI's cybercrime unit, reported that one U.S. company lost a decade's worth of research and development-worth $1 billion-in a single night.

Mike McConnell, a former director of national intelligence, said recently, "In looking at computer systems of consequence-in government, Congress, at the Department of Defense, aerospace, companies with valuable trade secrets-we've not examined one yet that has not been infected by an advanced persistent threat." The U.S. Secret Service testified in 2010 that "nearly four times the amount of data collected in the archives of the Library of Congress" was stolen from the United States. The director of the FBI testified that cyberattacks will soon overtake terrorism as the gravest danger: "The cyberthreat will be the number one threat to the country."

Another digital security company, McAfee, reported that a 2010 series of cyberattacks (called "Operation Shady RAT") resulted in the infiltration of highly secure computer systems not only in the United States, but also in Taiwan, South Korea, Vietnam, Canada, Japan, Switzerland, the United Kingdom, Indonesia, Denmark, Singapore, Hong Kong, Germany, and India, as well as the International Olympic Committee, thirteen U.S. defense contractors, and a large number of other organizations-none of them in China.

But the United States-as the nation whose commerce has migrated online more than that of any other nation-is most at risk. The United States Chamber of Commerce was informed by the FBI that some of its Asia policy experts who regularly visit China had been hacked, but before the Chamber was able to secure its network, the hackers had stolen six weeks' worth of emails between the Chamber and most of the largest U.S. corporations. Long afterward, the Chamber found out that one of its office printers and one of its thermostats in a corporate apartment were still sending information over the Internet to China.

Along with printers and thermostats, billions of other devices are now connected to the Internet of Things, ranging from refrigerators, lights, furnaces, and air conditioners to cars, trucks, planes, trains, and ships to the small embedded systems inside the machinery of factories to the individual packages containing the products they produce. Some dairy farmers in Switzerland are even connecting the genitals of their cows to the Internet with a device that monitors their estrous cycles and sends a text when a cow is ready to be bred. Interspecies "sexting"?

THE PERVASIVENESS AND significance of the Internet of Things has clearly raised the possibility that cyberattacks can not only pose risks to the security of important information with commercial, intelligence, and military value, but can also have kinetic impacts. With so many Internet-connected computerized devices now controlling water and electric systems, power plants and refineries, transportation grids and other crucial systems, it is not difficult to conjure scenarios in which a coordinated attack on a nation's vital infrastructure could do real physical harm.

According to John O. Brennan, the White House official in charge of counterterrorism, "Last year alone [2011] there were nearly 200 known attempted or successful cyberintrusions of the control systems that run these facilities, a nearly fivefold increase from 2010." In the spring of 2012, Iran announced that it had been forced to sever the Internet connections of major Iranian oil terminals on the Persian Gulf, oil rigs, and the Tehran offices of the Oil Ministry because of repeated cyberattacks from an unknown source. Later that year, Saudi Arabia's state-owned oil company, Aramco, was the victim of cyberattacks that U.S. security officials said were almost certainly launched by Iran. Iran had announced in 2011 that it was establishing a special military "cybercorps" after one of its nuclear enrichment facilities, at Natanz, was attacked by a computer virus. The attack on Aramco, which replaced all of the data on 75 percent of the firm's computers with an image of a burning American flag, demonstrated, in the words of former national counterterrorism czar Richard Clarke, that "you don't have to be sophisticated to do a lot of damage."

The Stuxnet computer worm, which was probably set loose by Israel and the United States working together, found its way-as intended-into a small Siemens industrial control system connected to the motors running the Iranian gas centrifuges that were enriching uranium as part of Iran's nuclear program. When the Stuxnet worm confirmed that it was inside the specific piece of equipment it was looking for, it turned itself on and began to vary the speeds of the motors powering the centrifuges, desynchronizing them in a way that caused them to break apart and destroy themselves. In 2010, an even more sophisticated software worm, called Flame, which analysts said "dwarfs Stuxnet" in the amount of code it contains, reportedly began infecting computers in Iran and several other nations in the Middle East and North Africa.

Although the result of the Stuxnet attack, which slowed down the Iranian effort to develop weapons-grade nuclear material, was cheered in much of the world, many experts have expressed concern that the sophisticated code involved-much of it now available for download on the Internet-could be used for destructive attacks against Internet-connected machinery and systems in industrial countries. Some such systems have already been inadvertently infected by Stuxnet. After a wave of cyberattacks against U.S. financial institutions in late 2012 that security officials said they believed were launched by Iran, U.S. secretary of defense Leon Panetta publicly warned that a "cyber Pearl Harbor" could do serious damage to U.S. infrastructure.

Because computer viruses, worms, and other threats can be relayed through remote servers located in almost any country around the world, the original source of an attack is often virtually impossible to identify. Even when circumstantial evidence overwhelmingly points toward a single country-China, for example-it is difficult to identify which organization or individuals within that country are responsible for the attack, much less whether the Chinese government or a specific corporation or group was ultimately responsible. According to Scott Aken, a former counterintelligence agent and expert in cybercrime, "In most cases, companies don't realize they've been burned until years later when a foreign competitor puts out their very same product-only they're making it 30 percent cheaper."

While organizations in China have apparently been the principal offenders in this category, a large number of Western corpor