
Robot Visions Part 41

"So where is it, you old puke?"

Rodney said, "I don't think it would be wise to tell you, Little Master. That would disappoint Gracie and Howard who would like to give the presents to you tomorrow morning."

"Listen," said little LeRoy, "who you think you're talking to, you dumb robot? Now I gave you an order. You bring those presents to me." And in an attempt to show Rodney who was master, he kicked the robot in the shin.

It was a mistake. I saw it coming a second before it happened, and that was a joyous second. Little LeRoy, after all, was ready for bed (though I doubted that he ever went to bed before he was good and ready). Therefore, he was wearing slippers. What's more, the slipper sailed off the foot with which he kicked, so that he ended by slamming his bare toes hard against the solid chrome-steel of the robotic shin.

He fell to the floor howling and in rushed his mother. "What is it, LeRoy? What is it?"

Whereupon little LeRoy had the immortal gall to say, "He hit me. That old monster-robot hit me."

Hortense screamed. She saw me and shouted, "That robot of yours must be destroyed."

I said, "Come, Hortense. A robot can't hit a boy. The First Law of Robotics prevents it."

"It's an old robot, a broken robot. LeRoy says-"

"LeRoy lies. There is no robot, no matter how old or how broken, who could hit a boy."

"Then he did it. Grampa did it," howled LeRoy.

"I wish I did," I said, quietly, "but no robot would have allowed me to. Ask your own. Ask Rambo if he would have remained motionless while either Rodney or I had hit your boy. Rambo!"

I put it in the imperative, and Rambo said, "I would not have allowed any harm to come to the Little Master, Madam, but I did not know what he purposed. He kicked Rodney's shin with his bare foot, Madam."

Hortense gasped and her eyes bulged in fury. "Then he had a good reason to do so. I'll still have your robot destroyed."

"Go ahead, Hortense. Unless you're willing to ruin your robot's efficiency by trying to reprogram him to lie, he will bear witness to just what preceded the kick and so, of course, with pleasure, will I."

Hortense left the next morning, carrying the pale-faced LeRoy with her (it turned out he had broken a toe-nothing he didn't deserve) and an endlessly wordless DeLancey.

Gracie wrung her hands and implored them to stay, but I watched them leave without emotion. No, that's a lie. I watched them leave with lots of emotion, all pleasant.

Later, I said to Rodney, when Gracie was not present, "I'm sorry, Rodney. That was a horrible Christmas, all because we tried to have it without you. We'll never do that again, I promise."

"Thank you, Sir," said Rodney. "I must admit that there were times these two days when I earnestly wished the laws of robotics did not exist."

I grinned and nodded my head, but that night I woke up out of a sound sleep and began to worry. I've been worrying ever since.

I admit that Rodney was greatly tried, but a robot can't wish the laws of robotics did not exist. He can't, no matter what the circumstances.

If I report this, Rodney will undoubtedly be scrapped, and if we're issued a new robot as recompense, Gracie will simply never forgive me. Never! No robot, however new, however talented, can possibly replace Rodney in her affection.

In fact, I'll never forgive myself. Quite apart from my own liking for Rodney, I couldn't bear to give Hortense the satisfaction.

But if I do nothing, I live with a robot capable of wishing the laws of robotics did not exist. From wishing they did not exist to acting as if they did not exist is just a step. At what moment will he take that step and in what form will he show that he has done so?

What do I do? What do I do?

Robots I Have Known

Mechanical men, or, to use Capek's now universally-accepted term, robots, are a subject to which the modern science-fiction writer has turned again and again. There is no uninvented invention, with the possible exception of the spaceship, that is so clearly pictured in the minds of so many: a sinister form, large, metallic, vaguely human, moving like a machine and speaking with no emotion.

The key word in the description is "sinister" and therein lies a tragedy, for no science-fiction theme wore out its welcome as quickly as did the robot. Only one robot-plot seemed available to the average author: the mechanical man that proved a menace, the creature that turned against its creator, the robot that became a threat to humanity. And almost all stories of this sort were heavily surcharged, either explicitly or implicitly, with the weary moral that "there are some things mankind must never seek to learn."

This sad situation has, since 1940, been largely ameliorated. Stories about robots abound; a newer viewpoint, more mechanistic and less moralistic, has developed. For this development, some people (notably Mr. Groff Conklin in the introduction to his science-fiction anthology entitled "Science-Fiction Thinking Machines," published in 1954) have seen fit to attach at least partial credit to a series of robot stories I wrote beginning in 1940. Since there is probably no one on Earth less given to false modesty than myself, I accept said partial credit with equanimity and ease, modifying it only to include Mr. John W. Campbell, Jr., editor of "Astounding Science-Fiction," with whom I had many fruitful discussions on robot stories.

My own viewpoint was that robots were story material, not as blasphemous imitations of life, but merely as advanced machines. A machine does not "turn against its creator" if it is properly designed. When a machine, such as a power-saw, seems to do so by occasionally lopping off a limb, this regrettable tendency towards evil is combated by the installation of safety devices. Analogous safety devices would, it seemed obvious, be developed in the case of robots. And the most logical place for such safety devices would seem to be in the circuit-patterns of the robotic "brain."

Let me pause to explain that in science-fiction, we do not quarrel intensively concerning the actual engineering of the robotic "brain." Some mechanical device is assumed which in a volume that approximates that of the human brain must contain all the circuits necessary to allow the robot a range of perception-and-response reasonably equivalent to that of a human being. How that can be done without the use of mechanical units the size of a protein molecule or, at the very least, the size of a brain cell, is not explained. Some authors may talk about transistors and printed circuits. Most say nothing at all. My own pet trick is to refer, somewhat mystically, to "positronic brains," leaving it to the ingenuity of the reader to decide what positrons have to do with it and to his good-will to continue reading after having failed to reach a decision.

In any case, as I wrote my series of robot stories, the safety devices gradually crystallized in my mind as "The Three Laws of Robotics." These three laws were first explicitly stated in "Runaround." As finally perfected, the Three Laws read as follows.

First Law-A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law-A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law-A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws are firmly built into the robotic brain, or at least the circuit equivalents are. Naturally, I don't describe the circuit equivalents. In fact, I never discuss the engineering of the robots for the very good reason that I am colossally ignorant of the practical aspects of robotics.
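Since I never describe the circuit equivalents, the reader is free to imagine them. One way to make the strict priority ordering of the Laws concrete is a small sketch; this is purely illustrative and not from the essay. Every name in it is my own invention, and reducing each Law to a boolean flag flattens away all the conflict-of-potentials subtlety the stories turn on.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would injure a human, or allow harm
    disobeys_order: bool = False       # contradicts a human order
    order_conflicts_first_law: bool = False
    endangers_self: bool = False       # risks the robot's own existence
    ordered: bool = False              # a human ordered this action
    protects_human: bool = False       # needed to keep a human from harm

def permitted(a: Action) -> bool:
    """Check an action against the Three Laws, in strict priority order."""
    if a.harms_human:                  # First Law: absolute
        return False
    if a.disobeys_order and not a.order_conflicts_first_law:  # Second Law
        return False
    # Third Law: self-preservation yields to orders and to human safety.
    if a.endangers_self and not (a.ordered or a.protects_human):
        return False
    return True

# Rodney could not strike LeRoy even on a direct order:
print(permitted(Action(harms_human=True, ordered=True)))            # False
# ...but must accept damage to itself when a human is in danger:
print(permitted(Action(endangers_self=True, protects_human=True)))  # True
```

Note that the First Law check comes first and is unconditional: no combination of the other flags can override it, which is exactly the point made to Hortense in the story above.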

The First Law, as you can readily see, immediately eliminates that old, tired plot which I will not offend you by referring to any further.

Although, at first flush, it may appear that to set up such restrictive rules must hamper the creative imagination, it has turned out that the Laws of Robotics have served as a rich source of plot material. They have proved anything but a mental road-block.

An example would be the story "Runaround" to which I have already referred. The robot in that story, an expensive and experimental model, is designed for operation on the sunside of the planet Mercury. The Third Law has been built into him more strongly than usual for obvious economic reasons. He has been sent out by his human employers, as the story begins, to obtain some liquid selenium for some vital and necessary repairs. (Liquid selenium lies about in puddles in the heat of Mercury's sunward side, I will ask you to believe.) Unfortunately, the robot was given his order casually so that the Second Law circuit set up was weaker than usual. Still more unfortunately, the selenium pool to which the robot was sent was near a site of volcanic activity, as a result of which there were sizable concentrations of carbon monoxide in the area. At the temperature of Mercury's sunside, I surmised that carbon monoxide would react fairly quickly with iron to form volatile iron carbonyls so that the robot's more delicate joints might be badly damaged. The further the robot penetrates into this area, the greater the danger to his existence and the more intensive is the Third Law effect driving him away. The Second Law, however, ordinarily the superior, drives him onward. At a certain point, the unusually weak Second Law potential and the unusually strong Third Law potential reach a balance and the robot can neither advance nor retreat. He can only circle the selenium pool on the equipotential locus that makes a rough circle about the site.

Meanwhile, our heroes must have the selenium. They chase after the robot in special suits, discover the problem and wonder how to correct it. After several failures, the correct answer is hit upon. One of the men deliberately exposes himself to Mercury's sun in such a way that unless the robot rescues him, he will surely die. That brings the First Law into operation, which being superior to both Second and Third, pulls the robot out of his useless orbit and brings on the necessary happy ending.
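The stalemate in "Runaround" is, at bottom, an equilibrium between two opposing potentials, and that balance is easy to caricature numerically. The sketch below is a toy model of my own, not anything from the story: the casually given order supplies a constant, weak drive toward the pool, the danger supplies a repulsion that grows as the robot closes in, and the robot settles on the circle where the two cancel. All the numbers are arbitrary.

```python
# Toy model (hypothetical numbers): a weak Second Law drive toward the
# selenium pool balanced against a Third Law repulsion that grows near it.

def second_law_drive(r: float) -> float:
    return 1.0                    # casually given order: constant, weak pull

def third_law_repulsion(r: float) -> float:
    return 5.0 / (r * r)          # danger to the robot grows near the pool

def net_drive(r: float) -> float:
    return second_law_drive(r) - third_law_repulsion(r)

# Let the robot creep toward the pool until the drives balance.
r = 10.0                          # starting distance from the pool
for _ in range(2000):
    r -= 0.05 * net_drive(r)

print(round(r, 2))                # settles near sqrt(5), about 2.24
```

The equilibrium radius falls out of setting drive equal to repulsion: 1 = 5/r², so r = √5. A more firmly phrased order (a larger Second Law drive) tightens the circle; a stronger self-preservation circuit widens it, which is exactly the dilemma the story exploits.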

It is in the story "Runaround," by the way, that I believe I first made use of the term "robotics" (implicitly defined as the science of robot design, construction, maintenance, etc.). Years later, I was told that I had invented the term and that it had never seen publication before. I do not know whether this is true. If it is true, I am happy, because I think it is a logical and useful word, and I hereby donate it to real workers in the field with all good will.

None of my other robot stories spring so immediately out of the Three Laws as does "Runaround" but all are born of the Laws in some way. There is the story, for instance, of the mind-reading robot who was forced to lie because he was unable to tell any human being anything other than that which the human in question wished to hear. The truth, you see, would almost invariably cause "harm" to the human being in the form of disappointment, disillusion, embarrassment, chagrin and other similar emotions, all of which were but too plainly visible to the robot.

Then there was the puzzle of the man who was suspected of being a robot, that is, of having a quasi-protoplasmic body and a robot's "positronic brain." One way of proving his humanity would be for him to break the First Law in public, so he obliges by deliberately striking a man. But the story ends in doubt because there is still the suspicion that the other "man" might also be a robot and there is nothing in the Three Laws that would prevent a robot from hitting another robot.

And then we have the ultimate robots, models so advanced that they are used to precalculate such things as weather, crop harvests, industrial production figures, political developments and so on. This is done in order that world economy may be less subject to the whims of those factors which are now beyond man's control. But these ultimate robots, it seems, are still subject to the First Law. They cannot through inaction allow human beings to come to harm, so they deliberately give answers which are not necessarily truthful and which cause localized economic upsets so designed as to maneuver mankind along the road that leads to peace and prosperity. So the robots finally win the mastery after all, but only for the good of man.

The interrelationship of man and robot is not to be neglected. Mankind may know of the existence of the Three Laws on an intellectual level and yet have an ineradicable fear and distrust for robots on an emotional level. If you wanted to invent a term, you might call it a "Frankenstein complex." There is also the more practical matter of the opposition of labor unions, for instance, to the possible replacement of human labor by robot labor.

This, too, can give rise to stories. My first robot story concerned a robot nursemaid and a child. The child adored its robot as might be expected, but the mother feared it, as might also be expected. The nub of the story lay in the mother's attempt to get rid of it and in the child's reaction to that.

My first full-length robot novel, "The Caves of Steel" (1954), peers further into the future, and is laid in a time when other planets, populated by emigrating Earthmen, have adopted a thoroughly robotized economy, but where Earth itself, for economic and emotional reasons, still objects to the introduction of the metal creatures. A murder is committed, with robot-hatred as the motive. It is solved by a pair of detectives, one a man, one a robot, with a great portion of the deductive reasoning (to which detective stories are prone) revolving about the Three Laws and their implications.

I have managed to convince myself that the Three Laws are both necessary and sufficient for human safety in regard to robots. It is my sincere belief that some day when advanced human-like robots are indeed built, something very like the Three Laws will be built into them. I would enjoy being a prophet in this respect, and I regret only the fact that the matter probably cannot be arranged in my lifetime. *

*This essay was written in 1956. In the years since, "robotics" has indeed entered the English language and is universally used, and I have lived to see roboticists taking the Three Laws very seriously.

The New Teachers

The percentage of older people in the world is increasing and that of younger people decreasing, and this trend will continue if the birthrate should drop and medicine continue to extend the average life span.

In order to keep older people imaginative and creative and to prevent them from becoming an ever-growing drag on a shrinking pool of creative young, I have recommended frequently that our educational system be remodeled and that education be considered a lifelong activity.

But how can this be done? Where will all the teachers come from?

Who says, however, that all teachers must be human beings or even animate?

Suppose that over the next century communications satellites become numerous and more sophisticated than those we've placed in space so far. Suppose that in place of radio waves the more capacious laser beam of visible light becomes the chief communications medium.

Under these circumstances, there would be room for many millions of separate channels for voice and picture, and it is easy to imagine every human being on Earth having a particular television wavelength assigned to her or him.

Each person (child, adult, or elderly) can have his own private outlet to which could be attached, at certain desirable periods of time, his or her personal teaching machine. It would be a far more versatile and interactive teaching machine than anything we could put together now, for computer technology will also have advanced in the interval.

We can reasonably hope that the teaching machine will be sufficiently intricate and flexible to be capable of modifying its own program (that is, "learning") as a result of the student's input.

In other words, the student will ask questions, answer questions, make statements, offer opinions, and from all of this, the machine will be able to gauge the student well enough to adjust the speed and intensity of its course of instruction and, what's more, shift it in the direction of the student interest displayed.

We can't imagine a personal teaching machine to be very big, however. It might resemble a television set in size and appearance. Can so small an object contain enough information to teach the students as much as they want to know, in any direction intellectual curiosity may lead them? No, not if the teaching machine is self-contained-but need it be?

In any civilization with computer science so advanced as to make teaching machines possible, there will surely be thoroughly computerized central libraries. Such libraries may even be interconnected into a single planetary library.

All teaching machines would be plugged into this planetary library and each could then have at its disposal any book, periodical, document, recording, or video cassette encoded there. If the machine has it, the student would have it too, either placed directly on a viewing screen, or reproduced in print-on-paper for more leisurely study.

Of course, human teachers will not be totally eliminated. In some subjects, human interaction is essential-athletics, drama, public speaking, and so on. There is also value, and interest, in groups of students working in a particular field-getting together to discuss and speculate with each other and with human experts, sparking each other to new insights.

After this human interchange they may return, with some relief, to the endlessly knowledgeable, endlessly flexible, and, most of all, endlessly patient machines.

But who will teach the teaching machines?

Surely the students who learn will also teach. Students who learn freely in those fields and activities that interest them are bound to think, speculate, observe, experiment, and, now and then, come up with something of their own that may not have been previously known.

They would transmit that knowledge back to the machines, which will in turn record it (with due credit, presumably) in the planetary library-thus making it available to other teaching machines. All will be put back into the central hopper to serve as a new and higher starting point for those who come after: The teaching machines will thus make it possible for the human species to race forward to heights and in directions now impossible to foresee.

But I am describing only the mechanics of learning. What of the content? What subjects will people study in the age of the teaching machine? I'll speculate on that in the next essay.

Whatever You Wish

The difficulty in deciding on what the professions of the future would be is that it all depends on the kind of future we choose to have. If we allow our civilization to be destroyed, the only profession of the future will be scrounging for survival, and few will succeed at it.

Suppose, though, that we keep our civilization alive and flourishing and, therefore, that technology continues to advance. It seems logical that the professions of such a future would include computer programming, lunar mining, fusion engineering, space construction, laser communications, neurophysiology, and so on.

I can't help but think, however, that the advance of computerization and automation is going to wipe out the subwork of humanity-the dull pushing and shoving and punching and clicking and filing and all the other simple and repetitive motions, both physical and mental, that can be done perfectly easily-and better-by machines no more complicated than those we can already build.

In short, the world could be so well run that only a relative handful of human "foremen" would be needed to engage in the various professions and supervisory work necessary to keep the world's population fed, housed, and cared for.

What about the majority of the human species in this automated future? What about those who don't have the ability or the desire to work at the professions of the future -or for whom there is no room in those professions? It may be that most people will have nothing to do of what we think of as work nowadays.

This could be a frightening thought. What will people do without work? Won't they sit around and be bored; or worse, become unstable or even vicious? The saying is that Satan finds mischief still for idle hands to do.

But we judge from the situation that has existed till now, a situation in which people are left to themselves to rot.

Consider that there have been times in history when an aristocracy lived in idleness off the backs of flesh-and-blood machines called slaves or serfs or peasants. When such a situation was combined with a high culture, however, aristocrats used their leisure to become educated in literature, the arts, and philosophy. Such studies were not useful for work, but they occupied the mind, made for interesting conversation and an enjoyable life.

These were the liberal arts, arts for free men who didn't have to work with their hands. And these were considered higher and more satisfying than the mechanical arts, which were rarely materially useful.

Perhaps, then, the future will see a world aristocracy supported by the only slaves that can humanely serve in such a post: sophisticated machines. And there will be an infinitely newer and broader liberal arts program, taught by the teaching machines, from which each person could choose.

Some might choose computer technology or fusion engineering or lunar mining or any of the professions that would seem vital to the proper functioning of the world. Why not? Such professions, placing demands on human imagination and skill, would be very attractive to many, and there will surely be enough who will be voluntarily drawn to these occupations to fill them adequately.

But to most people the field of choice might be far less cosmic. It might be stamp collecting, pottery, ornamental painting, cooking, dramatics, or whatever. Every field will be an elective, and the only guide will be "whatever you wish."

Each person, guided by teaching machines sophisticated enough to offer a wide sampling of human activities, can then choose what he or she can best and most willingly do.

Is the individual person wise enough to know what he or she can best do? -Why not? Who else can know? And what can a person do best except that which he or she wants to do most?

Won't people choose to do nothing? Sleep their lives away?

If that's what they want, why not?-Except that I have a feeling they won't. Doing nothing is hard work, and, it seems to me, would be indulged in only by those who have never had the opportunity to evolve out of themselves something more interesting and, therefore, easier to do.

In a properly automated and educated world, then, machines may prove to be the true humanizing influence. It may be that machines will do the work that makes life possible and that human beings will do all the other things that make life pleasant and worthwhile.

The Friends We Make

The term "robot" dates back only sixty years. It was invented by the Czech playwright, Karel Capek, in his play, R. U. R., and is a Czech word meaning worker.

The idea, however, is far older. It is as old as man's longing for a servant as smart as a human being, but far stronger, and incapable of growing weary, bored, or dissatisfied. In the Greek myths, the god of the forge, Hephaistos, had two golden girls-as bright and alive as flesh-and-blood girls-to help him. And the island of Crete was guarded, in the myths, by a bronze giant named Talos, who circled its shores perpetually and tirelessly, watching for intruders.

Are robots possible, though? And if they are, are they desirable?

Mechanical devices with gears and springs and ratchets could certainly make manlike devices perform manlike actions, but the essence of a successful robot is to have it think-and think well enough to perform useful functions without being continually supervised.

But thinking takes a brain. The human brain is made up of microscopic neurons, each of which has an extraordinarily complex substructure. There are 10 billion neurons in the brain and 90 billion supporting cells, all hooked together in a very intricate pattern. How can anything like that be duplicated by some man-made device in a robot?

It wasn't until the invention of the electronic computer thirty-five years ago that such a thing became conceivable. Since its birth, the electronic computer has grown ever more compact, and each year it becomes possible to pack more and more information into less and less volume.

In a few decades, might not enough versatility to direct a robot be packed into a volume the size of the human brain? Such a computer would not have to be as advanced as the human brain, but only advanced enough to guide the actions of a robot designed, let us say, to vacuum rugs, to run a hydraulic press, to survey the lunar surface.

A robot would, of course, have to include a self-contained energy source; we couldn't expect it to be forever plugged into a wall socket. This, however, can be handled. A battery that needs periodic charging is not so different from a living body that needs periodic feeding.

But why bother with a humanoid shape? Would it not be more sensible to devise a specialized machine to perform a particular task without asking it to take on all the inefficiencies involved in arms, legs, and torso? Suppose you design a robot that can hold a finger in a furnace to test its temperature and turn the heating unit on and off to maintain that temperature nearly constant. Surely a simple thermostat made of a bimetallic strip will do the job as well.

Consider, though, that over the thousands of years of man's civilization, we have built a technology geared to the human shape. Products for humans' use are designed in size and form to accommodate the human body-how it bends and how long, wide, and heavy the various bending parts are. Machines are designed to fit the human reach and the width and position of human fingers.

We have only to consider the problems of human beings who happen to be a little taller or shorter than the norm-or even just left-handed-to see how important it is to have a good fit into our technology.

If we want a directing device then, one that can make use of human tools and machines, and that can fit into the technology, we would find it useful to make that device in the human shape, with all the bends and turns of which the human body is capable. Nor would we want it to be too heavy or too abnormally proportioned. Average in all respects would be best.

Then too, we relate to all nonhuman things by finding, or inventing, something human about them. We attribute human characteristics to our pets, and even to our automobiles. We personify nature and all the products of nature and, in earlier times, made human-shaped gods and goddesses out of them.

Surely, if we are to take on thinking partners-or, at the least, thinking servants-in the form of machines, we will be more comfortable with them, and we will relate to them more easily, if they are shaped like humans.

It will be easier to be friends with human-shaped robots than with specialized machines of unrecognizable shape. And I sometimes think that, in the desperate straits of humanity today, we would be grateful to have nonhuman friends, even if they are only friends we build ourselves.

Our Intelligent Tools

Robots don't have to be very intelligent to be intelligent enough. If a robot can follow simple orders and do the housework, or run simple machines in a cut-and-dried, repetitive way, we would be perfectly satisfied.

Constructing a robot is hard because you must fit a very compact computer inside its skull, if it is to have a vaguely human shape. Making a sufficiently complex computer as compact as the human brain is also hard.