
Robot Visions Part 43

What is worse is that the universe doesn't die with us. Callously and immortally it continues onward in its cyclic changes, adding to the injury of death the insult of indifference.

And what is still worse is that other human beings don't die with us. There are younger human beings, born later, who were helpless and dependent on us to start with, but who grow into supplanting nemeses and take our places as we age and die. To the injury of death is added the insult of supplantation.

Did I say it is useless to fight this horror of death accompanied by indifference and supplantation? Not quite. The uselessness is apparent only if we cling to the rational, but there is no law that says we must cling to it, and human beings do not, in fact, do so.

Death can be avoided by simply denying it exists. We can suppose that life on Earth is an illusion, a short testing period prior to entry into some afterlife where all is eternal and there is no question of irreversible change. Or we can suppose that it is only the body that is subject to death and that there is an immortal component of ourselves, not subject to irreversible change, which might, after the death of one body, enter another, in indefinite, cyclic repetitions of life.

These mythic inventions of afterlife and transmigration may make life tolerable for many human beings and enable them to face death with reasonable equanimity-but the fear of death and supplantation is only masked and overlaid; it is not removed.

In fact, the Greek myths involve the successive supplantation of one set of immortals by another-in what seems to be a despairing admission that not even eternal life and superhuman power can remove the danger of irreversible change and the humiliation of being supplanted.

To the Greeks it was disorder (Chaos) that first ruled the universe, and it was supplanted by Ouranos (the sky), whose intricate powdering of stars and complexly moving planets symbolized order ("Kosmos").

But Ouranos was castrated by Kronos, his son. Kronos, his brothers, his sisters, and their progeny then ruled the universe.

Kronos feared that he would be served by his children as he had served his father (a kind of cycle of irreversible changes) and devoured his children as they were born. He was duped by his wife, however, who managed to save her last-born, Zeus, and to spirit him away to safety. Zeus grew to adult godhood, rescued his siblings from his father's stomach, warred against Kronos and those who followed him, defeated him, and replaced him as ruler.

(There are supplantation myths among other cultures, too, even in our own-as the one in which Satan tried to supplant God and failed; a myth that reached its greatest literary expression in John Milton's Paradise Lost.) And was Zeus safe? He was attracted to the sea nymph Thetis and would have married her had he not been informed by the Fates that Thetis was destined to bear a son mightier than his father. That meant it was not safe for Zeus, or for any other god, either, to marry her. She was therefore forced (much against her will) to marry Peleus, a mortal, and bear a mortal son, the only child the myths describe her as having. That son was Achilles, who was certainly far mightier than his father (and, like Talos, had only his heel as his weak point through which he might be killed).

Now, then, translate this fear of irreversible change and of being supplanted into the relationship of man and machine and what do we have? Surely the great fear is not that machinery will harm us-but that it will supplant us. It is not that it will render us ineffective-but that it will make us obsolete.

The ultimate machine is an intelligent machine and there is only one basic plot to the intelligent-machine story-that it is created to serve man, but that it ends by dominating man. It cannot exist without threatening to supplant us, and it must therefore be destroyed or we will be.

There is the danger of the broom of the sorcerer's apprentice, the golem of Rabbi Loew, the monster created by Dr. Frankenstein. As the child born of our body eventually supplants us, so does the machine born of our mind.

Mary Shelley's Frankenstein, which appeared in 1818, represents a peak of fear, however, for, as it happened, circumstances conspired to reduce that fear, at least temporarily.

Between the year 1815, which saw the end of a series of general European wars, and 1914, which saw the beginning of another, there was a brief period in which humanity could afford the luxury of optimism concerning its relationship to the machine. The Industrial Revolution seemed suddenly to uplift human power and to bring on dreams of a technological utopia on Earth in place of the mythic one in Heaven. The good of machines seemed to far outbalance the evil and the response of love far outbalance the response of fear.

It was in that interval that modern science fiction began-and by modern science fiction I refer to a form of literature that deals with societies differing from our own specifically in the level of science and technology, and into which we might conceivably pass from our own society by appropriate changes in that level. (This differentiates science fiction from fantasy or from "speculative fiction," in which the fictional society cannot be connected with our own by any rational set of changes.) Modern science fiction, because of the time of its beginning, took on an optimistic note. Man's relationship to the machine was one of use and control. Man's power grew and man's machines were his faithful tools, bringing him wealth and security and carrying him to the farthest reaches of the universe.

This optimistic note continues to this day, particularly among those writers who were molded in the years before the coming of the fission bomb-notably, Robert Heinlein, Arthur C. Clarke, and myself.

Nevertheless, with World War I, disillusionment set in. Science and technology, which promised an Eden, turned out to be capable of delivering Hell as well. The beautiful airplane that fulfilled the age-old dream of flight could deliver bombs. The chemical techniques that produced anesthetics, dyes, and medicines produced poison gas as well.

The fear of supplantation rose again. In 1921, not long after the end of World War I, Karel Capek's drama R.U.R. appeared and it was the tale of Frankenstein again, escalated to the planetary level. Not a single monster was created but millions of robots (Capek's word, meaning "worker," a mechanical one, that is). And it was not a single monster turning upon his single creator, but robots turning on humanity, wiping them out and supplanting them.

From the beginning of the science fiction magazine in 1926 to 1959 (a third of a century or a generation) optimism and pessimism battled each other in science fiction, with optimism-thanks chiefly to the influence of John W. Campbell, Jr.-having the better of it.

Beginning in 1939, I wrote a series of influential robot stories that self-consciously combated the "Frankenstein complex" and made of the robots the servants, friends, and allies of humanity.

It was pessimism, however, that won in the end, and for two reasons: First, machinery grew more frightening. The fission bomb threatened physical destruction, of course, but worse still was the rapidly advancing electronic computer. Those computers seemed to steal the human soul. Deftly they solved our routine problems and more and more we found ourselves placing our questions in the hands of these machines with increasing faith, and accepting their answers with increasing humility.

All that fission and fusion bombs can do is destroy us; the computer might supplant us.

The second reason is more subtle, for it involved a change in the nature of the science fiction writer.

Until 1959, there were many branches of fiction, with science fiction perhaps the least among them. It brought its writers less in prestige and money than almost any other branch, so that no one wrote science fiction who wasn't so fascinated by it that he was willing to give up any chance at fame and fortune for its sake. Often that fascination stemmed from an absorption in the romance of science so that science fiction writers would naturally picture men as winning the universe by learning to bend it to their will.

In the 1950s, however, competition with TV gradually killed the magazines that supported fiction, and by the time the 1960s arrived the only form of fiction that was flourishing, and even expanding, was science fiction. Its magazines continued and an incredible paperback boom was initiated. To a lesser extent it invaded movies and television, with its greatest triumphs undoubtedly yet to come.

This meant that in the 1960s and 1970s, young writers began to write science fiction not because they wanted to, but because it was there-and because very little else was there. It meant that many of the new generation of science fiction writers had no knowledge of science, no sympathy for it-and were in fact rather hostile to it. Such writers were far more ready to accept the fear half of the love/fear relationship of man to machine.

As a result, contemporary science fiction, far more often than not, is presenting us, over and over, with the myth of the child supplanting the parent, Zeus supplanting Kronos, Satan supplanting God, the machine supplanting humanity.

Nightmares they are, and they are to be read as such.

-But allow me my own cynical commentary at the end. Remember that although Kronos foresaw the danger of being supplanted, and though he destroyed his children to prevent it-he was supplanted anyway, and rightly so, for Zeus was the better ruler.

So it may be that although we will hate and fight the machines, we will be supplanted anyway, and rightly so, for the intelligent machines to which we will give birth may, better than we, carry on the striving toward the goal of understanding and using the universe, climbing to heights we ourselves could never aspire to.

The New Profession

Back in 1940, I wrote a story in which the leading character was named Susan Calvin. (Good heavens, that's nearly half a century ago.) She was a "robopsychologist" by profession and knew everything there was to know about what made robots tick. It was a science fiction story, of course. I wrote other stories about Susan Calvin over the next few years, and as I described matters, she was born in 1982, went to Columbia, majored in robotics, and graduated in 2003. She went on to do graduate work and by 2010 was working at a firm called U.S. Robots and Mechanical Men, Inc. I didn't really take any of this seriously at the time I wrote it. What I was writing was "just science fiction."

Oddly enough, however, it's working out. Robots are in use on the assembly lines and are increasing in importance each year. The automobile companies are installing them in their factories by the tens of thousands. Increasingly, they will appear elsewhere as well, while ever more complex and intelligent robots will be appearing on the drawing boards. Naturally, these robots are going to wipe out many jobs, but they are going to create jobs, too. The robots will have to be designed, in the first place. They will have to be constructed and installed. Then, since nothing is perfect, they will occasionally go wrong and have to be repaired. To keep the necessity for repair to a minimum, they will have to be intelligently maintained. They may even have to be modified to do their work differently on occasion.

To do all this, we will need a group of people whom we can call, in general, robot technicians. There are some estimates that by the time my fictional Susan Calvin gets out of college, there will be over 2 million robot technicians in the United States alone, and perhaps 6 million in the world generally. Susan won't be alone. To these technicians, suppose we add all the other people that will be employed by those rapidly growing industries that are directly or indirectly related to robotics. It may well turn out that the robots will create more jobs than they will wipe out-but, of course, the two sets of jobs will be different, which means there will be a difficult transition period in which those whose jobs have vanished are retrained so that they can fill new jobs that have appeared.

This may not be possible in every case, and there will have to be innovative social initiatives to take care of those who, because of age or temperament, cannot fit in to the rapidly changing economic scene.

In the past, advances in technology have always necessitated the upgrading of education. Agricultural laborers didn't have to be literate, but factory workers did, so once the Industrial Revolution came to pass, industrialized nations had to establish public schools for the mass education of their populations. There must now be a further advance in education to go along with the new high-tech economy. Education in science and technology will have to be taken more seriously and made lifelong, for advances will occur too rapidly for people to be able to rely solely on what they learned as youngsters.

Wait! I have mentioned robot technicians, but that is a general term. Susan Calvin was not a robot technician; she was, specifically, a robopsychologist. She dealt with robotic "intelligence," with robots' ways of "thinking." I have not yet heard anyone use that term in real life, but I think the time will come when it will be used, just as "robotics" was used after I had invented that term. After all, robot theoreticians are trying to develop robots that can see, that can understand verbal instructions, that can speak in reply. As robots are expected to do more and more tasks, more and more efficiently, and in a more and more versatile way, they will naturally seem more "intelligent." In fact, even now, there are scientists at MIT and elsewhere who are working very seriously on the question of "artificial intelligence."

Still, even if we design and construct robots that can do their jobs in such a way as to seem intelligent, it is scarcely likely that they will be intelligent in the same way that human beings are. For one thing, their "brains" will be constructed of materials different from the ones in our brains. For another, their brains will be made up of different components hooked together and organized in different ways, and will approach problems (very likely) in a totally different manner.

Robotic intelligence may be so different from human intelligence that it will take a new discipline-"robopsychology"-to deal with it. That is where Susan Calvin will come in. It is she and others like her who will deal with robots, where ordinary psychologists could not begin to do so. And this might turn out to be the most important aspect of robotics, for if we study in detail two entirely different kinds of intelligence, we may learn to understand intelligence in a much more general and fundamental way than is now possible. Specifically, we will learn more about human intelligence than may be possible to learn from human intelligence alone.

The Robot As Enemy?

It was back in 1942 that I invented "the Three Laws of Robotics," and of these, the First Law is, of course, the most important. It goes as follows: "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." In my stories, I always make it clear that the Laws, especially the First Law, are an inalienable part of all robots and that robots cannot and do not disobey them.

I also make it clear, though perhaps not as forcefully, that these Laws aren't inherent in robots. The ores and raw chemicals of which robots are formed do not already contain the Laws. The Laws are there only because they are deliberately added to the design of the robotic brain, that is, to the computers that control and direct robotic action. Robots can fail to possess the Laws, either because they are too simple and crude to be given behavior patterns sufficiently complex to obey them or because the people designing the robots deliberately choose not to include the Laws in their computerized makeup.

So far-and perhaps it will be so for a considerable time to come-it is the first of these alternatives that holds sway. Robots are simply too crude and primitive to be able to foresee that an act of theirs will harm a human being and to adjust their behavior to avoid that act. They are, so far, only computerized levers capable of a few types of rote behavior, and they are unable to step beyond the very narrow limits of their instructions. As a result, robots have already killed human beings, just as enormous numbers of noncomputerized machines have. It is deplorable but understandable, and we can suppose that as robots are developed with more elaborate sense perceptions and with the capability of more flexible responses, there will be an increasing likelihood of building safety factors into them that will be the equivalent of the Three Laws.

But what about the second alternative? Will human beings deliberately build robots without the Laws? I'm afraid that is a distinct possibility. People are already talking about security robots. There could be robot guards patrolling the grounds of a building or even its hallways. The function of these robots could be to challenge any person entering the grounds or the building. Presumably, persons who belonged there, or who were invited there, would be carrying (or would be given) some card or other form of identification that would be recognized by the robot, who would then let them pass. In our security-conscious times, this might even seem a good thing. It would cut down on vandalism and terrorism and it would, after all, only be fulfilling the function of a trained guard dog.

But security breeds the desire for more security. Once a robot became capable of stopping an intruder, it might not be enough for it merely to sound an alarm. It would be tempting to endow the robot with the capability of ejecting the intruder, even if it would do injury in the process-just as a dog might injure you in going for your leg or throat. What would happen, though, when the chairman of the board found he had left his identifying card in his other pants and was too upset to leave the building fast enough to suit the robot? Or what if a child wandered into the building without the proper clearance? I suspect that if the robot roughed up the wrong person, there would be an immediate clamor to prevent a repetition of the error.

To go to a further extreme, there is talk of robot weapons: computerized planes, tanks, artillery, and so on, that would stalk the enemy relentlessly, with superhuman senses and stamina. It might be argued that this would be a way of sparing human beings. We could stay comfortably at home and let our intelligent machines do the fighting for us. If some of them were destroyed-well, they are only machines. This approach to warfare would be particularly useful if we had such machines and the enemy didn't.

But even so, could we be sure that our machines could always tell an enemy from a friend? Even when all our weapons are controlled by human hands and human brains, there is the problem of "friendly fire." American weapons can accidentally kill American soldiers or civilians and have actually done so in the past. This is human error, but nevertheless it's hard to take. But what if our robot weapons were to accidentally engage in "friendly fire" and wipe out American people, or even just American property? That would be far harder to take (especially if the enemy had worked out stratagems to confuse our robots and encourage them to hit our own side). No, I feel confident that attempts to use robots without safeguards won't work and that, in the end, we will come round to the Three Laws.

Intelligences Together

In "Our Intelligent Tools" I mentioned the possibility that robots might become so intelligent that they would eventually replace us. I suggested, with a touch of cynicism, that in view of the human record, such a replacement might be a good thing. Since then, robots have rapidly become more and more important in industry, and, although they are as yet quite idiotic on the intelligence scale, they are advancing quickly.

Perhaps, then, we ought to take another look at the matter of robots (or computers-which are the actual driving mechanism of robots) replacing us. The outcome, of course, depends on how intelligent computers become and whether they will become so much more intelligent than we are that they will regard us as no more than pets, at best, or vermin, at worst. This implies that intelligence is a simple thing that can be measured with something like a ruler or a thermometer (or an IQ test) and then expressed in a single number. If the average human being is measured as 100 on an overall intelligence scale, then as soon as the average computer passes 100, we will be in trouble.

Is that the way it works, though? Surely there must be considerable variety in such a subtle quality as intelligence; different species of it, so to speak. I presume it takes intelligence to write a coherent essay, to choose the right words, and to place them in the right order. I also presume it takes intelligence to study some intricate technical device, to see how it works and how it might be improved-or how it might be repaired if it had stopped working. As far as writing is concerned, my intelligence is extremely high; as far as tinkering is concerned, my intelligence is extremely low. Well, then, am I a genius or an imbecile? The answer is: neither. I'm just good at some things and not good at others-and that's true of every one of us.

Suppose, then, we think about the origins of both human intelligence and computer intelligence. The human brain is built up essentially of proteins and nucleic acids; it is the product of over 3 billion years of hit-or-miss evolution; and the driving forces of its development have been adaptation and survival. Computers, on the other hand, are built up essentially of metal and electron surges; they are the product of some forty years of deliberate human design and development; and the driving force of their development has been the human desire to meet perceived human needs. If there are many aspects and varieties of intelligence among human beings themselves, isn't it certain that human and computer intelligences are going to differ widely since they have originated and developed under such different circumstances, out of such different materials, and under the impulse of such different drives?

It would seem that computers, even comparatively simple and primitive specimens, are extraordinarily good in some ways. They possess capacious memories, have virtually instant and unfailing recall, and demonstrate the ability to carry through vast numbers of repetitive arithmetical operations without weariness or error. If that sort of thing is the measure of intelligence, then already computers are far more intelligent than we are. It is because they surpass us so greatly that we use them in a million different ways and know that our economy would fall apart if they all stopped working at once.

But such computer ability is not the only measure of intelligence. In fact, we consider that ability of so little value that no matter how quick a computer is and how impressive its solutions, we see it only as an overgrown slide rule with no true intelligence at all. What the human specialty seems to be, as far as intelligence is concerned, is the ability to see problems as a whole, to grasp solutions through intuition or insight; to see new combinations; to be able to make extraordinarily perceptive and creative guesses. Can't we program a computer to do the same thing? Not likely, for we don't know how we do it.

It would seem, then, that computers should get better and better in their variety of point-by-point, short-focus intelligence, and that human beings (thanks to increasing knowledge and understanding of the brain and the growing technology of genetic engineering) may improve in their own variety of whole-problem, long-focus intelligence. Each variety of intelligence has its advantages and, in combination, human intelligence and computer intelligence-each filling in the gaps and compensating for the weaknesses of the other-can advance far more rapidly than either one could alone. It will not be a case of competing and replacing at all, but of intelligences together, working more efficiently than either alone within the laws of nature.

My Robots

I wrote my first robot story, "Robbie," in May of 1939, when I was only nineteen years old.

What made it different from robot stories that had been written earlier was that I was determined not to make my robots symbols. They were not to be symbols of humanity's overweening arrogance. They were not to be examples of human ambitions trespassing on the domain of the Almighty. They were not to be a new Tower of Babel requiring punishment.

Nor were the robots to be symbols of minority groups. They were not to be pathetic creatures that were unfairly persecuted so that I could make Aesopic statements about Jews, Blacks or any other mistreated members of society. Naturally, I was bitterly opposed to such mistreatment and I made that plain in numerous stories and essays-but not in my robot stories.

In that case, what did I make my robots?-I made them engineering devices. I made them tools. I made them machines to serve human ends. And I made them objects with built-in safety features. In other words, I set it up so that a robot could not kill his creator, and having outlawed that heavily overused plot, I was free to consider other, more rational consequences.

Since I began writing my robot stories in 1939, I did not mention computerization in their connection. The electronic computer had not yet been invented and I did not foresee it. I did foresee, however, that the brain had to be electronic in some fashion. However, "electronic" didn't seem futuristic enough. The positron-a subatomic particle exactly like the electron but of opposite electric charge-had been discovered only four years before I wrote my first robot story. It sounded very science fictional indeed, so I gave my robots "positronic brains" and imagined their thoughts to consist of flashing streams of positrons, coming into existence, then going out of existence almost immediately. These stories that I wrote were therefore called "the positronic robot series," but there was no greater significance than what I have just described to the use of positrons rather than electrons.

At first, I did not bother actually systematizing, or putting into words, just what the safeguards were that I imagined to be built into my robots. From the very start, though, since I wasn't going to have it possible for a robot to kill its creator, I had to stress that robots could not harm human beings; that this was an ingrained part of the makeup of their positronic brains.

Thus, in the very first printed version of "Robbie," I had a character refer to a robot as follows: "He just can't help being faithful and loving and kind. He's a machine, made so."

After writing "Robbie," which John Campbell, of Astounding Science Fiction, rejected, I went on to other robot stories which Campbell accepted. On December 23, 1940, I came to him with an idea for a mind-reading robot (which later became "Liar!") and John was dissatisfied with my explanations of why the robot behaved as it did. He wanted the safeguard specified precisely so that we could understand the robot. Together, then, we worked out what came to be known as the "Three Laws of Robotics. " The concept was mine, for it was obtained out of the stories I had already written, but the actual wording (if I remember correctly) was beaten out then and there by the two of us.

The Three Laws were logical and made sense. To begin with, there was the question of safety, which had been foremost in my mind when I began to write stories about my robots. What's more, I was aware of the fact that even without actively attempting to do harm, one could quietly, by doing nothing, allow harm to come. What was in my mind was Arthur Hugh Clough's cynical "The Latest Decalog," in which the Ten Commandments are rewritten in deeply satirical Machiavellian fashion. The one item most frequently quoted is: "Thou shalt not kill, but needst not strive / Officiously to keep alive."

For that reason I insisted that the First Law (safety) had to be in two parts and it came out this way:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Having got that out of the way, we had to pass on to the Second Law (service). Naturally, in giving the robot the built-in necessity to follow orders, you couldn't forfeit the overall concern of safety. The Second Law had to read as follows, then:

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

And finally, we had to have a Third Law (prudence). A robot was bound to be an expensive machine and it must not needlessly be damaged or destroyed. Naturally, this must not be used as a way of compromising either safety or service. The Third Law, therefore, had to read as follows:

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

Of course, these laws are expressed in words, which is an imperfection. In the positronic brain, they are competing positronic potentials that are best expressed in terms of advanced mathematics (which is well beyond my ken, I assure you). However, even so, there are clear ambiguities. What constitutes "harm" to a human being? Must a robot obey orders given it by a child, by a madman, by a malevolent human being? Must a robot give up its own expensive and useful existence to prevent a trivial harm to an unimportant human being? What is trivial and what is unimportant?
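As a purely hypothetical illustration of the strict precedence just described (this is not the positronic mathematics, and all names, types, and the toy scenario below are invented for the sketch), the ordering can be modeled as a lexicographic comparison of Law violations: an action that violates a higher Law loses out to any action that does not, however the lower Laws fall.

```python
# Hypothetical sketch: the Three Laws as strictly ordered priorities.
# Nothing here comes from the stories; the names are invented for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool      # would injure a human, or let one come to harm (First Law)
    disobeys_order: bool   # would disobey a legitimate human order (Second Law)
    endangers_self: bool   # would needlessly risk the robot's own existence (Third Law)


def law_violations(action: Action) -> tuple[int, int, int]:
    """Violations in order of precedence: (First, Second, Third).

    Tuples compare lexicographically, so a single First Law violation
    outweighs any number of Second or Third Law violations, and so on.
    """
    return (int(action.harms_human),
            int(action.disobeys_order),
            int(action.endangers_self))


def choose(actions: list[Action]) -> Action:
    """Pick the candidate action that best satisfies the Laws, in order."""
    return min(actions, key=law_violations)


if __name__ == "__main__":
    options = [
        Action("carry out the order, injuring a bystander",
               harms_human=True, disobeys_order=False, endangers_self=False),
        Action("refuse the order and stand down intact",
               harms_human=False, disobeys_order=True, endangers_self=False),
        Action("refuse the order and destroy itself in the process",
               harms_human=False, disobeys_order=True, endangers_self=True),
    ]
    # Prints "refuse the order and stand down intact": the First Law rules out
    # obeying, and the Third Law breaks the tie between the two refusals.
    print(choose(options).name)
```

Even in this toy form the ambiguities remain: the sketch simply takes "harm" as a given yes-or-no quantity, and deciding what counts as harm is exactly the judgment the stories show to be hard.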

These ambiguities are not shortcomings as far as a writer is concerned. If the Three Laws were perfect and unambiguous there would be no room for stories. It is in the nooks and crannies of the ambiguities that all one's plots can lodge, and which provide a foundation, if you'll excuse the pun, for Robot City.

I did not specifically state the Three Laws in words in "Liar!," which appeared in the May 1941 Astounding. I did do so, however, in my next robot story, "Runaround," which appeared in the March 1942 Astounding. In that issue, on line seven of page one hundred, I have a character say, "Now, look, let's start with the three fundamental Rules of Robotics," and I then quote them. That, incidentally, as far as I or anyone else has been able to tell, represents the first appearance in print of the word "robotics"-which, apparently, I invented.

Since then, I have never had occasion, over a period of over forty years during which I wrote many stories and novels dealing with robots, to be forced to modify the Three Laws. However, as time passed, and as my robots advanced in complexity and versatility, I did feel that they would have to reach for something still higher. Thus, in Robots and Empire, a novel published by Doubleday in 1985, I talked about the possibility that a sufficiently advanced robot might feel it necessary to consider the prevention of harm to humanity generally as taking precedence over the prevention of harm to an individual. This I called the "Zeroth Law of Robotics," but I'm still working on that.

My invention of the Three Laws of Robotics is probably my most important contribution to science fiction. They are widely quoted outside the field, and no history of robotics could possibly be complete without mention of the Three Laws. In 1985, John Wiley and Sons published a huge tome, Handbook of Industrial Robotics, edited by Shimon Y. Nof, and, at the editor's request, I wrote an introduction concerning the Three Laws.

Now it is understood that science fiction writers generally have created a pool of ideas that form a common stock into which all writers can dip. For that reason, I have never objected to other writers who have used robots that obey the Three Laws. I have, rather, been flattered and, honestly, modern science fictional robots can scarcely appear without those Laws.

However, I have firmly resisted the actual quotation of the Three Laws by any other writer. Take the Laws for granted, is my attitude in this matter, but don't recite them. The concepts are everyone's but the words are mine.

The Laws Of Humanics

My first three robot novels were, essentially, murder mysteries, with Elijah Baley as the detective. Of these first three, the second novel, The Naked Sun, was a locked-room mystery, in the sense that the murdered person was found with no weapon on the site and yet no weapon could have been removed either.

I managed to produce a satisfactory solution but I did not do that sort of thing again.

The fourth robot novel, Robots and Empire, was not primarily a murder mystery. Elijah Baley had died a natural death at a good old age, and the book veered toward the Foundation universe, so that it was clear that both my notable series, the Robot series and the Foundation series, were going to be fused into a broader whole. (No, I didn't do this for some arbitrary reason. The necessities arising out of writing sequels in the 1980s to tales originally written in the 1940s and 1950s forced my hand.)

In Robots and Empire, my robot character, Giskard, of whom I was very fond, began to concern himself with "the Laws of Humanics," which, I indicated, might eventually serve as the basis for the science of psychohistory, which plays such a large role in the Foundation series.

Strictly speaking, the Laws of Humanics should be a description, in concise form, of how human beings actually behave. No such description exists, of course. Even psychologists, who study the matter scientifically (at least, I hope they do), cannot present any "laws" but can only make lengthy and diffuse descriptions of what people seem to do. And none of them are prescriptive. When a psychologist says that people respond in this way to a stimulus of that sort, he merely means that some do at some times. Others may do it at other times, or may not do it at all.

If we have to wait for actual laws prescribing human behavior in order to establish psychohistory (and surely we must) then I suppose we will have to wait a long time.

Well, then, what are we going to do about the Laws of Humanics? I suppose what we can do is to start in a very small way, and then later slowly build it up, if we can.

Thus, in Robots and Empire, it is a robot, Giskard, who raises the question of the Laws of Humanics. Being a robot, he must view everything from the standpoint of the Three Laws of Robotics-these robotic laws being truly prescriptive, since robots are forced to obey them and cannot disobey them.

The Three Laws of Robotics are:

1-A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2-A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3-A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Well, then, it seems to me that a robot could not help but think that human beings ought to behave in such a way as to make it easier for robots to obey those laws.

In fact, it seems to me that ethical human beings should be as anxious to make life easier for robots as the robots themselves would. I took up this matter in my story "The Bicentennial Man," which was published in 1976. In it, I had a human character say in part: "If a man has the right to give a robot any order that does not involve harm to a human being, he should have the decency never to give a robot any order that involves harm to a robot, unless human safety absolutely requires it. With great power goes great responsibility, and if the robots have Three Laws to protect men, is it too much to ask that men have a law or two to protect robots?"

For instance, the First Law is in two parts. The first part, "A robot may not injure a human being," is absolute and nothing need be done about that. The second part, "or, through inaction, allow a human being to come to harm," leaves things open a bit. A human being might be about to come to harm because of some event involving an inanimate object. A heavy weight might be likely to fall upon him, or he may slip and be about to fall into a lake, or any one of uncountable other misadventures of the sort may be involved. Here the robot simply must try to rescue the human being: pull him from under the weight, steady him on his feet, and so on. Or a human being might be threatened by some form of life other than human-a lion, for instance-and the robot must come to his defense.

But what if harm to a human being is threatened by the action of another human being? There a robot must decide what to do. Can he save one human being without harming the other? Or if there must be harm, what course of action must he pursue to make it minimal?

It would be a lot easier for the robot if human beings were as concerned about the welfare of human beings as robots are expected to be. And, indeed, any reasonable human code of ethics would instruct human beings to care for each other and to do no harm to each other. Which is, after all, the mandate that humans gave robots. Therefore the First Law of Humanics, from the robots' standpoint, is:

1-A human being may not injure another human being, or, through inaction, allow a human being to come to harm.

If this law is carried through, the robot will be left guarding the human being from misadventures with inanimate objects and with non-human life, something which poses no ethical dilemmas for it. Of course, the robot must still guard against harm done a human being unwittingly by another human being. It must also stand ready to come to the aid of a threatened human being, if another human being on the scene simply cannot get to the scene of action quickly enough. But then, even a robot may unwittingly harm a human being, and even a robot may not be fast enough to get to the scene of action in time or skilled enough to take the necessary action. Nothing is perfect.

That brings us to the Second Law of Robotics, which compels a robot to obey all orders given it by human beings except where such orders would conflict with the First Law. This means that human beings can give robots any order without limitation as long as it does not involve harm to a human being.

But then a human being might order a robot to do something impossible, or give it an order that might involve a robot in a dilemma that would do damage to its brain. Thus, in my short story "Liar!," published in 1941, I had a human being deliberately put a robot into a dilemma where its brain burnt out and ceased to function.

We might even imagine that as a robot becomes more intelligent and self-aware, its brain might become sensitive enough to undergo harm if it were forced to do something needlessly embarrassing or undignified. Consequently, the Second Law of Humanics would be:

2-A human being must give orders to a robot that preserve robotic existence, unless such orders cause harm or discomfort to human beings.

The Third Law of Robotics is designed to protect the robot, but from the robotic view it can be seen that it does not go far enough. The robot must sacrifice its existence if the First or Second Law makes that necessary. Where the First Law is concerned, there can be no argument. A robot must give up its existence if that is the only way it can avoid doing harm to a human being or can prevent harm from coming to a human being. If we admit the innate superiority of any human being to any robot (which is something I am a little reluctant to admit, actually), then this is inevitable.

On the other hand, must a robot give up its existence merely in obedience to an order that might be trivial, or even malicious? In "The Bicentennial Man," I have some hoodlums deliberately order a robot to take itself apart for the fun of watching that happen. The Third Law of Humanics must therefore be:

3-A human being must not harm a robot, or, through inaction, allow a robot to come to harm, unless such harm is needed to keep a human being from harm or to allow a vital order to be carried out.

Of course, we cannot enforce these laws as we can the Robotic Laws. We cannot design human brains as we design robot brains. It is, however, a beginning, and I honestly think that if we are to have power over intelligent robots, we must feel a corresponding responsibility for them, as the human character in my story "The Bicentennial Man" said.

Cybernetic Organism

A robot is a robot and an organism is an organism.

An organism, as we all know, is built up of cells. From the molecular standpoint, its key molecules are nucleic acids and proteins. These float in a watery medium, and the whole has a bony support system. It is useless to go on with the description, since we are all familiar with organisms and since we are examples of them ourselves.

A robot, on the other hand, is (as usually pictured in science fiction) an object, more or less resembling a human being, constructed out of strong, rust-resistant metal. Science fiction writers are generally chary of describing the robotic details too closely since they are not usually essential to the story and the writers are generally at a loss how to do so.

The impression one gets from the stories, however, is that a robot is wired, so that it has wires through which electricity flows rather than tubes through which blood flows. The ultimate source of power is either unnamed, or is assumed to partake of the nature of nuclear power.

What of the robotic brain?