Robot Visions Part 44

When I wrote my first few robot stories in 1939 and 1940, I imagined a "positronic brain" of a spongy type of platinum-iridium alloy. It was platinum-iridium because that is a particularly inert metal and is least likely to undergo chemical changes. It was spongy so that it would offer an enormous surface on which electrical patterns could be formed and un-formed. It was "positronic" because four years before my first robot story, the positron had been discovered as a reverse kind of electron, so that "positronic" in place of "electronic" had a delightful science-fiction sound.

Nowadays, of course, my positronic platinum-iridium brain is hopelessly archaic. Even ten years after its invention it became outmoded. By the end of the 1940s, we came to realize that a robot's brain must be a kind of computer. Indeed, if a robot were to be as complex as the robots in my most recent novels, the robot brain-computer must be every bit as complex as the human brain. It must be made of tiny microchips no larger than, and as complex as, brain cells.

But now let us try to imagine something that is neither organism nor robot, but a combination of the two. Perhaps we can think of it as an organism-robot or "orbot." That would clearly be a poor name, for it is only "robot" with the first two letters transposed. To say "orgabot," instead, is to be stuck with a rather ugly word.

We might call it a robot-organism, or a "robotanism," which, again, is ugly, or a "roborg." To my ears, "roborg" doesn't sound bad, but we can't have that. Something else has arisen.

The science of computers was given the name "cybernetics" by Norbert Wiener a generation ago, so that if we consider something that is part robot and part organism and remember that a robot is cybernetic in nature, we might think of the mixture as a "cybernetic organism," or a "cyborg." In fact, that is the name that has stuck and is used.

To see what a cyborg might be, let's try starting with a human organism and moving toward a robot; and when we are quite done with that, let's start with a robot and move toward a human being.

To move from a human organism toward a robot, we must begin replacing portions of the human organism with robotic parts. We already do that in some ways. For instance, a good percentage of the original material of my teeth is now metallic, and metal is, of course, the robotic substance par excellence.

The replacements don't have to be metallic, of course. Some parts of my teeth are now ceramic in nature, and can't be told at a glance from the natural dentine. Still, even though dentine is ceramic in appearance and even, to an extent, in chemical structure, it was originally laid down by living material and bears the marks of its origin. The ceramic that has replaced the dentine shows no trace of life, now or ever.

We can go further. My breastbone, which had to be split longitudinally in an operation a few years back, is now held together by metallic staples, which have remained in place ever since. My sister-in-law has an artificial hip-joint replacement. There are people who have artificial arms or legs, and such non-living limbs are being designed, as time passes, to be ever more complex and useful. There are people who have lived for days and even months with artificial hearts, and many more people who live for years with pacemakers.

We can imagine, little by little, this part and that part of the human being replaced by inorganic materials and engineering devices. Is there any part which we would find difficult to replace, even in imagination?

I don't think anyone would hesitate there. Replace every part of the human being but one (the limbs, the heart, the liver, the skeleton, and so on) and the product would remain human. It would be a human being with artificial parts, but it would be a human being.

But what about the brain?

Surely, if there is one thing that makes us human it is the brain. If there is one thing that makes us a human individual, it is the intensely complex makeup, the emotions, the learning, the memory content of our particular brain. You can't simply replace a brain with a thinking device off some factory shelf. You have to put in something that incorporates all that a natural brain has learned, that possesses all its memory, and that mimics its exact pattern of working.

An artificial limb might not work exactly like a natural one, but might still serve the purpose. The same might be true of an artificial lung, kidney, or liver. An artificial brain, however, must be the precise replica of the brain it replaces, or the human being in question is no longer the same human being.

It is the brain, then, that is the sticking point in going from human organism to robot.

And the reverse?

In "The Bicentennial Man," I described the passage of my robot-hero, Andrew Martin, from robot to man. Little by little, he had himself changed, till his every visible part was human in appearance. He displayed an intelligence that was increasingly equivalent (or even superior) to that of a man. He was an artist, a historian, a scientist, an administrator. He forced the passage of laws guaranteeing robotic rights, and achieved respect and admiration in the fullest degree.

Yet at no point could he make himself accepted as a man. The sticking point, here, too, was his robotic brain. He found that he had to deal with that before the final hurdle could be overcome.

Therefore, we come down to the dichotomy, body and brain. The ultimate cyborgs are those in which the body and brain don't match. That means we can have two classes of complete cyborgs: a) a robotic brain in a human body, or b) a human brain in a robotic body.

We can take it for granted that in estimating the worth of a human being (or a robot, for that matter) we judge first by superficial appearance.

I can very easily imagine a man seeing a woman of superlative beauty and gazing in awe and wonder at the sight. "What a beautiful woman," he will say, or think, and he could easily imagine himself in love with her on the spot. In romances, I believe that happens as a matter of routine. And, of course, a woman seeing a man of superlative beauty is surely likely to react in precisely the same way.

If you fall in love with a striking beauty, you are scarcely likely to spend much time asking if she (or he, of course) has any brains, or possesses a good character, or has good judgment or kindness or warmth. If you find out eventually that good looks are the person's only redeeming quality, you are liable to make excuses and continue to be guided, for a time at least, by the conditioned reflex of erotic response. Eventually, of course, you will tire of good looks without content, but who knows how long that will take?

On the other hand, a person with a large number of good qualities who happened to be distinctly plain might not be likely to entangle you in the first place unless you were intelligent enough to see those good qualities so that you might settle down to a lifetime of happiness.

What I am saying, then, is that a cyborg with a robotic brain in a human body is going to be accepted by most, if not all, people as a human being; while a cyborg with a human brain in a robotic body is going to be accepted by most, if not all, people as a robot. You are, after all (at least to most people), what you seem to be.

These two diametrically opposed cyborgs will not, however, pose a problem to human beings to the same degree.

Consider the robotic brain in the human body and ask why the transfer should be made. A robotic brain is better off in a robotic body since a human body is far the more fragile of the two. You might have a young and stalwart human body in which the brain has been damaged by trauma and disease, and you might think, "Why waste that magnificent human body? Let's put a robotic brain in it so that it can live out its life."

If you were to do that, the human being that resulted would not be the original. It would be a different individual human being. You would not be conserving an individual but merely a specific mindless body. And a human body, however fine, is (without the brain that goes with it) a cheap thing. Every day, half a million new bodies come into being. There is no need to save any one of them if the brain is gone.

On the other hand, what about a human brain in a robotic body? A human brain doesn't last forever, but it can last up to ninety years without falling into total uselessness. It is not at all unknown to have a ninety-year-old who is still sharp, and capable of rational and worthwhile thought. And yet we also know that many a superlative mind has vanished after twenty or thirty years because the body that housed it (and was worthless in the absence of the mind) had become uninhabitable through trauma or disease. There would be a strong impulse then to transfer a perfectly good (even superior) brain into a robotic body to give it additional decades of useful life.

Thus, when we say "cyborg" we are very likely to think, just about exclusively, of a human brain in a robotic body-and we are going to think of that as a robot.

We might argue that a human mind is a human mind, and that it is the mind that counts and not the surrounding support mechanism, and we would be right. I'm sure that any rational court would decide that a human-brain cyborg would have all the legal rights of a man. He could vote, he must not be enslaved, and so on.

And yet suppose a cyborg were challenged: "Prove that you have a human brain and not a robotic brain, before I let you have human rights."

The easiest way for a cyborg to offer the proof is for him to demonstrate that he is not bound by the Three Laws of Robotics. Since the Three Laws enforce socially acceptable behavior, this means he must demonstrate that he is capable of human (i.e., nasty) behavior. The simplest and most unanswerable argument is simply to knock the challenger down, breaking his jaw in the process, since no robot could do that. (In fact, in my story "Evidence," which appeared in 1947, I used this as a way of proving someone is not a robot, but in that case there was a catch.) But if a cyborg must continually offer violence in order to prove he has a human brain, that will not necessarily win him friends.

For that matter, even if he is accepted as human and allowed to vote and to rent hotel rooms and do all the other things human beings can do, there must nevertheless be some regulations that distinguish between him and complete human beings. The cyborg would be stronger than a man, and his metallic fists could be viewed as lethal weapons. He might still be forbidden to strike a human being, even in self-defense. He couldn't engage in various sports on an equal basis with human beings, and so on.

Ah, but need a human brain be housed in a metallic robotic body? What about housing it in a body made of ceramic and plastic and fiber so that it looks and feels like a human body-and has a human brain besides?

But you know, I suspect that the cyborg will still have his troubles. He'll be different. No matter how small the difference is, people will seize upon it.

We know that people who have human brains and full human bodies sometimes hate each other because of a slight difference in skin pigmentation, or a slight variation in the shape of the nose, eyes, lips, or hair.

We know that people who show no difference in any of the physical characteristics that have come to represent a cause for hatred may yet be at daggers-drawn over matters that are not physical at all, but cultural: differences in religion, or in political outlook, or in place of birth, or in language, or in just the accent of a language.

Let's face it. Cyborgs will have their difficulties, no matter what.

The Sense of Humor

Would a robot feel a yearning to be human?

You might answer that question with a counter-question. Does a Chevrolet feel a yearning to be a Cadillac?

The counter-question makes the unstated comment that a machine has no yearnings.

But the very point is that a robot is not quite a machine, at least in potentiality. A robot is a machine that is made as much like a human being as it is possible to make it, and somewhere there may be a boundary line that may be crossed.

We can apply this to life. An earthworm doesn't yearn to be a snake; a hippopotamus doesn't yearn to be an elephant. We have no reason to think such creatures are self-conscious and dream of something more than they are. Chimpanzees and gorillas seem to be self-aware, but we have no reason to think that they yearn to be human.

A human being, however, dreams of an afterlife and yearns to become one of the angels. Somewhere, life crossed a boundary line. At some point a species arose that was not only aware of itself but had the capacity to be dissatisfied with itself.

Perhaps a similar boundary line will someday be crossed in the construction of robots.

But if we grant that a robot might someday aspire to humanity, in what way would he so aspire? He might aspire to the possession of the legal and social status that human beings are born to. That was the theme of my story "The Bicentennial Man," and in his pursuit of such status, my robot-hero was willing to give up all his robotic qualities, one by one, right down to his immortality.

That story, however, was more philosophical than realistic. What is there about a human being that a robot might properly envy-what human physical or mental characteristic? No sensible robot would envy human fragility, or human incapacity to withstand mild changes in the environment, or human need for sleep, or aptitude for the trivial mistake, or tendency to infectious and degenerative disease, or incapacitation through illogical storms of emotion.

He might, more properly, envy the human capacity for friendship and love, his wide-ranging curiosity, his eagerness for experience. I would like to suggest, though, that a robot who yearned for humanity might well find that what he would most want to understand, and most frustratingly fail to understand, would be the human sense of humor.

The sense of humor is by no means universal among human beings, though it does cut across all cultures. I have known many people who didn't laugh, but who looked at you in puzzlement or perhaps disdain if you tried to be funny. I need go no further than my father, who routinely shrugged off my cleverest sallies as unworthy of the attention of a serious man. (Fortunately, my mother laughed at all my jokes, and most uninhibitedly, or I might have grown up emotionally stunted.)

The curious thing about the sense of humor, however, is that, as far as I have observed, no human being will admit to its lack. People might admit they hate dogs and dislike children; they might cheerfully own up to cheating on their income tax or on their marital partner as a matter of right; and they might not object to being considered inhumane or dishonest, through the simple expedient of switching adjectives and calling themselves realistic or businesslike.

However, accuse them of lacking a sense of humor and they will deny it hotly every time, no matter how openly and how often they display such a lack. My father, for instance, always maintained that he had a keen sense of humor and would prove it as soon as he heard a joke worth laughing at (though he never did, in my experience).

Why, then, do people object to being accused of humorlessness? My theory is that people recognize (subliminally, if not openly) that a sense of humor is typically human, more so than any other characteristic, and refuse demotion to subhumanity.

Only once did I take up the matter of a sense of humor in a science-fiction story, and that was in my story "Jokester," which first appeared in the December 1956 issue of Infinity Science Fiction and which was most recently reprinted in my collection The Best Science Fiction of Isaac Asimov (Doubleday, 1986).

The protagonist of the story spent his time telling jokes to a computer (I quoted six of them in the course of the story). A computer, of course, is an immobile robot; or, which is the same thing, a robot is a mobile computer; so the story deals with robots and jokes. Unfortunately, the problem in the story for which a solution was sought was not the nature of humor, but the source of all the jokes one hears. And there is an answer, too, but you'll have to read the story for that.

However, I don't just write science fiction. I write whatever it falls into my busy little head to write, and (by some undeserved stroke of good fortune) my various publishers are under the weird impression that it is illegal not to publish any manuscript I hand them. (You can be sure that I never disabuse them of this ridiculous notion.) Thus, when I decided to write a joke book, I did, and Houghton Mifflin published it in 1971 under the title of Isaac Asimov's Treasury of Humor. In it, I told 640 jokes that I happened to have as part of my memorized repertoire. (I also have enough for a sequel to be entitled Isaac Asimov Laughs Again, but I can't seem to get around to writing it no matter how long I sit at the keyboard and how quickly I manipulate the keys.) I interspersed those jokes with my own theories concerning what is funny and how one makes what is funny even funnier.

Mind you, there are as many different theories of humor as there are people who write on the subject, and no two theories are alike. Some are, of course, much stupider than others, and I felt no embarrassment whatever in adding my own thoughts on the subject to the general mountain of commentary.

It is my feeling, to put it as succinctly as possible, that the one necessary ingredient in every successful joke is a sudden alteration in point of view. The more radical the alteration, the more suddenly it is demanded, the more quickly it is seen, the louder the laugh and the greater the joy.

Let me give you an example with a joke that is one of the few I made up myself: Jim comes into a bar and finds his best friend, Bill, at a corner table gravely nursing a glass of beer and wearing a look of solemnity on his face. Jim sits down at the table and says sympathetically, "What's the matter, Bill?"

Bill sighs, and says, "My wife ran off yesterday with my best friend."

Jim says, in a shocked voice, "What are you talking about, Bill? I'm your best friend."

To which Bill answers softly, "Not anymore."

I trust you see the change in point of view. The natural supposition is that poor Bill is sunk in gloom over a tragic loss. It is only with the last three words that you realize, quite suddenly, that he is, in actual fact, delighted. And the average human male is sufficiently ambivalent about his wife (however beloved she might be) to greet this particular change in point of view with delight.

Now, if a robot is designed to have a brain that responds to logic only (and of what use would any other kind of robot brain be to humans who are hoping to employ robots for their own purposes?), a sudden change in point of view would be hard to achieve. It would imply that the rules of logic were wrong in the first place or were capable of a flexibility that they obviously don't have. In addition, it would be dangerous to build ambivalence into a robot brain. What we want from him is decision and not the to-be-or-not-to-be of a Hamlet.

Imagine, then, telling a robot the joke I have just given you, and imagine the robot staring at you solemnly after you are done, and questioning you, thus.

Robot: "But why is Jim no longer Bill's best friend? You have not described Jim as doing anything that would cause Bill to be angry with him or disappointed in him."

You: "Well, no, it's not that Jim has done anything. It's that someone else has done something for Bill that was so wonderful, that he has been promoted over Jim's head and has instantly become Bill's new best friend."

Robot: "But who has done this?"

You: "The man who ran away with Bill's wife, of course."

Robot (after a thoughtful pause): "But that can't be so. Bill must have felt profound affection for his wife and a great sadness over her loss. Is that not how human males feel about their wives, and how they would react to their loss?"

You: "In theory, yes. However, it turns out that Bill strongly disliked his wife and was glad someone had run off with her."

Robot (after another thoughtful pause): "But you did not say that was so."

You: "I know. That's what makes it funny. I led you in one direction and then suddenly let you know that was the wrong direction."

Robot: "Is it funny to mislead a person?"

You (giving up): "Well, let's get on with building this house."

In fact, some jokes actually depend on the illogical responses of human beings. Consider this one: The inveterate horse player paused before taking his place at the betting windows, and offered up a fervent prayer to his Maker.

"Blessed Lord," he murmured with mountain-moving sincerity. "I know you don't approve of my gambling, but just this once, Lord, just this once, please let me break even. I need the money so badly."

If you were so foolish as to tell this joke to a robot, he would immediately say, "But to break even means that he would leave the races with precisely the amount of money he had when he entered. Isn't that so?"

"Yes, that's so."

"Then, if he needs the money so badly, all he need do is not bet at all, and it would be just as though he had broken even."

"Yes, but he has this unreasoning need to gamble."

"You mean even if he loses."

"Yes."

"But that makes no sense."

"But the point of the joke is that the gambler doesn't understand this."

"You mean it's funny if a person lacks any sense of logic and is possessed of not even the simplest understanding?"

And what can you do but turn back to building the house again?

But tell me, is this so different from dealing with the ordinary humorless human being? I once told my father this joke: Mrs. Jones, the landlady, woke up in the middle of the night because there were strange noises outside her door. She looked out, and there was Robinson, one of her boarders, forcing a frightened horse up the stairs.

She shrieked, "What are you doing, Mr. Robinson?"

He said, "Putting the horse in the bathroom."

"For goodness sake, why?"

"Well, old Higginbotham is such a wise guy. Whatever I tell him, he answers, 'I know. I know,' in such a superior way. Well, in the morning, he'll go to the bathroom and he'll come out yelling, 'There's a horse in the bathroom.' And I'll yawn and say, 'I know, I know.' "

And what was my father's response? He said, "Isaac, Isaac. You're a city boy, so you don't understand. You can't push a horse up the stairs if he doesn't want to go."

Personally, I thought that was funnier than the joke.

Anyway, I don't see why we should particularly want a robot to have a sense of humor, but the point is that the robot himself might want to have one-and how do we give it to him?

Robots In Combination

I have been inventing stories about robots now for very nearly half a century. In that time, I have rung almost every conceivable change upon the theme.

Mind you, it was not my intention to compose an encyclopedia of robot nuances; it was not even my intention to write about them for half a century. It just happened that I survived that long and maintained my interest in the concept. And it also just happened that in attempting to think of new story ideas involving robots, I ended up thinking about nearly everything.

For instance, in the sixth volume of the Robot City series, there are the "chemfets," which have been introduced into the hero's body in order to replicate and, eventually, give him direct psycho-electronic control over the core computer, and hence all the robots of Robot City.

Well, in my book Foundation's Edge (Doubleday, 1982), my hero, Golan Trevize, before taking off in a spaceship, makes contact with an advanced computer by placing his hands on an indicated place on the desk before him.

"And as he and the computer held hands, their thinking merged...

"...he saw the room with complete clarity-not just in the direction in which he was looking, but all around and above and below.

"He saw every room in the spaceship, and he saw outside as well. The sun had risen...but he could look at it directly without being dazzled...

"He felt the gentle wind and its temperature, and the sounds of the world about him. He detected the planet's magnetic field and the tiny electrical charges on the wall of the ship.

"He became aware of the controls of the ship...He knew...that if he wanted to lift the ship, or turn it, or accelerate, or make use of any of its abilities, the process was the same as that of performing the analogous process to his body. He had but to use his will."