The Positronic Man (Page 41)

And yet there was no question but that he was disturbed in some way that could only be traced to the loss of Little Miss. He could not have quantified it. A certain heaviness about his thoughts, a certain odd sluggishness about his movements, a perception of general imbalance in his rhythms: he felt these things, but he suspected that no instruments would be able to detect any measurable change in his capacities.

To ease this sensation of what he would not let himself call grief he plunged deep into his research on robot history, and his manuscript began to grow from day to day.

A brief prologue sufficed to deal with the concept of the robot in history and literature: the metal men of the ancient Greek myths, the automata imagined by clever storytellers like E. T. A. Hoffmann and Karel Capek, and other such fantasies. He summarized the old fables quickly and dispensed with them. It was the positronic robot, the real robot, the authentic item, that Andrew was primarily concerned with.

And so Andrew moved swiftly to the year 1982 and the incorporation of United States Robots and Mechanical Men by its visionary founder, Lawrence Robertson. He felt almost as though he were reliving the story himself as he told of the early years of struggle in drafty converted-warehouse rooms; of the first dramatic breakthrough, after endless trial and error, in the construction of the platinum-iridium positronic brain; of the conception and development of the indispensable Three Laws; of research director Alfred Lanning's early triumphs at designing mobile robot units, clumsy and ponderous and incapable of speech, yet versatile enough to interpret human orders and select the best of a number of possible alternative responses; and then of the first mobile speaking units at the turn of the Twenty-First Century.

And then Andrew turned to something much more troublesome for him to describe: the period of negative human reaction which followed, the hysteria and downright terror that the new robots engendered, the worldwide outburst of legislation prohibiting the use of robot labor on Earth. Because miniaturization of the positronic brain was still in the development stage then and the need for elaborate cooling systems was great, the early mobile speaking units had been gigantic: nearly twelve feet high, frightful lumbering monsters that had summoned up all of humanity's fears of artificial beings, of Frankenstein's monster and the Golem and all the rest of that assortment of nightmares.

Andrew’s book devoted three entire chapters to that time of extreme robot-fear. They were enormously difficult chapters to write, for they dealt entirely with human irrationality, and that was a subject almost impossible for Andrew to comprehend.

He grappled with it as well as he could, striving to put himself in the place of human beings who, though they knew that the Three Laws provided foolproof safeguards against the possibility that robots could do harm to humans, persisted in looking upon robots with dread and loathing. And after a time Andrew actually succeeded in understanding, as far as he was able, how it had been possible for humans to have felt insecure in the face of such a powerful guarantee of security.

For what he discovered, as he made his way through the archives of robotics, was that the Three Laws were not as foolproof a safeguard as they seemed. They were, in fact, full of ambiguities and hidden sources of conflict. And they could unexpectedly confront robots, straightforward literal-minded creatures that they were, with the need to make decisions that were not necessarily ideal from the human point of view.

The robot who was sent on a dangerous errand on an alien planet, for example, to find and bring back some substance vital to the safety and well-being of a human explorer, might feel such a conflict between the Second Law of obedience and the Third Law of self-preservation that he would fall into a hopeless equilibrium, unable either to go forward or to retreat. And by such a stalemate the robot, through inaction, could place in dire jeopardy the human who had sent him on his mission, despite the imperatives of the First Law that supposedly took precedence over the other two. For how could a robot invariably know that the conflict he was experiencing between the Second and Third Laws was placing a human in danger? Unless the nature of his mission had been spelled out precisely in advance, he might remain unaware of the consequences of his inaction and never realize that his dithering was creating a First Law violation.

Or the robot who might, through faulty design or poor programming, decide that a certain human being was not human at all, and therefore not in a position to demand the protection that the First and Second Laws were supposed to afford.

Or the robot who was given a poorly phrased order and interpreted it so literally that he inadvertently caused danger to humans nearby.

There were dozens of such case histories in the archives. The early roboticists, most notably the extraordinary robopsychologist Susan Calvin, that formidable and austere woman, had labored long and mightily to cope with the difficulties that kept cropping up.

The problems had become especially intricate as robots with more advanced types of positronic pathways began to emerge from the workshops of U. S. Robots and Mechanical Men toward the middle of the Twenty-First Century: robots with a broader capacity for thought, robots who were able to look at situations and perceive their complexities with an almost human depth of understanding. Robots like Andrew Martin himself, though he took care not to say so explicitly. The new generalized-pathway robots, equipped with the ability to interpret data in much more subjective terms than their predecessors, often reacted in ways that humans were not expecting. Always within the framework of the Three Laws, of course. But sometimes from a perspective that had not been anticipated by the framers of those laws.

As he studied the annals of robot development, Andrew at last understood why so many humans had been so phobic about robots. It wasn't that the Three Laws were badly drawn, not at all. Indeed, they were masterly exemplars of logic. The trouble was that humans themselves were not always logical, were on occasion downright illogical, and robots were not always capable of coping with the swoops and curves and tangents of human thought.