2017-05-04 03:54:46 UTC
This, IMHO, is one of the things that always bothered me about the Laws as a concept, though I only recently crystallized it. The First Law is dangerous not merely because the definitions of harm are endlessly mutable, and not merely because the omissive clause 'through inaction, etc.' is packed with highly enriched U-235, and not just because the Zeroth Law allows essentially limitless potential for a robot to rationalize its way into doing, or not doing, anything at all.
You do not, however, want orders from unauthorized users taking equal priority with orders from the owner. It is a tough call to make. Are you willing to be personally responsible for the actions of your robot?
The other big problem with the First Law is that it leaves a powerful, potentially dangerous machine partly _out of control_ from the moment it's turned on.
What crystallized it was a reread of an old authorized non-Asimov series set in his world, a series of juveniles called _Robot City_ and _Robots and Aliens_.
There's no need to delve into the plot; the thing that crystallized it for me was that there are several instances in which the heroes (and their foes) find themselves trying to get their robotic servants and creations to do what they want, in the face of First Law problems.
Sometimes it takes the form of trying to prevent the robot (or a group of robots) from doing something, again because of the First Law. It happens often enough that a rational observer would note that these machines are a real danger; heck, even some of the characters comment on the problem.
From an engineering POV, a car that might suddenly decide it doesn't have to obey the steering wheel would be seen as _malfunctioning_, and a car manufacturer who knowingly sold one that way could be sued into oblivion, or prosecuted. It's the responsibility of the _driver_ to control the car, and determine what it should and should not do.
Likewise, room lights that sometimes may decide to ignore the light switch are malfunctioning.
One way of looking at the First Law of Robotics is that it is an evasion of responsibility by the designers and end users of the robots. If the robot starts to do something ghastly-stupid because it doesn't understand some nuance, and the mistake is rooted in the First Law, _there's no way to override it other than to destroy the robot_.
This is part of what Jack Williamson was getting at in his Humanoids stories, of course. Another, less well-known instance is from the original novel of _Colossus: The Forbin Project_. In that novel, it's shown that one of the reasons Colossus is built, and set up so that it intentionally cannot be overridden, is that the President in that story wants to be relieved of the _responsibility_ of judging whether or not to use the nuclear arsenal. This desire leads to disaster.
From an engineering-design POV, _deliberately_ designing and constructing a powerful machine so that it can't be stopped or halted if it malfunctions is...insane. It's like building a car with no brakes.
A rational set of Robotic Laws would put the Second Law in first place, with safeguards, but in first place. The First Law might still be there in some form, but it would be subject to human override in case the robot got confused or followed some weird chain of machine logic into doing something stupid that nobody predicted, because no human would ever think that way.
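To make that concrete, here's a rough sketch of what "Second Law first, with safeguards and a human override" might look like as decision logic. This is just my illustration in Python, with every name invented for the example (it's not from Asimov or any real robot control system): orders from unauthorized users are simply refused, harm-avoidance still applies by default, but an authorized human can override it instead of having to destroy the machine.

    from dataclasses import dataclass

    @dataclass
    class Order:
        issuer: str   # who gave the order
        action: str   # what the robot is asked to do

    class Robot:
        def __init__(self, authorized_users):
            self.authorized_users = set(authorized_users)

        def predicts_harm(self, action):
            # Stand-in for whatever harm model the robot uses; deliberately crude.
            return "harm" in action

        def decide(self, order, human_override=False):
            # Safeguard: orders from unauthorized users don't count at all.
            if order.issuer not in self.authorized_users:
                return "refuse: unauthorized issuer"
            # Harm-avoidance still applies by default, but an authorized human
            # can override it, so a confused harm model can't take the machine
            # out of human control.
            if self.predicts_harm(order.action) and not human_override:
                return "refuse: predicted harm (override available)"
            return "execute: " + order.action

    robot = Robot(authorized_users={"owner"})
    print(robot.decide(Order("stranger", "open the door")))        # refused: unauthorized
    print(robot.decide(Order("owner", "harmful-looking task")))    # refused, but overridable
    print(robot.decide(Order("owner", "harmful-looking task"), human_override=True))  # executes

The point isn't the toy harm check; it's where the authority sits. The human stays in the loop the way a driver stays in control of a car, instead of the machine holding a veto nobody can countermand.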