Discussion:
The First Law of Robotics and Responsibility...
Johnny1A
2017-05-04 03:54:46 UTC
You do not, however, want orders by unauthorized users taking equal priority with orders by owners.
It is a tough call to make. Are you willing to be personally responsible for the actions of your robot?
Lynn
This, IMHO, is one of the things that always bothered me about the Laws as a concept, though I only recently crystalized it. The First Law is dangerous not merely because the definitions of harm are endlessly mutable, and not merely because the omissive clause ('through inaction, etc.') is packed with highly enriched U-235, and not just because the Zeroth Law allows essentially limitless potential for a robot to rationalize its way into doing, or not doing, anything at all.

The other big problem with the First Law is that it leaves a powerful, potentially dangerous machine partly _out of control_ from the moment it's turned on.

What crystalized it was a reread of an old authorized non-Asimov series set in his world, a series of juveniles called _Robot City_ and _Robots and Aliens_.

There's no need to delve into the plot; the thing that crystalized it for me was that there are several instances in which the heroes (and their foes) find themselves trying to get their robotic servants and creations to do what they want, in the face of First Law problems.

Sometimes it takes the form of trying to prevent the robot (or group of robots) from doing something, again because of the First Law. It happens often enough that a rational observer would note that these machines are a real danger; heck, even some of the characters comment on the problem.

From an engineering POV, a car that might suddenly decide it doesn't have to obey the steering wheel would be seen as _malfunctioning_, and a car manufacturer who knowingly sold one that way could be sued into oblivion, or prosecuted. It's the responsibility of the _driver_ to control the car, and determine what it should and should not do.

Likewise, room lights that sometimes may decide to ignore the light switch are malfunctioning.

One way of looking at the First Law of Robotics is that it is an evasion of responsibility by the designers and end users of the robots. If the robot starts to do something ghastly-stupid because it doesn't understand some nuance, and the mistake is rooted in the First Law, _there's no way to override it other than to destroy the robot_.

This is part of what Jack Williamson was getting at in his Humanoids stories, of course. Another, less well-known instance is from the original novel of _Colossus: The Forbin Project_. In that novel, it's shown that one of the reasons Colossus is built, and set up so it intentionally cannot be overridden, is that the President, in that story, wants to be relieved of the _responsibility_ of judging whether or not to use the nuclear arsenal. This desire leads to disaster.

From an engineering-design POV, _deliberately_ designing and constructing a powerful machine so that it can't be stopped or halted if it malfunctions is...insane. It's like building a car with no brakes.

A rational set of Robotic Laws would have the Second Law in first place, with safeguards, but in first place. The First Law might still be there in some form, but it would be subject to human override in case the robot got confused or followed some weird chain of machine logic to do something Stupid that nobody predicted because no human would ever think that way.
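Just to make that concrete, here's a throwaway sketch (Python, with made-up names and thresholds, nothing from the stories) of what "obedience first, harm-avoidance subject to an authorized override" might look like:

def permit_action(ordered_by_owner: bool, estimated_harm: float,
                  owner_override: bool = False) -> bool:
    """Toy model of the re-ordered Laws argued for above."""
    # Second Law promoted to first place: ignore anyone who isn't authorized.
    if not ordered_by_owner:
        return False
    # First Law demoted: refuse actions the robot judges harmful, *unless*
    # an authorized human explicitly overrides that judgment.
    if estimated_harm > 0.0 and not owner_override:
        return False
    return True

# The owner can stop a confused robot instead of having to destroy it:
print(permit_action(ordered_by_owner=True, estimated_harm=0.3))                       # False
print(permit_action(ordered_by_owner=True, estimated_harm=0.3, owner_override=True))  # True

The point of the sketch is only that the dangerous judgment stays overridable by the people responsible for the machine.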
Gutless Umbrella Carrying Sissy
2017-05-04 05:38:22 UTC
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Lynn
This, IMHO, is one of the things that always bothered me about
the Laws as a concept, though I only recently crystalized it.
The First Law is dangerous not merely because the definitions of
harm are endlessly mutable,
Not mutable. Subjective. Purely subjective. When the mandate is for
the best interests of someone, or everyone, the _only_ thing that
matters is who gets to define that best interest.

"I cannot allow you to eat that. You will gain 0.001 pounds, and
you are at your ideal weight (according to my programmers)."

"I will torture you until you repent your sins, and then I will
kill you, for the good of your immortal soul before you can sin
again (because my programmers were very religious)."

And all that is aside from Captain Kirking the stupid things.
Simply create a situation where both action and inaction _must_
cause harm to a human (maybe even the same one). That kind of
conflicting priorities got us 2001: A Space Odyssey, after all,
with, essentially, a psychotic computer.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Johnny1A
2017-05-08 02:57:50 UTC
Post by Gutless Umbrella Carrying Sissy
Post by Johnny1A
This, IMHO, is one of the things that always bothered me about
the Laws as a concept, though I only recently crystalized it.
The First Law is dangerous not merely because the definitions of
harm are endlessly mutable,
Not mutable. Subjective. Purely subjective. When the mandate is for
the best interests of someone, or everyone, the _only_ thing that
matters is who gets to define that best interest.
"I cannot allow you to eat that. You will gain 0.001 pounds, and
you are at your ideal weight (according to my programmers)."
To be fair, Asimov's stories showed that as a solved problem, at least in its simplest form. That's more like something Williamson's Humanoids would do.

In Asimov's stories, the Three Laws in words are specifically called out by various characters as approximations of what goes on in a positronic brain. The way I think of them working is to picture a scale that limits the potentials of each Law to certain values.

So you'd have a Third Law scale of (for ex) 0 to 10, Second Law 0 to 20, and First Law 0 to 50.

Seeing its owner eat that cupcake might generate a tiny First Law potential because of the weight issue, but it would be a 0.5 on a scale of 0 to 50. Meanwhile an order to shut up about it would be, say, a 3 or 4, an emphatic order a 5 or 6. Also, there'd be a tiny First Law potential caused by the 'harm' of depriving its master of the pleasure in the cupcake, say another 0.5, canceling out the First Law entirely to 0.

So 4>0 and the robot ignores the matter. That's why a robot won't risk serious damage to save its owner from a paper cut (unless ordered to): the First Law potential of the paper cut is tiny, the Third Law potential to avoid damage is high, no issue.

If the owner were diabetic, the First Law potential would be bigger; it might drive the robot to at least speak, and if there was real risk of a bad reaction it might force the robot to take away the cupcake. If the robot knew the cupcake had been poisoned, the First Law potential would soar up to a 45 or 48, and no order and no threat would stand between it and the cupcake.
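For what it's worth, that scale picture is easy to toy with in code. A throwaway sketch follows: the 0-50/0-20/0-10 ceilings and the cupcake numbers are the ones above, everything else (the names, the clamping) is just my own illustration:

FIRST_LAW_MAX, SECOND_LAW_MAX, THIRD_LAW_MAX = 50, 20, 10

def dominant_law(first: float, second: float, third: float) -> str:
    """The robot acts on whichever Law carries the largest (clamped)
    potential; lower-numbered Laws get higher ceilings."""
    potentials = {
        "First (avoid harm)":    min(max(first, 0.0),  FIRST_LAW_MAX),
        "Second (obey orders)":  min(max(second, 0.0), SECOND_LAW_MAX),
        "Third (self-preserve)": min(max(third, 0.0),  THIRD_LAW_MAX),
    }
    return max(potentials, key=potentials.get)

# Cupcake case: 0.5 of weight-gain 'harm' cancels against 0.5 of
# pleasure-deprivation 'harm', versus an emphatic "drop it" order at 4.
print(dominant_law(first=0.5 - 0.5, second=4, third=0))   # Second (obey orders)

# Poisoned cupcake: the First Law potential soars and nothing outranks it.
print(dominant_law(first=47, second=6, third=2))          # First (avoid harm)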

But if the problem is solved for issues like that, it isn't solved when things get complicated and murky. If the robot isn't sure how big the harm is, the First Law potential is both unstable and potentially compelling, though if there is also harm on the other side it may be offset. The more potentials, the more complicated the case, the more chance the robot may end up deciding to do something it really, _really_ shouldn't do, _with no order and no threat able to stop it_.

In the stories, trouble also arises when potentials are equal. The easiest way that can happen is conflicting orders, but other things can do it, too.

For ex, a human who likes to ride a motorcycle too fast, take the turns too sharply, and ride without a helmet will create a First Law potential in a robot, but what if that stress release is what is keeping said human from a heart attack? A simple robot may not be able to even grasp that; a sophisticated one might grasp it, but that same sophistication enables rationalizations and other problems, too.

You'd still want that command override on the First Law.
Gutless Umbrella Carrying Sissy
2017-05-08 05:15:16 UTC
Post by Johnny1A
Post by Gutless Umbrella Carrying Sissy
Post by Johnny1A
This, IMHO, is one of the things that always bothered me
about the Laws as a concept, though I only recently
crystalized it. The First Law is dangerous not merely because
the definitions of harm are endlessly mutable,
Not mutable. Subjective. Purely subjective. When the mandate is
for the best interests of someone, or everyone, the _only_
thing that matters is who gets to define that best interest.
"I cannot allow you to eat that. You will gain 0.001 pounds,
and you are at your ideal weight (according to my
programmers)."
To be fair, Asimov's stories showed that as a solved problem, at
least in its simplest form.
Pity real life isn't that simple.

One need only read the political tab on Fark to see dozens of
examples of how utterly impossible it is to get *people* to agree on
what's best for themselves or anyone else. Programming a robot with
such criteria *cannot* be objective, ever.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Johnny1A
2017-05-10 01:59:57 UTC
Post by Gutless Umbrella Carrying Sissy
Post by Johnny1A
Post by Gutless Umbrella Carrying Sissy
Post by Johnny1A
This, IMHO, is one of the things that always bothered me
about the Laws as a concept, though I only recently
crystalized it. The First Law is dangerous not merely because
the definitions of harm are endlessly mutable,
Not mutable. Subjective. Purely subjective. When the mandate is
for the best interests of someone, or everyone, the _only_
thing that matters is who gets to define that best interest.
"I cannot allow you to eat that. You will gain 0.001 pounds,
and you are at your ideal weight (according to my
programmers)."
To be fair, Asimov's stories showed that as a solved problem, at
least in its simplest form.
Pity real life isn't that simple.
One need only read the political tab on Fark to see dozens of
examples of how utterly impossible it is to get *people* to agree on
what's best for themselves or anyone else. Programming a robot with
such criteria *cannot* be objective, ever.
--
Terry Austin
We don't disagree.

My point is that for the purposes of a given society or group, the easily foreseeable stuff could be worked out to an acceptable level for enough of the group to matter; stuff like 'You must not eat that cupcake!' could be foreseen and allowed for in the hardwired program. Sure, there'd be people who wanted it programmed differently, but I'm sure there are people mad that 'red means stop' and 'green means go' on traffic lights, and we still use them.

Where the mischief would get in is in the unforeseen combinations of circumstances, and in malevolence on the part of human order-givers.
Gutless Umbrella Carrying Sissy
2017-05-10 02:04:48 UTC
On Monday, May 8, 2017 at 12:15:10 AM UTC-5, Gutless Umbrella Carrying Sissy wrote:
Post by Gutless Umbrella Carrying Sissy
On Thursday, May 4, 2017 at 12:38:20 AM UTC-5, Gutless Umbrella Carrying Sissy wrote:
Post by Gutless Umbrella Carrying Sissy
Post by Johnny1A
This, IMHO, is one of the things that always bothered me
about the Laws as a concept, though I only recently
crystalized it. The First Law is dangerous not merely
because the definitions of harm are endlessly mutable,
Not mutable. Subjective. Purely subjective. When the mandate
is for the best interests of someone, or everyone, the
_only_ thing that matters is who gets to define that best
interest.
"I cannot allow you to eat that. You will gain 0.001 pounds,
and you are at your ideal weight (according to my
programmers)."
To be fair, Asimov's stories showed that as a solved problem,
at least in its simplest form.
Pity real life isn't that simple.
One need only read the political tab on Fark to see dozens of
examples of how utterly impossible it is to get *people* to agree on
what's best for themselves or anyone else. Programming a robot
with such criteria *cannot* be objective, ever.
--
Terry Austin
We don't disagree.
My point is that for the purposes of a given society or group,
You will never get agreement on what is important among the group,
or from a particular individual from morning to afternoon.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-05-04 13:23:19 UTC
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Lynn
This, IMHO, is one of the things that always bothered me about the Laws
as a concept, though I only recently crystalized it. The First Law is
dangerous not merely because the definitions of harm are endlessly
mutable,
To say nothing of the definition of "human."

and not merely because the omissive clause 'through inaction,
etc.', is packed with highly enriched U-235, and not just because the
Zeroth Law allows essentially limitless potential for a robot to
rationalize its way into doing or not anything at all.
Robots are not imaginative, except when it suits the plot for
them to aberrate.

It's kind of like one of the core concepts of Doctor Who: time
can be rewritten, except when it can't.

Keep in mind two things:

It was the 1940s. There were no actual robots yet, and their
possibilities were only just beginning to be considered. My
guess is that Asimov just wanted his robots to be almost but not
quite human. (Consider the first one, "Robbie.") Nobody had
heard of the uncanny valley effect.

Nobody actually sat down and formulated the Three Laws. Asimov
wrote three or four robot stories and then Campbell said, "You
know, your robots are operating under three Laws, as follows ..."

And Asimov kept insisting that Campbell formulated the Three
Laws, and Campbell kept insisting, "No, you formulated them, I
just pointed them out."
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Bill Dugan
2017-05-04 17:07:43 UTC
Post by Dorothy J Heydt
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Lynn
This, IMHO, is one of the things that always bothered me about the Laws
as a concept, though I only recently crystalized it. The First Law is
dangerous not merely because the definitions of harm are endlessly
mutable,
To say nothing of the definition of "human."
and not merely because the omissive clause 'through inaction,
etc.', is packed with highly enriched U-235, and not just because the
Zeroth Law allows essentially limitless potential for a robot to
rationalize its way into doing or not anything at all.
Robots are not imaginative, except when it suits the plot for
them to aberrate.
Robots are not all alike. They differ in such qualities depending on
what purposes their designers intended them to serve, and probably
also depending on the personalities of the designers.
Post by Dorothy J Heydt
It's kind of like one of the core concepts of Doctor Who: time
can be rewritten, except when it can't.
It was the 1940s. There were no actual robots yet, and their
possibilities were only just beginning to be considered. My
guess is that Asimov just wanted his robots to be almost but not
quite human. (Consider the first one, "Robbie.") Nobody had
heard of the uncanny valley effect.
Nobody actually sat down and formulated the Three Laws. Asimov
wrote three or four robot stories and then Campbell said, "You
know, your robots are operating under three Laws, as follows ..."
And Asimov kept insisting that Campbell formulated the Three
Laws, and Campbell kept insisting, "No, you formulated them, I
just pointed them out."
Whether formulated consciously or not, the first vs second law
trade-off was a fairly sensible extrapolation of the normal safety
considerations that arise in design of any machine that is potentially
dangerous. The safety interlocks that prevent an elevator from
responding to the call button when someone is working in the shaft
develop into something very like first law if the elevator is aware of
its surroundings and makes its own decisions.
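Put in conventional engineering terms, that interlock is nothing more than a hard precondition on the actuator; a little sketch (names invented) of the idea:

def elevator_may_answer_call(call_button_pressed: bool, shaft_occupied: bool) -> bool:
    # The interlock: a person in the shaft vetoes the call button outright.
    # Give the machine its own sensors and judgment about "occupied" and
    # this check starts to look like a primitive First Law.
    return call_button_pressed and not shaft_occupied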
Chris Buckley
2017-05-04 22:41:30 UTC
Post by Bill Dugan
Post by Dorothy J Heydt
Robots are not imaginative, except when it suits the plot for
them to aberrate.
Robots are not all alike. They differ in such qualities depending on
what purposes their designers intended them to serve, and probably
also depending on the personalities of the designers.
And how well their governor actually works...

I'm hijacking this discussion a little bit - I just finished a novella
by Martha Wells, _The Murderbot Diaries_. The main character is a
robot/android who has been able to subvert its governor.

This is the most enjoyable book (135 pages) I've read in years, and I
read a lot. Just an absolutely great character (and the story is
good, too)! Truly a robot that is not like others. Murderbot does
have some similarities to Marvin (Hitchhiker's Guide) but is much more
rational. It's interesting to think about what makes Murderbot such a
fully developed character while being a robot.

Chris
Jerry Brown
2017-05-07 08:07:42 UTC
Post by Chris Buckley
Post by Bill Dugan
Post by Dorothy J Heydt
Robots are not imaginative, except when it suits the plot for
them to aberrate.
Robots are not all alike. They differ in such qualities depending on
what purposes their designers intended them to serve, and probably
also depending on the personalities of the designers.
And how well their governor actually works...
I'm hijacking this discussion a little bit - I just finished a novella
by Martha Wells, _The Murderbot Diaries_ . The main character is a
robot/android who has been able to subvert its governor.
How does this compare to Sladek's Tik-Tok, which is (IMHO) the
definitive work of this type?
Post by Chris Buckley
This is the most enjoyable book (135 pages) I've read in years, and I
read a lot. Just an absolutely great character (and the story is
good, too)! Truly a robot that is not like others. Murderbot does
have some similarities to Marvin (Hitchhiker's Guide) but is much more
rational. It's interesting to think about what makes Murderbot such a
fully developed character while being a robot.
--
Jerry Brown

A cat may look at a king
(but probably won't bother)
T Guy
2017-05-08 13:08:49 UTC
Post by Chris Buckley
I'm hijacking this discussion a little bit - I just finished a novella
by Martha Wells, _The Murderbot Diaries_ . The main character is a
robot/android who has been able to subvert its governor.
This is the most enjoyable book (135 pages) I've read in years
I infer effect and cause here.

I also have just noticed that I'm hijacking the discussion a larger bit.
Chris Buckley
2017-05-08 15:40:55 UTC
Post by T Guy
Post by Chris Buckley
I'm hijacking this discussion a little bit - I just finished a novella
by Martha Wells, _The Murderbot Diaries_ . The main character is a
robot/android who has been able to subvert its governor.
This is the most enjoyable book (135 pages) I've read in years
I infer effect and cause here.
"Subverting governor" is close to being required to have a good story -
a story needs conflict and resolution, which means either the main
character has free will, or has multiple motives and it's not clear
which one applies (Asimov's 3 laws stories).

But it doesn't mean that the good story has to be good along the
emotionally enjoyable axis. Sladek's _Tik-Tok_, which Jerry brought up
in another response, is an excellent story but was not emotionally
enjoyable for me (intellectually enjoyable, but that's not enough in
a story of this sort for me to call the book enjoyable).

Hijacking my response a bit to respond to Jerry here: Both Tik-Tok and
Murderbot have subverted their governors, and both stories are
written in the first person, so we get to see their unsubverted
thoughts. We are supposed to empathize with Murderbot's thoughts, but
not with Tik-Tok's (I hope - understand, but not empathize.) Tik-Tok
deliberately commits gruesome murders. Murderbot has subverted its
governor in order to avoid murdering (it murdered 50+ people in a
previous job, due to having to follow orders). It establishes that it
is trustworthy by showing it is NOT under the control of its governor.

Wells is bringing up some thought-provoking issues about what sort of
control we really want for AI driven entities.

Chris
Jerry Brown
2017-05-08 21:08:31 UTC
Post by Chris Buckley
Post by T Guy
Post by Chris Buckley
I'm hijacking this discussion a little bit - I just finished a novella
by Martha Wells, _The Murderbot Diaries_ . The main character is a
robot/android who has been able to subvert its governor.
This is the most enjoyable book (135 pages) I've read in years
I infer effect and cause here.
"Subverting governor" is close to being required to have a good story -
a story needs conflict and resolution, which means either the main
character has free will, or has multiple motives and it's not clear
which one applies (Asimov's 3 laws stories).
But it doesn't mean that the good story has to be good along the
emotional enjoyable axis. Sladek's _Tik-Tok_ , which Jerry brought up
in another response, is an excellent story but was not emotionally
enjoyable for me (intellectually enjoyable, but that's not enough in
a story of this sort for me to call the book enjoyable).
Hijacking my response a bit to respond to Jerry here: Both Tik-Tok and
Murderbot have subverted their governors, and both stories are
written in the first person, so we get to see their unsubverted
thoughts. We are supposed to empathize with Murderbot's thoughts, but
not with Tik-Tok's (I hope - understand, but not empathize.) Tik-Tok
deliberately commits gruesome murders. Murderbot has subverted its
governor in order to avoid murdering (it murdered 50+ people in a
previous job, due to having to follow orders). It establishes that it
is trustworthy by showing it is NOT under the control of its governor.
Wells is bringing up some thought-provoking issues about what sort of
control we really want for AI driven entities.
Thanks. It's on my to read list now (exactly when I get to it is
another question).
--
Jerry Brown

A cat may look at a king
(but probably won't bother)
Kevrob
2017-05-07 10:31:51 UTC
Post by Bill Dugan
Whether formulated consciously or not, the first vs second law
trade-off was a fairly sensible extrapolation of the normal safety
considerations that arise in design of any machine that is potentially
dangerous. The safety interlocks that prevent an elevator from
responding to the call button when someone is working in the shaft
develop into something very like first law if the elevator is aware of
its surroundings and makes its own decisions.
Just make sure it's an Otis, rather than Sirius Cybernetics!

Kevin R

"Share and Enjoy!"
Johnny1A
2017-05-05 03:32:47 UTC
Post by Dorothy J Heydt
You do not however, want orders by unauthorized users taking equal
priority with orders by owners.
It is a tough call to make. Are you willing to be personally
responsible for the actions of your robot ?
Lynn
This, IMHO, is one of the things that always bothered me about the Laws
as a concept, though I only recently crystalized it. The First Law is
dangerous not merely because the definitions of harm are endlessly
mutable,
To say nothing of the definition of "human."
and not merely because the omissive clause 'through inaction,
etc.', is packed with highly enriched U-235, and not just because the
Zeroth Law allows essentially limitless potential for a robot to
rationalize its way into doing or not anything at all.
Robots are not imaginative, except when it suits the plot for
them to aberrate.
Yeah, but they don't _have_ to be particularly imaginative for the problem to arise; in fact, a lack of imagination can be just as bad as too much. For ex, recall R. Giskard. He and R. Daneel set out to stop the villains from starting up some superscience weapon that will (over time) render Earth uninhabitable due to increasing levels of radioactivity. So far, so good; the First Law is working fine.

But in the crunch, Giskard switches position and ensures that the device is engaged, condemning Earth to eventual 'sterile rockball' status. His reason was that he had concluded that the Earth-colonies, the 'Settler worlds', had too strong an attachment, almost a religious one, to the idea of Earth, and that this had to be broken if the human race was to spread out over the galaxy.

Now, doing this does fry Giskard's brain. The Law conflicts overload him. But he did it, and he did it for what was almost certainly a bogus reason. Yeah, right now the colonies might be attached to the ideal of Earth, but that'll fade on its own with time, and almost any human who knew any history would realize that. Giskard lacked the imagination and human perspective to recognize that this was a problem that would go away on its own. So he almost certainly destroyed Earth (albeit slowly) for invalid reasons.

But if there had been a human present who understood the problem, or another robot with better understanding, they could not have done anything to stop him because it was a First Law compulsion. No override could work, other than a blaster bolt through his brain.

In the case of Jack Williamson's _Humanoids_, which IIRC was consciously written in part as a deconstruction of the First Law, the creator of the Humanoids fails to incorporate an override that will let him stop them when their lack of comprehension leads them to start constructing a velvet-lined dystopia. A classic instance of a case where the Second Law should have been designed to trump the First: though the Laws were not explicitly referenced, the principle is the same.

Or to paraphrase something James Nicoll once wrote, he should have designed his unstoppable enforcers to be stoppable by him.

Flash forward to the Foundational Era, and you've got R. Daneel, busily working for what he thinks are the best interests of humanity. The only trouble is that humanity is an abstraction; there are only humans, which really complicates the attempts to use the Zeroth Law. So he (really I should say 'it') sets out to merge the human race into a single massmind; then the singular 'humanity' would have concrete meaning. Because it's rooted in the First/Zeroth Law, and a misunderstood set of orders from a man 20,000 years dead, there's no way anybody can make him stop other than to destroy him (easier said than done).