Discussion:
Asimov's ideas on robotics inspire EU
a***@gmail.com
2017-01-12 12:09:04 UTC
The EU has come out with a report that sets out rules on how humans interact with robots and AI. The report draws on Asimov's Three Laws of Robotics. Asimov was ahead of his time.

Abhinav Lal
Writer & Investor
Dorothy J Heydt
2017-01-12 13:45:15 UTC
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.

And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?

(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Dimensional Traveler
2017-01-12 15:48:04 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
--
Running the rec.arts.TV Channels Watched Survey.
Winter 2016 survey began Dec 01 and will end Feb 28
Juho Julkunen
2017-01-12 20:04:11 UTC
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
The scope of AI is larger than just classifiers.

I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem to have been inspired by the same illustrative example,
which may or may not be based on a true event.

You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
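
The failure mode in these stories is easy to reproduce. Here is a minimal
Python sketch, with synthetic data and invented feature names, in which a
one-feature "classifier" trained on night-only tank photos keys on photo
brightness rather than on the tank:

# Toy version of the tank legend: the training set confounds "tank" with
# "dark photo", so the model picks brightness, not the tank, as its cue.
# All data and feature names are synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200
tank = rng.integers(0, 2, n)                    # 1 = tank in the photo

# Confound: every tank photo was taken at night, every non-tank by day.
brightness = np.where(tank == 1,
                      rng.normal(0.2, 0.05, n),
                      rng.normal(0.8, 0.05, n))
tank_cue = tank * rng.normal(0.5, 0.6, n)       # the real but noisy cue
X = np.column_stack([brightness, tank_cue])

# "Training": keep the single feature that best separates the classes.
corr = [abs(np.corrcoef(X[:, j], tank)[0, 1]) for j in range(2)]
j = int(np.argmax(corr))
print("chosen feature:", ["brightness", "tank_cue"][j])  # -> brightness

mu1, mu0 = X[tank == 1, j].mean(), X[tank == 0, j].mean()
thr = (mu1 + mu0) / 2
predict = (lambda f: f < thr) if mu1 < mu0 else (lambda f: f > thr)
print("training accuracy:", (predict(X[:, j]) == (tank == 1)).mean())  # ~1.0

# Field test: tanks photographed in daylight. The brightness rule fails.
day_tanks = np.column_stack([rng.normal(0.8, 0.05, 100),
                             rng.normal(0.5, 0.6, 100)])
print("daylight tanks detected:", predict(day_tanks[:, j]).mean())     # ~0.0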
--
Juho Julkunen
Dorothy J Heydt
2017-01-12 20:13:42 UTC
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
The scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
news@bestley.co.uk (Mark Bestley)
2017-01-12 20:56:26 UTC
Post by Dorothy J Heydt
Post by Juho Julkunen
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
The scope of AI is larger than just classifiers.
I remember reading about a military experiment where a neural net
learned to distinguish between photos with grass/not grass, because
certain tanks were always shown on grass. Or between nighttime/daytime,
because certain Russian vehicles were always shown at night. All
versions seem like they were inspired by the same illustrative example,
which may or may not be based on a true event.
You don't know what inputs a neural net uses to reach a conclusion, and
even after a number of successes in a row, you can get a result that
throws you for a loop. Then again, most of the processing in your brain
happens below the threshold of consciousness.
Is it germane to recall here the adventure of the military AI in
whose simulation some helicopters buzzed some kangaroos and they
retaliated with anti-aircraft fire?
Not exactly:
<http://www.snopes.com/humor/nonsense/kangaroo.asp>
The "fire" was beach balls.
--
Mark
Dorothy J Heydt
2017-01-12 20:12:16 UTC
Post by Dimensional Traveler
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
AI doesn't "think" like people. I remember hearing about a military
experiment with a learning computer and what it learned from some
intelligence/recon photos. One of the things it learned was that tanks
were a form of mushroom because they kept appearing in a clearing after
it had rained.
O-o-o-kay, why (in real world terms) *did* the tanks appear in
the clearing after it had rained?
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Peter Trei
2017-01-12 15:58:26 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
(Though, IIRC [it's been a long time since I read it], the first
thing the Humanoids do is to make people stop smoking. On the
grounds that fire is dangerous. Right action, wrong reason.)
I recall Asimov writing that the 3 Laws had enough ambiguity in them to
write interesting stories.

I can't recall one where he set up a 'Trolley Problem'. Such a story would
have given us some insight into how he expected them to work.

BTW: Here's the actual EU report:

http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN

http://preview.tinyurl.com/gpvzt6m

...and yes, whoever wrote it was familiar with classic robot SF.

It doesn't have much to say about the military use of robots.

pt
David Johnston
2017-01-12 18:52:37 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
Gutless Umbrella Carrying Sissy
2017-01-12 18:00:32 UTC
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm"
thing is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Asimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ inaction - cause harm?

"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."

"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."

Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
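
The deadlock shows up even in a toy rule engine. A Python sketch, with
invented actions and harm scores (nothing Asimov actually specified):

# Naive First Law filter: an option is permissible only if it harms no one.
# When every option, *including* doing nothing, scores some harm, the
# filter returns nothing and the robot has no legal move left.
def permissible(options):
    return [act for act, harm in options.items() if harm == 0]

options = {
    "hand over the donut": 1,   # unhealthy weight gain
    "withhold the donut":  1,   # deepens the depression
    "do nothing":          1,   # inaction, and harm still follows
}
print(permissible(options))     # -> []  (no permissible action at all)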
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Dorothy J Heydt
2017-01-12 20:15:32 UTC
Post by Gutless Umbrella Carrying Sissy
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm"
thing is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Asimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ inaction - cause harm?
It freaks out, sometimes fatally. (Fatally to the robot, I
mean.)
Post by Gutless Umbrella Carrying Sissy
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
Yes, we saw a fair amount of that in TOS.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Ted Nolan <tednolan>
2017-01-12 19:19:43 UTC
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect that a malicious robot could drop an anvil towards a human's head,
knowing full well it was fast enough to get there and catch it, and then
choose not to. The action itself caused no harm.
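
The loophole is that each step, judged in isolation, is innocent. A toy
Python sketch (invented event names and harm model) of why the inaction
clause closes it:

# Judge the plan step by step and it passes: neither the drop (the robot
# intends to catch) nor the refusal to catch directly *does* harm. Add
# the inaction clause, which also checks the outcome, and it fails.
plan = [
    {"act": "drop anvil over human", "act_harms": False},  # means to catch
    {"act": "decline to catch",      "act_harms": False},  # mere "inaction"
]
outcome_harms_human = True      # the anvil lands anyway

def lawful_action_only(plan):
    return all(not step["act_harms"] for step in plan)

def lawful_with_inaction_clause(plan, outcome_harms):
    return lawful_action_only(plan) and not outcome_harms

print(lawful_action_only(plan))                               # True: loophole
print(lawful_with_inaction_clause(plan, outcome_harms_human)) # False: closed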

But yeah, the laws were good story generators.
--
------
columbiaclosings.com
What's not in Columbia anymore..
Gutless Umbrella Carrying Sissy
2017-01-12 18:31:45 UTC
Post by Ted Nolan <tednolan>
Post by David Johnston
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm"
thing is basically intended to generate problems. There's no reason
to have it.
There was an in-story justification in one of the shorts.
Something to the effect a malicious robot could drop an anvil
towards a human's head, knowing full well he was fast enough to
get there and catch it, and then choosing not to. The action
caused no harm.
The decision not to is an action.
Post by Ted Nolan <tednolan>
But yeah, the laws were good story generators.
And that's the purpose.
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Kevrob
2017-01-12 21:12:18 UTC
Post by Ted Nolan <tednolan>
Post by David Johnston
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction, allow a human being to come to harm" thing is
basically intended to generate problems. There's no reason to have it.
There was an in-story justification in one of the shorts. Something to the
effect a malicious robot could drop an anvil towards a human's head, knowing
full well he was fast enough to get there and catch it, and then choosing
not to. The action caused no harm.
No PHYSICAL harm. Maybe there needs to be a Law 1.1a: "nor shall
it by its action cause psychological harm, or cause a human to
harm itself"? You could hurt yourself pretty badly trying to avoid
a plummeting anvil, and you might hope to be wearing brown pants!
Post by Ted Nolan <tednolan>
But yeah, the laws were good story generators.
Indeed.

The Zeroth Law logically leads to a robo-dictatorship, where we
all get treated like pampered pets. Creepy.

Did Asimov ever write about anyone hacking Calvin's positronic
brains? If and when we get autonomous bots, that's what I'd worry
about. We've already had malware infecting automated household
appliances. Imagine a network of Microvacs infecting the Galactic
AC!

Kevin R

Lynn McGuire
2017-01-12 19:13:26 UTC
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>

Yes, you do want *any* human telling *any* robot what to do. "Get off me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots rarely have hearing capability.

Lynn
Gutless Umbrella Carrying Sissy
2017-01-12 18:30:22 UTC
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
--
Terry Austin

Vacation photos from Iceland:
https://plus.google.com/u/0/collection/QaXQkB

"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek

Jesus forgives sinners, not criminals.
Lynn McGuire
2017-01-12 19:35:46 UTC
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do.

My Mechanical Engineering magazine had an article about industrial robots five or ten years ago (my time sense is gone). The
statistics on people getting run over or grabbed and assembled by industrial robots were grim.

Imagine 200 to 500 lb robots running around the place. The avoidance algorithm for robots will need to be without fault. Better
than the human variant. And no grabbing.
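
The non-negotiable piece is an interlock that overrides whatever the task
wants. A minimal Python sketch, with an invented sensor reading and stop
radius (real cells use certified safety hardware, not application code):

# Any human inside the stop radius halts motion and opens the gripper,
# whatever the current task is doing. Fails safe when the sensor drops out.
STOP_RADIUS_M = 1.0

def safety_gate(human_distances_m):
    """Return (may_move, release_grip)."""
    if not human_distances_m:                  # no readings: halt and hold
        return (False, False)
    if min(human_distances_m) < STOP_RADIUS_M:
        return (False, True)                   # "get off me" / "let me go"
    return (True, False)

print(safety_gate([2.4, 3.1]))  # (True, False):  clear to move
print(safety_gate([0.6]))       # (False, True):  stop and let go
print(safety_gate([]))          # (False, False): sensor dropout, freeze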

Lynn
Dorothy J Heydt
2017-01-12 20:19:01 UTC
Post by Gutless Umbrella Carrying Sissy
Post by Lynn McGuire
In article
Post by a***@gmail.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his
time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do.
You don't read Freefall, do you?
Yes, I do. Note that the robot problem there is in the process
of solving itself.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
Dorothy J Heydt
2017-01-12 20:18:23 UTC
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots.
The second most common phrase is "let me go". Sadly, industrial robots
rarely have hearing capability.
Awww. And my WIP practically begins with a couple of
AIs-who-are-not-very-I being instructed to grasp and hold loose
cargo, which they do, not realizing that the "cargo" is three
humans who are trying to attack and loot the transport on which
the bots are riding. They are not, as the protagonist notes at
the time, the sharpest relays in the rack.
--
Dorothy J. Heydt
Vallejo, California
djheydt at gmail dot com
David Johnston
2017-01-12 21:09:27 UTC
Post by Lynn McGuire
Post by Dorothy J Heydt
Post by a***@gmail.com
The EU has come out with a report in which they have rules on how humans
interact with robots and AI. The report draws on Asimov's three laws of
robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human telling
*any* robot what to do.
<snipped>
Yes, you do want *any* human telling *any* robot what to do. "Get off
me" is the most common phrase screamed at industrial robots. The second
most common phrase is "let me go". Sadly, industrial robots rarely have
hearing capability.
You do not, however, want orders from unauthorized users taking equal
priority with orders from owners.
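
One way to encode that: every command carries an authorization tier, and
only a safety override cuts across tiers. A Python sketch with invented
tier names and an invented policy:

# A stranger's order cannot displace the owner's task, but an emergency
# stop from *any* human is always honored (Lynn's "get off me" case).
TIER = {"owner": 2, "operator": 1, "unknown": 0}

def accept(command, issuer, current_task_issuer):
    if command == "emergency_stop":     # safety floor: anyone may stop it
        return True
    return TIER[issuer] >= TIER[current_task_issuer]

print(accept("fetch parts", "unknown", "owner"))     # False: outranked
print(accept("emergency_stop", "unknown", "owner"))  # True:  always obeyed
print(accept("fetch parts", "owner", "operator"))    # True:  owner outranks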