Post by David Johnston
Post by email@example.com
The EU has come out with a report in which they have rules on
how humans interact with robots and AI. The report draws on
Asimov's three laws of robotics. Asimov was ahead of his time.
But, as pointed out upthread, the Second and Third Laws would
need a great deal of tweaking; you do not want *any* human
telling *any* robot what to do.
And you'll probably have to tweak First Law some, else you get
Williamson's _The Humanoids_: how much judgment does the robot
get as to what can harm a human?
The whole "through inaction cause humans to come to harm" thing
is basically intended to generate problems. There's no reason
to have it.
That's the intentional consequence. Dorothy's is the unintentional
one, that Asimov didn't realize at first. How does a robot respond
when _all_ action - and _all_ inaction - causes harm?
"I cannot let you eat that donut, it will make you gain weight in
an unleathy way."
"Donuts relieve my depression. If I don't get them, I'll become
suicidal. Now go get me more donuts."
Or, as Star Trek fans might call it, the Captain Kirk Syndrome.
Vacation photos from Iceland:
"Terry Austin: like the polio vaccine, only with more asshole."
-- David Bilek
Jesus forgives sinners, not criminals.