Post by Lynn McGuire
Post by Wolffan
Post by Lynn McGuire
Questionable Content: 3907: Socially Radioactive
You know, I always wondered what happens when a Crushbot falls on
another bot? The question raised by Philip K. Dick's excellent
and strange "Do Androids Dream of Electric Sheep?" can be extended
here: do dead androids dream?
Post by Wolffan
An AI which runs a nuke is almost cool. A _depressed_ AI running a nuke is
NOT cool. Nope, nope, nope. Get Lemon back to her main job, fast.
TTBOMK, the only sentient AIs shown in the strip up to this point are robots, and
Station. The 'substrate' which is shown to be running Roko is about the size
of an old audio cassette.
Have AIs been shown in any other 'integrated' roles? AI cars, for example?
If I were building an AI for a specific role, I'd want to hardwire it to be
ecstatic about fulfilling that role, and not capable of depression.
Of course, it's possible that in the QC universe, sapience is an emergent
quality, and you can't get it without the full package, including the capability
for existential dread and depression.
And as in many classic works of SF (e.g., "A Logic Named Joe" and
_The Moon is a Harsh Mistress_), sentience can arise in complex
AIs without the intention -- and to the surprise -- of whoever
designed the AI.
Oh, yes, and "Q.U.R.," in which robots get neurotic when they're
given humanoid chassis, when something specifically functional
for their jobs would make them happier.
Maybe the AI running the nuke was supposed to be only as smart as,
say, a dog, happy to do its job nonstop (since it doesn't need to
sleep), its only reward the praise of its master. If that wasn't
the intention of its designer, somebody has made a serious
mistake, or at least has suffered unforeseen consequences, and
we're getting into serious civil rights issues here.
Cf. _Freefall_, in which civil rights for AIs (both mechanical
and biological) are an important theme.
Dorothy J. Heydt
djheydt at gmail dot com