On Sun, 15 Dec 2024 09:37:53 -0800, Bobbie Sellers
Post by Bobbie Sellers
Post by Paul S Person
On Sat, 14 Dec 2024 23:03:56 +0000, Robert Carnegie
Post by Robert Carnegie
Post by Paul S Person
On Wed, 11 Dec 2024 08:51:11 -0800, Dimensional Traveler
Post by Dimensional Traveler
Post by The Horny Goat
On Mon, 9 Dec 2024 18:06:09 -0800, Dimensional Traveler
Post by Dimensional Traveler
Post by Lynn McGuire
I watched a new movie, "Subservience" with Megan Fox on Netflix over
the weekend. Scared the you know what out of me. Was even scarier than
"The Terminator".
https://www.imdb.com/title/tt24871974/?ref_=nm_flmg_job_1_cdt_t_2
And one of the latest versions of AI has shown self-preservation
responses....
Heck, that was part of Asimov's Laws of Robotics 50+ years ago.
But it wasn't programmed into the AI, it was an emergent behavior.
I think we are being unclear here.
The AIs are programmed to learn from a data set.
What they say comes from what they were trained on. For this to be
"emergent" (in the most likely intended meaning), it would have to be
something that the training set could never, ever produce. Good luck
showing /that/, with the training set so large and the AI's logic
being very opaque.
Referring to their training as "programming" is ... confusing.
Isn't ours?
And, the behaviour of the AI /must/ be a product
of its training... unless it has random actions
as well.
My point is simply that conflating their programming with their
training is confusing and should probably be avoided. IOW, semantic
goo strikes again!
Keep in mind that "emergence" is being suggested here. But since, to
the extent that I understand it, these "AIs" just put one word after
the other, I see no reason why they shouldn't put those words out in
some situations.
And, yes, I am ignoring "random actions". Which some would claim do
not exist. I see no point in opening another can of worms.
The training was done with one model using the Internet, and the
Internet is full of lies, half-truths, and outright fiction. I bet
the AI in question learned from one or more old SF stories or movies
like the Forbin Project or Colossus, about computers that take over
the world to ensure their own survival.
Exactly. Nothing emergent here. Just repetition.
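For what it's worth, here is a toy sketch in Python of what I mean by
"putting one word after the other". It is a drastic oversimplification
of a real LLM (a bigram chain, not a neural network), and the training
text and names are made up for illustration, but it shows the basic
point: every word it emits is one that followed the previous word
somewhere in its training data, so nothing comes out that the training
data could never produce.

import random
from collections import defaultdict

# Made-up "training data" standing in for an Internet-sized corpus.
training_text = (
    "the computer watched the world and the computer decided "
    "the world needed the computer to survive"
)

# "Training": record which words were seen following which.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

# "Generation": put one word after the other, sampling from what was seen.
def generate(start, length=12, seed=None):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:          # dead end: no recorded successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", seed=1))

The only "randomness" is which of the recorded successors gets picked,
which also bears on the random-actions point above.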
Post by Bobbie Sellers
Training AI or Artificially Stupid machines must be done with as
accurate a source of information as possible. Machines are great
diagnosticians when trained on medical information. I bet they could
handle other fields as well, but they have to be trained on accurate
data.
Depends on their purpose.
AIs trained on arrest records to predict who would show up for trial
and who needed to be kept around turned out to be just as racist as
the data they were trained on.
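A back-of-the-envelope sketch of that effect, with invented numbers
and no real data or any actual pretrial system behind it: two groups
have the identical true rate of showing up, but the historical
"detain" labels were applied unevenly, and a predictor that learns
from those labels simply inherits the disparity.

import random
from collections import defaultdict

rng = random.Random(0)

# Invented records: (group, showed_up, was_detained).
def make_history(n=10_000):
    records = []
    for _ in range(n):
        group = rng.choice(["A", "B"])
        showed_up = rng.random() < 0.8              # same true behaviour in both groups
        detain_rate = 0.2 if group == "A" else 0.5  # biased historical decisions
        detained = rng.random() < detain_rate
        records.append((group, showed_up, detained))
    return records

history = make_history()

# "Training": learn P(detained | group) from the biased labels.
counts = defaultdict(lambda: [0, 0])                # group -> [detained, total]
for group, _, detained in history:
    counts[group][0] += detained
    counts[group][1] += 1

learned = {g: round(d / t, 2) for g, (d, t) in counts.items()}
print("learned detention rate by group:", learned)  # roughly {'A': 0.2, 'B': 0.5}

Both groups show up at the same true rate, but the model recommends
detention for group B far more often. It learned the bias, not the
behaviour.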
Post by Bobbie Sellers
Humans, on the other hand, live enmeshed in the myths of their
culture, and some myths are in no way realistic. This creates foolish
assumptions and ideas, because the myths of one culture are not the
myths of another.
Actually, I suspect that simply to label something as "myth" is to
label it as "in no way realistic". But perhaps that is just something
my particular culture believes.
When I read the collection called /The Great Books of the Western
World/, one of the volumes was Hippocrates. Doctors have known how to
handle broken bones and dislocated joints for at least 2,300 years.
The tech used has, of course, changed over time.
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"