Discussion:
(ReacTor) A Brief Guide to the Fiction of Vernor Vinge
James Nicoll
2024-03-26 14:10:12 UTC
Permalink
A Brief Guide to the Fiction of Vernor Vinge

From Grimm's World to Rainbows End, the fiction of Vernor Vinge.

https://reactormag.com/a-brief-guide-to-the-fiction-of-vernor-vinge/
--
My reviews can be found at http://jamesdavisnicoll.com/
My tor pieces at https://www.tor.com/author/james-davis-nicoll/
My Dreamwidth at https://james-davis-nicoll.dreamwidth.org/
My patreon is at https://www.patreon.com/jamesdnicoll
Cryptoengineer
2024-03-26 16:55:44 UTC
Permalink
Post by James Nicoll
A Brief Guide to the Fiction of Vernor Vinge
From Grimm's World to Rainbows End, the fiction of Vernor Vinge.
https://reactormag.com/a-brief-guide-to-the-fiction-of-vernor-vinge/
You seem to imply that Vinge invented the 'technological singularity'.

While I, among many others, first became aware of the idea through his
1993 essay[1], it predates him. John von Neumann discussed the notion
back in 1958. I have no idea if Vinge knew of von Neumann's
speculations, or invented the idea independently. It's one of those
'when it's steam engine time, it will steam engine' ideas.

Vinge predicted it would happen by 2023. While we're clearly not
there yet, the explosive development of generative AI seems a step
in that direction.

[1] http://mindstalk.net/vinge/vinge-sing.html

pt
James Nicoll
2024-03-26 17:06:28 UTC
Permalink
Post by Cryptoengineer
<snip>
Vinge predicted it would happen by 2023. While we're clearly not
there yet, the explosive development of generative AI seems a step
in that direction.
Clippy 2.0 does not a singularity make.
--
My reviews can be found at http://jamesdavisnicoll.com/
My tor pieces at https://www.tor.com/author/james-davis-nicoll/
My Dreamwidth at https://james-davis-nicoll.dreamwidth.org/
My patreon is at https://www.patreon.com/jamesdnicoll
Michael F. Stemper
2024-03-26 18:40:28 UTC
Permalink
Post by Cryptoengineer
Post by James Nicoll
A Brief Guide to the Fiction of Vernor Vinge
From Grimm's World to Rainbows End, the fiction of Vernor Vinge.
https://reactormag.com/a-brief-guide-to-the-fiction-of-vernor-vinge/
You seem to imply that Vinge invented the 'technological singularity'.
Nor did he invent the Zones of Thought. That distinction belongs
to Poul Anderson, in _Brain Wave_. As a matter of fact, I've always
viewed _A Fire Upon the Deep_ as a tribute to _Brain Wave_.

Nothing wrong with that.

Also, my reaction to "Tatja Grimm" was similar to James'. I automatically
assumed that it was targeted for sale to JWC, Jr. -- even though I first
read it in _Orbit_.
--
Michael F. Stemper
Exodus 22:21
Paul S Person
2024-03-27 16:21:41 UTC
Permalink
On Tue, 26 Mar 2024 12:55:44 -0400, Cryptoengineer
Post by Cryptoengineer
<snip>
While I, among many others, first became aware of the idea through his
1993 essay[1], it predates him.
<snip>
[1] http://mindstalk.net/vinge/vinge-sing.html
I don't remember where or when or how I became aware of it.

But I do know I haven't heard a lot about it recently.

Has the rate of progress slowed down since, say, the 1990s? So that it
isn't quite so exponential?
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Dimensional Traveler
2024-03-27 17:29:55 UTC
Permalink
Post by Paul S Person
<snip>
I don't remember where or when or how I became aware of it.
But I do know I haven't heard a lot about it recently.
Has the rate of progress slowed down since, say, the 1990s? So that it
isn't quite so exponential?
It is probably just "normal" now.
--
I've done good in this world. Now I'm tired and just want to be a cranky
dirty old man.
Chris Buckley
2024-03-29 01:06:17 UTC
Permalink
Post by Paul S Person
<snip>
I don't remember where or when or how I became aware of it.
But I do know I haven't heard a lot about it recently.
Has the rate of progress slowed down since, say, the 1990s? So that it
isn't quite so exponential?
The 1990s were not a period of great progress in AI. Lots of
things were tried, and small amounts of progress were made in related
disciplines (natural language parsing, for example), but nothing really
advanced general AI.

One of the related disciplines that did have exponential growth was
mine: statistical information retrieval. At the beginning of the 90s,
state-of-the-art IR in practice was advanced Boolean systems on
possibly manually annotated documents. By the end of the 90s, with
the concurrent development of the web, it was natural language queries
on raw document text.
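
For concreteness, here is a minimal sketch of the kind of statistical
ranking that displaced Boolean retrieval: TF-IDF term weighting with
cosine similarity. The toy corpus and function names are mine, purely
for illustration.

import math
from collections import Counter

def rank(query, docs):
    """Rank tokenized docs by cosine similarity to a tokenized query."""
    n = len(docs)
    df = Counter()                        # document frequency per term
    for doc in docs:
        df.update(set(doc))
    def weigh(tokens):
        tf = Counter(tokens)              # raw term frequency
        vec = {t: tf[t] * math.log(1 + n / df[t]) for t in tf if t in df}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        return {t: w / norm for t, w in vec.items()}
    doc_vecs = [weigh(doc) for doc in docs]
    q_vec = weigh(query)
    scores = [sum(q_vec.get(t, 0.0) * w for t, w in v.items())
              for v in doc_vecs]
    return sorted(range(n), key=lambda i: -scores[i])

docs = [s.split() for s in ["advanced boolean query systems",
                            "natural language queries on raw text",
                            "statistical information retrieval"]]
print(rank("natural language retrieval".split(), docs))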

Towards the end of the 90s, people (including me) became aware that
large quantities of data (text) had a quality all their own. I had
thoughts of using IR to do AI by taking advantage of this, but didn't do
anything myself, though a grad student I mentored, Amit Singhal, later
did. Groups like IBM in the very early 2000s did develop this for what I
consider the first real progress in AI: question answering. This was the
group/system/approach that later developed into Watson (which you may
have seen on Jeopardy).

Side note: I wrote a fair number of IR papers with lots of citations
(30,000+), but I suspect the piece of writing that had the biggest
impact on IR was my one-page post-PhD recommendation for Amit for
BellLabs++. The manager who hired him later told me that the sole
reason he offered Amit a position was my letter. It WAS rather
effusive, saying among other things that I thought he would have a
larger impact than any IR grad student worldwide in the last 20
years. It turns out the manager knew me and knew I was not a
particularly gushy, over-enthusiastic person (can you tell from my
posts here? :)). Three years later, in early 2000, Amit moved to be
Head of Research at an up-and-coming search engine company called
Google, rewrote the IR system, and the rest is history. AI was a pet
interest of Amit's throughout his 15 years as research head, one he
pushed as much as possible; his goal was the ship's computer on the
original Star Trek - something we're getting close to!

Overall, at Google and elsewhere, I view the start of the exponential
growth of AI as being around 2000, sparked by the first-time availability
of very large data sets (whether text or images or eventually financial),
fast, cheap computing resources, and the realization that you didn't
need formal rules - the data was enough. It started reasonably slowly,
but the exponential growth has not stopped yet. However, I will say
that right now many people are overestimating current AI capabilities
and underestimating the remaining problems (e.g., reliability). Lots of
hype!
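
A toy illustration of "the data was enough": a naive Bayes text
classifier that learns labels purely from example counts, with no
hand-written rules. The tiny training set and names are made up
for illustration.

import math
from collections import Counter, defaultdict

def train(examples):
    """Learn per-class word counts from (text, label) pairs; no rules."""
    counts = defaultdict(Counter)
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def classify(text, counts, labels):
    """Pick the label maximizing log P(label) + sum log P(word|label)."""
    vocab = {w for c in counts.values() for w in c}
    total = sum(labels.values())
    best, best_score = None, float("-inf")
    for label in labels:
        score = math.log(labels[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((counts[label][w] + 1) / denom)  # add-one smoothing
        if score > best_score:
            best, best_score = label, score
    return best

counts, labels = train([("cheap pills buy now", "spam"),
                        ("meeting agenda attached", "ham"),
                        ("buy cheap watches", "spam"),
                        ("lunch meeting tomorrow", "ham")])
print(classify("buy pills", counts, labels))   # -> spam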

Chris
Lynn McGuire
2024-03-29 04:52:50 UTC
Permalink
Post by Chris Buckley
<snip>
Overall, at Google and elsewhere, I view the start of the exponential
growth of AI as being around 2000, sparked by the first-time availability
of very large data sets (whether text or images or eventually financial),
fast, cheap computing resources, and the realization that you didn't
need formal rules - the data was enough. It started reasonably slowly,
but the exponential growth has not stopped yet. However, I will say
that right now many people are overestimating current AI capabilities
and underestimating the remaining problems (e.g., reliability). Lots of
hype!
Having been a software developer since 1975, I am really wondering about
the AI hype. The AI thing so far seems to be a major expansion of the
old Eliza program. Of course, I am really outdated, writing in Fortran
and C++ nowadays. I have about two million lines of F77 / C++ code that
I am shepherding around the place for several customers.
https://en.wikipedia.org/wiki/ELIZA
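
(For anyone who never met it, Eliza was keyword-triggered pattern
substitution, roughly like this sketch; the rules here are invented,
not Weizenbaum's originals.)

import re

# A few Eliza-style rules: regex pattern -> response template.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),    "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I),  "Tell me more about feeling {0}."),
    (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
]

def eliza(line):
    """Return a canned reflection for the first matching pattern."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

print(eliza("I am worried about the AI hype"))
# -> Why do you say you are worried about the AI hype?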

Lynn
D
2024-03-29 10:00:35 UTC
Permalink
Post by Lynn McGuire
<snip>
Having been a software developer since 1975, I am really wondering about
the AI hype. The AI thing so far seems to be a major expansion of the old
Eliza program. Of course, I am really outdated, writing in Fortran and
C++ nowadays. I have about two million lines of F77 / C++ code that I am
shepherding around the place for several customers.
https://en.wikipedia.org/wiki/ELIZA
Lynn
I agree. I am also not super impressed. I've tried the common ones, and
for my use cases I find that I still need to proofread, change, and
rewrite. The same goes for code.

So I might just as well come up with the text/code myself, with the added
advantage that I then know exactly what I did.

But based on what I am reading, there are ninjas who seem to have fused
with their favourite AI of choice and feel they are much more productive.

That said, I think the progress from Eliza is great, and if the progress
continues so that I don't have to do what I have to do today, then it's a
valuable tool for corporate BS and for simple coding tasks (parse a file,
convert it to CSV, give me a basic web site setup in framework X).
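
(To make that last item concrete, the whole of such a task is often no
more than this; input.txt and the column names are hypothetical:)

import csv

# Turn "name value" lines in input.txt into a two-column CSV.
with open("input.txt") as src, open("out.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["name", "value"])
    for line in src:
        parts = line.split()
        if len(parts) == 2:                 # skip malformed lines
            writer.writerow(parts)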
Chris Buckley
2024-03-29 14:02:13 UTC
Permalink
Post by D
<snip>
I agree. I am also not super impressed. I've tried the common ones, and
for my use cases I find that I still need to proofread, change, and
rewrite. The same goes for code.
So I might just as well come up with the text/code myself, with the added
advantage that I then know exactly what I did.
But based on what I am reading, there are ninjas who seem to have fused
with their favourite AI of choice and feel they are much more productive.
That said, I think the progress from Eliza is great, and if the progress
continues so that I don't have to do what I have to do today, then it's a
valuable tool for corporate BS and for simple coding tasks (parse a file,
convert it to CSV, give me a basic web site setup in framework X).
The progress of AI has been truly impressive over the last couple of
decades. But one of the major problems is that general AI is
extremely hard to evaluate well. There are many different criteria
that need to be used, and humans disagree deeply on almost every
one of them. E.g., what does it mean to be fair?

Ellen Voorhees was the TREC project manager at NIST for decades until
she retired last year. TREC has been the leading text evaluation
workshop/conference since the early 1990s. Her last couple of years at
NIST were spent trying to come up with tasks upon which general AI
could be evaluated. She did not succeed. There was just too big a
mismatch between the breadth of what "intelligence" means and the human
disagreements on the narrow criteria that can be evaluated in a task.
It was just too messy. (I heard about it often; we've been married
for over 40 years.)

One of the problems with the general public's opinion of AI is that
the fluency of the new models is just too good! That sort of fluency
in humans is only achievable by adult experts, but the generative AI
models are masking all their understanding gaps, which may be equivalent
to those of a 10-year-old child.

To my mind, a major proof that we have reached the era of true learning
is the performance of AlphaGo and then AlphaZero 7 or 8 years ago. The
massive data there was their own game simulations, and it was a very
restricted world of chess, go, and other games, but to start with just
the rules, come up with their own criteria for "good moves", and become
better than anybody else in the world in less than a day is impressive!
This is the type of learning that is going into the general AI models
of today.
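
To make the learn-from-your-own-games loop concrete, here is a toy
stand-in: tabular self-play value learning for tic-tac-toe, starting
from nothing but the rules. The real systems use a neural network plus
Monte Carlo tree search; this simplified sketch is mine, not DeepMind's.

import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

value = defaultdict(float)      # learned worth of each position, for X

def play_one_game(eps=0.2, alpha=0.1):
    """Play one game against yourself; nudge visited states toward the result."""
    board, player, seen = ["."] * 9, "X", []
    while winner(board) is None and moves(board):
        sign = 1 if player == "X" else -1
        if random.random() < eps:                    # explore
            move = random.choice(moves(board))
        else:                                        # exploit learned values
            move = max(moves(board), key=lambda m: sign *
                       value[tuple(board[:m] + [player] + board[m+1:])])
        board[move] = player
        seen.append(tuple(board))
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in seen:          # move every visited state toward the outcome
        value[state] += alpha * (outcome - value[state])

for _ in range(20000):
    play_one_game()
print("distinct positions self-evaluated:", len(value))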

Chris
D
2024-03-30 11:12:03 UTC
Permalink
Post by Chris Buckley
<snip>
To my mind, a major proof that we have reached the era of true learning
is the performance of AlphaGo and then AlphaZero 7 or 8 years ago. The
massive data there was their own game simulations, and it was a very
restricted world of chess, go, and other games, but to start with just
the rules, come up with their own criteria for "good moves", and become
better than anybody else in the world in less than a day is impressive!
This is the type of learning that is going into the general AI models
of today.
Chris
Yes, if we're talking narrow, focused use cases such as chess and go,
massive progress indeed. Depending on how you define it, you could add
great progress when it comes to self-driving cars, robots, image
recognition, and voice recognition too.
Michael F. Stemper
2024-03-30 13:27:28 UTC
Permalink
Post by D
<snip>
Yes, if we're talking narrow, focused use cases such as chess and go,
massive progress indeed. Depending on how you define it, you could add
great progress when it comes to self-driving cars, robots, image
recognition, and voice recognition too.
If I recall correctly, in _Gödel, Escher, Bach: An Eternal Golden Braid_,
Douglas Hofstadter commented that "AI is sometimes just whatever hasn't
been done yet."
--
Michael F. Stemper
Galatians 3:28
Paul S Person
2024-03-30 15:51:04 UTC
Permalink
On Sat, 30 Mar 2024 08:27:28 -0500, "Michael F. Stemper"
<***@gmail.com> wrote:

<snippo>
Post by Michael F. Stemper
If I recall correctly, in _Gödel, Escher, Bach: An Eternal Golden Braid_,
Douglas Hofstadter commented that "AI is sometimes just whatever hasn't
been done yet."
That dovetails with something I've run into: when a bit of AI research
produces something /useful/ it ceases to be "AI" and is called
something else. "AI" is always used for pure research, unpolluted by
tawdry commercialization.
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
D
2024-03-30 18:15:56 UTC
Permalink
Post by Paul S Person
On Sat, 30 Mar 2024 08:27:28 -0500, "Michael F. Stemper"
<snippo>
Post by Michael F. Stemper
If I recall correctly, in _Gödel, Escher, Bach: An Eternal Golden Braid_,
Douglas Hofstadter commented that "AI is sometimes just whatever hasn't
been done yet."
That dovetails with something I've run into: when a bit of AI research
produces something /useful/ it ceases to be "AI" and is called
something else. "AI" is always used for pure research, unpolluted by
tawdry commercialization.
Agreed. I read the same somewhere.
Scott Dorsey
2024-03-30 21:40:36 UTC
Permalink
Post by Paul S Person
That dovetails with something I've run into: when a bit of AI research
produces something /useful/ it ceases to be "AI" and is called
something else. "AI" is always used for pure research, unpolluted by
tawdry commercialization.
This is a longstanding tradition. When I was in grad school, expert
systems were AI, but by the time I was out, expert systems weren't AI
anymore. (In the end, expert systems didn't turn out to be useful either,
but that is a separate issue.)

But now we have come to a weird point in time when "AI" seems to be
synonymous with "machine learning systems."
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Paul S Person
2024-03-31 15:21:28 UTC
Permalink
Post by Scott Dorsey
<snip>
But now we have come to a weird point in time when "AI" seems to be
synonymous with "machine learning systems."
Well, we are living in an age of semantic goo ...
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Scott Lurndal
2024-03-30 19:33:25 UTC
Permalink
Post by Michael F. Stemper
<snip>
If I recall correctly, in _Gödel, Escher, Bach: An Eternal Golden Braid_,
Douglas Hofstadter commented that "AI is sometimes just whatever hasn't
been done yet."
But none of what is called "AI" today is really AI. It's just machine learning
and pattern matching with a massive database of patterns. Eliza writ large.
Michael F. Stemper
2024-03-30 21:08:04 UTC
Permalink
Post by Scott Lurndal
<snip>
But none of what is called "AI" today is really AI. It's just machine
learning and pattern matching with a massive database of patterns.
Eliza writ large.
Agreed. If it learns more than word patterns, I might be impressed. But
right now, it's just textual manipulation. When the words start to have
meaning to the program(s), then we'll really be approaching the Singularity.

As far as I can tell, though, it's not aware that there's conflicting
information out there. If you fed a bunch of Electric Universe text into
one of these programs, it'd be just as happy parroting it back as it
would GR. It wouldn't realize that the two "systems" are in conflict
with each other.

ObSF: _Stand on Zanzibar_, in which Shalmaneser wouldn't accept new information
that conflicted with what it had stored unless you said "What I tell you
three times is true."
--
Michael F. Stemper
There's no "me" in "team". There's no "us" in "team", either.
Chris Buckley
2024-03-31 02:50:51 UTC
Permalink
Post by Michael F. Stemper
<snip>
Agreed. If it learns more than word patterns, I might be impressed. But
right now, it's just textual manipulation. When the words start to have
meaning to the program(s), then we'll really be approaching the
Singularity.
As far as I can tell, though, it's not aware that there's conflicting
information out there. If you fed a bunch of Electric Universe text into
one of these programs, it'd be just as happy parroting it back as it
would GR. It wouldn't realize that the two "systems" are in conflict
with each other.
All you folks are redefining AI once again, twisting it to mean
something unrecognizable (to me at least). It is not Artificial
Consciousness; it is not Artificial Embodiment. It is Artificial
*Intelligence*.

Is a fish intelligent? Are some dogs more intelligent than others?

I included my paragraph on AlphaZero for a reason. AlphaZero is given
the rules of the game but no other game info. It took 34 hours of
playing itself at Go to become the best in the world. (The even more
impressive MuZero doesn't even have the rules of the game programmed
in.)

It is not just pattern matching. By move 50 of a Go game, the board is,
with strong probability, in a position that it has never simulated, and
games run to 200+ moves before completion. Every move and stone on the
board is important. AlphaZero is learning high-level concepts on its
own. It then uses those high-level concepts to consider only
reasonable moves. (It does several orders of magnitude fewer
simulations at game time than other programs, at least at chess.)
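
That move pruning follows a published selection rule (the "PUCT"
formula); here it is in schematic form, with made-up numbers:

import math

def puct_choice(children, c_puct=1.5):
    """Pick a move AlphaZero-style: prior-guided upper-confidence bound.

    children: one dict per candidate move, with policy prior P,
    visit count N, and mean simulated value Q (values invented here).
    """
    total_visits = sum(ch["N"] for ch in children) + 1
    def score(ch):
        # Exploit the learned value Q; explore in proportion to the
        # prior P, so moves the policy deems unreasonable (tiny P)
        # receive almost no simulations.
        return ch["Q"] + c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
    return max(range(len(children)), key=lambda i: score(children[i]))

candidates = [{"P": 0.60, "N": 40, "Q": 0.30},   # looks strong, well explored
              {"P": 0.39, "N": 10, "Q": 0.25},   # plausible alternative
              {"P": 0.01, "N": 0,  "Q": 0.00}]   # prior says "unreasonable"
print(puct_choice(candidates))  # -> 1; the 0.01-prior move is starved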

How is this not intelligence? What would AlphaZero have to do to show
intelligence in your opinion?

Chris
Dimensional Traveler
2024-03-31 03:11:56 UTC
Permalink
Post by Chris Buckley
<snip>
How is this not intelligence? What would AlphaZero have to do to show
intelligence in your opinion?
I would say that is highly advanced number crunching. It can run
predictive simulations much farther into the "future" than past systems,
but that is just more advanced computation hardware. But the main
problem with claiming successful "artificial intelligence" is that you
first have to _define_ "intelligence" in some kind of measurable,
objective manner. As far as I know, we can't.
--
I've done good in this world. Now I'm tired and just want to be a cranky
dirty old man.
Chris Buckley
2024-03-31 11:59:16 UTC
Permalink
Post by Dimensional Traveler
<snip>
I would say that is highly advanced number crunching. It can run
predictive simulations much farther into the "future" than past systems,
but that is just more advanced computation hardware. But the main
problem with claiming successful "artificial intelligence" is that you
first have to _define_ "intelligence" in some kind of measurable,
objective manner. As far as I know, we can't.
Why? Weren't you around for that almost interminable discussion on
the definitions of "life"? "Life" is a high-level concept for which
humans have many, very different definitions that don't agree with
each other. They agree on determining whether something is alive in
the vast majority of cases, but no measurable, objective definition
exists. That's life.

"Intelligence" is yet another, even more difficult, high level concept
that there will never be a single agreed upon definition in English.
Just too high a level of concept and too many fuzzy English words
(other high level concepts) around. But that doesn't mean we can't
come up with useful definitions for our purposes, just as we do with
"life".

You need to be very careful when asking for a measurable, objective
manner. That implies it can be automated, implying a program, say Eval_I,
could do it. Would you be satisfied with an intelligence evaluation of a
program that examined possible solutions to whatever problem and
picked the one that scored highest on Eval_I?
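
(A sketch of the circularity, where Eval_I is an invented stand-in for
any automated "intelligence" score:)

def eval_i(solution):
    """Hypothetical automated "intelligence" score.

    Stand-in criterion: reward long words. Any fixed, programmable
    criterion has the same flaw; the solver below simply games it.
    """
    return sum(len(word) for word in solution.split())

def solver(candidates):
    """A system judged "intelligent" because it maximizes Eval_I."""
    return max(candidates, key=eval_i)

print(solver(["a short clear answer",
              "an unnecessarily sesquipedalian circumlocutory formulation"]))
# -> the longest-worded candidate wins, whatever its actual quality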

This is not just a theoretical problem. For instance, the premier
Information Retrieval conference has a workshop this summer on
evaluating information retrieval systems using GPT as the decider
of "correct" answers. The problems and difficulties are obvious.

Chris
Don
2024-04-01 12:57:31 UTC
Permalink
Chris Buckley wrote:

<snip>
Post by Chris Buckley
"Intelligence" is yet another, even more difficult, high level concept
that there will never be a single agreed upon definition in English.
Just too high a level of concept and too many fuzzy English words
(other high level concepts) around. But that doesn't mean we can't
come up with useful definitions for our purposes, just as we do with
"life".
You need to be very careful when talking about measurable, objective
manner. That implies automated, implying a program, say Eval_I, could
do it. Would you be satisfied with an intelligence evaluation of a
program that examined possible solutions of whatever problem and
picked the one that scored highest on Eval_I?
This is not just a theoretical problem. For instance, the premier
Information Retrieval conference has a workshop this summer on
evaluating information retrieval systems using GPT as the decider
of "correct" answers. The problems and difficulties are obvious.
"Intelligence" is an ambiguous abstraction, adaptable to circumstances,
as needed. In my idiom "intelligence" implies people able to discern
narratives embedded in corporate media, AI, or anything else.

At present, AI helps search engines act more like a "friend" towards
me. Better AI than me sift through an endless heaps of search results.

Danke,
--
Don.......My cat's )\._.,--....,'``. https://crcomp.net/reviews.php
telltale tall tail /, _.. \ _\ (`._ ,. Walk humbly with thy God.
tells tall tales.. `._.-(,_..'--(,_..'`-.;.' Make 1984 fiction again.
Cryptoengineer
2024-03-31 15:12:03 UTC
Permalink
Post by Dimensional Traveler
<snip>
I would say that is highly advanced number crunching. It can run
predictive simulations much farther into the "future" than past systems,
but that is just more advanced computation hardware. But the main
problem with claiming successful "artificial intelligence" is that you
first have to _define_ "intelligence" in some kind of measurable,
objective manner. As far as I know, we can't.
There's a saying, attributed to Lenin: "Quantity has a quality all its own."

I go for the duck test.

If it looks like a duck, quacks like a duck, and acts like a duck,
then it's a duck.

pt
Dimensional Traveler
2024-03-31 16:31:39 UTC
Permalink
Post by Cryptoengineer
<snip>
There's a saying, attributed to Lenin: "Quantity has a quality all its
own."
I go for the duck test.
If it looks like a duck, quacks like a duck, and acts like a duck,
then it's a duck.
Show me the AI duck. :)
--
I've done good in this world. Now I'm tired and just want to be a cranky
dirty old man.
D
2024-03-31 20:32:32 UTC
Permalink
Post by Dimensional Traveler
Show me the AI duck. :)
I'd say it's a duck called Turing! =)
Cryptoengineer
2024-03-31 20:46:51 UTC
Permalink
Post by Dimensional Traveler
Show me the AI duck. :)
Just generated this one, with the prompt "Draw a detailed image of a duck
swimming on a pond."

https://imgur.com/NN8n3px

Pt
Dimensional Traveler
2024-03-31 23:09:56 UTC
Permalink
Post by Cryptoengineer
Just generated this one, with the prompt "Draw a detailed image of a duck
swimming on a pond."
https://imgur.com/NN8n3px
It's not quacking like a duck, so not a duck. :P
--
I've done good in this world. Now I'm tired and just want to be a cranky
dirty old man.
D
2024-04-01 08:41:45 UTC
Permalink
Post by Cryptoengineer
Just generated this one, with the prompt "Draw a detailed image of a duck
swimming on a pond."
https://imgur.com/NN8n3px
What AI service are you using? Is it free?
Cryptoengineer
2024-04-01 16:07:17 UTC
Permalink
Post by D
What AI service are you using? Is it free?
https://perchance.org/ai-photo-generator

Yes, it was free, at least for my use.

I just googled 'free ai image generator', and picked one. I have no
informed opinion to compare them.

BTW, that was its second try. The first had two duck bodies, left
and right, centered on one duck head.

pt
D
2024-04-01 17:35:13 UTC
Permalink
Post by Cryptoengineer
https://perchance.org/ai-photo-generator
Yes, it was free, at least for my use.
I just googled 'free ai image generator', and picked one. I have no
informed opinion to compare them.
Ahh, got it. Thank you for the pointer!
Paul S Person
2024-04-02 15:59:29 UTC
Permalink
On Mon, 1 Apr 2024 12:07:17 -0400, Cryptoengineer
Post by Cryptoengineer
BTW, that was its second try. The first had two duck bodies, left
and right, centered on one duck head.
Reminds me of a police procedural novel I read once which opens with
two cops recalling how they "solved" a case by demonstrating that a
man could shoot himself and the cartridge ejected could come to rest
in a vertical position on its base on the top of the dresser. They
recorded a test of the whole event (no, they didn't shoot anyone).
Their Captain was very happy with them, as it promised to be a very
sticky murder otherwise.

What they /didn't/ mention to their Captain is that what he saw
recorded on film was the result of their 47th attempt.

And it only took the "AI image generator" two tries. But, of course, a
duck floating in water isn't quite the same thing.

It was, BTW, a nice-looking image.

I can hardly wait for AI image generators to be deployed politically.
If they haven't been already.
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Cryptoengineer
2024-04-02 17:52:29 UTC
Permalink
Post by Paul S Person
I can hardly wait for AI image generators to be deployed politically.
If they haven't been already.
They have:
Fake Trump arrest: https://www.bbc.com/news/world-us-canada-65069316

Faked Biden robocall:
https://www.wired.com/story/biden-robocall-deepfake-danger/

There are other cases, in the US and other countries.

pt
Paul S Person
2024-04-03 15:56:00 UTC
Permalink
On Tue, 2 Apr 2024 13:52:29 -0400, Cryptoengineer
Post by Cryptoengineer
Fake Trump arrest: https://www.bbc.com/news/world-us-canada-65069316
Faked Biden robocall:
https://www.wired.com/story/biden-robocall-deepfake-danger/
There are other cases, in the US and other countries.
Semantic goo now spreads to images!

Wow, is /that/ discouraging!
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Cryptoengineer
2024-04-03 16:49:53 UTC
Permalink
Post by Paul S Person
Semantic goo now spreads to images!
Wow, is /that/ discouraging!
It looks like 'verifying what is true' is becoming
increasingly difficult.

pt
Paul S Person
2024-04-04 16:05:40 UTC
Permalink
On Wed, 3 Apr 2024 12:49:53 -0400, Cryptoengineer
Post by Cryptoengineer
It looks like 'verifying what is true' is becoming
increasingly difficult.
It wasn't that much easier, say, 200 years ago.

I believe I noted at one point here that Darwin once claimed that the
Black African knee was intermediate between the White knee and the Ape
knee.

But he didn't do this by direct observation. He relied on a
correspondent, who may have been misled by a racist attitude, and
Darwin had no way to "verify what is true" in that case.

Two newspaper headlines might also be considered: the one claiming
that the Spanish had sunk an American ship in Cuba, starting a war
with a lie; and "Dewey Defeats Truman", reflecting the newspapers'
wishes rather than reality.

So it would probably be more accurate to say that the brief era in which
"verifying what is true" was /not/ difficult has ended.
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Chris Buckley
2024-03-31 14:24:12 UTC
Permalink
Post by Michael F. Stemper
Agreed. If it learns more than word patterns, I might be impressed. But
right now, it's just textual manipulation. When the words start to have
meaning to the program(s), then we'll really be approaching the Singularity.
As far as I can tell, though, it's not aware that there's conflicting
information out there. If you fed a bunch of Electric Universe text into
one of these programs, it'd be just as happy parroting it back as it
would GR. It wouldn't realize that the two "systems" are in conflict
with each other.
Outside the narrow world of AlphaZero and games, where what is happening
and not happening is much clearer, I'd have to argue from my experiences
with text and GPT. I think we all agree that that is much less convincing
and satisfactory. Nonetheless, I'll give a brief argument, though my
previous post on AlphaZero is the better argument.

I've been doing research in simple machine learning and pattern
matching of text for over 40 years. That's pretty much the definition
of statistical information retrieval. I know what the limits of
simple LLMs are. People have been trying to add AI and knowledge
structures on top of statistical IR for many decades, but unsuccessfully
(for general search.)

And the level of performance of simple LLMs and pattern matching is
pretty bad. It's far below what you've seen every day from Google Search
for the past 20 years. Google Search (at least in the 2000s) got its
increased performance by adding human input, mostly indirect. It did a very
good job at using human links between web pages and using human click-through
data (what pages humans actually click on after searching.)

This poor performance of simple LLMs and pattern matching continues to
this day. A couple of years ago I participated in a workshop
"comparison" designed to look at the problems of retrieval in a
rapidly changing domain (Covid research papers). Performance of
simple LLMs (including from researchers from places like Google) wasn't
significantly better than year 2000 systems.

So what has changed from simple LLMs to LLMs like GPT-4? Basically
deep learning: being able to make use of multiple layers of processing,
where high level concepts are defined and recognized and then operated
on by higher layers of processing (with feedback from those higher levels
used at the lower levels.) That's pretty much how I define "intelligence":
the ability to recognize and reason with high level (abstract) concepts.

Note that when you say "it's just machine learning and pattern recognition",
I don't really disagree. But that's because my definition of
"machine learning" includes "machine intelligence" as an almost
proper subset. GPT-4 (or AlphaZero) is learning new abstract concepts
and operating with them. What more do you require from machine
intelligence?

Chris
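
To make the "multiple layers" point concrete, here is a minimal numpy
sketch; it is a toy for illustration, not Chris's systems and nothing
like GPT-4 in scale, and every name in it is invented. A single linear
layer of pattern matching cannot represent XOR at all; stacking one
hidden layer on top lets the network build intermediate features that
the output layer then combines, with the top layer's errors fed back to
adjust the lower one.

import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic relationship no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden "concepts"
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden "concepts" -> decision
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward: the lower layer recognizes features, the upper combines them.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward: errors at the top adjust the lower layer too
    # ("feedback from those higher levels used at the lower levels").
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # typically converges to [[0], [1], [1], [0]]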
Paul S Person
2024-03-31 15:36:54 UTC
Permalink
Post by Chris Buckley
Note that when you say "it's just machine learning and pattern recognition",
I don't really disagree. But that's because my definition of
"machine learning" includes "machine intelligence" as an almost
proper subset. GPT-4 (or AlphaZero) is learning new abstract concepts
and operating with them. What more do you require from machine
intelligence?
I think that, real-world-wise, I myself am waiting for one of the
artificial alleged-intelligences to produce the GUT. Or something
equally impressive.

One of my brothers claimed, for a while, that our dog, a small
terrier-mix which would jump up and down when I got out his leash and
said, in a baby-talk voice, "Georgie want to go for a walk?",
understood what I was saying.

So, one day after a particularly vexing discussion on the topic, I got
out the leash and, in the same tone of voice, said "Georgie want to be
cooked and eaten?". And he jumped up and down just the same, which
convinced my brother that the dog didn't understand what I was saying
at all (well, once I pointed out that the only alternative was that he
/did/ want to be cooked and eaten just as much as he wanted to go for
a walk).

And then, of course, I attached the leash to his collar and took him
for a walk. To do anything else, after all, would have been cruel.

But the point is that people have a tendency to ascribe, to clearly
non-human entities, traits which they just don't have. And not just
animals, of course.
--
"Here lies the Tuscan poet Aretino,
Who evil spoke of everyone but God,
Giving as his excuse, 'I never knew him.'"
Cryptoengineer
2024-03-31 02:49:25 UTC
Permalink
Post by Michael F. Stemper
If I recall correctly, in _Gödel, Escher, Bach: An Eternal Golden Braid_,
Douglas Hofstadter commented that "AI is sometimes just whatever hasn't
been done yet."
It's worth remembering that when Hofstadter wrote that, AI was in the
doldrums, with expert systems stagnating.

My wife, who was working on an expert system at the time (XCON, a tool
for configuring Vax systems at DEC), recalls that people back then would
resist even putting the term 'Artificial Intelligence' on their resumes,
since it could well lead to rejections.

pt
Scott Dorsey
2024-03-29 15:57:23 UTC
Permalink
Post by Lynn McGuire
Having been a software developer since 1975, I am really wondering about
the AI hype. The AI thing so far seems to be a major expansion of the
old Eliza program. Of course, I am really outdated, writing in Fortran
and C++ nowadays. I have about two million lines of F77 / C++ code that
I am shepherding around the place for several customers.
https://en.wikipedia.org/wiki/ELIZA
Don't think about it as Eliza, because Eliza is rules-based and you can
look into the box.

Think about machine learning as a giant matrix. Stuff goes in and we tweak
the matrix coefficients so that the stuff comes out that we want when we
multiply the matrix by the stuff that went in. After the learning period
is over, we put completely new stuff in, and other completely new stuff
comes out that hopefully is related in the same way as the data we trained
it on.

It's not really rules-based, and when something weird comes out, it is very
difficult to go back and work out what data in the training process caused
that weird association to occur. Often it is impossible.

If you remember the Perceptron machine of the fifties and sixties
(Rosenblatt's hardware descendant of the McCulloch and Pitts neuron),
that is basically the great grandfather of the modern machine learning
systems. In the end, connectionism won and traditional AI methods lost.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Lynn McGuire
2024-04-01 23:24:42 UTC
Permalink
Post by Scott Dorsey
If you remember the Perceptron machine of the fifties and sixties
(Rosenblatt's hardware descendant of the McCulloch and Pitts neuron),
that is basically the great grandfather of the modern machine learning
systems. In the end, connectionism won and traditional AI methods lost.
My degree is in Mechanical Engineering, not software engineering, so I
don't have the background that you have.

I don't remember the Perceptron machine. Never saw it
as far as I know.
https://en.wikipedia.org/wiki/Perceptron

Lynn