Post by Paul S Person
On Tue, 26 Mar 2024 12:55:44 -0400, Cryptoengineer
Post by Cryptoengineer
Post by James Nicoll
A Brief Guide to the Fiction of Vernor Vinge
From Grimm's World to Rainbows End, the fiction of Vernor Vinge.
https://reactormag.com/a-brief-guide-to-the-fiction-of-vernor-vinge/
You seem to imply that Vinge invented the 'technological singularity'.
While I, among many others, first became aware of the idea through his
1993 essay[1], it predates him. John Von Neumann discussed the notion
back in 1958. I have no idea whether Vinge knew of Von Neumann's
speculations, or invented the idea independently. It's one of those
'when it's steam engine time, it will steam engine' ideas.
Vinge predicted it would happen by 2023. While clearly we're not
there yet, the explosive development of generative AI
seems a step in that direction.
[1] http://mindstalk.net/vinge/vinge-sing.html
I don't remember where or when or how I became aware of it.
But I do know I haven't heard a lot about it recently.
Has the rate of progress slowed down since, say, the 1990s? So that it
isn't quite so exponential?
The 1990s were not a period of much progress on AI. Lots of
things were tried, and small amounts of progress were made in related
disciplines (natural language parsing, for example), but nothing really
advanced general AI.
One of the related disciplines that did have exponential growth was
mine: statistical information retrieval. At the beginning of the 90s,
state-of-the-art IR in practice was advanced Boolean systems on
possibly manually annotated documents. By the end of the 90s, with
the concurrent development of the web, it was natural language queries
on raw document text.
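To make that shift concrete: Boolean IR returns only documents matching a logical expression, while statistical IR scores every document against a free-text query. Here's a minimal sketch of the statistical side, using plain TF-IDF weighting with cosine-style scoring (a toy illustration of the general idea, not any particular 90s system's formula; the corpus and query are invented):

```python
import math
from collections import Counter

# Toy corpus: raw document text, no manual annotation needed.
docs = [
    "the technological singularity and exponential growth",
    "steam engine history and the industrial revolution",
    "information retrieval ranks documents by term statistics",
]

def tf_idf_vectors(docs):
    """Build a TF-IDF weight vector (term -> weight) per document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter(t for toks in tokenized for t in set(toks))
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(toks).items()}
        for toks in tokenized
    ]

def rank(query, docs):
    """Score every document against a free-text query; higher is better."""
    vecs = tf_idf_vectors(docs)
    q = Counter(query.lower().split())
    scores = []
    for i, vec in enumerate(vecs):
        dot = sum(w * q[t] for t, w in vec.items() if t in q)
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        scores.append((dot / norm, i))
    return sorted(scores, reverse=True)

# A natural language query, not a Boolean expression:
ranking = rank("singularity growth", docs)
```

Unlike a Boolean system, nothing here requires the query terms to match exactly or completely; documents are simply ordered by statistical evidence, with rare terms (high IDF) counting for more.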
Towards the end of the 90s, people (including me) became aware that
large quantities of data (text) had a quality all their own. I had thoughts
of using IR to do AI by taking advantage of this, but didn't do anything myself,
though a grad student I mentored, Amit Singhal, later did. Groups like
IBM in the very early 2000s did develop this for what I consider the first
real progress in AI: question-answering. This was the group/system/approach
that later developed into Watson (which you may have seen on Jeopardy).
Side note: I wrote a fair number of IR papers with lots of citations
(30,000+), but I suspect the piece of writing that had the biggest
impact on IR was my one page post-PhD recommendation for Amit for
BellLabs++. The manager who hired him later told me that the sole
reason he offered Amit a position was my letter. It WAS rather
effusive, saying among other things that I thought he would have a
larger impact than any IR grad student world-wide in the last 20
years. It turns out the manager knew me and knew I was not a
particularly gushy, over-enthusiastic person (can you tell from my
posts here? :)) Three years later, in early 2000, Amit moved to be Head of
Research at an up-and-coming search engine company called Google,
rewrote the IR system, and the rest is history. AI was a pet interest
of Amit's throughout his 15 years as research head that he pushed as
much as possible; his goal was the ship's computer on original Star
Trek - something we're getting close to!
Overall, at Google and elsewhere, I view the start of exponential
growth of AI to be around 2000, sparked by the first-time availability of
very large data sets (whether text or images or eventually financial),
fast, cheap computing resources, and the realization that you didn't
need formal rules - the data was enough. It started reasonably slowly
but the exponential growth has not stopped yet. However, I will say
that right now many people are overestimating current AI capabilities
and underestimating the remaining problems (e.g. reliability). Lots of
hype!
Chris