Stochastic Parrots: Frequently Unasked Questions

46 points - last Wednesday at 8:34 PM

Comments

hellohello2 today at 1:28 AM
"Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind."

Modelling text that describes the world is not modelling (some aspect of) the world?

Modelling the probability that a reader likes or dislikes a piece of text is not modelling (some aspect of) a reader's state of mind?

siegecraft today at 3:35 AM
> Most things we historically do with computing are not well approximated by extruding synthetic text.

I don't understand this point. I feel like almost everything associated with computing is extruding synthetic text.

NooneAtAll3 today at 3:38 AM
> Another common trope in the discourse around this phrase is to claim that stochastic parrot is an insult (or even a slur). On one reading, that would require LLMs to be the kind of thing that can take or feel offense, which they clearly aren’t.

isn't that circular reasoning?

"I can call anyone who isn't smart enough to take offense whatever I like, because, as I just said, they aren't smart enough to take offense"?

(this also disregards that, for quite some time now, "being offended" has shifted from "protection of self" to "protection of the (perceived) weak (or of the group of your allegiance)")

---

but generally I always felt that the tension around the phrase was somewhat of a prescriptive/descriptive difference, or maybe a "level of detail in the model" kind of thing

just because a fuller understanding of the process exists doesn't mean other descriptions/models of the process are invalid or useless

Newtonian gravity doesn't describe time dilation - and yet most of the time it is enough on its own, so it's still successfully taught in schools and to undergrads

if the output of an LLM can be modeled (by intuition) as "some other being" for many practical uses *and the model works* - then automatically blaming others for "using a less precise model" and warning them about it feels... strange

loandbehold today at 2:50 AM
Sounds like the increasing capabilities of LLMs over the last 5 years proved her 2021 paper wrong, but instead of admitting she was wrong, she's trying to change/reinterpret what she wrote in 2021.
libraryofbabel today at 1:25 AM
It would have been nice to see some version of “I am very surprised by how far LLMs have come since I wrote the stochastic parrots paper, here is how I have revised my thinking.” But there is nothing like that and the author is just doubling down or trying to correct perceived “misinterpretations” of her work.

Meanwhile you have multiple Fields Medalists (Tao, Gowers) saying they’re very impressed by LLMs’ mathematical reasoning, something the stochastic parrots thesis (if it has any empirically predictive content at all) would have predicted to be impossible. I doubt Tao and Gowers thought much of LLMs a few years ago either. But they changed their minds. Who do you want to listen to?

I think it’s time to retire the Stochastic Parrots metaphor. A few years ago a lot of us didn’t think LLMs would ever be capable of doing what they can do now. I certainly didn’t. But new methods of training (RLVR) changed the game and took LLMs far beyond just reducing cross entropy on huge corpora of text. And so we changed our opinions. Shame Emily Bender hasn’t too.

Sigh.

getnormality today at 3:48 AM
I think "stochastic parrot" misses the mark as a characterization of LLMs, but so does "artificial intelligence." They're both somewhat helpful and somewhat misleading in complementary ways.

Maybe that's the best one can do when describing something very new and strange. A series of vivid, incompatible metaphors might be the best guide for a while. "Intelligence" as we normally understand it is a significant overstatement, while "parrot" is a massive understatement.

leonidasv today at 1:22 AM
What a hill to die on.
tibbar today at 3:25 AM
I mean, we're pretty deep into Westworld/Blade Runner-style sci-fi at this point. It's actually a crazy, mind-bending question to try to grasp what is going on with chatclaudini at this point. Regardless of what labels we choose or properties we choose to affirm, we're far too deep into the uncanny valley for any of them to be very helpful.
_wire_ last Wednesday at 9:10 PM
Lovely article, well worth attention by virtue of its regard for the cultural traits of terminology and its inflections, while also debunking the pervasive lore that "AI" devices are doing anything but the merest imitation of thinking.

It's rare to read an author who can directly face Brandolini's law of misinformation asymmetry and not only hold her own against the bullshit but overcome it.

radkZ today at 1:26 AM
This is the first submission since a year that gives me some hope for humanity. It shows that linguistics is not obsolete. Maybe the last people capable of thinking will be linguists.