
The widely-referenced “stochastic parrots” paper from five years ago is now out of date. In it, Emily Bender, Timnit Gebru, et al. argue that LLMs remix patterns in text without genuine understanding, which has knock‑on effects for how we (should) use and trust them. It’s a familiar argument, taking the same approach as John Searle’s famous Chinese Room argument about ‘black box’ symbol‑shuffling without understanding.

I don’t know Pete Wolfendale, but I have just discovered that he is an independent philosopher based in Newcastle‑upon‑Tyne, so I should probably look him up. In this post he pushes back on the idea that LLMs are “stochastic parrots” and uses that as a way into some bigger questions about what we mean by “meaning” in the first place.

I think it’s maybe worth summarising why I think post-Searlean critics of AI such as Emily Bender are wrong to dismiss the outputs of LLMs as meaningless. Though it’s perhaps best to begin with a basic discussion of what’s at stake in these debates.

Much like the term ‘consciousness’, the term ‘meaning’ often plays proxy for other ideas. For example, saying systems can’t be conscious is often a way of saying they’ll never display certain degrees of intelligence or agency, without addressing the underlying capacities.

Similarly, being able to say ‘but they don’t even know what they’re saying’ is a simple way to foreclose further debate about the communicative and reasoning capacities of LLMs without having to pick apart the lower-level processes underpinning communication and reasoning.

While Wolfendale agrees that current systems don’t have beliefs, intentions, or a stable view of the world, he doesn’t think they’re “just” meaningless text generators either:

  1. Grounding - a lot of human language isn’t directly tied to what we personally see or do. We can talk meaningfully about things like black holes, stock markets, or bowel cancer because we trust expert communities - not because we all have first‑hand access. So it’s at least plausible that LLMs pick up some real, socially grounded content from the human language they’re compressing.
  2. Intention - while he accepts that LLMs do not have inner communicative goals, Wolfendale suggests that this is not as big a deal as critics make out. He compares the outputs of LLMs to rumours - i.e. statements produced by a diffuse social process. We can still interpret them, question them, and trace them back to wider patterns of usage. LLMs sit inside our language community in this way, even if they are not full participants.
  3. All-or-nothing thinking - Wolfendale argues that we should avoid binary thinking about minds and meaning. Humans also talk lazily, gossip, and say things we later revise. We can be asked for reasons for our views and have them reshaped in response, which is a key difference between us and current models, but there is still overlap here. As he says, LLMs are “in the game, even if they’re not strictly playing it.”

Source: DEONTOLOGISTICS