"LLMs are like your brother-in-law: they know a bit about everything - but you should probably double-check."
I've been using this line for a while to describe large language models (LLMs). It's funny because it's true - and now there's research to back it up.
A recent paper by Apple researchers - The Illusion of Thinking - reminds us that LLMs aren't really "thinking." They simulate reasoning, often convincingly, but without understanding. The illusion comes from statistical pattern-matching across massive training datasets, at a scale and speed no human could match.
Yet let's not dismiss them too quickly.
While their reasoning is shallow and fallible, their recall across domains is staggering. They've digested more material than any individual ever could. That alone makes them useful - if used wisely. They are:
- Great at surfacing connections
- Prone to confident nonsense
- Not thinking, but patterning
We don't need to romanticize LLMs. We just need to understand their strengths and limitations. Like your brother-in-law at a trivia night - amusing, sometimes brilliant, occasionally way off.
What matters now isn't whether LLMs "think," but how we think about them.
https://lnkd.in/gynEMB24
Architect of Thinking OS™ | Inventor of Refusal-First Cognition | Built the Seatbelt for AI — Eliminates Fines, Ensures Explainability, Stops Drift
Strong lens, Stephane Hamel 🇨🇦! Pattern ≠ premise, and that’s where most systems drift.
Thinking OS™ was built on a different claim: it doesn't ask LLMs to "think"; it installs the decision architecture that governs when they're allowed to contribute and how their output is validated before it's trusted.
That’s not romanticizing or rejecting LLMs.
It’s enforcing judgment before the recall engine runs.
Otherwise, confident patterning turns into institutional error at scale.