Stephane Hamel
Digital marketing & analytics shaped by data governance, privacy and ethics | Educator · Speaker · Consultant
June 8, 2025
"LLMs are like your brother-in-law: they know a bit about everything - but you should probably double-check." I've been using this line for a while to describe large language models (LLMs). It's funny because it's true - and now it's scientifically proven. The latest paper by Apple researchers - The Illusion of Thinking - reminds us that LLMs aren't really "thinking." They simulate reasoning, often convincingly, but without understanding. The illusion comes from statistical pattern-matching across massive training datasets - something no human could replicate in scale or speed. Yet let's not dismiss them too quickly. While their reasoning is shallow and fallible, their recall across domains is staggering. They've digested more material than any individual ever could. That alone makes them useful - if used wisely. - Great at surfacing connections - Prone to confident nonsense - Not thinking, but patterning We don't need to romanticize LLMs. We just need to understand their strengths and limitations. Like your brother-in-law at a trivia night - amusing, sometimes brilliant, occasionally way off. What matters now isn't whether LLMs "think," but how we think about them. https://lnkd.in/gynEMB24

Discussion about this post
Patrick McFadden
Architect of Thinking OS™ | Inventor of Refusal-First Cognition | Built the Seatbelt for AI — Eliminates Fines, Ensures Explainability, Stops Drift
3 months ago
Strong lens, Stephane Hamel 🇨🇦! Pattern ≠ premise, and that's where most systems drift. Thinking OS™ was built on a different claim: it doesn't ask LLMs to "think," it installs the decision architecture that governs when they're allowed to contribute and how their output is validated before it's trusted. That's not romanticizing or rejecting LLMs. It's enforcing judgment before the recall engine runs. Otherwise, confident patterning turns into institutional error at scale.
Steve Waterhouse, CD, CISSP
Speaker, cybersecurity consultant, veteran, and cybersecurity media commentator
3 months ago
Oh yeah
Matt Lillig
Data Strategy & Analytics Product Leader | From Startups to Enterprise, Turning Insights into Action
3 months ago
Trust the AI or trust the human? Waymo and Tesla would like to know! 😁