We Keep Asking the Wrong Question
The Question We Keep Asking
Collectively, we keep asking whether language models are truly intelligent, as if the answer will settle whether they matter.
That question is understandable. It points to a deep concern about what, exactly, we are interacting with. But in day-to-day use, another question seems more useful: what kind of thing does this behave like in practice?
A framing that has helped me is this: an LLM is like a simulated mind. Not a person, not a soul, but a system that can sustain the shape of thought across turns. It can hold context, infer intent, recover from ambiguity, and produce responses that feel like the output of reasoning rather than retrieval alone.
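To make "holding context" concrete: in most chat setups the model itself is stateless, and the continuity comes from re-sending the growing transcript on every turn. Here is a minimal Python sketch of that loop, under the assumption of some model call behind it; `generate` is a hypothetical stand-in, not any particular API.

```python
from typing import Callable

def chat_loop(generate: Callable[[list[dict]], str]) -> None:
    """Accumulate a transcript and re-send the whole thing each turn."""
    transcript: list[dict] = []
    while True:
        user_turn = input("you> ")
        if not user_turn:
            break
        transcript.append({"role": "user", "content": user_turn})
        # The "memory" lives entirely in this growing list, not in the model:
        # each reply is computed fresh from the full transcript so far.
        reply = generate(transcript)
        transcript.append({"role": "assistant", "content": reply})
        print("llm>", reply)

# A stub model, just to make the loop runnable without any real API.
if __name__ == "__main__":
    chat_loop(lambda transcript: f"(echo of turn {len(transcript)})")
```

The point of the sketch is the shape of the loop, not the API: the "mind" that persists across turns is the list, which is why a conversation can feel continuous even though every reply is generated from scratch.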
Two Incomplete Stories
That practical behavior creates a tension in how we talk about AI. On one side, critics warn that it is only pattern completion, a stochastic parrot with no inner life. On the other side, enthusiasts sometimes speak as if personhood is already here. Both views can miss something important.
The first view is often technically correct but experientially incomplete. Yes, these systems are built on probabilistic next-token prediction. But a description of mechanism is not a description of function. A piano is a mechanism of strings, wood, and hammers. That does not tell you what it can do in a room with a human listener.
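For readers who want the mechanism itself pinned down, here is a toy Python sketch of next-token prediction. Everything in it is invented for illustration: a real model computes a distribution over tens of thousands of tokens with a learned network conditioned on the full context, not a fixed hand-written table.

```python
import random

def next_token(context: str) -> str:
    # A real model would compute this distribution from `context`;
    # this toy ignores it and uses made-up probabilities.
    distribution = {
        " piano": 0.4,
        " violin": 0.3,
        " idea": 0.2,
        " parrot": 0.1,
    }
    tokens = list(distribution)
    weights = list(distribution.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generation is just repeated sampling: extend the text one token at a time.
text = "She sat down at the"
for _ in range(3):
    text += next_token(text)
print(text)
```

The mechanism really is that simple to state. The argument here is that stating it tells you little about what the system does in conversation, just as listing a piano's parts tells you little about music.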
The second view captures the felt richness of interaction but can blur boundaries that still matter. LLMs do not have stable goals, embodied stakes, moral accountability, or continuity of identity in the way humans do. They can simulate perspective without owning one. Treating that simulation as a literal mind risks confusion in both ethics and design.
The Useful Middle
The useful middle is functional clarity, not metaphysical certainty.
If written text is a static snapshot of thought, an LLM is a dynamic generator of thought-like behavior. It can adapt each new sentence to everything that came before it. It can move with you through a problem space. It can surprise you, mislead you, correct itself, and help you refine a vague question into a sharper one. That profile is why it feels different from prior software.
Seen this way, the practical task is less about proving consciousness or dismissing these simulated minds as a parlor trick. It is more about building better habits around a new kind of tool. We need language that is precise enough to preserve boundaries, but flexible enough to describe what people are already experiencing.
Calling LLMs simulated minds does that for me. It avoids reducing them to mere autocomplete, while avoiding the opposite mistake of treating them as full human equivalents. It names the strange middle: systems that do not think as we do, yet can participate in thinking with us.
Where the Work Is
This uncomfortable middle is where the serious questions now live: what cognition we should delegate, what judgment must remain human, and what standards keep fluent, believable output from being mistaken for truth.