LLMs and Schrödinger’s Word

Tue 15 April 2025
AI
Blog

I've always marveled at Schrödinger’s cat—dead and alive until observed—as a paradox of physical possibility. But in the age of large language models (LLMs), we can marvel at something equally strange—a single word, not yet written, but hovering in semantic superposition in a box of its own. The LLM, like a quantum system, lives in a state of probabilistic potential until prompted. Then, from that prompt, a token emerges—a word collapsed from meaning-space, selected not by certainty but by probability.

Welcome to the age of Schrödinger’s Word.

Probabilistic Cognition and the Mind of the Model

Unlike deterministic machines of the past (think calculator), LLMs generate language by sampling from distributions—statistical fields trained on trillions of words. At their core is a kind of cognitive thermostat that regulates the entropy—or randomness—of output. A low temperature sharpens the distribution, making the model more deterministic. A higher temperature softens it, allowing creativity, ambiguity, and surprise. Adjusting the model’s temperature means passing it a value, typically between 0 and 2 with 1.0 as the neutral default: a setting like 0.7 tightens the output toward the most likely continuations, while 1.2 loosens it into a cascade of wilder, less predictable turns. The same prompt can yield many different completions because the model isn’t calculating a fixed answer. It’s rolling the mathematical dice.
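
To make the thermostat metaphor concrete, here is a minimal sketch of temperature-scaled sampling, not any particular model’s internals: the raw scores (logits) a model assigns to candidate tokens are divided by the temperature before being turned into probabilities, and one token is drawn from the result. The helper name and use of NumPy are my own choices for illustration.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token index from raw logits via temperature-scaled softmax."""
    rng = rng or np.random.default_rng()
    # Dividing by the temperature sharpens (<1.0) or flattens (>1.0) the
    # distribution before it is renormalized into probabilities.
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    scaled -= scaled.max()                      # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)
```

A temperature near zero makes the largest logit dominate almost completely, which is why low-temperature output feels deterministic; a temperature above one shrinks the gaps between logits, handing real probability to the long tail.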

This is probabilistic cognition: thought that derives not from a single path but from a cloud of weighted possibilities. It is not how humans think. It is how LLMs generate what looks like thought.

Feed an LLM "The future is..." at a low temperature, and it collapses to something like "unwritten"—safe, predictable, as seen when DeepSeek generated this at a temperature of 0.8. Crank the heat to 1.2, and DeepSeek offers "squiggly"—wild, unmoored, a stark contrast from the same prompt. Same input, different dice. This isn't creativity as we know it; it's entropy harnessed into syntax.
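
The "unwritten" versus "squiggly" contrast is easy to mimic with the sketch above (it reuses the sample_next_token helper). The candidate tokens and logits below are invented for illustration, not DeepSeek's actual next-token distribution; the point is only how the shape of the distribution changes with temperature.

```python
from collections import Counter

# Toy candidates for the prompt "The future is...": tokens and logits are
# invented for illustration, not any real model's distribution.
tokens = ["unwritten", "bright", "uncertain", "now", "squiggly"]
logits = [3.2, 2.6, 2.1, 1.0, -0.5]

rng = np.random.default_rng(0)
for temp in (0.8, 1.2):
    counts = Counter(tokens[sample_next_token(logits, temp, rng)]
                     for _ in range(1000))
    print(f"temperature {temp}: {counts.most_common()}")
# At 0.8 the samples pile up on "unwritten"; at 1.2 probability mass shifts
# toward the tail, so rarer words such as "squiggly" turn up more often.
```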

Hyperdimensional Thought Collapse

The inner workings of an LLM unfold not in three dimensions but in thousands. GPT-3, for example, represents each token as a vector in a latent space of 12,288 dimensions—a semantic manifold from which the model spreads a probability distribution across its entire vocabulary at every step. When a token is selected, that probability cloud collapses, not unlike the collapse of a wave function in quantum physics.
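
For the mechanically inclined, the "collapse" is the model's final projection step. The sketch below uses toy dimensions and random weights (a real model's hidden size and vocabulary are far larger, and its weights are learned); it only illustrates how one high-dimensional hidden state becomes a probability cloud over the vocabulary, from which a single token is drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: real models use thousands of hidden dimensions (12,288 for GPT-3)
# and vocabularies of 50,000+ tokens; these numbers are illustrative only.
d_model, vocab_size = 16, 100

hidden_state = rng.normal(size=d_model)              # the model's current internal state
unembedding  = rng.normal(size=(d_model, vocab_size))  # random stand-in for learned weights

logits = hidden_state @ unembedding                  # score every token in the vocabulary
probs  = np.exp(logits - logits.max())
probs /= probs.sum()                                 # a probability cloud over all tokens

token_id = rng.choice(vocab_size, p=probs)           # the "collapse": one token is drawn
print(f"Sampled token id {token_id} with probability {probs[token_id]:.3f}")
```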

But this isn’t a physical particle resolving to a point. It’s meaning resolving into language. And the observer? That’s us—or more precisely, the prompt itself, which acts as the observation that triggers the collapse into the language of the user.

In Schrödinger’s quantum world, the cat is both dead and alive until observed. In the LLM’s world, the word is both "is" and "was" and "might be"—until we ask. Then, with precision tuned by that cognitive thermostat, the manifold collapses into form.

LLMs are not just like quantum systems—they evoke other strangeness too. Their responses can “entangle” ideas across contexts. Imagine a line that fuses Shakespeare’s soliloquy with Silicon Valley startup jargon, or quantum physics with love poetry. Lines of thought that were never meant to intersect suddenly converge—not by logic, but by latent probability. And perhaps, we can even call it a unique form of creativity.

Do Humans Think Like LLMs?

Not quite. Some people are naturally comfortable thinking in terms of uncertainty—improvisers, intuitive decision-makers, and those who weigh possibilities rather than chase certainties. But the human mind is narrative. It leans toward coherence, emotion, memory, and causality. And here might be the key difference—we collapse not to the most likely word but to the most meaningful one, often in defiance of cognitive (statistical) pressure.

LLMs, on the other hand, are statistical engines. They do not "mean." They echo, synthesize, and project from patterns. Their errors are algorithmic. Ours are revelatory. And this difference is not a defect. It is a clue that provokes us to look closer.

The So What and Meaning vs. Form

LLMs show us a cognition unburdened by biography—they simulate thought without a thinker. And that provokes a deeper question.

If a system can generate intelligent-seeming language without memory, feeling, or identity, then what, exactly, have we assumed thinking to be?

To say humans don’t think like LLMs isn’t to assert superiority. It’s to recognize that our cognition is embodied, recursive, and intentional. Their cognition is statistical, emergent, and fluid. Given this gap between human and machine cognition, perhaps our desire for anthropomorphism—our tendency to see LLMs as thinkers in our own image—should stop there.

LLMs collapse to form. Humans collapse to meaning.

If Yuval Harari were to weigh in, he might see LLMs as the next chapter in humanity’s saga—a shift from Homo sapiens, the meaning-maker, to Homo statisticus, the pattern-projector. Malcolm Gladwell, meanwhile, might hunt for the tipping point: that moment when a machine’s probabilistic dice-roll outstrips a poet’s gut instinct. Either way, Schrödinger’s Word marks a shift: meaning, once our dominion, now dances to a probabilistic tune.

In Schrödinger’s world, we feared we’d never know the truth. In Schrödinger’s word, we learn that truth may have always been plural—a spectrum of likelihoods, waiting for a voice to ask the question.