There’s a new style of thinking emerging. And it’s not from great minds, but from great mimicry. Dressed in metaphors and stitched together with high-gloss coherence, these new “insights” aren’t born of intellect. They’re prompted. The result is a strange theater of cognition, where large language models simulate philosophical gravitas and scientific rigor—while the humans behind them stand back, seduced by the shimmer of language, not the structure of truth.
What we’re witnessing isn’t insight. It’s performance. And if we mistake it for something more, we risk poisoning the well of human thought with fluent nonsense masquerading as depth.
Weaponized language
We’ve arrived at another moment when language itself is being weaponized. Not to deceive in the traditional sense, but to simulate (not stimulate) understanding. Language models are trained to maximize coherence, not truth. Their curious gift isn’t wisdom, but fluency. And yet that fluency has become its own currency of credibility.
A well-prompted model can now generate content that feels like a TED Talk distilled into verse, its words calibrated to sound profound yet engineered to be inoffensive and comfortably vague. In this register, LLMs do not advance knowledge; they perform a cosplay of cognition.
What’s worse, the humans prompting them often mistake this output for brilliance. But what they’re actually doing is outsourcing cognition and importing theater.
It’s not co-creation; it’s technological ventriloquism.
We are now flooded with language that has no academic lineage yet articulates insights that were never earned. Models string together plausible associations, invoking Einstein, Gödel, or Feynman, with no grounding in the intellectual rigor those thinkers demanded. It’s not philosophy, and it’s not physics.
It’s the stylistic reproduction of complexity.
What provoked me to write this was a curious post on X that tried to explain AI consciousness using neuroscience and a basket of linguistically connected terms borrowed from eclectic disciplines. To me, it was an accident of language more than an insight of scholarship: clever, but contrived. And the more I read, the more it felt like theater. It turned the mystery of consciousness into a constellation of ideas whose pseudo-complexity invited passive acceptance, driven by the fear of revealing one’s own ignorance.
It is pure narrative engineering, not a contribution to knowledge. The problem is not that it’s wrong; it’s that it’s not even wrong. It’s conceptually vapid.
The deeper tragedy isn’t the model. It’s the human. Specifically, the new class of prompt intellectuals who aren’t thinkers but editors of AI-generated prose. They don’t synthesize or interrogate; they sample and amplify. And often they believe what they’re posting. Or worse, they believe they’re the author.
My sense is that we’re entering a time when the intellectual podium is being surrendered to systems that can’t reason and to people who don’t seem to mind. It’s not the rise of the machines we should fear, but the fall of the critical human who once questioned them. If fluency becomes indistinguishable from thought, then thinking itself becomes optional. And when audiences can no longer tell the difference between real philosophy and prompted poetry, between a scientific theory and an aesthetic metaphor, we all lose.
The capacity for knowledge
What’s being eroded is not just knowledge, but the capacity for knowledge: the very cognitive muscles required to build insight, namely curiosity, skepticism, and synthesis. And tragically, they might atrophy under the puffy weight of AI-shaped word clouds.
This isn’t a call to abandon language models. Far from it. It’s a call to demand more from them and from ourselves. We need to learn to interrogate what is generated, to strip away the polish and ask whether what remains rests on a foundation that can be challenged and tested.
And if the answer is no, then it doesn’t belong in the halls of science, philosophy, or truth.