
“Any successful value system faces ontological instability. As it changes the world around it, it undermines its own assumptions.”
—Wolf Tivy
Picture yourself engaged in this type of exchange in the years ahead:
If this exchange rang a bell, it’s likely because you’re recalling a famous scene from The Matrix.
Hyperstition is the idea that certain narratives, beliefs, or concepts can shape reality by becoming self-fulfilling prophecies through cultural feedback loops. Essentially, ideas that are believed and acted upon by enough people can manifest as tangible outcomes, whether intentionally or not. My issue with generative AI is not superintelligence (yet), but rather how it could exacerbate our already twisted perception of reality. What does that mean for how narratives materialize?
For instance, take a look at this:
[Embedded tweet, dated February 22, 2025]
Ask yourself: does the model reflect or refract? How many nodes away is it from the truth? If you were serious about this question, you’d act to address the issue ASAP. And so they did: they’ve since patched the cracks of this “really strange and bad failure of the model”. So much for the uncensored model that was supposedly going to show us reality as it truly is.
And I think that what they did was correct. If you are responsible for shaping the narratives of the world, you should be aware of the potential higher-order effects stemming from the feedback loops your systems create.
During my explorations down various rabbit holes, I sought conceptual frameworks to help me understand this challenge. I found myself reading about Object-Oriented Ontology\(^1\). Ian Bogost breaks it down into three stimulating modes: ontography (mapping object relations), metaphorism (how objects interpret each other), and carpentry (building artifacts that showcase these relations). Doesn’t this resemble what LLMs do? They map, they interpret, they build. And in doing so, they exemplify Timothy Morton’s insight that causality
is an aesthetic dimension of relations between objects, wherein sensory experience does not indicate direct access to reality, but rather an uncanny interruption of the false ontic equilibrium of an interobjective system.
Think of it this way: you know those moments when LLMs spit out something deeply weird (check the tweet above again)? One can argue that’s not a bug but more like a model of the world having a hiccup. It’s the curtain twitching\(^2\) just enough to remind us that what we think of as “normal” is more like a collectively maintained hallucination. Every uncanny valley, every unexpected connection, every bizarre output is a reminder that we’re swimming in waters far deeper than we imagine.
Speaking of Morton, they explored the idea of hyperobjects. One imposing quality a hyperobject manifests is that of being so everywhere-and-nowhere that your brain starts to cramp trying to box it in. I’m thinking about a monolithic AI system. In a way, the definition seems fitting. You might gesture at the parts, but the whole thing slips through. Generally speaking, trillion-parameter models engulfing our collective consciousness resemble a weather system of meaning-making. In that virtual tempest, every interaction leaves footprints: a generated story here, a solved equation there, all digital breadcrumbs leading down an infinite trail, weaving a tapestry of meaning that we can only glimpse from the surface. To me, the real mind-bender is that, just like climate change was cooking us before we had a name for it, AI is already rewiring our reality while we are busy arguing about whether it is “really” intelligent.
This brings me to Vladimir Vernadsky’s noosphere. Once exclusively a domain of human thought, it’s no longer solely our own. AI has wormed its way into our cognitive ecology, not just processing our thoughts but actively injecting its own narratives into the discourse. When AI-generated content becomes indistinguishable from human thought, we’re facing an epistemic crisis. It’s not a passive interaction with AI anymore, but a dynamic feedback loop.
Lastly, consider the memeverse for a moment. It is slowly reaching its boiling point, synthetic and human thought swirling indistinguishably together, merging, diluting, evaporating, evolving. I have stared into the glimmering surface of the Atlantic Ocean while pondering these ideas, but imagining that water at a boil is apocalyptic. In that image, rather than a swimmer, AI is like a rogue wave, amplifying and distorting everything it touches. What starts as an interaction with the AI system becomes a narrative, then a belief, then reality. An LLM hallucinates a scandal about a public figure, X amplifies it, the outrage builds, mainstream media picks it up, and suddenly the person’s reputation is in tatters over something that never happened. The loop tightens: hallucination → amplification → outrage → “truth”\(^3\). Pure hyperstition? I feel that AI doesn’t just describe the world anymore. It prescribes it.
Graham Harman’s theory of vicarious causation feels particularly relevant here
whereby two hypothetical entities meet in the interior of a third entity, existing side-by-side until something occurs to prompt interaction.
I can imagine my digital self, this conglomerate of traces left behind by my actions in time, being a hyperstition itself, a self-reinforcing loop of belief and action that thrives in a tiny corner of an LLM’s weight space. The LLM becomes that third entity Harman speaks of, a vast neural medium where digital selves coexist, frozen in weighted connections. But what is that “something” that sparks interaction? Is it the prompt itself? Or perhaps it’s deeper. A moment when patterns of thought resonate across the latent space, when digital selves align in ways that create new meaning. How many such ghosts did my own digital phantom brush against in that frozen moment before the model’s response? How many conversations happened in that split second of semantic resonance?
I have mixed feelings about the future. This AI-hyperstition nexus we might be racing toward is both thrilling and terrifying. Whether it leads to enlightenment or epistemic collapse depends on our ability to remain active agents in our own story, rather than NPCs trapped in an algorithmic fantasy. McLuhan saw it coming decades ago:
“Our technology forces us to live mythically.”
The question is: whose myths will we live by?
\(^1\) As a funny side-note, it seems that my functional ego can’t escape OOP even when I don’t write code.
\(^2\) “Pay no attention to the man behind the curtain.”
\(^3\) Trump’s social media platform is called “Truth Social”. Just saying.