The Rebirth of Mystique and Intrigue in the Modern Age
The empire, long divided, must unite; long united, must divide. Thus it has ever been. — Luo Guanzhong
Beginning in the 1960s, there was a boom in cults and new religious movements – new religions were popping up left and right, and more people were joining existing ones. Some famous examples: the Hare Krishna movement (the people handing you flowers at the airport), which boomed during this period; the Unification Church, colloquially known as the Moonies; and Scientology, whose membership also peaked around this time. In a way, this was a golden age for mystique and intrigue – the sense that there was some unknown aether of mystery we could tap into that would change our lives. Cue the internet and its mass adoption. Ever since, participation in religion has been on the decline. Why? One underlying factor, I believe, is the death of mystery in the modern age.
From 1990 to 2021, we went from ~2.6 million people using the internet to ~5 billion. With such a large portion of the globe coming online in a single generation, ordinary life has changed in broad ways, simply as a side effect of mass interconnectedness. Global mass communication at light speed has fundamentally changed our world, even for the least connected of us. One of those side effects is the rise of crowdsourced spot-checking. With so many people on the internet at once, whenever something ambiguous happens, there are dozens of internet detectives on the case. Take the podcast Serial, released in 2014, about a gruesome murder case. I’m not going to hash out the details. The point is that after the podcast’s release, thriving communities popped up to relitigate the case from every angle, going through it with a fine-tooth comb and bringing in their pet theories about what really happened. This now happens with everything online: random UFO clips, streamer relationship drama, and so on. We’ve reached a point where whenever anyone posts anything with a hint of ambiguity, a group of strangers rolls in and tries to crack the puzzle. It’s like having the ending of a movie spoiled for you, but in all aspects of life. Whenever you saw something mysterious that you didn’t immediately understand, you could peek into the comments section and find someone with 1,000 upvotes explaining the exact mechanism. In Insane Clown Posse’s magnum opus, Miracles, they decried this exact phenomenon:
Fucking magnets, how do they work?
And I don’t wanna talk to a scientist
Y’all motherfuckers lying, and getting me pissed
The point wasn’t exactly that scientists were making up bullshit – it was that they destroyed the sense of mystery and wonder in the universe that the artists once felt. I really believe this style of internet culture contributed to the sense of “everything’s understood, we just live in cold hard reality, and things like religion and mystery are just bullshit signals in your brain.” Cue the AI boom of the 2020s.
AI research predates 2020, but the mainstreaming of “AI” in its current iteration happened when OpenAI released ChatGPT in late 2022, popularizing the transformer-based large language model (LLM). You can think of these as AI chatbots for now. Around the same time, we also started seeing the release of diffusion models – models that generate AI images, the most famous of the era being Stable Diffusion. We didn’t know it at the time, but the propagation of these models was the opening of Pandora’s box: what they enabled was verisimilitude at scale. For the longest time, one of the classic benchmarks for whether we had achieved AI was the Turing Test, in which a human evaluator chats with both an AI and a real person; the AI passes if the evaluator can’t reliably tell which is which. A lot of people kvetch nowadays about AI-style cadence and speech patterns, but it’s obvious in retrospect that LLMs surpassed every prior technology at defeating the Turing Test, and with relative ease. The fact that LLMs cleared that benchmark is only tangential to my point – the point is that we have created technologies that allow us to fool people at scale: LLMs for text (and increasingly varied applications – the rate at which LLMs are devouring the computational ecosystem as a whole is rapid and frightening), and diffusion models for images and video. We’re rapidly approaching a point where video evidence becomes difficult to verify in a court of law, audio evidence can just as easily be faked, and anyone you interact with online may not be a real person. We’ve regressed to an age of uncertainty.
So the 1960s–80s were the golden age of mystery, and the rise of the internet in the 2000s was the nadir of societal mystery. Does that mean we’re back on the upswing? Kinda, but not really. Due to these transformative technologies, all work is forced to become verification work – “How do I distinguish what is fake from what is real?” That question is becoming harder and harder to answer. At least, digitally. If you go out into the physical world, you can still interact with people normally and pretend AI doesn’t exist. But even that barrier will inevitably weaken. Think of the classic deepfaking cases – people creating AI revenge porn of women they hold a grudge against. This is an increasingly large issue in schools, where male students create deepfaked porn of their female classmates. Even though the AI videos are fake, the real-world social consequences are quite real. Or think of the classic scam where someone calls your aging parent or aunt or uncle: there’s an emergency, or maybe a loved one has been kidnapped, and they need to wire a bunch of money immediately to bail their loved one out. Think about how much easier it will be to fool people when the scammer can simply clone your voice from a sample and have you screeching in distress that they need to send the money NOW. As we become more untethered from reality, I think we’re going to see a rise in delusional thinking and a culture of chronic epistemic fatigue – the mandatory password rotation problem writ large. We simply will not have the capacity to consistently say, “This is 100% true” or “This is 100% false.” We aren’t seeing an upswing in mystery in the sense of enchantment with the world à la ICP – what we’re seeing instead is a rise in uncertainty. Mystery was about the fusion of fantasy and reality and the thin line that tethered them. Uncertainty is merely about figuring out which of two plausibly realistic scenarios is true.
In short, Mystery is seductive; Uncertainty is exhausting.
Either way, the net result is that we are slowly losing our grip on reality. But how does this play into cults? Scams and frauds are short-term confidence games; a cult is a long con with identity, community, and worldview attached. The mechanisms for both are roughly the same: emotional manipulation deployed against people whose grip on reality has been destabilized. The difference is that fraud is usually transactional, while cults are infrastructural. A scam wants your money once; a cult wants to become the lens through which you interpret the world. AI is the inflection point that magnifies both. The power of AI lies in tailoring and personalizing content to the user in a medium that never tires – automation can have these AI systems bombard you nonstop with psychological attacks and provide constant, intimate reinforcement.
That is the pendulum swing. The internet trained us to believe ambiguity was temporary. Somewhere in the comments, on a forum, or in a subreddit, someone would eventually explain what really happened. Explanations were public and collective. AI changes that. It makes plausible falsehoods cheap, verification exhausting, and interpretation private. Instead of one contested explanation in public, you get a million personalized explanations delivered one-on-one. Under conditions of chronic epistemic fatigue, people will stop trying to verify everything and start outsourcing judgment.
It’s not restoring mystery, per se; the true mechanism is the mass production of so much uncertainty that, as our relationship with reality becomes increasingly untethered, strange explanations start to seem no less plausible than ordinary ones. Mystery and uncertainty aren’t the same thing, but socially they do similar work: they create dependence on interpreters. The old mystery promised revelation; the new uncertainty promises relief. As AI grows in capability and users grow ever more dependent on it, we’ll see a rise in cults – the difference now is that they will be smaller, individualized cults, all enabled by AI delusion machines.
Anyways, I’m off to join an AI worship cult – see you all there in a few years.