The Rebirth of Mystique and Intrigue in the Modern Age

The empire, long divided, must unite; long united, must divide. Thus it has ever been. — Romance of the Three Kingdoms

There was this period starting in the 1960s when there was a gold rush on cults and new religious movements – new religions were popping up left and right, and more people were getting into existing ones. Some famous examples: Hare Krishna (the whole thing with people giving you flowers at the airport), which boomed during this period; the Unification Church, colloquially known as the Moonies; and Scientology, whose membership also peaked around this time. I don’t know the historical reasons for why this happened, but one thing I want to point out is that during this time, people were more open to the idea of mystique and intrigue – there was some unknown idea they could tap into, and it would change their life. Cue the internet, and the mass adoption of internet usage by everyone. Ever since that period, participation in religion has been on the decline. Why? One underlying factor, I believe, is the death of mystery in the modern age.

From 1990 to 2021, we went from ~2.6 million people using the internet to ~5 billion. With such a large portion of the globe transitioning to the internet in a single generation, we’ve had pretty broad changes in our lives as a side effect. Global mass communication at light speed has fundamentally changed our worlds, even for the least connected of us. One of those side effects is the rise of crowdsourced spot-checking. With so many people on the internet at once, whenever anything happens – any event with ambiguity – there are dozens of internet detectives on the case. For example, the podcast Serial, released in 2014, covered a gruesome murder case. I’m not going to hash out the details. The point is that after the release of the podcast, thriving communities popped up to relitigate the case from every angle and for participants to announce their own positions on what really happened. We saw something similar with the documentary Making a Murderer, released on Netflix in 2015: the same rise of communities that combed through the case with a fine-tooth comb, bringing in their pet theories about what really happened and walking through every aspect of it. This situation occurs with everything online now – random UFO clips, streamer relationship drama, etc. Basically, we got to a point where whenever anyone posted anything with a hint of ambiguity in it, a group of people would roll in and try to crack the puzzle. It’s like how people try to spoil the ending of a movie for you, but in all aspects of information. Whenever you’d see something mysterious that you didn’t immediately understand, you’d peek in the comments section and see someone with 1000 upvotes explaining the exact mechanism. In ICP’s magnum opus, Miracles, they decried this exact phenomenon:

Fucking magnets, how do they work?
And I don’t wanna talk to a scientist
Y’all motherfuckers lying, and getting me pissed

The point wasn’t that scientists were making up bullshit, exactly – it was their destruction of the sense of mystery and wonder in the universe that the authors once felt. I really believe this style of internet culture contributed to the feeling that “everything’s understood, we just live in cold hard reality, and things like religion and mystery are just bullshit signals in your brain.” Cue the AI boom of the 2020s.

AI research obviously predates 2020, but I think it’s pretty safe to say the mainstreaming of “AI” in its current iteration happened when OpenAI released ChatGPT in late 2022. They of course popularized the transformer-based large language model – you can think of these as AI chatbots for now. Around this time, we also started seeing the release of diffusion models – models that generate AI images. I believe the most famous of this era was Stable Diffusion. We didn’t know it at the time, but the propagation of these AI models was the opening of Pandora’s box. What these AI models enabled was verisimilitude at scale. For the longest time, one of the classic benchmarks for whether we had achieved AI was the Turing Test – a test where a human evaluator chats with both an AI and a real person, and passing means the evaluator can’t distinguish which is the AI, or even picks the AI as the “human”. A lot of people kvetch nowadays about AI-style cadence and speech patterns, but it’s obvious in retrospect that LLMs surpassed any other technology we’ve created at defeating the Turing Test, and with ease. The fact that LLMs cleared that benchmark is only tangential to my point – the point is that we have created technologies that allow us to trick people at scale: LLMs for text (and increasingly varied applications – the rate at which LLMs are devouring the computational ecosystem as a whole is rapid and frightening), and images and video via diffusion models. We’re rapidly approaching a point where video evidence becomes difficult to verify in a court of law, audio evidence can just as easily be faked, and anyone you interact with online may not be a real person. We’ve regressed to the age of mystery.

Due to these transformative technologies, all work is forced into verification work – “How do I distinguish what is fake from reality?” That question is becoming harder and harder to answer. At least, digitally. If you go out in the physical world, you can still interact with people normally and pretend AI doesn’t exist. But even that barrier will inevitably weaken. Think of the classic deepfake cases – people creating AI revenge porn of women they have a grudge against. I read this is increasingly a large issue in schools nowadays, where male students create deepfaked porn of their female classmates. Even though the AI videos are fake, the real-world social consequences are quite real. Or think of the classic scam where someone calls your aging parent or aunt or uncle: there’s an emergency, or maybe a loved one was kidnapped, and they need to wire a bunch of money immediately to bail their loved one out. Think about how much easier it will be to fool people if the scammer can simply clone your voice from a sample and have you screeching in distress that they need to send the money NOW. As we become more untethered from reality, I think we’re going to see a rise in delusional thinking and an acceptance of mystery again. We simply will not have the capacity to consistently state, “This is 100% true/false”. As AI grows in its capabilities and users become increasingly dependent on it, we’ll see a rise in crazy cults – the difference now is that they will be smaller, individualized cults, all enabled by AI delusion machines. Anyways, I’m off to join an AI worship cult – see you there in a couple of years.
