The Divergence: Three Futures of AI Usage

I’ve been watching people use AI for about two years now, and three patterns keep showing up. Not in the tools themselves – the tools are basically the same for everyone at this point, free or close to it – but in what happens to the person using them. Three different groups, three different trajectories, and I don’t think most people realize which bucket they’ve fallen into.

Riders
You know the type. They had more ideas than hours long before ChatGPT existed. Senior engineers with a mental backlog of projects they’d ship if they could clone themselves. Writers sitting on six half-finished essays because the research phase alone would eat a weekend. People whose bottleneck was never “what should I do” but “I physically cannot do all of it.”
For them, AI is just throughput. They’re not asking it to think. They already thought. They need it to execute while they move on to the next thing. And when the model gets something wrong, which it does, constantly, they catch it immediately, because they already knew the answer. They were just tired of typing it out.
A weird side effect is that their output volume starts looking suspicious to people who can’t evaluate the quality. Some middle manager sees someone shipping three projects simultaneously and assumes it must be AI slop, because that’s not a human-shaped amount of work. The best users of the tool get mistaken for people the tool replaced. Not much you can do about that except keep shipping.
I will say, though, that this advantage has an expiration date. The whole edge depends on AI still needing a competent human to steer. Once it doesn’t – and that day is coming, just unclear when – being the fast, competent person with good taste stops mattering as much. The ceiling for this group is high right now, but it’s not permanent. Probably.

Outsourcers
The second group. I want to be careful here because the temptation is to look down on these people, but what they’re doing makes sense if you don’t look too hard at it.
Thinking is slow. It’s uncomfortable. Drafting a difficult email to your boss about a problem on the team requires sitting with that discomfort long enough to figure out what you actually want to say and how to say it without torching a relationship. That sucks. It takes time and emotional energy. Or you could paste the situation into Claude and get back a polished, diplomatic response in eight seconds. Of course people do that. Why wouldn’t they?
The problem is downstream and it’s slow enough that you don’t notice. You stop being able to tell when the AI’s response is actually good versus when it just sounds good. Those are different things, and the gap between them is where all the damage happens. I talked to a friend recently who’d been using AI to handle basically every written communication at work for months. Emails, Slack messages, documents, the lot. He was genuinely confused when a project went sideways because of miscommunication in a thread he’d “handled.” He read the AI’s output, it sounded fine, he sent it. But it didn’t actually say what needed to be said. He couldn’t tell because he’d stopped practicing the skill of telling.
Someone on Twitter put it well — it’s becoming a kind of addiction. You get that little hit of satisfaction from having dealt with something. Email sent, done, next. But you didn’t actually deal with it. You delegated the dealing to a machine that doesn’t know your boss, doesn’t understand the team dynamics, and doesn’t care if the advice is subtly wrong. Over time you start finding the effort of real thinking genuinely unpleasant. Same mechanism as any other dependency. Different target.
And honestly? This group is the loudest about how transformative AI is. Which tracks. It IS transformative for them. It lets them perform above their actual level. If you benefit most from an illusion, you’re going to be its biggest fan.

Synthesizers
Okay so this is the one I find most interesting and the one nobody talks about enough.
Magnus Carlsen grew up training against chess engines. Kasparov didn’t have that — he came up playing humans, then famously lost to Deep Blue, and that was kind of the story. But Carlsen’s generation had engines from the start. They didn’t become engines. What happened was weirder. They became the strongest human players who have ever lived, because their intuition got calibrated against something superhuman. Carlsen sees patterns on the board that Kasparov couldn’t, not because he’s smarter, but because he spent his formative years getting his ass kicked by machines that could see further than any human.
The same thing is happening in Go. The post-AlphaGo generation of players is disgustingly good. They play moves that would’ve looked like obvious mistakes ten years ago but turn out to be deeply correct, because they’ve internalized positional ideas that only emerged from superhuman play.
That’s the third use of AI and it’s fundamentally different from the other two. You’re not using it to go faster. You’re not outsourcing your cognition. You’re using it to upgrade your cognition permanently. Sparring partner, not employee.
I do this with writing. I’ll ask Claude for fifteen different angles on something I’m trying to articulate, reject every single one, and in the process of rejecting them figure out what I actually meant. None of those fifteen angles end up in the piece. But the piece is better because I stress-tested my instincts against a machine that could generate alternatives faster than I could think of them. The output is mine. But the me that produced it was sharpened by the interaction.
Here’s what separates this from the Rider: the Rider was already good, and AI made them faster. The Synthesizer is becoming good in ways that weren’t available before. And unlike the Rider’s speed advantage, this doesn’t get commoditized. When every chess player has access to Stockfish, Carlsen doesn’t lose his edge. The engine already changed him. That’s done. It’s internalized. Can’t be taken back.

The Tradeoff Nobody’s Making Consciously
Here’s the thing about calculators. Everyone brings up calculators when they talk about cognitive offloading. “We stopped doing mental math and it was fine!” Sure. It was fine. Multiplying 576 by 823 in your head is a party trick, not a life skill. Losing that ability cost nothing because the tool is always in your pocket and the situations where you’d need raw arithmetic without a device nearby basically don’t exist anymore.
But that’s not what people are offloading to AI. They’re offloading judgment. How to have a hard conversation. How to weigh a career decision. How to think through whether a relationship is working. This isn’t mental math. This is the core stuff. And it’s getting handed off with the same absent-minded shrug as every other technology trade we’ve made, except this time the skill being lost actually matters for your quality of life.
Nobody opened TikTok in 2019 thinking “I’m about to fry my attention span.” They were bored, they wanted a distraction, the algorithm was good at its job. Three years later half the internet can’t read anything longer than a paragraph without fidgeting. The damage crept in. By the time it was obvious, the capacity was already gone.
AI is doing that same thing to a higher faculty. Not attention. Judgment. The ability to sit with a difficult problem long enough that your own thinking actually kicks in, instead of reaching for the tool the second it gets uncomfortable.
What separates the three groups isn’t intelligence and it isn’t access. It’s whether they realize a tradeoff is happening. The Rider knows exactly what they’re handing off and what they’re keeping. The Synthesizer is deliberately choosing what stays human and using the machine to strengthen those things. The Outsourcer isn’t making a choice at all. They drifted into it. They’d call it being productive.

Why This Matters Now
Most technologies are convergent. They bring the floor up. Washing machines meant everyone had clean clothes, not just the rich. Calculators meant everyone could do their taxes. Search engines meant everyone had access to the same information, more or less.
AI might go the other direction. Not convergent but divergent. The ceiling gets higher for people who use it well and the floor stays where it was, or sinks. And the variable isn’t the tool, because the tool is the same for everyone. The variable is what happens on the human side of the screen.
Right now people are sorting themselves into these three groups without realizing that’s what’s happening. The Outsourcer doesn’t wake up one morning and decide to let their judgment atrophy. It happens one delegated email at a time, one pasted-in conflict at a time, one “eh, close enough” at a time. And by the time they might want to course-correct, the switching cost is brutal. You can’t synthesize if you’ve lost the domain knowledge that makes synthesis possible, and domain knowledge is exactly what goes first when you stop engaging with the hard parts of your work.
There’s a window where having this conversation matters. Where someone reading this might actually check themselves and ask what they’re giving up. That window won’t stay open.

To make a painfully unfunny rhetorical point, I pasted my thesis and talking points into one of these models – Claude Opus 4.6 – and this article was the result. Note that it sounds nothing like me. The points are well fleshed out, though; it covers the gist of what I wanted to say.
