Second Order Effects of the AI Boom, pt 2

This is a companion piece to a previous post I wrote. It covers a similar topic but from a different perspective. Also, I actually wrote it this time instead of having an AI ghostwrite slop.

The ascent of AI applications has proceeded at such a rapid clip that most people haven't had time to absorb and integrate AI into their lives. Instead, we've all been bowled over by a deluge of AI output as everyone from individuals to corporations incorporates AI into their workflows. We're watching many of the second order effects play out live, right now. Some are obvious, some are more subtle, but at this point it's hard to claim AI has no impact on your life. One increasingly common second order effect is that the burden of identifying AI output now falls on everyone.

We're seeing people larp as artists, taking commissions while actually having AI diffusion models produce the artwork for them. We also see these models generating media wholesale: video for ads, AI-generated music, marketing copy, ghostwritten prose, and so on. The mainstream release of ChatGPT marks the moment in history when we went from authentic human content to all future content being of questionable provenance, much like how the detonation of the first atomic bomb marked all steel produced afterward with low-level radiation. Is that necessarily bad? No, but a lot of people don't like AI crap being foisted upon them, and many AI users are deliberately trying to pass their work off as human-made – it's the deception that's the problem.

Open source is where this dynamic gets truly perverse: pull requests from new contributors are becoming increasingly useless. Before the AI coding boom, a PR served as proof of work to some extent. A maintainer could look at a submission and gauge how much effort the contributor had put in. Now AI has made producing code cheap. Not good code, necessarily – that still requires some effort. But contributors can churn out code that may or may not work as intended, or that may even hide backdoors. The onus falls entirely on the maintainer to review each contribution, even as AI makes it ever easier to spam open source projects with them.

From a maintainer's perspective, what is the purpose of accepting contributions at all anymore? Previously, it was a way to bring new developers into the project's maintainer pool – people who could eventually join the team and understand the project at a deep level. But if you're getting dozens to hundreds of AI-generated contributions, the juice is increasingly not worth the squeeze of shepherding in new contributors, since it's becoming harder and harder to distinguish good faith contributions from garbage. And if you're going to put in the work to comb through an AI contribution and verify it's sound (which, note, is far more work than generating it in the first place), you might as well generate the code yourself with an AI agent, so that at the very least you know it doesn't deliberately harbor the vulnerabilities or backdoors an outside contributor might plant.

The irony of these effects is that the new AI environment places an ever greater mental burden on us, at the same time that one of the greatest dangers laid at AI's feet is that it makes us all dumber by outsourcing our thinking. Note that the cognitive load of vetting AI output is essentially the wary suspicion people in low trust societies carry as a prerequisite to participating in their environments: a constant, tense guard they're forced to keep up at all times in order to not get taken advantage of. The worst part is that it makes us enforcers of the problem as well. When we internalize the rules of a low trust society and act accordingly, we blame those who don't accept or understand those rules for not playing along. Fell victim to a scam? "Idiot should have known better." As a result, these AIs are simultaneously making us think less and think more, in increasingly bad ways. Really the worst of both worlds. Such is life.
