We've been seeing a concerning trend across the industry: good engineers are burning out. Not because of layoffs. Not because AI is replacing them. Because they're drowning in other people's AI-generated garbage code.
We keep hearing the same story. Someone on the team uses an AI coding assistant to recreate Apollo from scratch. The Apollo GraphQL client. A battle-tested library that millions of developers depend on, rebuilt from nothing because the AI didn't know it existed and the developer didn't bother to check. Another engineer spends a full day untangling auto-generated spaghetti that a five-minute code review would have caught.
The frustration is real. But the underlying problem is not AI itself. It's a supervision problem. And it's solvable.
The Slop Is Real (and Measurable)
We should be honest about what's happening in codebases across the industry. "AI slop" is not just internet slang anymore. It's a measurable phenomenon.
CodeRabbit's analysis of 470 real-world GitHub pull requests found that AI-generated code produces 1.7x more issues than human-written code. GitClear's study of 211 million changed lines found code cloning grew 4x, duplicated blocks increased 8x, and refactoring dropped from 25% of code changes to less than 10%. The Stack Overflow 2025 Developer Survey showed developer trust in AI accuracy fell from 40% to 29%, with two-thirds saying they spend more time fixing "almost right" AI code than they save.
These numbers are real. But they describe a specific failure mode: unsupervised AI in the hands of people who have not adapted their workflow. That is a solvable problem. We know because we solved it.
We Use Agentic Coding Every Day. We're Faster.
At Swept AI, we build software through agentic coding tools every single day, and we're faster because of it.
The tools themselves aren't the problem. The METR study from July 2025 found that experienced developers believed AI made them 20% faster, when the controlled trial showed it actually made them 19% slower. That perception gap is how AI slop accumulates: teams pile it up without noticing because the speed feels real even when the gains are not.
We got past that trap by treating AI coding the way we treat any AI deployment: with supervision, clear baselines, and engineering judgment at every step. We know what good output looks like. We catch drift early. We do not ship code we have not verified, whether it came from a human or an LLM.
We've moved from babysitting AI to getting real leverage from it. Our engineers spend their time on architecture, design, and the hard problems that actually move the product forward, while AI handles the repetitive work under guardrails we've established. That is what a well-supervised AI workflow looks like.
Engineers Are Conductors Now (And That's a Good Thing)
The role of the software engineer has evolved, and honestly, it's a more interesting role than it was before. Engineers are conductors now.
A conductor does not play every instrument in the orchestra. They understand what each instrument should sound like, when something drifts off key, and how to bring the ensemble into harmony. The conductor's value is judgment, taste, and deep knowledge of the craft.
Code quality still matters. Architecture still matters. Design patterns, dependency management, security posture: all of it still matters. But now, instead of spending 70% of your time writing boilerplate, you spend it on the decisions that shape the system. When 41% of all code is AI-generated and 48% of that code contains security vulnerabilities, the ability to evaluate and direct output becomes the most valuable skill in the room.
Knowing what Apollo is and why you should not rebuild it from scratch is not trivia. It is the contextual knowledge that separates an engineer from a prompt operator. The best engineers we work with treat AI as a force multiplier for their expertise. They are building better software, faster, because they have learned the tools and put systems in place to catch mistakes before they compound.
Stop Babysitting. Start Supervising.
Here is the shift that changes everything: moving from reactive babysitting to proactive supervision.
Babysitting looks like this: generate code, review every line, fix the bugs, clean up the mess, repeat. It's exhausting. It's what's burning good engineers out across the industry. And it does not scale.
Supervision looks like this: establish guardrails before deployment, define quality baselines, build systems that detect drift, catch regressions, and surface problems before they compound. Then let the AI work within the boundaries you've set.
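To make "detect drift" concrete, here is a minimal sketch in Python. The metrics (duplication ratio, review findings per PR, refactor share), the baseline values, and the tolerances are all illustrative assumptions, not a prescribed setup; the point is that the baseline is recorded before the AI ships code, and the alert fires automatically instead of waiting for a human to notice.

```python
# Minimal drift check: compare current code-health metrics against a
# recorded baseline and alert when any metric moves past its tolerance.
# Metric names, baseline values, and tolerances here are illustrative.

BASELINE = {
    "duplication_ratio": 0.03,  # fraction of changed lines in duplicated blocks
    "defects_per_pr": 0.8,      # review findings per pull request
    "refactor_share": 0.20,     # share of changed lines that are refactors
}

# Positive tolerance: alert when the metric rises this far above baseline.
# Negative tolerance: alert when the metric falls this far below it.
TOLERANCE = {"duplication_ratio": 0.02, "defects_per_pr": 0.5, "refactor_share": -0.08}

def drift_alerts(current: dict) -> list[str]:
    """Return a human-readable alert for every metric outside tolerance."""
    alerts = []
    for metric, base in BASELINE.items():
        delta = current[metric] - base
        tol = TOLERANCE[metric]
        if (tol >= 0 and delta > tol) or (tol < 0 and delta < tol):
            alerts.append(f"{metric}: baseline {base}, now {current[metric]} ({delta:+.2f})")
    return alerts

if __name__ == "__main__":
    # In CI this would come from a measurement step; hardcoded for the sketch.
    current = {"duplication_ratio": 0.09, "defects_per_pr": 1.9, "refactor_share": 0.07}
    for alert in drift_alerts(current):
        print("DRIFT:", alert)
```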
This is what we do at Swept, and it's what we build for our customers. The organizations getting this right are not reviewing AI output line by line. They are building supervision infrastructure that scales. Automated quality gates catch duplicated logic, unnecessary dependencies, and security vulnerabilities before they hit a pull request. Continuous monitoring detects when behavior drifts from established baselines. The AI operates within boundaries you define, not boundaries it invents.
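Here is an equally small sketch of what one of those pre-merge gates can look like, again with loud assumptions: it reads a unified diff on stdin (say, the output of `git diff origin/main`), flags any five-line run of added code that appears more than once, and surfaces additions to dependency manifests for human review. A real gate would lean on dedicated clone detectors, security scanners, and dependency review plus your own baselines; this just shows the shape.

```python
#!/usr/bin/env python3
"""Sketch of a pre-merge quality gate over a unified diff.
Not a production tool: real gates build on clone detectors,
security scanners, and dependency review, plus project baselines."""
import sys
from collections import Counter

WINDOW = 5  # flag any 5-line run of added code that appears more than once

def added_lines_by_file(diff_text: str) -> dict:
    """Parse `git diff` output into {filename: [non-blank added lines]}."""
    files, current = {}, None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current = line[6:]
            files[current] = []
        elif current and line.startswith("+") and not line.startswith("+++"):
            stripped = line[1:].strip()
            if stripped:
                files[current].append(stripped)
    return files

def duplicated_blocks(files: dict) -> list[str]:
    """Count every WINDOW-line run of added code; runs seen twice are clones."""
    seen = Counter()
    for lines in files.values():
        for i in range(len(lines) - WINDOW + 1):
            seen["\n".join(lines[i:i + WINDOW])] += 1
    return [block for block, count in seen.items() if count > 1]

def new_dependencies(files: dict) -> dict:
    """Surface added lines in dependency manifests for human review."""
    manifests = ("requirements.txt", "package.json", "go.mod", "Cargo.toml")
    return {name: lines for name, lines in files.items()
            if name.endswith(manifests) and lines}

def main() -> None:
    files = added_lines_by_file(sys.stdin.read())
    clones = duplicated_blocks(files)
    deps = new_dependencies(files)
    for block in clones:
        print(f"DUPLICATED BLOCK (added more than once):\n{block}\n")
    for name, lines in deps.items():
        print(f"NEW DEPENDENCIES in {name}: {lines} -- confirm these are intentional")
    sys.exit(1 if clones or deps else 0)

if __name__ == "__main__":
    main()
```

Wired into CI as `git diff origin/main...HEAD | python quality_gate.py` (the filename is just this sketch's), a nonzero exit holds the merge until a human looks. That is the whole idea: the judgment call still happens, but only when the machine finds something worth judging.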
The result: faster shipping, fewer defects, and engineers who spend their energy on work that matters instead of cleanup duty.
This Goes Beyond Code
Every organization deploying AI faces this same choice. Customer service leaders, executives, marketing teams: the principle is universal.
The Google DORA 2024 report found that a 25% increase in AI adoption correlated with a 7.2% decrease in delivery stability. But that is the unsupervised baseline. Teams that build supervision into their AI workflows from day one, defining what good looks like before they deploy and monitoring continuously, do not see the same regressions.
The question is not whether to use AI. That ship has sailed. The question is whether you have the infrastructure to use it well.
The Opportunity
Talented engineers should not be driven out of the industry cleaning up after AI tools wielded without judgment. No one should.
But the answer is not to abandon AI. It's to get better at it. The teams that learn the tools, build real supervision, and treat AI as an engineering discipline will ship faster, build better products, and keep their best people. We see it every day at Swept.
Slop is a symptom of a naive approach to using AI. Supervision is the cure. And if your team is drowning instead of thriving, the gap is closable.
