What companies get wrong when they replace people with AI

You cut costs. You replaced a team — or a supplier — with AI. The numbers looked better for a quarter.
And now something’s off. Client feedback has a different tone. Work that used to land first time keeps coming back with questions. There’s a gap you can feel but can’t name.
I’ve been watching this play out across agencies and digital businesses since mid-2024. The pattern is the same almost every time: companies replace expertise when they should’ve replaced the gruntwork around the expertise.
The gruntwork is the repeatable stuff that eats hours but doesn’t need judgement. Writing the first draft of a status report. Pulling data into a template. Formatting deliverables. Chasing the same update from three people. That’s the work AI is genuinely good at — the stuff that kept your experts from doing the thinking you actually hired them for. That distinction is where most of the damage happens.
The “it looks fine” problem
Here’s what makes this so hard to catch. You replaced a specialist team or a supplier because you wanted to save the line item. Fair enough. The AI output came back and it looked… plausible. Maybe even good.
But you weren’t the expert. That’s why you hired one in the first place.
You don’t know what’s missing because you never had to. The specialist knew which recommendations actually moved the needle for your type of business. They knew what to flag before it turned into a problem. They carried years of context about your account, your clients, your market.
The AI doesn’t have that. It gives you something that reads like a plan. But it’s pattern-matched from the internet, not built from experience with your specific situation. And the gap between those two things is enormous — you just can’t see it until a client does.
I watched one agency cut a specialist supplier last year because the AI output looked comparable. Seemed like a clean saving. It was months before a client flagged that the work had gone flat — same structure, same format, but none of the nuance that used to make it land. By that point the relationship with the old supplier had gone cold, and rebuilding that capability in-house took longer and cost more than the original contract.
Your client spots the drop before you do. By then you’re on the back foot.
The decision you can’t undo
I’ve seen agency founders announce the switch publicly. Posted about the efficiency gains. Told the team. Maybe even let people go.
Now they’re sitting with work that’s not landing the same way, and reversing the decision feels like admitting they got it wrong. So they double down. Spend more time re-briefing the tools, fixing output, managing clients who are asking harder questions.
The cost of that specialist team you cut? It didn’t disappear. It just spread across everyone’s calendar in ways nobody’s tracking. Hours spent checking AI output. Hours re-doing work that a specialist would’ve got right first time. Hours on client calls explaining things that never needed explaining before.
The saving looks real in the spreadsheet. The cost is just harder to see.
Nobody to call when it goes wrong
There’s something else people don’t talk about. When a specialist team or supplier got something wrong, you had someone to ring. A relationship. Someone who knew the account, understood the history, and could fix it with context.
Now when quality slips, there’s no one on the other end. You’re troubleshooting a tool. Rewriting prompts. Trying to fix something without fully understanding what went wrong or why.
That loss of accountability is disorienting — and most people didn’t realise they were relying on it until it was gone.
I watched the other side of this in 2024
Most of the small businesses I was working with had clients leave them. Not because the work was bad — because those clients decided AI could replace the service. The SEO agency. The content team. The ops support they’d had for years.
A lot of those businesses aren’t trading the same way anymore.
And the clients who made those cuts have been through some pain. The gap between “AI output” and “team who’ve done this for years” turned out to be wider than the spreadsheet suggested. They saved money. They also lost the thing keeping quality consistent — and their own clients started to notice.
The exhaustion nobody admits to
There’s a pressure sitting underneath all of this that I keep seeing in founders and directors. Everyone looks to you for the answer on AI. Your board, your team, your clients. You’re expected to have a position.
And most of you don’t. Not really. You’re working it out as you go, same as everyone else — but you can’t say that out loud. So you perform certainty. You make decisions with conviction you don’t fully feel. And you carry the weight of that gap between what you project and what you actually know.
That’s exhausting. And it leads to bad calls — like replacing expertise with a tool because it felt like the decisive, modern thing to do.
The junior question nobody’s asking
If you stop hiring juniors today, where do your senior people come from in three to five years? Not just in your company — across the whole industry. Juniors ask the stupid questions that keep seniors sharp. They carry energy that burned-out teams desperately need. And the craft knowledge that used to get passed down through pairing and code reviews doesn’t live in a wiki — it lives in the habit of teaching it. Cut that pipeline now and you’re not saving a headcount. You’re hollowing out the talent supply you’ll depend on later. I’ve written a longer piece on what this means for the industry →
The combination that actually works
The flip side: someone who understands how these tools actually work — and who’s happy to experiment, break things, and figure out what’s possible — is genuinely valuable.
The analogy I keep coming back to is: fuck around and find out.

Not carelessly. But there’s real value in having someone whose job is to keep testing. Not reading about what AI can do. Actually building things, breaking things, wiring things together.
Here’s where most companies go wrong though. They hire a young AI experimenter type (or a junior from their team) — someone who’s good with tools, loves tinkering, can go and play. The problem is they don’t know what to build. They’ll spend weeks automating something that wasn’t the real bottleneck. They can operate the tools, but they don’t have the context to know where the real friction sits.
The people who get results with AI are the ones who already know the domain. They’ve done the work. They understand where things break, what causes drag, which processes eat hours for no good reason. AI is much better at doing the thing when you know what you want it to do.
I’ve seen this in delivery and ops — the space I work in. The repeated handoffs between teams that nobody owns. The status updates that eat Monday mornings. The reporting that takes three hours and changes nothing. I know those problems because I’ve spent years sitting inside agencies and scaling teams, fixing them by hand. Now I build systems with AI that handle the gruntwork. Not experiments — things I use every day and that the teams I work with rely on.
Domain expertise plus AI capability is the combination that works. Pure tinkerers don’t have the context. Pure domain experts who ignore AI are getting outpaced. The people who have both — whether you hire them, develop them internally, or bring them in on a project basis — are the ones worth finding.
That gap you felt at the top of this post — the one where something’s not right but you can’t quite name it — that’s the sound of expertise leaving the building. The companies that get this right won’t be the ones who replaced the most people. They’ll be the ones who figured out what should never have been replaced at all.
April 2026 experiment — £5k
New financial year. If you’ve been wanting to figure out what AI can actually do for your business — but haven’t had the time or the right person to test it — here’s an option.
For £5k, I’ll spend the rest of April solving things for you. Building systems, automations, workflows. Whatever’s causing drag in your delivery or ops.
I know how these tools work. I also know how operations and delivery work in agencies and digital businesses, so I know where the friction usually sits.
If you want a practical experiment instead of another strategy deck, let’s talk.