I built something brilliant last year. A system for crawling and parsing social posts from TikTok and Instagram, fully agentic, with tool calling, agent oversight, and multiple agents each encapsulating its own task. When something went wrong, it could diagnose the problem and try to fix it. When a new edge case appeared, it could adapt. It was genuinely impressive.
It was also completely needless.
The thing ran beautifully. But when I stepped back and looked at what it was actually doing, I realised I'd built a cathedral where a shed would have done. Yes, if something went wrong the agents could try to sort it. But so could a try/catch block. So could a cron job that pinged me when something failed. So could I, frankly, in about ten minutes.
I'd fallen for the thing that's easy to fall for right now: agentic is cool, therefore agentic is right.
When you spend your days building with AI, you start seeing agentic patterns everywhere. And the tools have got genuinely good. Multi-agent orchestration, tool calling, self-correction, memory, planning. You can build systems that feel almost alive in how they respond to the world.
The problem is that capability and necessity are different things. Just because you can give a system the ability to reason about its own failures doesn't mean the failures are complex enough to need reasoning about. Just because you can have agents hand off to each other doesn't mean the task has natural handoff points.
Sometimes the answer is a script that runs every hour and emails you if it breaks.
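For a sense of scale, here is roughly what that alternative looks like. This is only a sketch: the crawl_posts() job, the SMTP host, and the addresses are stand-ins, not anything from the original system. It just runs the job, and if anything throws, it emails a human the traceback.

```python
import smtplib
import traceback
from email.message import EmailMessage

# Stand-ins: swap these for your real crawler and mail settings.
SMTP_HOST = "localhost"
ALERT_TO = "me@example.com"
ALERT_FROM = "crawler@example.com"


def crawl_posts():
    """Placeholder for the actual crawl-and-parse job."""
    raise NotImplementedError("plug the real crawler in here")


def send_alert(body: str) -> None:
    # Plain email on failure: no diagnosis, no self-repair, just a ping.
    msg = EmailMessage()
    msg["Subject"] = "Crawler failed"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


if __name__ == "__main__":
    try:
        crawl_posts()
    except Exception:
        send_alert(traceback.format_exc())
        raise
```

Schedule it with an hourly crontab entry, something like `0 * * * * /usr/bin/python3 run_crawler.py`, and you have most of what the agentic failure handling was actually doing.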
After that project, I started asking different questions before reaching for the agentic toolkit:
Is the problem actually unpredictable? Agents shine when they need to respond to genuinely novel situations. If the failure modes are known and finite, a switch statement might be more honest (there's a small sketch of what I mean after these questions).
Does it need judgement or just execution? Tool calling is powerful when the system needs to decide which tool to use based on context. If it's always the same sequence, you've just built a very expensive pipeline.
What's the cost of getting it wrong? Agentic systems can recover from mistakes, which is valuable when mistakes are likely and recovery is complex. If mistakes are rare and recovery is simple, the overhead isn't worth it.
Am I building this because it's interesting? This one's harder to admit. Sometimes we reach for the sophisticated solution because we want to build it, not because the problem demands it. That's fine for learning. It's expensive for clients.
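To make that first question concrete: when the failure modes are known and finite, "handling failure" can be a lookup rather than a reasoning step. The sketch below is hypothetical; the exception types and responses are invented for illustration, not taken from the original system.

```python
# A hypothetical "switch statement" for known, finite failure modes.
# Requires Python 3.10+ for structural pattern matching.

def handle_failure(error: Exception) -> str:
    match error:
        case TimeoutError():
            return "retry"            # transient network blip: run it again
        case PermissionError():
            return "refresh_auth"     # expired token: re-authenticate, then retry
        case ValueError():
            return "skip_and_log"     # malformed post: record it and move on
        case _:
            return "alert_human"      # anything genuinely novel goes to a person
```

If you find yourself wanting that last case to reason about the error rather than page someone, that's roughly the point where an agent starts to earn its keep.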
None of this means agentic AI is useless. It isn't. There are problems where the ability to reason, adapt, and recover is exactly what's needed. Complex workflows with genuine decision points. Systems that need to operate in messy, unpredictable environments. Tasks where the edge cases outnumber the happy path.
The point is that these problems are rarer than the current hype suggests. Most business challenges don't need a system that can think. They need a system that works.
We've made this idea central to how we work. Lagom. Just the right amount. Not the most impressive solution, not the most technically interesting, but the one that actually fits the problem.
Sometimes that means building something agentic and sophisticated. Sometimes it means talking you out of AI entirely and suggesting you fix your spreadsheet first.
I'd rather have that conversation honestly at the start than build you something brilliant that you didn't need.