Your AI Programme Has a People Problem

The 70:30 Rule

I was on Snowflake's AI + Data Predictions 2026 webinar recently.

I’d imagine the audience was a sophisticated one, mainly enterprise-level people. It was the kind of event where a fair few of those attending have already heard every AI pitch going.

Yet the first ten minutes were spent defining what an agent actually is.

That's not a criticism. It's something of a signal for me, though. If that kind of audience needs the definition, then so, most likely, do the boards they work for.

There’s an important distinction. An AI copilot or assistant keeps the human in the seat. You ask, it acts for you. You're still the one driving. An AI agent is different.

Ryan Watson, one of the panellists, put it simply: end-to-end automation, hands off keyboard, the whole thing handled without you directing every step. That's the critical difference, he said, because we tend to conflate AI with agentic. They are two very different things.

Most AI programmes right now are being built as if the two were the same thing, or at least with that confused thinking. That's probably why they're not making the expected headway.

The number that keeps coming up

Organisations investing in transformation generally cite the same ratio: 70% of the effort goes on people, processes, and ways of working. Only 30% is on the technology.

I've heard this from digital transformation leaders across industries a fair amount over the years.

It was true of social media adoption when I was helping Sky figure that out. It was true of data analytics more recently. It's now true of AI. And yet, the investment and focus keep going into the 30%.

The tool, the model or the platform. The shiny SaaS. The demo that looks brilliant in the pitch and doesn't survive first contact with real workflows or teams. The 70% gets noted, nodded at, and quietly passed over. Usually because it's harder to sell in, slower to show a return, or doesn't come with a flashy UI or pitch.

In financial services and professional services, the 70% problem is arguably worse than average.

Your data is fragmented across legacy systems that were never designed to talk to each other. Your teams are structured by platform or function, not by the outcomes that AI agents need to deliver against. Your compliance requirements mean governance can't be an afterthought bolted on later.

And underneath all of that, there's a cultural issue that no one wants to put in the project plan.

AI adoption requires people to change how they work. Some of them don't want to. Some of them are right to be cautious. That's not resistance; that's judgement, or a fear of change and of the future. Ignoring the 70% doesn't make those problems go away. It just means you discover them after you've committed budgets.

The governance problem is now a leadership problem

Eddie Drake, Industry Principal at Snowflake, made this point better than I've heard it made in a while.

In an agentic world, brands are judged by the behaviour of their autonomous systems, not their marketing copy. And when something goes wrong, "AI made me do it" is not going to be a defence. His assertion was quite direct: "Governance is an enterprise problem. Not a marketing problem. Not an IT problem."

That's exactly right.

When an AI agent makes a decision, takes an action, or has a conversation on behalf of your organisation, the accountability sits with you. Not the model. Not the vendor. You.

The organisations building capability right now are treating governance as part of the design process; not as a post-launch compliance exercise. That's not slowing them down. It's the thing that will let them move faster in six months, when the organisations that ignored it are rebuilding their approach from scratch.

2026 is a foundations year

Ryan Watson's closing thought was worth noting.

Companies will self-sort into two camps: those working out how to thrive, and those just trying to hang on. 2026, he said, would be the “table-setting” year.

Those building the right foundations now will have options in 2027. Those waiting for certainty will be making reactive decisions under pressure, not on their terms.

The organisations that built proper data infrastructure in the early 2010s had a material advantage when data-driven marketing became the standard. The ones that didn't spent the following years in expensive catch-up programmes with consultancies that had perhaps recommended the selfsame infrastructure five years earlier.

The window to build foundations and skill up teams is open now. Not because the technology demands urgency, but because the competitive gap opens slowly and then closes very suddenly.

What to do

If you're in financial services or professional services and your AI initiative has been stuck in a committee for six months, or treated as a job just for IT, here's the honest version of what's happening.

It's not an AI problem. It's a workflow problem, a data problem, a capability problem, and a governance problem. The AI is usually the easiest part. The committee is not.

Start by getting clear on your overall objectives: what you're actually trying to automate, what decisions it involves, who owns them, and what good looks like. Before any agent is built. Before any platform is selected. Before the next working party is commissioned or an individual is asked to take on AI in addition to their existing job.

That's the 70% that determines whether the 30% is worth spending.

Andy Roberts is Co-Founder and CMO of Zygens, the agentic AI consultancy. Zygens helps regulated organisations move from AI ambition to production, safely and with governance at the core.

If you want to talk about where to start, the Discovery Programme is the right first step.
