There is a finding in our research that stops most senior executives cold when they first hear it.
Failing programs — the ones that miss their targets, burn through contingency, and eventually land on the front page — routinely score higher on governance indicators than programs that succeed.
Better documented objectives. Stronger execution discipline. More robust reporting frameworks. More controls.
And worse outcomes.
We call this the Clarity Paradox. And once you understand it, you cannot unsee it.
What the data shows
Between 2004 and 2024, we analysed thousands of major programs across banking, healthcare, government, telecommunications, and the wider private sector. The pattern is consistent and uncomfortable.
Across every sector, fewer than one in three programs exceeding $15 million succeeded. The rest, roughly 70%, were distressed or failed outright. Among programs exceeding $100 million, the picture is considerably worse.
Yet when we examined the factors that actually distinguished successful programs from failed ones, governance indicators were not among them. In several cases, they pointed in the wrong direction entirely.
Why this happens
The answer lies in what governance actually measures — and what it doesn't.
Most governance frameworks are designed to measure activity. Are the right meetings happening? Are the reports being filed? Are the red-amber-green (RAG) statuses being updated? Is the project plan current?
These are process questions. They tell you whether the machinery is running. They tell you almost nothing about whether the machinery is pointed in the right direction — or whether the people operating it are telling you the truth about what they're seeing.
The largest performance gaps are not in systems or processes. They are in behaviour and leadership — precisely the areas that traditional governance doesn't measure.
Specifically: whether leadership engagement is genuine or performative, whether teams feel safe reporting uncomfortable truths, and whether decision-making is driven by evidence or by the organisational pressure to maintain momentum.
These things do not appear on dashboards. They do not show up in status reports. And they are almost never captured by governance frameworks — because governance frameworks are designed by people who assume the humans inside them are operating in good faith.
When they are not — when velocity has replaced judgment, when reassurance has replaced honesty, when continuation has become automatic rather than earned — governance intensifies in direct proportion to the problem. More reporting. More controls. More meetings. The dashboard stays green while the program quietly fails.
The paradox in the data
This is where the findings become genuinely counterintuitive.
Behavioural resilience is the single strongest predictor of program success in our data. Not process maturity. Not documentation quality. Not the sophistication of the project management framework.
Whether the leadership team is resilient, honest, and genuinely engaged with reality — including the parts of reality that are inconvenient.
But here is what the data also shows: failed programs frequently outperform successful ones on the governance indicators that most organisations actually measure. Clear business objectives, execution discipline, tools and infrastructure — higher in the programs that failed.
The problem isn't a lack of governance. It's how governance is applied. Structure takes the place of judgment. Reassurance substitutes for honesty. Green dashboards cover red realities.
The moment that matters
There is a specific window in every failing program where intervention is still possible. Where the early indicators are present, the options are still open, and the cost of correction is manageable.
That window closes. Usually quietly, and usually before anyone in the governance structure has named the problem.
By the time warning dashboards start flashing red, the real decisions have already been made — by inertia, by sunk cost, by the accumulated weight of reporting that said everything was fine.
The question is not whether drift begins. It always does. The question is how long it remains unrecognised — and whether anyone with the independence to name it is in the room.
What this means in practice
If you are a board member, senior sponsor, or executive leader responsible for a high-stakes program, the Clarity Paradox suggests a specific and uncomfortable question:
Is the governance on this program telling me what is actually happening — or is it telling me what the system has been designed to surface?
The two are not the same thing. And the gap between them is where most major program failures live.
The earliest warning signs are rarely hidden. They are present in the behavioural patterns of the leadership team, in the quality of the signals being escalated, in the gap between what the data shows and what the narrative says.
They are just not what most governance frameworks are looking for.
The difference between success and failure is rarely effort. It is visibility — and the willingness to act on what visibility reveals, even when what it reveals is inconvenient.
This is where we operate.