Seedlings in stony soil: On program experiments in high-risk situations

Little by little, it’s becoming more acceptable in the peacebuilding world to admit that we don’t have the answers.

People now talk regularly about “demand-driven” and “people-centred” approaches. Successive high-level reviews have insisted that success flows from good fit to context, rather than abstract technical excellence.

Yet the problem remains: Serious crises are not hospitable environments for experimentation. It may be trendy in the tech world to “celebrate failure” and speak at fuck-up nights, but that’s morally off-putting when you’re talking about people’s lives.

So what do we know about learning this way? If you’re leading an organisation, how can you responsibly fumble your way towards a useful contribution?

The view from the greenhouse

Start-up entrepreneurs are gloriously clear in how they think about opportunities. They have to be, because they’re price-takers. They can’t do a closed-room deal with sympathetic government officials; they don’t have a guaranteed role because of a public monopoly.

The implication’s simple. In this position you don’t know whether you have a product that people want, until you try and sell it. And as Steve Blank wrote a while back, even that first tentative offer:

… is only the date when Product Development thinks they are ‘finished’ building the product. The first customer ship date does not mean the company understands its customers or how to market or sell to them … anyone who has ever taken a new product out to a set of potential customers can tell you a good day is two steps forward and one step back.

On this view, a long design process—a PRINCE2-certified, PhD-backed, literature-reviewed inception phase—is “the path to disaster”. The prudent first step is instead customer discovery: working out what people want before you invest a ton of resources into building the wrong thing. This is followed by “customer validation”, to nail down the model, and only then scaling up for a full rollout.

This is an intuitively appealing model for anyone who’s handled stakeholder engagement for sensitive issues like security sector reform or land reform.

When you go out to meet your interlocutors you encounter disbelief, amusement and often heated resistance. Finding an approach that is both technically and politically workable is a tricky feeling-out process, and one that often runs into utter dead ends. It’s unarguable that it’s better to get a feel for that early on … before funding lines and recruitment are locked in and tough to change.

The problem comes, however, when we turn to practicalities. Blank argues that his model requires a firm with three basic capabilities. It must be able to (i) ship “working products” to test market reaction, (ii) do this in rapid iterations without long lead-up times, and (iii) put its faith in those market trials rather than laboratory tests.

And this list, to put it mildly, utterly contradicts how organisations holding risky, public functions will usually operate.

When you’re too big to fail

Think of a peacekeeping mission tasked with protecting civilians; or a bilateral donor instructed to be “conflict-sensitive” in a volatile environment.

Their mind-set is well-explored in another influential book: Karl Weick and Kathleen Sutcliffe’s Managing the Unexpected. Both are serious management thinkers, and spent a lot of research time with fire-fighting teams, aircraft carriers, and the like. These are environments where errors are high-profile, politically sensitive and costly in lives.

Weick and Sutcliffe describe such organisations as “high-reliability”, and argue that they are:

  1. Reluctant to simplify: Staff tinker with processes to progressively optimise, and avoid stripping out elements without careful testing.
  2. Preoccupied with failure: Leaders aim to minimise downside risks to the maximum extent, often “aborting the launch” if anything is out of place.
  3. Deferential to technical expertise: Management listens closely to those with acknowledged credentials, and in some cases hands a veto to them.


So when we ask for “demand-led” approaches we seem to be asking for two incompatible things. One side of the organisation has to aim for “high reliability” and a low error rate; and another has to “fail fast and often” in order to find new approaches.

What’s critical to realise, however, is that this is not a unique balancing act. In fact, it’s a permutation of a pretty familiar one.

In large private firms, it’s a well-known risk that innovation gets crowded out by the pressure for financial returns in the short-to-medium term. What’s more, organisational culture usually reinforces this. Clayton Christensen famously argued that engineers and experts tend to resist simple products. They struggle to see past the failure to perform against technical performance metrics, to think about new areas of value for the customer.

The net result, as another study put it, is that status quo operations “simply swat down innovation initiatives—or any project, for that matter, that cannot make an immediate contribution”.


The typical solution is a skunk works, named after Lockheed’s jet lab at the end of World War II (which now has its own jaunty skunk logo). These come in many forms. There are officially designated “innovation” teams, commanders’ initiatives groups, and the just-plain-weird-looking boxes on the organogram that don’t fit anywhere else.

They’re all attempts to build an experimental greenhouse somewhere on the farm. Their common difficulty is this: They must work out how to tap resources, and talent, from the rest of the organisation. Otherwise they’re just a crappy version of a startup, with all the resource constraints but none of the flexibility and do-or-die mentality.

So let’s refine our challenge: How can internal entrepreneurs draw on the organisation’s overall capabilities in order to do useful experiments, but not get sucked into its slipstream?

Creating a failure-friendly environment

The first and most important part of the answer relates to leadership. It sounds trite, but top management are the guardians of mission and vision. Their job is to globally optimise – to find the right things to do, not do the same old things right.

In public organisations, this means managing strong passions. People are emotionally committed to their work, and have sacrificed a lot to stay in the field. Admitting that there are “competing hypotheses” about how to do things is personal. Accepting that a trial experiment can falsify your hard-won expertise is also very personal.

In this setting, the role of a good leader is to orchestrate the conflict. In probably the best-available book on this topic, Ron Heifetz and Marty Linsky argue that the leader’s job is to keep her organisation in a “productive range of distress”. This means unleashing “differences, passions and conflicts in a way that diminishes their destructive potential and constructively harnesses their energy”. Loose cannons and iconoclasts get to do their thing, within boundaries. (Even if you don’t like ‘em.)


A second part of the answer is some sort of innovation accounting. When we held a recent London Conflict / Fragility round table on experiments, one of the main recurring comments was how difficult it is to sell learning as an outcome.

This is peculiar, because it’s a plain fact that narrowing the range of uncertainty on tough public problems has substantial actuarial value. Whether counted in lives or dollars, getting closer to a solution matters.

So this needs to be recognised. Organisations must think about how they have shifted thinking and practices outside their own four walls—alongside “hard” indicators for direct impact on people’s lives. This is something that advocacy organisations do like breathing, and a good place to start is something like Amnesty International’s Dimensions of Change framework.

Complex? Yes. But the reality is that anyone can tend a fully grown garden. It’s developing new growth that involves a bit of risk, and the ability to visualise five or ten years into the future.