Is there a danger of a tyranny of impact measurement?
- Joe Saxton
There is a lot of talk about how important impact is for a non-profit organisation. And of course it is. However, the reality is that for many charities, it’s impossible to know for certain what the impact of a course of action will be. This isn’t just because impact is hard to measure: sometimes measurement is impossible, and sometimes it simply isn’t an organisation’s top priority. The deeper issue is that if organisations only did things whose impact they could know in advance, some of the sector’s most powerful organisations and activities would never have happened. Here are the three underlying areas where impact maximisation has flaws.
Impact maximisation may lead to favouring less risky activities. Planning for impact is about weighing up risks. Some activities have a known set of impacts; others are less clear or riskier. If you want to do the risky stuff or innovate, there is a good chance your measurable impact will be reduced. In this sense, impact maximisation and innovation are uneasy bedfellows.
Impact maximisation leads to prioritising short-term thinking. The impact report needs to be written. The comms team have their pens poised. Telling them we’ll know in 5 or 10 years whether a project is working really isn’t very helpful. Yet many activities that make the world a better place take time. If you want to maximise impact, you don’t want to hang about for 5 years!
Impact maximisation leads to putting quantity over quality. Impact reports need to be impressive. They need to show large numbers of people whose lives have been changed. This means that it always appears better to say that 100 people have been impacted rather than 10, or 1000 rather than 100. Headlines in an impact report are a numbers game.
Let me now give six areas where the challenges of impact maximisation become real – areas where an organisation trying to maximise impact could make poor decisions, because it can’t know in advance which course of action will have the most impact:
1. Medical research takes years to come to fruition. The foundations for many of today’s successes in medical research were started 20 years ago or longer. Developing a new drug or treatment may require a better understanding of basic cell biology or physiological mechanisms. Moreover, part of that process is a degree of redundancy – pursuing avenues that don’t go anywhere – with all the frustration and failure that entails. To ask a scientist to be successful, to have an impact, in every experiment or trial they undertake would be like asking a detective to only interview the guilty people. It’s just not possible. So there will always be processes in science where there is no impact. The line of research didn’t work. And a donor’s money has gone into that, and there is no way round it. Some research doesn’t have an impact, and it’s a necessary part of the research process.
2. Campaigning doesn’t always work. I wonder if anybody has ever tried to work out how many campaigns have actually worked. I would guess it’s about one in three at best. Some fail because they never had sufficient resources. Some were poorly planned. Some faced a government of the day uninterested in the issue. Or, as in the last 2 years, a government simply trying to keep its head above water. There are a host of other reasons why campaigns fail, but the reality is they do, and it’s an endemic occupational hazard of campaigning. The important point is that at the beginning of the process, it’s very difficult to know which campaigns will succeed and which will fail. So an organisation that wants to be sure its campaign has impact is bound to be disappointed. Indeed, I would argue that some campaigns should be fought even if they are hopeless, because they are the right thing to do.
3. When should success be measured? Many years ago, I interviewed the CEO of Barnardo’s, and one question he said they struggled with was deciding when something had worked. He gave the example of a child adoption. Should an adoption be deemed successful after 6 months or a year, if the relationship hasn’t broken down? Or perhaps only when the adopted children have children of their own? In many areas, success needs to be measured over years or decades, not weeks or months. A supporter of campaigns for LGBT rights in the early 1980s might have asked for their money back when the Thatcher government implemented Clause 28. But nearly 40 years later, their investment in campaigning has been returned with interest.
4. Measuring what is in people’s heads is hard. A project I worked on a few years ago tried to understand what progress was being made in helping homeless people. They used the Outcomes Star and a range of other measures. The bottom line was that measuring progress on addictions, resilience, managing money, mental health and so on was very hard. This is compounded by the way vulnerable people react to crises or unexpected situations: the death of a family member or a friend can knock people sideways. In homelessness and a raft of other areas, measuring impact can only be described as a ‘work in progress’. Providing accommodation is hard, yet tangible; the real test is whether people can manage their own lives. And even that is just the start – can they get the support to live happy, fulfilled lives? Should that be our real measure of impact?
5. Would Oxfam, Scope or RSPB have started if they had done an impact analysis? Imagine if the Quakers at that meeting in Oxford in 1942, which started Oxfam, had decided that what they needed was to maximise impact. What they were proposing was a daunting task: getting relief supplies to children in Greece under German occupation and suffering from the Allied blockade. Any grant-maker would have laughed at their application. They had no track record, no structures, no delivery mechanism – but they went ahead nonetheless, taking risks and facing a real chance of failure.
The Oxfam story is typical of so many charities: a few dedicated people look not at what is feasible, but at what’s needed or what’s right. Demanding that a start-up charity demonstrate impact is like demanding that a start-up company make a profit. If you do, you will kill most of them stone dead.
6. All roads don’t and shouldn’t lead to vaccines. In recent years, I have got into managing my own pension funds. My task has a clear goal – to maximise the growth in my pension pot. If I did the equivalent thing for delivering impact, I would probably ignore all the areas of charity work I have discussed and would just donate to providing vaccinations or clean water in poorer countries. I could certainly save a lot more people that way. The Unicef shop tells me ‘£25 could deliver lifesaving diphtheria, tetanus and whooping cough vaccines for 125 children’. That’s 20p for a life-saving vaccine for a single child. 20p!! If I want to maximise the impact of my donation pot, there is little that can beat that. But what a sad and shallow world it would be if all donors tried to maximise impact in that way.
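The 20p figure above is simply the quoted donation divided by the number of children it covers. A minimal sketch of that cost-per-child arithmetic, using only the figures from the Unicef shop example (the £1,000 pot at the end is a hypothetical illustration, not from the original):

```python
# Figures quoted from the Unicef shop example above:
# £25 could deliver vaccines for 125 children.
donation_pounds = 25
children_covered = 125

cost_per_child = donation_pounds / children_covered
print(f"£{cost_per_child:.2f} per child")  # £0.20 per child, i.e. 20p

# Hypothetical illustration: children covered by a £1,000 donation pot.
# Integer arithmetic avoids floating-point rounding in the division.
pot_pounds = 1000
children = pot_pounds * children_covered // donation_pounds
print(f"A £{pot_pounds} pot could cover {children} children")
```

This is exactly the kind of calculation a pure impact-maximiser would run on every donation option, which is the author’s point: it is easy, it is compelling, and it flattens everything else out.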
My argument is not that measuring impact is a bad thing, nor that donors shouldn’t care about impact. It is that not all areas of charities’ work are equally suited to impact measurement, and that measuring impact requires very different metrics and methodologies depending on the nature of the work a charity does. In the end, I want charities to be driven by a passion for the cause first and foremost. To paraphrase the Einstein quote: ‘Not everything that counts can be measured, and not everything that can be measured counts.’