Let’s stop measuring impact
Impact measurement is not only challenging – even for the experts – it can also be reductive and a distraction from what really matters, argues our columnist, who draws on conversations with and readings from some of the women critiquing and reimagining what has become a burgeoning industry.
Impact measurement – the tools, frameworks and outcome reports – has almost become an industry within the social impact sector over the past decade. Social enterprises, community organisations and charities spend thousands of hours on it every year, often paying for the support of consultants, agencies and data analytics software. But what is actually being measured? Who are we measuring for? Who are we demonstrating impact to? What do all these ‘outcomes data’ tell us?
Most of the time impact reports are compiled and sent to investors, funders, donors and board directors. Is the time and money spent on the process of impact measurement actually driving improvements in service provision?
What is impact measurement?
The purpose of impact measurement, according to The School for Social Entrepreneurs, is for social impact organisations “to prove and to improve”, ie to tell a compelling story to stakeholders (from funders to service users) that will ‘prove’ their impact, and to help the organisation improve service delivery.
Ceniarth, a family office focused on impact investing, has invested in, and also uses itself, an impact measurement company that focuses on “listening to customers, not hypesters.” Writing on Next Billion, Ceniarth senior impact manager Julia Mensink says that her firm realised that, while developing its own impact framework would be simpler and easier to manage, “it would only tell a fraction of the story – and that it would overlook multiple factors, beyond just our investment, that were responsible for impacts in communities.” If some impact measurement tools and frameworks “only tell a fraction of the story”, how are we telling the rest?
The second purpose of impact measurement, according to the SSE definition, is to improve. Ceniarth found that “direct impact monitoring does not automatically generate actionable impact insights. Impact data requires further analysis and action to actually improve impact.” Similarly, a blog from charity sector think tank New Philanthropy Capital suggests that truly useful impact measurement is “very challenging to do” and that funders and commissioners should only really expect this in “exceptional circumstances (and only if they are willing to fund it)”.
If impact measurement practitioners themselves recognise the limitations of standard approaches and the unrealistic expectation this puts on civil society organisations, what does this mean for how the social impact sector quantifies its value?
Good intentions lost in calculations
I’m not disputing the importance of understanding if and how a service is valuable to users and the wider system in which it operates, but the bias towards ‘metrics’ as a meaningful way to represent improvement requires examination. An obsession with monitoring and evaluation, key performance indicators (KPIs) and ‘outcomes’ is the product of a system that is obsessed with the perceived binary purity of quantitative (and often financial) data to show value, rather than the human experience that sits behind them.
Dr Jess Daggers is an academic researching strategies on impact measurement and social investment. In a recent critique of impact reporting, she writes: “Despite ten years plus of trying to improve impact practice, we still don’t know what a good impact report looks like. In fact, we still don’t know what a good impact measurement framework looks like. We don’t know what sufficient, or sufficient quality, impact data looks like, and we don’t know what it looks like to use impact data to improve delivery.”
The way we’re measuring doesn’t work. Counting up how many young people were employed before an ‘intervention’, and how many are employed now, is relatively easy. Calculating that as a percentage is simple. Being able to report that “in 2020 we delivered a 30% increased employment rate among the young people we worked with” is a fantastic impact statement. Now – before I get lambasted for suggesting that this isn’t good work – of course, more young people in employment is a good thing. But what does this ‘outcome’ really tell us? What kind of work are these young people doing? How do they feel about the skills and experience they’re building? What if half of them are on zero-hours contracts? If some of them are out of work again within a few months because they weren’t signposted to the necessary mental wellbeing support they needed, will they still be counted as ‘success metrics’ in these impact reports?
Stories don’t fit into graphs
Sarah Morton, director and co-founder at Matter of Focus, a B Corporation that helps mission-led organisations understand and track their impact, told me that “the focus on ‘measurement’ means people look for hard data, when really they need qualitative insights, reflections and feedback”. We’re trying to quantify impact, when we should be qualifying it.
If we’re talking about social impact, we’re talking about the quality of someone’s life. On a day-to-day basis, most people don’t calculate how many jobs they applied for this year compared with the year before; how many miles they have to travel to get to a GP; how many hours their child with additional learning needs spent with specialist support. That’s not how we determine our social and economic wellbeing.
Fixation on measurement distracts from the important work of bettering. The language of outputs is reductive. What happens after the ‘outcome’ – someone getting a job or moving out of temporary accommodation? These ‘results’ aren’t an end state.
Telling stories, on the other hand, has innate value and is evidence of change. Experience has nuance, beyond what figures and graphs can relay. Stories of experience, carefully curated and thoughtfully told, are more powerful ways to evidence value and importantly, learn, but it’s harder to do. And a story doesn’t fit neatly into a graph.
Are we using the wrong yardstick?
In my experience, social enterprises that centre on storytelling – the process of working with people who have lived experience of social inequality to listen to, tell and capture their stories – are often overlooked or dismissed as ‘artsy projects’ because they’re concerned only with ‘soft data’, ie qualitative information from an interview or workshop. This distinction between soft and hard data is unhelpful. It implies that soft impact data – the anecdotes and experience of human beings, rich with detail and yes, perhaps inconsistency – are less valuable in justifying or ‘proving’ whether an organisation is eligible for more funding, or has ‘achieved’ its impact.
Unfortunately the existence of food banks, job centres and homeless shelters as part of our modern society has been normalised. In accepting this infrastructure that develops in response to inequality and injustice, we’ve measured and understood its value with the same yardstick our economic system uses – cost efficiency and value for money. In doing so, we do a disservice to the people oppressed by that system. If we tell the story of inequality, discrimination, economic injustice and biodiversity loss in this way, we accept the dominance of a system that places value on outcomes, productivity and deliverables. One that rewards ‘return’ in financial, not human, terms.
Better is enough
Former US President Barack Obama, when interviewed last year about his new memoir, said: “I used to explain to my staff after we had a long policy debate about anything, and we had to make a decision about X or Y, ‘Well, if we do this I understand we’re not getting everything we’re hoping for, but is this better?’ And they say yes, and I say, ‘Well, better is good. Nothing wrong with better.’”
Perhaps what matters most is whether people who use these services feel that things have improved, even slightly, for them as an individual in the context of their complex and unique set of experiences. If the answer is yes, is that not enough? Do we need to ask them to fill in a survey, join a workshop, complete a questionnaire?
Perhaps we’ve overcomplicated things. Let’s stop measuring impact and obsessing over outcomes and instead focus on listening to the stories and experiences of real people. This can inform how social impact organisations and charities are governed, structured and organised to deliver services better. If we hold ourselves to account by the values that the social impact sector stands for – care, equity, justice – and measure ourselves against those, then we will do better. And there’s nothing wrong with better.