How good is your impact report?

Impact reports deserve more than the usual platitudes and a superficial glance. But without a collective understanding of what makes good quality impact measurement, it’s impossible to critically engage with each other’s work, argues researcher Dr Jess Daggers – who also puts Big Society Capital's latest report under the microscope and proposes a new starting point for impact management.

Given the importance that is placed on impact measurement as a route to better practice, there should surely be a healthy interest in the impact reports that get published, especially by prominent organisations. Impact reports are the visible output of the enormous amount of effort that goes into building measurement frameworks.

And yet, what happens when an impact report is published? If it gets any reaction at all, the reaction is always positive, and unlikely to engage with any of the detail. We see celebrations of all the good work, of all that has been achieved, without a sign that anybody has actually read the thing. So, on the one hand, we buy into the narrative that impact measurement is a route to greater effectiveness or ‘maximisation of impact’; on the other hand, we pay little or no real attention when the results of these efforts are published.

This situation is far from desirable. The only way we are going to move forward on improving impact practice is if we engage with the work our colleagues in the space are doing, question and challenge it, celebrate it when it is done well, and point it out when it is done badly. This extends not just to the results being presented, which are of course important, but also to the method being used. The method, after all, dictates whether the results are worth looking at in the first place.

 

We still do not know what good looks like

So why isn’t there more critical engagement with impact reports? There are lots of possible reasons — maybe the culture of the social sector steers people away from criticism and towards positivity.

But there is one reason in particular. Despite more than ten years of trying to improve impact practice, we still don’t know what a good impact report looks like. In fact, we still don’t know what a good impact measurement framework looks like. We don’t know what sufficient impact data, or impact data of sufficient quality, looks like, and we don’t know what it looks like to use impact data to improve delivery. This is not to say that some organisations don’t do impact measurement well – the point, rather, is that we do not have a collective sense of what counts as good quality impact measurement and reporting. Without this collective sense, it becomes very difficult to have opinions about each other’s work, to compare and contrast different approaches, or to engage in constructive criticism. So we resort to platitudes and back-slapping all round, accompanied, often, by an underlying sense of pointlessness.

We do not have a collective sense of what counts as good quality impact measurement and reporting

 

Revisiting the basics

There are some basic dynamics at play in the impact measurement field. Identifying them helps us to see what is driving current patterns of behaviour.

The whole edifice of impact measurement is based on a simple assertion: organisations that claim to do good in the world should go beyond measuring their activities to measuring the change those activities create for beneficiaries. Not just outputs, but outcomes.

(To avoid confusion, a brief note on terminology: both ‘outcomes’ and ‘impact’ refer to the changes that result from a specified activity. In this article I use ‘outcomes’ to refer to specific changes that are being measured, while ‘impact measurement’ refers to the overall attempt to map out the connections between activities and outcomes, and to build a measurement framework that collects data against these elements.)

The futility of meaningfully measuring outcomes in many settings is widely recognised, yet we have not revisited the basic idea of doing so as a route to more effective practice

The past decade has seen substantial effort going into working out what it means for practitioner organisations to measure the outcomes of their work. The Inspiring Impact initiative was set up for precisely this purpose in 2011, and it is a central plank of the work of New Philanthropy Capital, a UK-based think tank. The term ‘impact management’ started to circulate a few years ago, placing emphasis on the importance of using the data collected through impact measurement to improve the way the organisation operates. The UK’s Impact Management Programme emerged to push this agenda forward. Parallel developments in the impact investing space saw the creation of the globally operating (and similarly named, but unrelated) Impact Management Project.

As of 2020, it is widely accepted that outcomes can and should be measured. While the challenges of doing so are well recognised, the expectation is that organisations will aim to improve their impact measurement over time, eventually reaching a state where their systems are developed enough to yield high quality, meaningful insight into outcomes. Because we are talking about causal knowledge – specific activities are thought to cause specific changes – for insight to be ‘meaningful’ it needs some way of tackling the issue of causal attribution.
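To make the attribution problem concrete, it helps to spell out the counterfactual definition of impact that underpins the evaluation literature. The sketch below is a standard textbook formulation, not a framework taken from BSC’s report or from any of the initiatives mentioned above:

```latex
% Counterfactual definition of the impact of an intervention on an individual i:
%   Y_i(1) = the outcome for i with the intervention
%   Y_i(0) = the outcome for i without it (the counterfactual)
\mathrm{impact}_i \;=\; Y_i(1) - Y_i(0)
% Only one of the two terms is ever observed for any given individual,
% so impact can never be measured directly: it has to be estimated,
% typically by comparison with a credible control group.
```

This missing counterfactual is why outcome data on its own so rarely settles the question of attribution: without some comparison, a measured change cannot be separated from what would have happened anyway.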

However, there is a problem with this agenda: measuring outcomes is simply out of reach for most organisations. Most of the time, getting hold of data about outcomes is not feasible, or if it is, the data is so drenched in causal uncertainty that it cannot clearly inform decision making. The blanket assumption that organisations should nevertheless strive to measure outcomes, and use the data to inform their work, is fundamentally unhelpful. The irony is that the futility of meaningfully measuring outcomes in many settings is widely recognised, and yet we have not revisited the basic idea that doing so is a route to more effective practice.

Before I turn to what I think we can do about this situation, a working example will help to ground the discussion.

 

Big Society Capital has a new impact report

After quite some time, the UK’s social investment wholesaler Big Society Capital (BSC) published an impact report in December. Once you work out how to navigate the online interface, the report presents a large amount of well-organised content. BSC have clearly put a lot of work into gathering numerous examples of where their money has ended up.

Is this a good impact report? Well, there has been a clear attempt to explain BSC’s activities, and then to consider what those activities achieved under the heading ‘impact and learning’, which appears in most sections. These subsections vary in what they present. Some give lists of outcomes metrics, such as the section on market intermediaries (where the creation of more market intermediaries is plausibly an outcome of BSC’s work building up the ecosystem). Others, such as the case study of Greenwich Leisure, are mostly descriptive, without much sign of impact measurement. Overall, the report provides an overview of what BSC does and the kinds of results it achieves.

Above: A section of Big Society Capital's 2020 online impact report

This is in fact a good example of where lots of organisations find themselves. BSC have not been able to implement comprehensive impact measurement, and the results are a mixture of narrative, output data, and a few areas with reasonable outcome measures, but no real sense of how far changes are attributable to the activities in question.

So, is it a good impact report? It doesn’t look like impact measurement best practice, but then we can’t expect it to. BSC is a wholesaler, multiple steps removed from end beneficiaries. The outcomes they do contribute to directly – building the market, enabling more investment – are difficult to measure, with no obvious way of considering attribution properly. And then there is the sheer diversity of their work.

So where do we go from here?

 

 

We need a new starting point

I think the social sector as a whole could take significant steps forward if it let go of the unattainable vision of outcomes measurement as the solution to the problem of organisational effectiveness and accountability.

Crucially, this does not mean giving up on the idea of measuring outcomes altogether. Instead, it means two things: 1) being far more discerning about when we expect outcomes to be measured and reported on, and 2) refocusing on all the other kinds of very useful information that organisations can collect. In turn, this would open up possibilities for developing a collective sense of what good quality impact practice looks like.

1) Being more discerning about when outcomes measurement is feasible and useful

BSC’s report presents outcomes data in some places and not in others, but there is no discussion of why the data is sometimes available and sometimes not. As the reader, you get what you are given.

This is a missed opportunity. BSC could usefully devote time and energy to exploring and analysing the structures and factors that determine whether outcomes measurement is suitable in a given setting. For example, initiatives that engage with a well-defined population of end users over a significant time period (such as sheltered housing) will face very different measurement options from an intervention that may be accessed only fleetingly, by a dispersed population (such as a mental health app).

I have previously made the case for pursuing this avenue of enquiry, and will continue to do so in my own research. Encouragingly, Alnoor Ebrahim’s recent book, Measuring Social Change, is distinctive for systematically addressing the challenges of generating causal knowledge. He proposes a typology of four contrasting strategies to help organisations consider where their energies should be directed. There are many more strategies that could be elaborated.

BSC’s privileged position – an overview of most of the social investment market – means they could offer useful insight on how impact measurement varies across sectors. Their own impact report would be a great place to report on progress. And if BSC revealed that there was in fact very little impact measurement practice going on across their investees, that in itself would be a valuable piece of insight to share.

BSC’s overview of most of the social investment market means they could offer useful insight on how impact measurement varies across sectors

This is not just a task for BSC, however: it is a change that would need to take place on a much broader basis, bringing in the research and funder/investor communities. Exploring measurement strategies in this way would be an enormous step forward from the blanket assumption that outcomes measurement is always required.

This work could have enormous benefits for social sector organisations, as it would refine the message about what is expected of them. It would mean much more nuanced guidance on what kind of impact measurement activity is expected, given an organisation’s sector, business model or type of intervention. And it would relieve the pressure on organisations to constantly justify their measurement approach to funders who have been taught to expect outcomes measurement from everybody.

2) Refocusing on the information we do have

If we broaden our focus from (often scrappy, inconclusive) measures of change and turn our attention to how an organisation is integrating concern for impact into its decision making, we open up significant new possibilities. To put it another way, concern for impact does not have to rely on outcome measures. The focus on outcome measures has diverted attention from myriad other pieces of data that potentially contain a lot more information.

Impetus’s work on ‘Driving Impact’ makes a similar argument about the importance of refining a delivery model and getting data collection systems in place before asking an organisation to worry about outcomes measurement – and then asking an external evaluator to do the work of assessing outcomes data. Their model is designed for the education space, with heavy venture philanthropy-style involvement. We need to generalise this kind of pragmatic thinking about the realities of impact measurement across other settings and delivery/funding models.

This brings us to a more significant criticism of BSC’s report: it gives no indication of the kinds of frameworks and data that BSC actually uses. BSC must have internal processes that they use to assess the impact ‘credentials’ of investment opportunities. In fact, a couple of years ago a senior team member at BSC told me that the Impact Management Project’s ‘five dimensions’ were “baked into” the way BSC was run. There is no sign of them, or any other such framework, in the impact report. Insight into how BSC tackles these challenges is far more interesting and relevant than a series of disembodied outcomes measures. Their impact report is an obvious place to communicate this insight.

Overall, the point is that we need method in impact practice. Method means creating structures for collecting and interpreting data. Method means keeping to those structures, and using them for organising and reporting information. The structures can be changed in response to changing circumstances, but only according to deliberate and explicit reasoning. Method governs what kind of conclusions can be drawn from the evidence. Method is the common language that generates a common sense of what good, reasonable or acceptable conclusions are, and what falls short, or needs improvement. Method prevents cherry-picking shiny examples, and guides us away from impact as a PR exercise.

There is no sense of what they set out to achieve with their data collection... Instead it seems like a post-hoc attempt to tie together disparate pieces of information into a coherent picture

It is worth noting that the evaluation profession has spent decades developing its methods for measuring outcomes, and the community of professional evaluators has a good sense of what counts as good or bad quality evaluation. While the basic assumptions of impact measurement — namely, that every organisation should measure its own impact, and use the results to improve delivery — mean that the standards of the evaluation profession cannot straightforwardly be transferred across, this serves as a helpful comparison point for understanding what impact measurement is missing.

So we come to the biggest problem with the BSC impact report: it is entirely devoid of method. There is no sense of what they set out to achieve with their data collection, or of a framework that structured their thinking. Instead it seems like a post-hoc attempt to tie together disparate pieces of information into a coherent picture. There is no sense of the journey BSC have been on over the past few years of trying to measure and understand their impact – of the structures they have built and had to adapt. And there is no sense at all of the investments they must have made that did not achieve any impact, or achieved the wrong kind of impact. We have been presented with a glossy PR exercise of an impact report, the cracks papered over, varnished and polished to a shine.

Where next?

BSC’s report is a response to the same pressures being faced by the whole social sector. BSC have focused on reporting outcomes wherever they can. As is often the case, this approach comes at the cost of sharing more relevant and meaningful information about what they do and what difference they think it makes. I have made some suggestions for how we might start to think differently about what we should expect of impact reporting, and how we might make it a more meaningful and effective exercise.

In case we need a reminder of why all of this matters, I would emphasise again how the promise of impact measurement — greater insight, leading to greater effectiveness — justifies the whole endeavour. Thousands of hours of people’s time are spent trying to measure outcomes because we have accepted the premise that the results will be valuable. If the results are not valuable, then we have work to do in redirecting our energies.

  • Dr Jess Daggers is a researcher and impact measurement consultant. This is an updated version of a blog first published in December, and is republished here with her permission.
