Social value matters: how much is enough data?

Without any universally accepted impact measurement practice, proving your social value is a grey area. But there is a solution lurking in the ambiguity, says Tom Adams, director of Acumen. It's called "decision data".

When it comes to the concept of impact measurement, it sometimes feels like a binary question: you either do it, or you don’t. That can read across to a judgment that you’re either creating impact or you’re not. The reality, of course, is that it’s all a bit greyer than that. Without any universally accepted impact measurement practice, as there is, say, for financial accounting, each organisation trying to learn about its impact is doing so to a greater or lesser degree of imperfection. So in a world of varied rigour, appetite and capacity, what is the right amount of measurement for social enterprises, investors and others?
 
In a recent article, the impressive folks at Innovations for Poverty Action (IPA) tried to address that question for development practitioners and suggested some highly sensible principles, such as matching data collection to the systems and resources your organisation has to collect it. They call this “responsible” collection. They also warn against wasteful data collection: when evidence already exists, or when data cannot be collected well.

“decision data”... the chief principle here is that data is generally only as useful as the decisions it leads to.
 

At Acumen, we’re accepting and embracing the ambiguity that surrounds social impact data collection, understanding its challenges and building the ability of both ourselves and the folks we invest in to do this better. For our own work, which focuses on capturing the impact of the portfolio of companies we invest in, we approach the question of how and what to measure on a case-by-case, or company-by-company, basis.
 
This bespoke approach to data avoids imposing a measurement framework on our companies and demanding that they collect a laundry list of data points. I understand the appeal of such an approach: once you’ve got a framework, you can compare performance across varied actors. But in general it requires that those you’re working with be at the same, typically quite high, standard of data collection. Most aren’t there yet, hence, in part, the absence of universal standards.
 
So what to measure? For Acumen it is not unreservedly ground-up. We do have some cross-cutting indicators that we’re particularly interested in - such as the poverty level of company customers - but in general we want to listen to our companies and collect data that they are able to collect (and we’ll coach them to improve capability where they’re not) and, most critically, that they will value and use to inform decisions. This is similar to the concept of “client-centric data” advocated by Root Capital in their terrific Roadmap for Impact.
 
For fairly obvious reasons we ourselves call this sort of data “decision data”, and the chief principle here is that data is generally only as useful as the decisions it leads to. Moreover, if data does not lead to action, or has no bearing on the choices made, then we don’t collect it. Again, there is no hard and fast rule here, and we tend to encourage as much data collection as is feasible in order to answer the questions we and the company have, gain the insights we seek, and capture the performance we most want to incentivise.

This article was produced in partnership with the SROI Network. The SROI Network event, Social Value Matters, takes place on June 12 and 13, 2014.

Photo credit: Acumen