A Fireside Chat on Impact and Mission Measurement
15 May 2018
As part of staying up to date on corporate social responsibility and nonprofit program management best practices, we sat down for a chat with Jason Saul, Founder and CEO of Mission Measurement, and Alex Parkinson, Senior Researcher and Director of the Society for New Communications Research of The Conference Board (SNCR). Here’s what we learned about why it’s important to measure impact, what donors expect, current trends—and common mistakes.
Why Impact Measurement Matters
Why are companies measuring impact?
Alex Parkinson: First, because we shouldn’t conflate evaluation with measurement. Companies have hundreds and hundreds of grantees. They need to track them and collect information on outputs, and they’ll continue to do that so they know their money isn’t being wasted. But that evaluation isn’t measurement. Impact measurement is where we talk about outcomes. If you want to measure your impact more effectively, look at where the majority of your money is going and think about shifting that money toward an outcomes orientation.
Jason Saul: Companies are also improving employee engagement with giving and volunteering platforms like Salesforce.org Philanthropy Cloud,* responding to what employees want in addition to corporate strategy. Some employees want results, but others just want to support whatever their peers are volunteering for or donating to. Let’s not forget that the corporation is the meta-purchaser of impact when it coordinates employees through a giving and volunteering platform. So the company may have an expectation of impact even if individual employees do not.
What Donors Expect
What do donors want in impact measurement?
Jason Saul: I cover this in detail in my book The End of Fundraising, but basically, there are two different sets of donors to consider. One set of donors is looking for psychic benefits—these are the people who plunk a few dollars in the bucket for the Salvation Army after shopping at the local grocery store, or who make an annual donation to the animal shelter because they have a pet. This group wants to feel good. They want stories and pictures. They don’t really have any expectations about the impact of their donation, beyond knowing that their money wasn’t wasted. I’d say these folks account for 90% of the money in giving right now.
However, there’s a smaller set of donors, let’s say 10%, who aren’t “psychic benefit donors.” This group is what I call “impact buyers.” For example, a college graduate applying to law school wants to know the job placement outcomes after getting a degree. A company that hires from public schools wants to know the outcomes of STEM programs. Governments that want to reduce hunger while cutting food stamp costs, by helping people become self-sustaining—they’re also impact buyers. This category of individuals and organizations is willing to pay for outcomes. They don’t just want to feel good; they know what results they want.
I’d also add that the larger the amount of money spent, the stronger those impact buyers’ expectations of outcomes and impact.
Trends in Impact Measurement
What are leading trends in impact measurement with technology?
Jason Saul: We’re seeing measurement and technology intersect at two different levels. The first is performance measurement, such as data collection systems like Sopact, Social Solutions, SAMETRICA, and Amp Impact from Vera Solutions. These are “production-level” data systems used primarily by service delivery organizations. They track outputs in program delivery: the number of people served, the number of meals handed out, and so on. This is one type of measurement that software systems do.
The other level of intersection between tech and measurement is impact reporting. Once you have your program data, how do you communicate it in a standardized way? This level includes standards like the UN Sustainable Development Goals, the Global Reporting Initiative, IRIS, the CECP Giving in Numbers report (in association with The Conference Board), and the Impact Genome Project that we’re doing at Mission Measurement. What’s important to note is that these are reporting standards—not performance measurement tracking tools. Aggregate reporting systems “roll up” production-level data and align it to a normalized standard.
Nonprofits and philanthropists should consider which level they want when they think about technology for impact measurement. If you want to answer the question “how many people received this service?” that’s a question for a performance measurement system. If you want to answer the question “how did this program contribute to a larger social impact goal?” then you need an aggregate reporting tool. You can’t just “print screen” from your software system and mail that to a donor. You need to roll the data up and tell a meaningful story about the outcomes you’re delivering.
Common Mistakes in Impact Measurement
What are three mistakes companies make in measuring their impact?
Alex Parkinson: The #1 mistake I see is that companies are too simplistic. Some companies are still measuring inputs and outputs and not challenging themselves enough. They need to look at outcomes and impacts. Be outcomes-driven: what results did your funds or work enable?
Jason Saul: Exactly. The way this simplicity plays out is that people measure outputs, not outcomes—how many meals were delivered or how many people were reached, not how their lives improved as a result (an outcome). I heard that one investment bank “touched 400,000 kids”—which not only doesn’t sound right but, from an impact measurement point of view, is just too simplistic.
The #2 mistake is that some companies go to the other extreme and hire expensive university evaluators, for millions of dollars, to do longitudinal program evaluation of some of their bigger grants. This misuses the tool of program evaluation. Program evaluation is not about avoiding wasted money; it’s about testing a novel theory of change to see if it works, and then publishing the results. Overly academic program evaluation can make the board of directors feel good, but that report may just gather dust on a shelf and not actually change anything.
Problem #3 is constantly reinventing the wheel. Before we chatted, I got off the phone with a huge tech company that’s trying to hire a more diverse workforce in tech and wanted to know how to measure what works in producing STEM outcomes. Right before that, I got a call from a major foundation that wanted to measure the same thing. We keep asking the same questions, but because we have no common language and few shared standards, we end up reinventing measurement.
That’s why I’m excited about the Impact Genome Project, which will help us get away from reinventing the wheel by standardizing measures and outcomes in a way that’s evidence-based, so we know what works. But whether you use the Impact Genome or another standard, whatever you do, don’t try to make it up on your own. Don’t reinvent measurement.
Standardizing reporting in corporate philanthropy will really help nonprofits, because charities are asked to report into everyone’s system. One nonprofit we work with has 200 corporate partners, each asking for a different final report through a different software system. Why do public companies file one 10-K report for all investors, while charities have to produce a different report for each grant? That doesn’t make sense. Standardized reporting, through the Impact Genome and other tools, can reduce the burden on grantees so nonprofits spend more time delivering programs and less time reporting.
If you’re curious about Salesforce.org Philanthropy Cloud, read this e-book.
This piece was originally published by Salesforce.
About the Author: Katharine Bierce
Katharine serves as editor-in-chief of the Salesforce.org blog and helps create e-books and other digital content at Salesforce.org. She is a lifetime member of Net Impact, a StartingBloc fellow, and …