Nonprofit evaluation should be purposeful, feasible and sustainable. Here’s why
August 16, 2017
Disclosure: This is a guest post by Claire Robertson-Kraft.
Over the last decade, I’ve worked with dozens of nonprofits in Philadelphia. Through these experiences, I’ve witnessed firsthand the deep passion nonprofit leaders bring to their work. But I’ve also seen how challenging it can be for these leaders to communicate about their impact with the broader community.
Nonprofit leaders may have a good feeling about a training program they’re running or about how stakeholders are benefiting from their services, but often lack the data to support their intuitions. Unfortunately, organizations don’t typically have the necessary capacity and expertise to use data to improve their effectiveness and share their story.
The demand on nonprofits to empirically demonstrate their impact is greater than ever before, as providing evidence is increasingly becoming a prerequisite for receiving funding. But evaluation efforts should be about more than reporting results to foundations; they should be about informing strategic decisions. In the words of the great Yogi Berra, “If you don’t know where you’re going, how are you going to know when you get there?”
So, how do organizations maximize their impact by combining their passion with strategic use of data?
We launched the Social Impact Collaborative to help organizations answer this question. We believe in empowering leaders with the information, skills and tools they need to accelerate social change.
The Social Impact Collaborative is a year-long program run by ImpactED at the Fels Institute of Government that offers training and support to nonprofit organizations that want to use data to inform their work.
In our pilot year, with support from the William Penn Foundation,* we trained 10 nonprofits selected from over 50 applicants in the region. These nonprofits ranged from community-based organizations like the Norris Square Community Alliance to established cultural institutions like the Barnes Foundation to citywide organizations like the Philadelphia Parks Alliance and the Bicycle Coalition. See a full list of participants here.
While organizations varied considerably, they had quite similar experiences when it came to collecting and using data. Below are a few lessons we learned about building nonprofit evaluation capacity and how we’d advise others looking to engage in similar work.
First, evaluation should be purposeful.
There’s a lot of talk about data collection and analysis methods: What is the best way to get a strong response rate? How do we isolate the impact of a program? While these questions are important, unless — and until — organizations have developed a clearly articulated strategy, the answers won’t mean much.
Too often, organizations are collecting data simply because it’s required for a funder or government entity. While this may be unavoidable, organizations should also track the indicators that matter most to their stakeholders. This requires prioritizing quality over quantity by asking questions, such as: Which indicators say something of central importance about the outcomes you care most about? Which indicators give you the information you need to make strategic decisions?
Second, evaluation should be feasible.
Determining which methods to use can be overwhelming, and leaders sometimes feel like they’re drowning in data as they try to make it useful. Organizations should take their planning down to the operational level by considering questions such as: Is quality data available on a timely basis? If not, can you develop ways to collect it that are not burdensome to staff?
No single method is inherently more feasible than another; feasibility depends on the time you have available, the physical, financial, and human resources at your disposal, and your access to particular types of data and stakeholders. As you begin to collect data, track which indicators prove most useful for your organization, and refine and focus your efforts over time.
And finally, evaluation should be sustainable.
Perhaps most importantly, transforming data into knowledge requires building a data-informed culture. Ultimately, the goal isn’t to be bound by your data but to be informed by it. In fact, it’s not really about the data at all; it’s about generating knowledge that drives organizational learning. Building this type of culture requires organizational leaders to be unafraid of what the data say, to provide a safe space for staff to admit when things don’t go as intended, and to give staff the authority to explore why.
Before launching an evaluation effort, organizations should consider the following questions: Who might be internal champions of data and knowledge generation within your organization? What systems and structures do you have that currently support staff learning?
Without question, building evaluation capacity is challenging work, but it can be equally rewarding. Over the last year, we’ve watched the 10 organizations we’ve worked with in the Social Impact Collaborative explore pivotal questions about their mission and future direction, learn new — and in some cases, surprising — information about the stakeholders they work with, and wrestle with how to align data use with their many organizational priorities.
To hear about the progress that the first Social Impact Collaborative cohort has made, come to our final showcase on Thursday, Sept. 28. And for more tips and tools on building evaluation capacity, follow us at @impactedphl.
*The Social Impact Collaborative is funded by the William Penn Foundation, but the opinions expressed in this guest post are those of the author and do not necessarily reflect the views of the William Penn Foundation.