How ImpactED’s nonprofit evaluation program changed for its second cohort

Social Impact Collaborative’s first cohort.

October 2, 2018
There is such a thing as tracking too much data. What matters more is what you do with what you can get.

The second cohort of the ImpactED-led Social Impact Collaborative (SIC) for William Penn Foundation grantees kicked off in mid-September. The program trains nonprofits of varying sizes and missions in evaluation strategy, data collection and analysis, and better data use.

Similar to Pew Charitable Trusts’ new evaluation capacity-building initiative to help grantees assess hard-to-measure outcomes (also led by ImpactED), SIC participants receive both group instruction and individualized coaching from ImpactED and McClanahan Associates Inc. Over several months, they learn which client feedback, programmatic outcomes and other data to keep top of mind and use for action, and which to forget about.

Ten new organizations are participating in its second edition:

  • Community Design Collaborative
  • Cooper’s Ferry Partnership
  • Families Forward Philadelphia
  • Fleisher Art Memorial
  • Free Library of Philadelphia
  • Independence Seaport Museum
  • People’s Emergency Center
  • Philadelphia Parks & Recreation
  • Sustainable Business Network
  • The Village of Arts and Humanities

Last summer, ImpactED founder Claire Robertson-Kraft wrote about learnings from the first cohort and why nonprofit evaluation should be purposeful, feasible and sustainable:

“Without question, building evaluation capacity is challenging work, but it can be equally rewarding. Over the last year, we’ve watched the 10 organizations we’ve worked with in the Social Impact Collaborative explore pivotal questions about their mission and future direction, learn new — and in some cases, surprising — information about the stakeholders they work with, and wrestle with how to align data use with their many organizational priorities.”

First-year successes included the purposeful mix of organization types (“We were pleasantly surprised that there was a lot more commonality than difference,” Robertson-Kraft said this week) and a 55 percent increase in participants reporting that their organizations now use logic models. (Read Generocity’s recap of SIC’s inaugural showcase.)

But what’s different for SIC this year? What did it learn and then improve about its own processes?

For one, Robertson-Kraft said, her team learned that while individual participants reported considerable growth in their knowledge, they struggled to implement lessons from the cohort across their entire organizations. That became especially problematic when participants left their jobs, taking the knowledge with them.

Robertson-Kraft said SIC tackled these issues in three ways:

  • Increasing team sizes from two members to three or four, and mandating that one member be a senior-level staffer
  • Expanding the program from one year to two, plus funding for the inaugural cohort to receive an extra year of support “as needed”
  • Integrating discussions about sustainability into every session, instead of a single remote session

The second year of funding, especially, is meant to help nonprofits scale their first-year lessons into organization-wide practice, with “more intentional support” and personal coaching, Robertson-Kraft said.

William Penn Program Director Elliot Weinbaum said SIC “grew out of feedback we received from our grantees” in a 2014 survey: While 96 percent said using data was a “critical component” of their work, fewer were collecting data regularly, and only about half had a staffer who was “sufficiently trained” to collect and analyze data.

Funding for both the second year of the inaugural cohort and the first two years of the current cohort totaled about $540,000, according to Weinbaum. (The program is free for participants.) No William Penn staffers are involved in the trainings: “We want these to be very candid conversations between the organizations and their coach or consultant,” he said. “We just look at the feedback data that is collected anonymously from participants.”

That other evaluation capacity-building initiative for Pew grantees? Very much modeled on this one, according to Weinbaum. Scattergood Foundation and Barra Foundation’s Building Evaluation Capacity Initiative is also similar, though not exclusive to their grantees or led by ImpactED.

But Weinbaum said the programs aren’t in competition. Indeed, they share many grantees, and William Penn hosted a joint information session with Scattergood to help nonprofit leaders discern the best program for them.

“We’re working quite intentionally across the funders to create a more obvious ecosystem of support,” he said.
