Overview of Methodology
ImpactMatters estimates the impact of nonprofit programs (a.k.a. interventions). Example programs include emergency food assistance, academic tutoring, job training or vaccines. We define impact as the change in mission-driven outcomes net of what would have happened in the absence of the intervention (“counterfactual success”) — relative to the cost to achieve that change. We analyze impact in order to determine whether a program is cost-effective — in other words, whether its benefits outweigh its costs. A cost-effective program makes good use of resources to improve the lives of the people it serves and earns 4 or 5 stars in our rating system. We discuss our rating methodology in detail here.
Impact analysis comprises five steps:
Identify key outcomes by which to measure the program’s success toward completing its philanthropic mission.
Identify, for each outcome under review, appropriate metrics by which to measure impact.
Estimate, following steps 1 and 2, the outcomes attributable to the philanthropic program. The estimates follow the dictates of rigorous social science, including netting out counterfactual outcomes (outcomes that would have occurred even in the absence of the program) and third-party effects (gains or losses imposed on individuals who are not participants in the program).
Estimate the costs of the philanthropic program, including costs to the program, partners and beneficiaries.
Divide costs by outcomes to calculate impact, or cost per unit of outcome.
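The arithmetic of steps 3 through 5 can be sketched in a few lines. All figures and variable names below are hypothetical, chosen only to illustrate the netting-out of counterfactual success; they do not come from any actual rating.

```python
# Hypothetical figures for illustration only.
program_cost = 250_000.0        # step 4: costs to the program, partners and beneficiaries

participants_succeeding = 400   # observed successes among program participants
counterfactual_successes = 150  # estimated successes that would have occurred anyway (step 3)

# Outcomes attributable to the program: observed success net of counterfactual success.
attributable_outcomes = participants_succeeding - counterfactual_successes

# Step 5: impact expressed as cost per unit of outcome.
cost_per_outcome = program_cost / attributable_outcomes
print(f"${cost_per_outcome:,.0f} per additional success")
```

With these illustrative numbers, 250 successes are attributable to the program, so the impact estimate is $1,000 per additional success. Note that skipping the counterfactual subtraction would understate the cost per outcome (to $625 here), which is why step 3 insists on it.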
Impact analysis provides an estimate of mission success using available data. We do not generate new data as part of the analysis. As such, our analysis is no substitute for formal program evaluations (for example, randomized controlled trials).
Impact is defined as the change in measured outcomes that is attributable to the program under review, compared to the costs to achieve those outcomes. Attributing changes in outcomes to a programmatic intervention requires estimates of counterfactual success: how much success program participants would have achieved even if, contrary to fact, they had not participated in the program under review.
This document describes the standards against which an impact analysis is conducted, describing methods of estimating outcomes and cost.
Nonprofits are divided into programmatic categories, like H.I.V. prevention, high school graduation and housing for the homeless. Impact can be estimated for any mission-driven outcome, whatever the category, assuming the mission is philanthropic. Although the exact steps by which we measure impact vary across program category, program, location and available data, the fundamental principles by which we measure impact apply to any intervention that delivers a direct service to participants.
Because missions differ, we recognize that nonprofits operating within the same program category might focus on different outcomes. (A nonprofit that addresses poverty would look to high schools to raise graduation rates; a religion-based nonprofit might focus instead on ethical maturity.) To achieve comparability, ImpactMatters estimates impact on some outcomes and not others for programs sharing the same program category. This runs the risk of introducing error into our calculations.
Our methodology can accommodate interventions that generate multiple philanthropic outcomes. We do so by estimating the impact of each outcome separately. However, we do not estimate an aggregated impact of an intervention that generates multiple outcomes. Take the case of a nonprofit whose intervention is designed to raise future earnings of participants and to improve their lifelong health status. We estimate the impact of raising future incomes (the nonprofit spends $x to raise the future incomes of participants by an average of $y per year); we also estimate the impact of improving health (the nonprofit spends $z to extend the lives of participants by an average of one year). But we cannot combine these two estimates into a single, aggregate measure of success. To do that would require imposing weights — making a judgment about the relative value of higher incomes vs. better health status. We are not in a position to impose such weights on a nonprofit under our review.
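The two-outcome example above can be made concrete with a short sketch. The figures are hypothetical, standing in for the $x, $y and $z of the text; the point is that each outcome yields its own cost-per-outcome estimate and no combined score is produced.

```python
# Hypothetical nonprofit serving 200 participants; all figures are illustrative.
program_cost = 500_000.0

# Outcome 1: future earnings raised, net of counterfactual earnings.
participants = 200
earnings_gain_per_participant = 1_250.0  # average annual $ gain attributable to the program
cost_per_dollar_of_earnings = program_cost / (participants * earnings_gain_per_participant)

# Outcome 2: life-years gained, net of counterfactual health outcomes.
life_years_gained = 200.0  # one additional year per participant on average
cost_per_life_year = program_cost / life_years_gained

# Each outcome gets a separate impact estimate...
print(f"${cost_per_dollar_of_earnings:.2f} per $1 of annual earnings gained")
print(f"${cost_per_life_year:,.0f} per life-year gained")

# ...but deliberately no aggregate: combining the two would require weights
# expressing the relative value of income vs. health, which the methodology
# declines to impose on the nonprofits it reviews.
```

Here the same $500,000 of spending reads as $2.00 per dollar of annual earnings gained and $2,500 per life-year gained; the two numbers are reported side by side rather than merged.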
Finally, there are questions that this methodology does not purport to answer:
What is a nonprofit’s organizational effectiveness (financial controls, fiscal systems, succession plans, other)?
What are the best methods for evaluating a program? We tap, first, data that the nonprofit under review has already collected and, second, findings from the research literature about similar interventions.
As stated above, our methodology applies to nonprofits delivering a good or service to participants. But it does not readily apply to programs built around advocacy (for example, nonprofits seeking to change public policy).