We are creating a new nonprofit rating system that rewards results, rather than punishing overhead. We welcome your feedback.
For more information, see our general FAQ.
Cost-effectiveness is a measure of the impact of a nonprofit’s program relative to the cost to run that program.
Impact is the measure of the change caused by a nonprofit’s program, net of counterfactual change (see more below).
A cost-effective program makes good use of resources — compared to possible alternative uses — to improve the lives of the people it serves. ImpactMatters awards four- and five-star ratings to nonprofits that are highly cost-effective.
Cost-effectiveness is often viewed as anathema to the nonprofit sector. We see it as the opposite. Take a simple thought exercise: A program has a limited budget of $100,000 to improve literacy in a community. It can choose between two approaches to do so: one that can boost literacy by a grade level for 100 students and a second that can also boost literacy by a grade level but for 200 students. All else equal, a sensible program administrator would choose the second, since it reaches twice as many students. This is a cost-effectiveness decision. We have limited resources and unlimited need. Cost-effectiveness is a decision tool that makes those resources go further — helping more people in more ways.
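The thought exercise above reduces to a cost-per-outcome calculation. Here is a minimal sketch using only the hypothetical figures from the example (the names and structure are our own illustration, not part of any actual rating model):

```python
# Hypothetical figures from the thought exercise: a $100,000 literacy budget
# and two approaches, each raising literacy by one grade level.
budget = 100_000
students_reached = {"approach_a": 100, "approach_b": 200}

# Cost-effectiveness here reduces to dollars spent per student improved.
cost_per_student = {name: budget / n for name, n in students_reached.items()}
# approach_a: $1,000 per student; approach_b: $500 per student

# The sensible administrator picks the approach with the lowest cost per outcome.
best = min(cost_per_student, key=cost_per_student.get)  # "approach_b"
```

The same comparison scales to any number of candidate approaches, which is the point of the decision tool: rank uses of a fixed budget by cost per unit of outcome.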
We rate “service delivery” nonprofits, i.e., nonprofits that deliver a program directly to people to achieve a specific health, anti-poverty, education or similar outcome. We do not rate two types of nonprofits: (1) advocacy and research nonprofits; and (2) “donor use” nonprofits.
Advocacy and research nonprofits. Nonprofits that seek systems change through advocacy, research or similar activities may be highly effective, but they are much harder to measure. The link between the nonprofit’s work and the final outcome is longer, and often there are alternate explanations for why that particular piece of legislation passed or those minds changed. We do not (yet) have a good method for consistently estimating the impact of these programs, and so we do not issue ratings for them.
"Donor use" nonprofits. For some nonprofits, the donor herself is a user of the charity, e.g., religious organizations, community associations and most arts and culture institutions like museums. We neither encourage nor discourage donating to such nonprofits; we just do not rate them. With these “donor use” nonprofits, the donor decision to donate is largely driven by her personal experience with the charity. As such, we do not see the same value-added from applying our methods to such organizations.
Nonprofits can get funding from individual contributions, foundation and government grants, investment income and other sources. Because the audience for our ratings is donors, we only rate nonprofits that receive at least some funding from individuals or foundations. A nonprofit that is less reliant on donor dollars is neither worse nor better; just less relevant to donors seeking guidance and confidence when giving.
We give each nonprofit the opportunity to review our rating prior to publication. Nonprofits can correct our numbers, add a public comment, upload images and stories to enrich their profile and leave general feedback. We will contact your executive director and chief communications staffer.
To increase accuracy and comparability, and to maximize efficiency, we rate one type of intervention (soup kitchens, etc.) at a time. As a result, we can't respond to individual requests. However, we encourage you to leave us a note here; if there is sufficient interest in ratings for your intervention type, we will prioritize it.
We apply a simple test: Does the nonprofit spend at least 65 percent of its total costs on programs? If a nonprofit passes this and other basic tests for financial health, it earns at least two stars and can be considered for more stars based on impact transparency and cost-effectiveness.
You may appeal your rating or share additional information with us. We do not offer the option to opt out of a public rating. However, if our analysts conclude we are misrepresenting your impact, we will withhold the rating.
Yes. Today, nonprofits have few incentives to publicly share high-quality impact data. Our ratings meet nonprofits where they are: for example, we ask soup kitchens to provide data on meals served. In the future, we will enhance our ratings to better capture and communicate nonprofits' impact, moving, for example, from meals served to nutritious meals served.
To understand the impact of a program, we must ask the counterfactual question: What would have happened to beneficiaries if the program had not, counter to fact, been there to serve them? We then measure the difference between what actually happened and what we think would have happened if the program had not been around. That difference is the impact of the program. Just looking at what actually happened is not sufficient for understanding impact because many factors besides the program could affect how beneficiaries fare over time. For example, an economic boom affects both beneficiaries of a job training program and non-beneficiaries. An observed increase in employment among beneficiaries is insufficient evidence to conclude that the program — and not the economic boom or other factors — caused an increase in employment.
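In symbols, impact is the observed outcome minus the counterfactual outcome. A numeric sketch of the job-training example, using entirely made-up figures (none of these numbers come from the text):

```python
# Made-up figures for a job-training program during an economic boom.
observed_employment = 0.60        # employment rate among beneficiaries afterward
counterfactual_employment = 0.45  # estimated rate had the program not existed

# Impact is the difference between what happened and what would have
# happened without the program.
impact = observed_employment - counterfactual_employment  # ~0.15 (15 points)

# Naively comparing against the pre-program baseline credits the economic
# boom to the program and overstates its impact.
baseline_employment = 0.40
naive_change = observed_employment - baseline_employment  # ~0.20 (20 points)
```

The gap between `naive_change` and `impact` is exactly the counterfactual change that the next paragraph warns against ignoring.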
Most communication about impact today inadvertently ignores the counterfactual. But ignoring the counterfactual, in effect, assumes the counterfactual to be zero. In other words, it assumes that in the absence of a program, the outcomes of beneficiaries would not have changed at all. This may well be the case for some programs in certain settings. But for many others, it would be extreme and erroneous to assume, for instance, that without a program, no children would have graduated from high school.