
What Goes Into a Program Methodology?

Selecting and testing an approach to estimating the returns to education

What is the value of a college education? In estimating the impact of postsecondary scholarship programs, we searched the research literature for an answer. Two options emerged. We could apply estimates from studies that used causal identification techniques, such as those by Zimmerman (2014) and Belfield and Bailey (2017). Or, we could adopt the method of studies such as those by Levin et al. (2007) and Carnevale et al. (2011) and take the difference in median earnings between graduates and non-graduates, stratified on demographic characteristics. 

In this blog post, we discuss our choice of strategy and how we checked the validity of that strategy. 

Both methods have their limitations. The latter option — taking the difference in earnings — fails to isolate the causal impact of a college education. Earnings and degree receipt may both correlate with other factors (like motivation) that are not controlled for, so the estimated effect may be too high or too low. Additionally, our population of interest is scholarship recipients who may not have graduated without the nonprofit’s scholarship. That population could differ from the typical degree holder captured in national earnings data; in particular, graduating is likely more difficult for these recipients.

On the other hand, estimates drawn from rigorous but narrow studies are difficult to apply broadly across contexts. The populations and methods of these studies differ from one another, so stitching together independently generated estimates may introduce bias. For example, suppose the study we used to estimate the returns to a bachelor’s degree drew from a population with uncharacteristically low earnings, while the study we used for a master’s degree drew from a population with uncharacteristically high earnings. In that case, we would unfairly “punish” programs that support bachelor’s students and “reward” programs for master’s degrees. Another issue is that a study’s results may hold only in the context of that study. Take Zimmerman (2014), which analyzed the subset of applicants to Florida International University just below and above its grade cutoff; by comparing their future earnings, Zimmerman estimated the returns to a bachelor’s degree. Contrast this with Belfield and Bailey (2017), who used individual fixed effects to estimate the returns to an associate’s degree for a broad array of students. These issues — creating a patchwork of estimates from studies that may have limited external validity — mean that the advantage of the “causal studies” method over the alternative is unclear.

To maintain simplicity and keep estimation methods consistent across degree type and demographic, we opt for the “difference in earnings” approach. We use Current Population Survey data from the U.S. Census Bureau to compare lifetime earnings across education levels for individuals of a given race and gender. For some subpopulations, we further adjust this differential. For instance, for scholarship recipients from low-income families, we adjust the effect downwards. This is based on Bartik and Hershbein’s (2018) finding that a degree increases future earnings by more if a student’s parental income is higher.
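To make the structure of this calculation concrete, here is a minimal Python sketch of the “difference in earnings” step. The function name, data structure, dollar figures, and adjustment factor below are all illustrative placeholders, not our production code or CPS tabulations; the point is simply the stratified lookup and the optional downward adjustment.

```python
# Illustrative sketch of the "difference in earnings" approach.
# All names and dollar figures are placeholders; in practice the lookup
# would be built from Current Population Survey median earnings by
# education level, race, and gender.

MEDIAN_EARNINGS = {
    # (education level, race, gender) -> median annual earnings (placeholder values)
    ("high_school", "black", "female"): 28_000,
    ("bachelors",   "black", "female"): 48_000,
}

def earnings_differential(degree, baseline, race, gender, adjustment=1.0):
    """Earnings gap attributed to completing `degree` rather than stopping at
    `baseline`, for a given race and gender. `adjustment` scales the gap down
    for subpopulations such as recipients from low-income families (motivated
    by Bartik and Hershbein 2018); the specific factor here is an assumption."""
    gap = (MEDIAN_EARNINGS[(degree, race, gender)]
           - MEDIAN_EARNINGS[(baseline, race, gender)])
    return gap * adjustment

print(earnings_differential("bachelors", "high_school", "black", "female", adjustment=0.9))
```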

Here’s an example of our methods in action. Consider a low-income student who receives a $1,000 need-based scholarship to pursue a bachelor’s degree. Applying results from Denning et al. (2019), we estimate that the award causes a 6.6 percentage point boost in the student’s probability of graduation.1 If she graduates, she’ll earn about as much as the average low-income bachelor’s holder, $53,162 a year; if not, about as much as the average low-income high school graduate, $30,731 a year. Under the “difference in earnings” method, we attribute the difference between the two ($22,431) to graduating from college. The expected impact of the scholarship on the student’s earnings is therefore the 6.6 percentage point boost in her likelihood of graduation multiplied by the $22,431 boost in earnings if she graduates — or $1,480 in additional annual earnings.
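As a quick check on the arithmetic, here is the worked example expressed in code. All figures are the ones quoted above; the variable names are our own and this adds no new analysis.

```python
# Worked example from the text: a $1,000 need-based scholarship for a
# low-income student pursuing a bachelor's degree.
graduation_boost = 0.066        # +6.6 pp probability of graduating (Denning et al. 2019, scaled)
earnings_bachelors = 53_162     # average annual earnings, low-income bachelor's holder
earnings_high_school = 30_731   # average annual earnings, low-income high school graduate

differential = earnings_bachelors - earnings_high_school   # $22,431
expected_impact = graduation_boost * differential          # ~$1,480 per year

print(f"Earnings differential: ${differential:,}")
print(f"Expected impact on annual earnings: ${expected_impact:,.0f}")
```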

As a validity check, we tested our result against Denning et al.’s own estimate. Using a regression discontinuity design, the authors tracked the graduation rates and earnings of students who did and did not receive an extra $500 in Pell funding. They estimated that a $1,000 boost in Pell funding yields additional annual earnings of $1,647 for a low-income student — not too dissimilar from our estimate of $1,480. 

Based on the result of this check, and because there is precedent in the education literature for the “difference in earnings” method, we believe that comparing median earnings is a reasonable way to estimate the returns to education. That said, this remains an ongoing debate in the returns-to-education literature, and we warmly welcome any feedback and suggestions you might have for how we could improve our methodology.

1 Denning et al. find that $500 in additional Pell funding leads to a 3.3 percentage point boost in graduation. We multiply 3.3 by two to find the estimated graduation effect for $1,000 in additional funding.