Benefit-cost analyses (BCA), which quantify the benefits of interventions and often express them in dollars returned per dollar invested, are key drivers of early education policy. They're widely consulted when early education decisions are debated, yet few who rely on them understand how they are produced. A booklet just off the press from the National Research Council goes a long way toward explaining the issues.
Strengthening Benefit-Cost Analysis for Early Childhood Interventions is a summary of a March 2009 workshop where leading practitioners of the discipline, including NIEER Co-Director Steve Barnett, talked about the challenges of generating dependable BCAs and ways to strengthen them. Their discussions provide a window on the science — and art — of conducting BCAs. Here are some key issues:
• BCAs depend on rigorous program evaluations. Of course, the gold standard in rigor is the randomized controlled trial, a method that is not always available. Complicating matters is the fact that the control condition against which interventions are evaluated is seldom composed of kids who had no exposure to early childhood programs. These days, most kids in the general population attend a program of some type. These issues weren't much of a factor in the era of the Perry Preschool Program, which makes data from that era all the more valuable.
• Arriving at true program costs is a challenge. Budget figures gathered in advance of program implementation often don't portray true costs, and total costs may not be completely accounted for, particularly when programs involve matching or braided funding. Analysts often end up estimating cost using comparable market costs or deriving other measures such as "shadow prices." For example, in many developing economies observed wage rates overstate the true marginal cost of labor while observed interest rates understate the true cost of capital. Accurate estimation of cost is one of the most neglected aspects of this work. All too often, cost receives little attention and the cost estimate used has no scientific basis at all. Yet cost is just as important for arriving at a good decision as benefits.
• Assessing program value is arguably the area where researchers have the most work cut out for them. Some benefits, such as greater socio-emotional development or better health behaviors, are inherently difficult to put a value on and have probably been underestimated in the past. Their value often doesn't manifest for years, even decades. In lieu of very long-term studies, we must build on other research, linking pre-K to outcomes (grade retention, behavior problems, achievement, dropout) that other studies in turn link with later education, earnings and employment, mental and physical health, crime, and civic participation.
• Maintaining the integrity of study samples and having robust data available for long-term studies is a growing concern due to degradation of contact information and the growth of privacy concerns.
The presenters pointed to work done in other fields that has the potential to inform BCAs in early childhood education. In health economics, for instance, analysts measure the quality and length of lives saved by a health intervention in terms of a Quality Adjusted Life Year (QALY). Researchers now estimate the cost of detecting and medically treating lead poisoning at $1,300 per QALY gained. When they factored in the additional cost savings from remedial education not needed when lead poisoning is prevented, they found the intervention was a sound investment.
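The arithmetic behind a cost-per-QALY figure is simple division of net cost by health gained. A minimal sketch, using hypothetical program numbers (only the $1,300-per-QALY benchmark comes from the research described above):

```python
def cost_per_qaly(net_cost, qalys_gained):
    """Cost-effectiveness ratio: net program cost per QALY gained."""
    return net_cost / qalys_gained

# Hypothetical screening-and-treatment program:
# $650,000 in net costs producing 500 quality-adjusted life years.
ratio = cost_per_qaly(650_000, 500)
print(f"${ratio:,.0f} per QALY")  # prints "$1,300 per QALY"
```

Additional savings (such as avoided remedial education) would reduce `net_cost`, lowering the ratio and strengthening the investment case.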
Other recommendations the group discussed include greater standardization of the economic measures analysts apply over time, such as discount rates, and development of more standardized research procedures for the field.
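The discount rate matters because early childhood benefits arrive decades after costs are incurred, and the present value of a future benefit shrinks sharply as the rate rises. A minimal sketch of standard present-value discounting, with all dollar figures and rates hypothetical:

```python
def present_value(future_benefit, rate, years):
    """Discount a benefit realized `years` from now back to today's dollars."""
    return future_benefit / (1 + rate) ** years

# A hypothetical $10,000 benefit realized 20 years after the intervention,
# valued under two commonly debated discount rates:
for rate in (0.03, 0.07):
    pv = present_value(10_000, rate, 20)
    print(f"{rate:.0%} discount rate -> ${pv:,.0f} today")
```

Because the same study can look like a strong or weak investment depending solely on this one parameter, standardizing the rate (or reporting results under several rates) makes BCAs comparable across studies.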