A few months ago, I wrote a blog post titled "Overcoming the Seductive Logic of Randomized Controlled Trials (RCTs)." In it, I suggested that while the logic of why we need RCTs in the social sector is highly seductive, it rests on multiple assumptions that don't necessarily hold up under scrutiny. For instance, it assumes that stand-alone, randomizable "programs" lead to powerful social outcomes, when in practice it is more likely a system of interventions working in concert that does so. In addition, the role of context is almost always undervalued in an RCT model. Since writing that post, I've had several folks reach out with words of appreciation, as well as one recurring question: "What do you suggest as the alternative to RCTs?"
This question becomes even more salient when one is dealing with the public/government sector. At the recent Next Generation Evaluation conference, Lee Schorr of the Center for the Study of Social Policy suggested that the conference participants, mostly progressive foundations, nonprofits, and evaluators, were "living in a bubble," as they did not have to deal on a daily basis with the increasing demands for a certain type of evidence in order to receive public funding. The public sector, in some ways overcompensating for decades of poor accountability, seems to have taken a "one size fits all" approach, applying RCTs as the gold standard of evidence for every type of initiative.
Here, I offer three ways for the public sector to move away from a reliance on RCTs as the only way to measure effectiveness and thus direct funding. None of them is a "silver bullet" replacement, and some might seem plain "duh," but each has something unique to offer:
- Value logical ways, not just statistical ways, to show impact: The New York Juvenile Justice System saw significant improvements in outcomes for youth as a result of a systemic "Collective Impact" effort. Such an effort is, by definition, systemic, and hence not amenable to an RCT-type approach (there is no way to randomize "treatment" when the whole system is in the room). However, the initiative has shown remarkable results in a few short years, with the number of youth in state custody declining 45% between December 2010 and June 2013. While it is impossible to show statistically, beyond doubt, that the Collective Impact effort led to these outcomes, it is possible to do so logically. A variety of intermediate outcomes that were realized (new and stronger relationships across the system, commitment to data-driven decision making, etc.), taken in conjunction with research linking those intermediate outcomes to long-term impact, makes a strong case for the initiative.
- Encourage context-sensitive mixed-methods evaluations: Over the past five decades, as the evaluation profession has grown tremendously in size and sophistication, it has come to emphasize "mixed methods" approaches, using a combination of quantitative and qualitative evidence applied in context. The American Evaluation Association, the leading industry body in the field of evaluation, lays out a set of guiding principles for evaluators that emphasize the importance of context in choosing appropriate evaluation methods and approaches. The Association also released a statement in 2003, in response to the U.S. Department of Education's proposed priority for using "scientifically-based methods," that argued: "Actual practice and many published examples demonstrate that alternative and mixed methods are rigorous and scientific. To discourage a repertoire of methods would force evaluators backward." There are plenty of qualified evaluators who can bring a context-sensitive mixed-methods approach to bear, if only the public sector changes its incentives to encourage such evaluations.
- Establish a "merit system" based on good process indicators: This recommendation, on the surface, might seem to fly in the face of every book written in the last two decades on the need to move to an "outcome-based" model. However, as manufacturing and other fields have realized, and as C. Jackson Grayson, executive chairman and founder of the American Productivity & Quality Center, reminds us, it is good process that leads to good outcomes. And in a field where outcomes are notoriously hard to measure, a merit system based on good process indicators might do us a world of good. Consider the ISO 9000 family of certifications, which is largely a process-based system. To be certified, an organization needs to show, for instance, that it adheres to safety norms, that it has a system in place to monitor quality, and that it ensures product specifications are followed. Similarly, nonprofit organizations could have a system to demonstrate that they build their programs on the best available research, use data to make decisions, and engage in ongoing improvement and learning.
Lastly, I will repeat a disclaimer I made in my previous post about RCTs: by no means is this an invitation to "throw the baby out with the bathwater" when it comes to useful techniques of randomization, experimentation, and "A/B testing." They are tools, and as with most tools, they are applicable in a limited set of circumstances. The more we refrain from dogma that elevates one technique over another irrespective of context, the more we can play the role we all want to play: being responsible stewards of social and public resources.