Bipartisan Evaluation: Reaching Across the Methodological Aisle

By Jackie Williams Kaye, Director of Research and Evaluation, Wellspring Advisors

Two weeks ago a taxi driver asked me where I was from, and I said “Washington, DC.” You probably think that is unremarkable (and a less than promising opening line). But since I moved to Washington eight months ago, this was the first time I had been asked that question and did not answer “New York.” I am now acknowledging my relationship with a city that most Americans associate with one thing: partisanship.

The evaluation field can hold its own in any discussion of partisanship. Our field mastered that approach at a time when Congress was actually still pretty good at engaging across the aisle. The evaluation partisans? The randomistas and the storytellers. The randomistas, when not busy implementing their experimental studies, have been vigilant in their efforts to seek out and eradicate anecdotes. The storytellers have spent a lot of energy avoiding the slippery slope they envision if one admits that an RCT could sometimes make a contribution. With the two groups in the same room, it has often not been pretty.

I want to introduce you to some bipartisan randomistas and storytellers. I want more of them.

Esther Duflo is Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics in the Department of Economics at MIT. Together with Abhijit Banerjee and Sendhil Mullainathan of Harvard University, she founded the Abdul Latif Jameel Poverty Action Lab in 2003. She and Banerjee recently co-authored Poor Economics. As noted in the book’s overview, “Through a careful analysis of a very rich body of evidence, including the hundreds of randomized control trials that Banerjee and Duflo’s lab has pioneered, they show why the poor, despite having the same desires and abilities as anyone else, end up with entirely different lives.”

Yes, “hundreds of RCTs” – she is quite the randomista. But let’s take a look at three studies she and colleagues implemented in Kenya, where rates of early childbearing and risky sexual behavior were alarmingly high among young adolescent girls. In particular, young women ages 15 to 19 were found to be five times more likely to be infected with HIV than young men in the same age cohort, apparently because the young women had sex with older men, who have markedly higher infection rates. The first experiment tested the health education curriculum offered in schools, which focuses on abstinence until marriage and does not discuss condoms. The second tested the provision of key information to the girls – the fact that older men are more likely to be infected with HIV than younger ones. The third tested an effort to help girls remain in school by paying for the mandatory school uniform that many could not afford.

The studies showed that the curriculum did not increase knowledge about AIDS or reduce pregnancies. The “sugar daddies” experiment found reduced pregnancy rates, mainly attributable to a two-thirds reduction in pregnancies involving an older male partner. And for every three girls able to stay in school because of the free uniform, two delayed their first pregnancy. But this effect was seen only in schools where teachers had not been trained to deliver the curriculum. In schools where teachers had been trained, there was no difference in pregnancies compared to girls in schools with no intervention at all.

Now, read what the authors tell us about these findings and then I’ll tell you the two points I want to highlight:

“Putting these different results together, a coherent story starts to emerge. Girls in Kenya know perfectly well that unprotected sex leads to pregnancy. But if they think the prospective father will feel obliged to take care of them once they give birth to his child, getting pregnant may not be such a bad thing after all. In fact, for the girls who cannot afford a uniform and cannot stay in school, having a child and starting a family of her own could be a really attractive option…This makes older men more attractive partners than younger men who cannot afford to get married (at least when the girls don’t know that they are more likely to have HIV). Uniforms reduce fertility by giving girls the ability to stay in school, and thus a reason not to be pregnant. But the sex education program, because it discourages extramarital sex and promotes marriage, focuses the girls on finding a husband (who more or less has to be a sugar daddy), undoing the effects of the uniforms.”

What do I see here? I see RCTs being used in what is essentially developmental evaluation, a purpose and methodology that storytellers typically would not think of as a match. More important, perhaps, the discussion of the findings (with the word “story” in the first sentence) highlights complexity – something storytellers cite as a big limitation of RCTs, given their perceived inability to contribute when issues are complex.
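For readers curious about the mechanics behind findings like these, here is a minimal sketch of the core calculation in an RCT: the difference in outcome rates between randomly assigned groups. Every number and name below is invented for illustration; this is not the authors’ data or analysis.

```python
# Minimal sketch of the arithmetic behind an RCT finding: comparing
# pregnancy rates between randomly assigned treatment and control groups.
# All counts below are INVENTED for illustration; this is not the Kenya
# studies' data or the authors' analysis code.
import math
from scipy.stats import norm

# Hypothetical follow-up counts for girls in schools that received the
# free-uniform program versus schools that did not.
treated_n, treated_pregnant = 1500, 120
control_n, control_pregnant = 1500, 180

p_treated = treated_pregnant / treated_n   # 0.08
p_control = control_pregnant / control_n   # 0.12

# Random assignment is what lets us read the raw difference as an effect.
effect = p_treated - p_control

# Simple two-proportion z-test; a real analysis of these studies would
# cluster standard errors by school, since schools were the unit randomized.
p_pool = (treated_pregnant + control_pregnant) / (treated_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
z = effect / se
p_value = 2 * norm.sf(abs(z))

print(f"pregnancy rate: treated {p_treated:.3f}, control {p_control:.3f}")
print(f"effect {effect:.3f}, z = {z:.2f}, p = {p_value:.4f}")
```

The estimand is nothing exotic – a difference in rates between randomly assigned groups. What the Kenya studies add is the craft of choosing which differences to estimate and, as the quote above shows, the willingness to weave the results into a story.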

What about the storytellers? An article in the Stanford Social Innovation Review by Suzie Boss (“Amplifying Local Voices”) describes GlobalGiving’s storytelling project and the work of a UK-based firm, Cognitive Edge. As noted in the article, “Listening to stories may seem simple, but turning this into a method for monitoring development work has meant drawing on fields as diverse as complexity theory, behavioral psychology, and technology.” The goals of the work include helping NGOs gather more systematic data and get to better results more quickly. For Cognitive Edge, the aim is not “gathering heartwarming stories for their emotional appeal” but analyzing large quantities of what they call micro-narratives. They have developed analysis software to reveal patterns as stories form clusters around particular topics.
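The article does not detail how that software works, but the basic idea – letting large numbers of short stories cluster around topics – can be sketched with standard text-mining tools. Everything below (the sample stories, the vectorizer, the cluster count) is invented for illustration and is not Cognitive Edge’s method:

```python
# Illustrative sketch of clustering "micro-narratives" by topic using
# standard text mining (TF-IDF + k-means). This is NOT Cognitive Edge's
# proprietary software; stories and parameters are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

stories = [
    "The new well means my daughters no longer walk two hours for water.",
    "Our clinic ran out of malaria medicine again this month.",
    "With the microloan I bought a second sewing machine for my shop.",
    "The borehole broke and the village went back to the river.",
    "The nurse had no supplies, so we were sent home without treatment.",
    "My tailoring business now employs two of my neighbors.",
]

# Turn each story into a weighted word-frequency vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(stories)

# Group the stories into a small number of topical clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, story in sorted(zip(labels, stories)):
    print(label, "|", story)
```

Even this crude approach hints at why the method scales: with thousands of micro-narratives, recurring themes surface that no one reading stories one at a time would reliably detect.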

Randomistas would say that one example of more rigorous storytelling is not sufficient. I can offer two others you could check out: See Change Evaluation has an approach they call Story Science. And Most Significant Change is an increasingly well-known evaluation approach that uses stories in a comprehensive, structured process to systematically identify change.

What these storytellers have in common is an approach that is grounded in rigor and designed to produce systematic data. “Rigor” and “systematic” are words partisan randomistas assume never apply to storytellers. And the earlier example showed RCTs being used for learning. Oh my. There’s a word – “learning” – that partisan storytellers never think of as a motivation for a randomista.

Hopefully, we are moving toward a time when the evaluation field will engage in more bipartisanship as we consider methods. Then we can put the bipartisan randomistas and storytellers to work on deficit reduction.

About Jackie Williams Kaye: Jackie Williams Kaye is the Director of Research and Evaluation at Wellspring Advisors. Wellspring coordinates grantmaking programs that advance social and economic justice. Jackie supports Wellspring staff and grantees in using evaluation and research to inform grantmaking strategies and to learn from them. Prior to joining Wellspring, Jackie spent 10 years integrating evaluation and learning into grantmaking work at The Edna McConnell Clark Foundation and then at The Atlantic Philanthropies. Jackie spent the first phase of her career as a researcher and program evaluator in the areas of public health, education, and other human services.
