Rachel Glennerster's insights on the randomization movement

How randomized controlled trials are transforming the world of international development

Chief Innovation Officer, International Rescue Committee
Director of Innovation Strategy, International Rescue Committee

Rachel Glennerster is in the business of subverting conventional wisdom. “You see lots of studies in the newspapers saying, ‘Eat chocolate…because it causes all these good things,’” she tells Grant and Ravi on this week’s episode of Displaced. But that statement is statistical nonsense: Chocolate consumption may be correlated with weight loss, traffic accidents, or even with winning the Nobel Prize -- but it causes none of those things. Similarly, we often hear folks in the aid sector trumpet that “investment in women’s education is the most effective investment in development,” Glennerster says. “Actually, all they’re doing is pulling from correlations. There’s pretty much no evidence behind that statement.

“That’s not a very reliable way to make policy or make decisions.”

Wielding econometric analysis, Glennerster, the chief economist at the UK’s Department for International Development, is on a mission to ensure that development programs are designed based on hard evidence of what actually works. Before she joined DfID, Glennerster headed the Abdul Latif Jameel Poverty Action Lab, an MIT-based research center focused on promoting evidence-based solutions for ending poverty.

Rachel Glennerster

Since it was founded in 2003, J-PAL, as it’s known, has been at the forefront of bringing research tools and methodologies for impact analysis from the field of economics to the world of international development. The group’s initial premise was simple: Founders Abhijit Banerjee, Esther Duflo, and Sendhil Mullainathan argued that aid funding should flow towards the most effective projects, and that we can only know which programs are doing the most good with rigorous study. (More generally, this argument is one of the cornerstones of the effective altruism movement.)

Perhaps the most talked-about idea out of groups like J-PAL is that, to the extent possible, we should evaluate the impact of development projects using a research tool called a randomized controlled trial, an experiment in which participants are randomly assigned to treatment and control groups. In the field of economics, randomized controlled trials (RCTs) have helped shake the foundations of previously unquestionable economic wisdom -- in particular, the myth that humans are rational actors who make smart choices to maximize their economic well-being.
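The logic behind randomization can be sketched in a few lines of code. The simulation below is a hypothetical illustration (not from the episode): each simulated person has an unobserved trait, here labeled "wealth," that raises the outcome on its own and also makes enrollment in a program more likely. Comparing enrollees with non-enrollees then bundles the program's effect with the wealth effect, while a coin-flip assignment breaks that link and recovers the true effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 2.0  # the program's real impact on the outcome

def outcome(wealth, treated):
    """Outcome depends on wealth, on treatment, and on noise."""
    noise = random.gauss(0, 1)
    return 5.0 + 3.0 * wealth + (TRUE_EFFECT if treated else 0.0) + noise

people = [random.random() for _ in range(100_000)]  # wealth in [0, 1)

# Naive comparison: wealthier people opt in more often, so the raw gap
# between participants and non-participants is confounded by wealth.
self_selected = [(w, random.random() < w) for w in people]
naive_gap = (
    statistics.mean(outcome(w, True) for w, t in self_selected if t)
    - statistics.mean(outcome(w, False) for w, t in self_selected if not t)
)

# RCT: a coin flip decides who is treated, independent of wealth,
# so the two groups are comparable on average.
assigned = [(w, random.random() < 0.5) for w in people]
rct_gap = (
    statistics.mean(outcome(w, True) for w, t in assigned if t)
    - statistics.mean(outcome(w, False) for w, t in assigned if not t)
)

print(f"true effect:    {TRUE_EFFECT:.2f}")
print(f"naive estimate: {naive_gap:.2f}")  # inflated by the wealth confound
print(f"RCT estimate:   {rct_gap:.2f}")    # close to the true effect
```

The naive estimate overstates the effect (it also picks up the wealth gradient), while the randomized comparison lands near the true value -- the same reason the chocolate headlines above don't survive a controlled experiment.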

It’s a combination of good descriptive work, good theory, and good impact evaluations -- that’s when you get breakthroughs: When you get all of those three things working together and telling you a clear story.

Insights from randomized trials, Glennerster points out, have helped us understand that people “massively underinvest” in their long-term well-being in favor of short-term gains. We also understand that people are sensitive to both price and convenience: Even when purified water costs next to nothing, usage rates of purification mechanisms like chlorine tablets, which require energy and attention, are low. Michael Kremer of Harvard University and his colleagues found, through a series of randomized controlled trials conducted in Kenya, that providing chlorine as a concentrated liquid in prominently displayed dispensers at local water sources dramatically increased the rate of disinfection. Evidence from RCTs has helped deflate enthusiasm over hyped-up aid trends (the microfinance bubble, for example) and elevate new solutions to thorny problems, including how to boost school attendance, encourage people to obtain preventive health care, and improve immunization coverage.

But RCTs and their boosters, affectionately called “randomistas,” have also run into their fair share of criticism. Detractors say that their results cannot be generalized to address similar problems outside the region of the experiment, that they focus on micro-level interventions without considering high-level dynamics, and that RCTs themselves have become a hyperinflated aid trend, deployed willy-nilly with little interrogation of whether another evaluation tool may be more appropriate.

Glennerster, Grant, and Ravi talk about whether RCTs can only be used to evaluate small interventions, the implications of aid programs that promise more than they can deliver, and what the evidence says about how to improve education systems. This episode is deliciously wonky, with real insight into how development programs are designed and evaluated.

Related Reading

When do innovation and evidence change lives? -- Rachel Glennerster

The Nudgy State -- Foreign Policy, Joshua E. Keating

The Poverty Lab -- The New Yorker, Ian Parker

Understanding and misunderstanding randomized controlled trials -- Angus Deaton and Nancy Cartwright

Improving Education in the Developing World: What Have We Learned from Randomized Evaluations? -- Annual Review of Economics, Michael Kremer and Alaka Holla

Protecting Insecure Farmers with Lorenzo Casaburi -- VoxDev

Opinions and views expressed by guests are their own and do not reflect those of the International Rescue Committee.