Experimenting with Aid Information
Randomized controlled trials have garnered increasing attention in the development community, particularly through the high-profile work of economists Esther Duflo, Abhijit Banerjee, and their colleagues at the Abdul Latif Jameel Poverty Action Lab at the Massachusetts Institute of Technology. Randomized controlled trials give social scientists a research method that isolates causal mechanisms while minimizing risks to human subjects (Green and Gerber 2003). This summer, a group of 15 students from BYU will travel to Uganda to work as research assistants on a randomized controlled trial led by AidData principal investigators Michael Findley and Daniel Nielson, studying how to improve the transparency and effectiveness of the aid sector in Uganda. I will join the group as a representative of the College of William and Mary.
Of the roughly $150 billion in foreign aid committed to developing countries annually, studies suggest that only a relatively small portion reaches the intended beneficiaries; much of the rest is lost to corruption and bureaucratic inefficiency (Svensson 2000, Knack 2001). Even the money that does reach the right hands often ends up in unsustainable projects that fail to produce the intended results because of inefficiency, project abandonment, or other factors.
Breakdowns in the relationship between service providers and recipients contribute to the capture of foreign aid funds by corrupt officials and to bureaucratic inefficiency. One problem lies in information provision: the breakdown is driven not by a lack of information, but by a failure to centralize existing sources in a useful way. Studies suggest that individuals and organizations with access to useful information are far more likely to play an effective oversight role (Miller 2005, Gordon and Huber 2002). Oftentimes, the most useful information about where aid is needed and whether aid dollars are being spent effectively is held by citizens in developing countries, yet these citizens generally lack the tools and access needed to provide direct feedback on project status or impact.
Our project this summer will investigate the use of crowdsourcing to solve this information breakdown. Crowdsourcing refers to leveraging the wisdom of the crowd to answer a question or solve a problem that would traditionally be posed to a specific actor; in the business world, for example, a company might crowdsource the naming of a new product by polling consumers. AidData will partner with UNICEF and Ushahidi to run a randomized controlled trial in Uganda testing which incentive mechanisms (e.g., reimbursement, social networks, public praise, immediate feedback, and entry into a prize lottery) are most effective in recruiting Ugandan citizens to provide useful information on development needs and outcomes. The incentive mechanisms will be randomly assigned across districts in Uganda, so that results can be compared against control districts to isolate the effect of each treatment. The trial will provide insight into the causal mechanisms that drive improvement in the effectiveness of foreign aid provision, and I am excited to have the opportunity to work on the project.
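District-level random assignment of the kind described above can be sketched in a few lines of code. This is an illustrative sketch only: the arm labels are drawn from the incentive mechanisms listed in the text plus a control group, and the district names are placeholders, not the actual study design or district list.

```python
import random

# Treatment arms drawn from the mechanisms described above, plus a control.
# These labels are illustrative, not the study's actual arm definitions.
ARMS = [
    "reimbursement",
    "social_networks",
    "public_praise",
    "immediate_feedback",
    "lottery",
    "control",
]


def assign_districts(districts, arms, seed=0):
    """Randomly assign each district to one arm.

    Shuffles the district list with a fixed seed (for reproducibility),
    then deals districts to arms round-robin so group sizes stay balanced.
    Returns a dict mapping district -> arm.
    """
    rng = random.Random(seed)
    shuffled = list(districts)
    rng.shuffle(shuffled)
    return {d: arms[i % len(arms)] for i, d in enumerate(shuffled)}


# Placeholder district names; a real study would use the actual district list.
districts = [f"district_{i}" for i in range(30)]
assignment = assign_districts(districts, ARMS)
```

Because assignment is random at the district level, average differences in reporting behavior between each treatment arm and the control districts can be attributed to the incentive mechanism rather than to pre-existing district characteristics.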
Alena Stern ’12 is an AidData research assistant at the College of William and Mary.