Rating the 2016 project performance disclosure practices of 50 donors


December 19, 2016
Bradley C. Parks, Daniel Koslovsky

Despite the fact that most funders of overseas development projects are now signatories to major transparency initiatives like IATI and the Open Government Partnership, only two donors systematically publish standardized project performance ratings (see how the donors fared, below).

Such data can help international development organizations better understand which attributes correlate with higher- and lower-performing projects in their portfolios. Georeferenced project performance ratings can also reveal where investments are yielding relatively strong or weak returns within countries, as shown in the heat map provided below.

Figure 1: IEG Project Performance Ratings, 2001-2011

The above heat map displays IEG project performance ratings for all sub-nationally geo-referenced IDA and IBRD projects approved between 2001 and 2011.

However, to date, the global aid transparency movement has mostly focused on inputs; it has shed far less light on the outputs of development projects.

Some aid agencies and development banks do regularly conduct ex post project performance evaluations, but these studies are usually conducted for the benefit of internal audiences. Few are ever made public, and in the rare cases when disclosure does occur, it usually takes the form of individual project performance evaluations posted in "PDF ghettos" that are difficult to digitize and analyze. Project portfolio datasets, like the one that the World Bank's Independent Evaluation Group publishes each year, are exceptions to the general rule of nondisclosure.

Which donors produce project performance data, and who discloses what?

To get a better baseline understanding of the current state of play, we sorted 50 donors into four tiers of project performance data production and disclosure.

  • Full Disclosure: A donor that produces standardized project performance scores and publishes these project-level data in a format that is amenable to analysis by external users (e.g. datasets made available in Excel, XML, etc.).
  • Partial Disclosure: A donor that produces standardized project performance scores but does not publish these project-level data in a format that is amenable to analysis by external users (e.g. project data buried in individual PDFs, or only portfolio-level summary statistics published).
  • No Disclosure: A donor that produces standardized project performance scores but does not publish any of these data.
  • No Scores to Disclose: A donor that does not systematically generate standardized project performance scores.
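The four tiers above boil down to a decision rule over three attributes of a donor: whether it produces standardized scores, whether it publishes any of the resulting project-level data, and whether the published data are machine-readable. Here is a minimal sketch of that rule in Python; the function name and boolean attributes are illustrative assumptions, not part of our coding methodology.

```python
def disclosure_tier(produces_scores: bool,
                    publishes_data: bool,
                    machine_readable: bool) -> str:
    """Classify a donor into one of the four disclosure tiers described above.

    All three inputs are hypothetical attributes used for illustration:
    whether the donor produces standardized project performance scores,
    whether it publishes any of those project-level data, and whether
    the published data come in an analyzable format (e.g. Excel, XML)
    rather than being buried in individual PDFs.
    """
    if not produces_scores:
        return "No Scores to Disclose"   # Tier 4
    if not publishes_data:
        return "No Disclosure"           # Tier 3
    if not machine_readable:
        return "Partial Disclosure"      # Tier 2
    return "Full Disclosure"             # Tier 1

# Illustrative calls (attribute values are assumptions, not findings):
print(disclosure_tier(True, True, True))    # -> Full Disclosure
print(disclosure_tier(True, True, False))   # -> Partial Disclosure
```

Note that the third attribute only matters once a donor both produces scores and publishes some of the data, which mirrors how the tiers are nested in the definitions above.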

Bilateral and multilateral project performance data

We found that only two exceptional donors, the World Bank and the International Fund for Agricultural Development (IFAD), publish a comprehensive dataset of standardized project performance scores. Twenty-four donors make it into Tier 2 (Partial Disclosure). That leaves 24 donors (48% of all institutions) in Tiers 3 and 4: they either fail to disclose any of their standardized project performance scores or do not produce such scores at all.

Why so little disclosure?

Nearly fifteen years ago, Harvard economist Lant Pritchett offered an explanation for why we see a persistent and pervasive pattern of underinvestment in public sector evaluation. In an article with the memorable and provocative title “It Pays to Be Ignorant,” he argued that public sector organizations have weak incentives to invest in evaluations of their programs because doing so can erode support for these types of programs among political authorizers and overseers and potentially also generate unwanted pressure for course corrections.

Consider U.S. foreign assistance. Most U.S. federal agencies that are involved in the design and delivery of foreign assistance projects do not maintain standardized project performance evaluation rating systems. However, if they did, and if they regularly published the data contained in such systems, it is not difficult to imagine aid skeptics in the U.S. Congress using the availability of data on poorly-performing projects to justify cuts to the foreign aid budget. Indeed, the Millennium Challenge Corporation (MCC) has encountered a very similar challenge in its admirable attempt to implement a policy of publishing all of its rigorous program evaluations.

In short, even if aid agencies and development banks want to promote greater organizational accountability and learning by producing and publishing standardized project performance metrics, the costs and risks of investing in strong project performance evaluation systems seem to outweigh the benefits—at least in most cases.

Daniel Koslovsky is currently a Research Associate at NERA Consulting. He graduated from the College of William and Mary with a BA in Economics and Government in May 2016. He previously served as a Senior Research Assistant at AidData.

Brad Parks is the Executive Director of AidData at William & Mary. He leads a team of over 30 program evaluators, policy analysts, and media and communication professionals who work with governments and international organizations to improve the ways in which overseas investments are targeted, monitored, and evaluated. He is also a Research Professor at William & Mary’s Global Research Institute.