Re-conceptualizing How We Evaluate Aid: Be Wary of Managing by the Numbers

Aid agencies that are given a wider scope for independent action perform better than those that are tightly supervised.

April 18, 2014
Dan Honig

In the aid community, the recent push has been towards management by measurement. There is a growing belief that a focus on measurable results, such as the number of vaccines delivered or mortality rates, is the best way to make aid function more effectively. Where accurate measurement is possible, this is true. However, most aid does not aim at targets that can be measured with that accuracy, or frequently enough, for management by numbers to work. For most types of aid, the answer is not to manage by the numbers.


My research, drawing on the world’s largest database of development project outcomes (covering 14,000 projects over 40 years) as well as case studies, finds that aid agencies given a wider scope for independent action perform better than those that are tightly supervised. More agency autonomy translates into more empowered in-country personnel. The less often individuals on the ground must defend their decisions to distant supervisors, the more creative and less conservative they will be, taking smart risks rather than acting to ensure that they never make a mistake.

What kind of measurement is appropriate when the purpose of an activity is not to deliver concrete, measurable objects but instead skills and expertise? One common donor agency measure for this kind of activity is to count the number of individuals trained in some task. This count is then reported as evidence of the activity’s success. The problem is that gaining expertise is not as simple as attending a training session. Counting trainees gives those delivering the training, and those supervising them, an incentive to maximize the number of people trained, regardless of whether they are the appropriate individuals or are likely to use the skills in the future. In these situations, the push for immediate performance data hinders rather than helps the accomplishment of laudable objectives.

The data suggest that the value of more independent agencies shows most clearly when things get messy: in more fragile states and in projects with hard-to-measure outcomes, flexible agencies are more adept at doing their jobs. This finding is consistent with other research. For example, microfinance loan officers who are more closely supervised and less flexible fail to incorporate qualitative information and so pass over some profitable lending opportunities, because they rely only on the numbers and ignore real (but hard-to-justify) judgments about the character of borrowers and the likelihood of repayment. The lesson? Intuition and on-the-ground expertise matter.

The focus on delivery has the potential to transform the lives of millions while costing us nothing. As Jeffrey Sachs notes, “it is possible to use our reason, management know-how, technology, and learning by doing to design highly effective aid programs that save lives and promote development.” We need to use our management know-how to change the way we do business in order to ensure that we maximize the impact of our foreign aid dollars.

Where measurement works, it should be used to deliver vaccines better or build health clinics more efficiently. The problem is that most of development, not only work on governance and the rule of law but also work on health systems rather than health clinics, doesn’t look like that. For those goals, we need to give up control. In these cases, an over-emphasis on measuring the immediate results of aid encourages everyone to pass the test rather than learn the lesson, to meet indicators that do not translate into improved lives and societies.

In order to change the way business is done, we have to be bold enough to accept that development is messy. Managing by the numbers will work in some areas, such as vaccine delivery, and we should do more of it there. But where it cannot work, we need to stop forcing the square peg of measurement and control into a round hole.

Dan Honig is a PhD Candidate at Harvard’s Kennedy School and has seen aid from the perspective of both donors and recipients. Prior to beginning his studies he was special assistant, then advisor on aid, to successive Liberian Ministers of Finance, and he remains involved in aid design in a number of developing countries. The research to which he refers can be found at danhonig.info.
