Do strong monitoring and evaluation systems and high levels of staff supervision make World Bank projects more effective?
Thursday, January 12, 2012
Several weeks ago we released a short post announcing the release of a fresh dataset from the World Bank’s Independent Evaluation Group (IEG), containing assessments of almost 10,000 World Bank development projects. In that post, we examined some basic descriptive statistics, breaking down project success by region and by year. Here we will delve a bit deeper and explore the possible linkages between Quality of Monitoring and Evaluation (QME), the Quality of Project Supervision (QPS), and project success. “QPS” measures the intensity of staff oversight during project implementation, while “QME” assesses the credibility of the project's performance indicators and data.
To assess project success, we convert the IEG's six-point rating to a binary variable, with one adjustment from our previous post: instead of assigning projects rated by the Bank as 'moderately satisfactory' to the satisfactory category, we assign them to the unsatisfactory category. This stricter cutoff mitigates a potential upward bias in how the Bank evaluates its own projects.
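The recoding described above can be sketched in a few lines. This is a minimal illustration, not our actual analysis script; the rating labels follow the IEG's standard six-point scale, and the function name is our own.

```python
import pandas as pd

# Only the top two ratings count as satisfactory; 'Moderately Satisfactory'
# is deliberately pushed into the unsatisfactory category.
SATISFACTORY = {"Highly Satisfactory", "Satisfactory"}

def to_binary(rating: str) -> int:
    """Return 1 if the six-point rating counts as satisfactory, else 0."""
    return int(rating in SATISFACTORY)

ratings = pd.Series(["Highly Satisfactory", "Moderately Satisfactory", "Unsatisfactory"])
print(ratings.map(to_binary).tolist())  # [1, 0, 0]
```

Note that under this rule a 'Moderately Satisfactory' project is coded 0, which is exactly the adjustment that guards against upward bias in the Bank's self-evaluations.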
The QME variable is divided into four categories in the IEG dataset: high, substantial, moderate, and negligible. Over two-thirds of projects were rated in the bottom two categories, indicating substantial room for improvement in QME. The graph below shows a strong positive correlation between QME and project success.
Projects with high QME ratings were successful 93% of the time, while projects with negligible QME ratings were successful only 3% of the time. Further analysis might shed light on the nature of this relationship. For example, it may be the case that donors find it more difficult to create strong performance indicators and incrementally monitor project performance in countries with ineffective governance or deficient infrastructure. And this may, in turn, affect project performance.
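The per-category success rates behind a comparison like this come from a simple group-by over the QME category and the binary outcome. The sketch below uses a toy four-row dataset and hypothetical column names ("qme", "success"); the real IEG data has nearly 10,000 rows.

```python
import pandas as pd

# Toy stand-in for the IEG dataset: one row per project,
# with its QME category and binary success indicator.
df = pd.DataFrame({
    "qme":     ["High", "High", "Moderate", "Negligible"],
    "success": [1, 1, 0, 0],
})

# Mean of a 0/1 variable within each category is the success rate.
rates = df.groupby("qme")["success"].mean()
print(rates)
```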
The QPS indicator is measured on the same six-point scale as the project outcome indicator, so we apply a similar transformation to make it binary: projects rated 'highly satisfactory' or 'satisfactory' are classified as satisfactory, while projects rated 'moderately satisfactory', 'moderately unsatisfactory', 'unsatisfactory', or 'highly unsatisfactory' are classified as unsatisfactory. Overall, projects scored well on the QPS indicator: 75% qualified as satisfactory. The circle graph below compares (a) the number of cases in which a project's QPS score and final outcome rating agreed with (b) the number of cases in which the two ratings disagreed.
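The agreement and disagreement counts for such a comparison can be tallied with a cross-tabulation of the two binary variables. Again, this is an illustrative sketch with made-up rows and hypothetical column names ("qps_sat", "outcome_sat"), not the actual dataset.

```python
import pandas as pd

# Toy data: 1 = satisfactory, 0 = unsatisfactory for each indicator.
df = pd.DataFrame({
    "qps_sat":     [1, 1, 1, 0, 0, 1],
    "outcome_sat": [1, 0, 1, 0, 0, 1],
})

# Rows: QPS rating; columns: final outcome rating.
# The diagonal cells are agreements; the off-diagonal cells are disagreements.
agreement = pd.crosstab(df["qps_sat"], df["outcome_sat"])
print(agreement)
```

In this toy table, the (1, 0) cell counts projects with satisfactory supervision but an unsatisfactory outcome, which is the pattern discussed next.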
Only 3% of projects with low levels of project supervision received a final outcome rating of 'satisfactory'. However, 27% of projects rated 'satisfactory' on the QPS indicator received a final outcome rating of 'unsatisfactory'. This pattern suggests that effective supervision is necessary, but not sufficient, for project success.
A more thorough analysis is needed to determine the precise linkages between the quality of monitoring and evaluation, the quality of supervision, and project success, but our preliminary results support the current emphasis on strengthening monitoring and evaluation systems and improving project supervision.
This post was written by Ben Buch and Doug Nicholson. Ben and Doug are AidData Research Assistants at the College of William and Mary.
Tags: monitoring and evaluation