Walking the talk: Lessons learned and looking ahead to wave two of AidData’s Omnibus Survey
As international pressure mounts to deliver on commitments to the Sustainable Development Goals (SDGs), it is ever more important for development actors to understand how they can maximize their effectiveness and be helpful partners. The first wave of AidData’s omnibus survey, the 2014 Reform Efforts Survey, offered new insights into these questions from the perspectives of just under 7,000 leaders in 126 low- and middle-income countries. In analyzing their responses, we came to better understand how these development leaders perceive development partner performance; the influence of external assessments like the MDGs and the World Bank’s Doing Business reports; and the state of governance data use.
Today, we head into a series of consultations on the second wave of the survey, set to field in late 2016. AidData is a learning organization; we believe we ought to apply the same introspection in our own work that we ask of development partners in our reports. We’re taking the time to reflect with others on lessons learned from the first survey wave and how we can continue closing the knowledge gap about what works in development, and why. We are expanding the questions we ask of development policymakers and practitioners around the world, as well as refining our methods.
Heading into today’s consultations, we are looking for feedback on additions to two modules fielded in the first wave of the survey: Development Priorities, Policies, and Programs and Sources of Development Assistance and Advice. Responses to these modules will build upon what we learned through analysis of the 2014 Reform Efforts Survey, allowing us to treat the 2014 data as a baseline while pushing the knowledge frontier through additional questions. In light of the recent roll-out of the SDGs, we’ll be examining how government officials’ reform priorities align with citizen preferences and working to identify gaps and opportunities for investment to advance these goals. Additionally, we’ll be taking a look at the influence of domestic development organizations and refining our Value for Money Index.
A proliferation of data and evidence for development doesn’t translate into development results unless the data and evidence are “fit for purpose” and are actually used by leaders in low- and middle-income countries to achieve their development objectives. To help confront this challenge, we are introducing an entirely new survey module: In-Country Demand for and Uptake of Data and Evidence. With this series of questions, we seek to contribute to the evidence base regarding which sources of external and domestic data and evidence are actually used by decision-makers around the world, and why they are perceived to be useful. By asking the individuals who actually use (or are at least the intended audience for) data and evidence to make decisions in a given country, we hope to shed light on both the characteristics that make data and evidence useful and how these resources are used within the policy process.
Following the launch of two flagship survey reports in 2015, our team engaged in discussions with colleagues at the Center for Global Development, the World Bank, the Hewlett Foundation, and others about how we could improve future survey waves to continue closing this knowledge gap. As we prepare to field wave two of our survey, we are making a number of methodological improvements that we believe will yield more robust results when we collect and analyze the new data. One such example is testing the sensitivity of our development partner rankings and scores to different weighting specifications so that we can better report on potential bias. We are also modifying our questionnaire design to more effectively identify (and confront issues related to) non-response. As with any novel approach, there is room for growth, and we recognize the importance of continuing to refine our methods to ensure that the data and analysis we produce are of the greatest possible value to the individuals and organizations putting them to work.
We hope you’ll follow along here on The First Tranche as we share news about the development of our 2016 omnibus survey, and that you’ll stay tuned for the release of findings from the survey in early 2017.