Can AI help beat poverty? Perhaps with the help of humans

AidData research is exploring how AI approaches can extend the reach of survey data and power better evaluations of development programs.

April 23, 2025
Ariel BenYishay
A cyclist carries wood on a dirt road in Cambodia. Photo by Lukasz Janyst via Adobe Stock, used under the Standard license.


No longer reserved for chatbots or self-driving cars, AI is increasingly used in international development research and evaluation. One area where AidData researchers have been kicking the tires is whether AI can help overcome the shortage of regular, representative survey data from the world’s poorest countries. A recent news feature in Nature (“Can AI help beat poverty?”), in which I was briefly quoted, delves into such efforts. Here, I’ll expand on how AI-powered approaches can help us evaluate aid investments and learn what works and what doesn’t. 

The crux of the challenge is that foreign aid has been hampered by limited availability of the kind of regular data on poverty we take for granted in many developed countries. 

In-person surveys—like USAID’s Demographic & Health Surveys (DHS) and the World Bank’s Living Standards Measurement Surveys (LSMS)—offer powerful ways to assess country-wide conditions. But many anti-poverty programs are targeted to particular locations or parts of the population (say, families with young children living in specific districts of a country). Unfortunately, the samples of the DHS, LSMS, and other survey programs are not designed to yield reliable estimates for specific parts of a country or its population. 

Moreover, these surveys are carried out every few years at best, and often at much longer intervals—thus missing many of the impacts of programs implemented in the intervening years. Nor can the surveys feasibly or consistently capture conditions for those affected by conflict, migration, drought, flooding, or other manmade or natural shocks.

Given all of this, learning whether aid programs work often requires mounting custom, expensive data collection efforts alongside them. As a result, despite several decades of efforts to dramatically upgrade evaluation practice, the vast majority of development programs are still not rigorously evaluated for their impact on poverty. 

The good news is that satellite imagery and other new data sources offer promise in helping to fill the missing spatial and temporal coverage of survey programs. 

For example, satellite-based sensors can help us detect how well many crops are doing in a given season, as we did for a recent study of aid-supported irrigation efforts in Mali. Some metrics, like the indices of vegetation growth we used in this study, can be computed directly from satellite-based data, but others require more nuanced interpretation of ambiguous images to capture poverty or other development indicators. In these situations, the challenge is how to extract meaningful signals from a slew of images and data. 
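As a rough illustration of what “computed directly” means here: vegetation indices are simple arithmetic on a sensor’s spectral bands. The snippet below sketches the widely used NDVI (Normalized Difference Vegetation Index); it is an illustrative example of this class of index, not necessarily the exact metric or data used in the Mali study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy vegetation reflects strongly in near-infrared (NIR) and absorbs
    red light, so values near +1 suggest dense green growth, values near 0
    suggest bare soil, and negative values typically indicate water.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    # Small epsilon guards against division by zero on dark pixels.
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical per-pixel reflectances for a small image patch.
nir_band = np.array([[0.50, 0.45], [0.20, 0.60]])
red_band = np.array([[0.10, 0.12], [0.18, 0.08]])
greenness = ndvi(nir_band, red_band)
```

Averaging such an index over a field across a growing season gives a direct, survey-free proxy for how well crops are doing.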

This is where AI tools come in, offering the promise of pattern detection, computational flexibility, and an increased set of indicators that we are able to observe. As the recent Nature article described, a particular type of AI known as “computer vision” has been used to identify patterns in satellite images that correlate with the relative poverty of individual villages and towns (specifically, the physical asset holdings of these communities). 

This AI tool is trained on the poverty levels of a sample of communities in which survey data was collected, and then used to estimate the poverty levels of communities that were not included in the survey sample, but where satellite imagery is still available. Because the imagery is available at frequent intervals (new images are collected at least twice a month, depending on the satellite platform), this approach allows us to create a time series of poverty estimates with a high frequency. 
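The train-then-extrapolate workflow can be sketched with synthetic data. This is a deliberately simplified stand-in: real pipelines extract features from raw imagery with deep computer-vision models, whereas here the hypothetical per-community image features and survey wealth indices are random numbers, and the “model” is ordinary least squares rather than a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each community is summarized by a feature vector
# derived from its satellite imagery; only a subset of communities also
# has a survey-based wealth (asset) index to serve as training labels.
n_surveyed, n_unsurveyed, n_features = 200, 1000, 8
X_surveyed = rng.normal(size=(n_surveyed, n_features))
true_w = rng.normal(size=n_features)  # unknown in practice; used to simulate labels
y_surveyed = X_surveyed @ true_w + rng.normal(scale=0.3, size=n_surveyed)

# Train: fit a linear map from image features to the survey wealth index.
X_aug = np.column_stack([X_surveyed, np.ones(n_surveyed)])  # add intercept
w_fit, *_ = np.linalg.lstsq(X_aug, y_surveyed, rcond=None)

# Predict: estimate wealth for communities the survey never visited.
X_unsurveyed = rng.normal(size=(n_unsurveyed, n_features))
X_unsurveyed_aug = np.column_stack([X_unsurveyed, np.ones(n_unsurveyed)])
y_pred = X_unsurveyed_aug @ w_fit
```

Because fresh imagery arrives continually, the prediction step can be rerun on each new image to produce the high-frequency time series of poverty estimates described above.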

Just as importantly, because the images cover every part of each country, they can be used to measure poverty conditions for nearly all towns and villages in that country. They can capture the poverty impacts of many aid programs where no evaluations were preplanned and no dedicated survey data was collected (see, for example, this evaluation of the reductions in poverty due to electricity grid expansion in Uganda). 

Naturally, AI applications are not without their own sources of potential biases and blindspots. For example, one might be concerned that AI efforts to extract signals from satellite images might be more accurate in detecting men’s asset holdings than those owned by women or, more broadly, the kinds of physical changes that are associated with men’s improved well-being than women’s. 

Over the past few years, researchers on our AidData team have partnered with the Ghana Center for Democratic Development to explore gender bias in AI applications for estimating poverty using household survey data from Ghana. 

Our findings suggest that many of the limitations are actually driven by the so-called training data—in other words, the existing survey data we use to train and validate the AI models. For example, we compared how well AI models predicted the poverty rate among households headed by men versus those headed by women in Ghana. We found that, though the models did perform better for male-headed households, this was in part because the household surveys available to train the models included relatively few female-headed households in their samples. In other words, the bias originates in the surveys themselves: we would still get inaccurate estimates for female-headed households if we relied only on those surveys, rather than on the AI-powered approach. 
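One simple way to surface this kind of bias is to compute prediction error separately for each subgroup. The sketch below uses simulated numbers; the group sizes and noise levels are illustrative assumptions meant to mimic an under-represented subgroup, not results from the Ghana study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated evaluation set: far fewer female-headed households, mirroring
# the sampling imbalance in the training surveys described above.
n_male, n_female = 800, 200
y_true = rng.normal(0.0, 1.0, n_male + n_female)  # survey-based wealth index
female = np.concatenate([np.zeros(n_male, bool), np.ones(n_female, bool)])

# Simulate a model whose predictions are noisier for the smaller subgroup.
noise_scale = np.where(female, 0.6, 0.3)
y_pred = y_true + rng.normal(0.0, 1.0, y_true.size) * noise_scale

def subgroup_mae(y_true, y_pred, mask):
    """Mean absolute prediction error restricted to one subgroup."""
    return float(np.mean(np.abs(y_true[mask] - y_pred[mask])))

mae_male = subgroup_mae(y_true, y_pred, ~female)
mae_female = subgroup_mae(y_true, y_pred, female)
```

Reporting such per-group error metrics alongside overall accuracy is a lightweight audit that makes training-data imbalances visible before a model's estimates are used for targeting.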

This work taught us some important lessons:

  • AI tools are really only as good as the training data we feed into them.
  • AI still heavily relies on human-designed and human-collected survey data to reflect poverty conditions—so we should think of AI not as a replacement for survey data, but as a powerful way to dramatically extend its reach. 
  • The biases and limitations of underlying survey data appear to transfer into biases and limitations for AI models as well. 
  • We therefore also need more and better survey data, including dedicated approaches to addressing hidden gender biases and more robust sampling methods.

Unfortunately, the recent U.S. government announcement that the DHS program is to be terminated suggests we’ll likely be getting less—not more—good survey data over the coming years. While AI-powered tools that use satellite data can compensate for this shortfall for a year or two, their predictive accuracy will degrade rapidly after that. We should think of AI as a powerful way to help us learn about poverty-reducing programs and ensure foreign aid continues to support exactly these initiatives.

Ariel BenYishay is AidData’s Chief Economist and Director of Research and Evaluation, and Associate Professor of Economics at the College of William & Mary.