US: Using satellite imagery, researchers at Stanford University have created an algorithm that provides detailed breakdowns of where the world's impoverished people live. This could help us better allocate resources to where they are most needed.
The researchers have created a deep-learning algorithm that allows computers to spot signs of poverty just by analyzing satellite images — for example, by checking the condition of roads to see whether infrastructure is in disrepair, according to a statement. Until now, such technology had been used to track crop conditions and deforestation, but it could now help us avoid wasting precious resources in the fight against poverty.
“We have a limited number of surveys conducted in scattered villages across the African continent, but otherwise we have very little local-level information on poverty,” study coauthor Marshall Burke, an assistant professor of Earth system science at Stanford and a fellow at the Center on Food Security and the Environment, said in a statement. “At the same time, we collect all sorts of other data in these areas – like satellite imagery – constantly.”
It’s a big deal because we have vast amounts of data on the world that we aren’t even using: satellite imagery. Buried in those millions upon millions of images is critical information on poverty; it just takes someone to figure out how to access it, and scientists are hopeful this algorithm is the key.
“There are few places in the world where we can tell the computer with certainty whether the people living there are rich or poor,” said study lead author Neal Jean, a doctoral student in computer science at Stanford’s School of Engineering. “This makes it hard to extract useful information from the huge amount of daytime satellite imagery that’s available.
“Without being told what to look for, our machine learning algorithm learned to pick out of the imagery many things that are easily recognizable to humans – things like roads, urban areas and farmland,” he added.
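The general idea Jean describes — extract visual features from satellite tiles, then map them to a wealth or poverty measure — can be sketched in miniature. The following is an illustrative toy only, not the Stanford team's actual pipeline: the feature (edge density, a crude stand-in for the roads and buildings a real neural network would learn to detect), the synthetic tiles, and the wealth index are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_density(tile):
    """Mean gradient magnitude of a grayscale tile -- a crude, hand-made
    stand-in for the road/building structure a real CNN would learn."""
    gy, gx = np.gradient(tile.astype(float))
    return np.hypot(gx, gy).mean()

def extract_features(tiles):
    """One feature vector per tile: [edge density, mean brightness]."""
    return np.array([[edge_density(t), t.mean()] for t in tiles])

# Synthetic stand-in data: 200 random 32x32 "satellite tiles" and a
# synthetic wealth index correlated with their features (demo only).
tiles = rng.random((200, 32, 32))
X = extract_features(tiles)
y = X @ np.array([2.0, -1.0]) + rng.normal(0.0, 0.01, size=200)

# Ridge regression (closed form) mapping image features -> wealth index.
lam = 1e-3
Xb = np.column_stack([X, np.ones(len(X))])  # append a bias column
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)
preds = Xb @ w
```

In the real system, the hand-written `edge_density` function would be replaced by features learned automatically by a deep network — which is exactly the point of Jean's remark: the model was never told to look for roads or farmland, but learned features like them on its own.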