Powerful AI tool helps scan the globe instantly

Have you ever tried finding a needle in a haystack? Or a haystack somewhere on the planet? It can't help with the former, but a new machine learning tool from Descartes Labs, a New Mexico-based startup that provides artificial intelligence-driven analysis of satellite imagery, can certainly help you with the latter.


GeoVisual Search, which Descartes Labs released last week, is not a finished product but a work-in-progress demo. What it can do, though, is simply amazing: click on a patch of the earth, and the tool returns similar images within a second. Click on a wind turbine, a solar panel or a parking lot, and the search returns a list of similar-looking objects and their locations around the planet. Until now, such a capability had not been available to the public on a global scale.

Images of China from the Planetscope constellation, arranged by visual similarity

“We released the demo of GeoVisual Search to show people what could be done when machine intelligence is applied to data about our planet,” Mark Johnson, CEO, Descartes Labs, told Geospatial World. “Companies can use this to better understand supply chains, researchers can better study the effect of humans on the planet, and governments can use this to keep their citizens safe.”

Descartes Labs hopes to go further by applying AI to locate patterns in those images that might not be immediately obvious to our eyes, thus helping us to locate changes in an area.

Johnson says Descartes Labs got the initial inspiration for GeoVisual Search from a project called Terrapattern out of Carnegie Mellon University.  “We wondered if this kind of visual search could be applied to the entire globe.  The goal of the project was to run machine learning at scale and lay the infrastructure for further machine intelligence projects this year.”

So how does it work?

GeoVisual Search attempts to find similar things in a set of images. This is broadly how Facebook uses machine learning for its face recognition technique. You don’t look the same in every picture, but there are visually similar elements that make up your likeness.

The first problem the team encountered was teaching the computer what exactly “similarity” means. The human brain is very good at recognizing patterns and faces, and a comparable capability was needed on Descartes Labs’ computers. A “neural network” was trained to recognize hundreds of “features” such as shadows, colors and edges, an approach known as deep learning or machine learning. The computer then uses those features to look at the entire Earth and find other images that share similar features.
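To make that concrete, here is a minimal sketch, not Descartes Labs’ actual code, of how a pretrained convolutional network can serve as such a feature extractor. The ResNet-18 backbone, the 224x224 resize and the feature_vector helper are illustrative assumptions.

# Illustrative sketch only, not Descartes Labs' code: use a pretrained CNN
# (a torchvision ResNet-18 with its classifier removed) to turn one image
# tile into a vector of learned visual features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head, keep the features
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def feature_vector(tile_path: str) -> torch.Tensor:
    """Return a 512-element feature vector describing one image tile."""
    tile = Image.open(tile_path).convert("RGB")
    with torch.no_grad():
        return backbone(preprocess(tile).unsqueeze(0)).squeeze(0)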

The second problem was performing this search over the Earth’s entire landmass, roughly 150,000,000 sq km, in real time. To do this, GeoVisual Search divides the earth’s surface into small, overlapping images, extracts a “visual feature vector” from each image using a convolutional neural network, and, given a query image, searches for its “visual neighbors” in this feature space.
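As a rough illustration of that tiling step, a large scene can be cut into small, overlapping windows like this; the 64-pixel stride and the overlapping_tiles helper are assumptions made for the example.

# Minimal illustration of the tiling step; the overlap stride is assumed.
import numpy as np

def overlapping_tiles(image: np.ndarray, size: int = 128, stride: int = 64):
    """Yield (row, col, tile) for every overlapping size-by-size window of the image."""
    rows, cols = image.shape[:2]
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]

# Example: a dummy 1024x1024 RGB scene produces a grid of overlapping tiles.
scene = np.zeros((1024, 1024, 3), dtype=np.uint8)
tiles = list(overlapping_tiles(scene))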

These images are further chopped into small, overlapping tiles, 128 pixels on a side. The end of the process maps 393,216 bits (the original 128x128x3 image at 8 bits per channel) down to a 512-bit feature vector. These features form a compact representation of the visual information present in each image.
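A toy version of that compression might binarize the feature vector and then compare tiles by counting differing bits; the median threshold below is an assumed rule for illustration, not the scheme Descartes Labs describes.

# Rough illustration only: squeeze a real-valued feature vector into a
# 512-bit signature and measure similarity as the number of differing bits.
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Map a 512-element feature vector to 512 bits (1 where the value exceeds the median)."""
    return (features > np.median(features)).astype(np.uint8)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Count the bits on which two signatures disagree; fewer means more visually similar."""
    return int(np.count_nonzero(a != b))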

What are the imagery sources?

GeoVisual searches over three imagery sources:

Aerial over US: 1-meter imagery from the US government’s National Agriculture Imagery Program (NAIP) and the Texas Orthoimagery Program, covering the 48 contiguous states. Because the imagery is high-resolution, a user can locate even solar farms and orchards.

Stadiums in China

Planetscope over China: This map uses 4-meter composite images from Planet that cover China, Hong Kong, and Taiwan. With Planet’s recent record-breaking launch of 88 satellites, the company will soon be able to image the entire Earth every day. Here one can locate solar farms and stadiums.

Landsat 8 over Earth: This map runs its machine learning on Descartes Labs’ recently released global composite, which uses all the data from Landsat 8. Since this imagery is at 15-meter resolution, a user can locate larger-scale features such as suburbs.

Finally, the vectors were pre-computed for all the tiles in each dataset: about 2 billion tiles for NAIP, and about 200 million tiles each for Planetscope and Landsat 8. This computation was distributed across tens of thousands of CPUs on Google’s Cloud Platform so that the search can return the right matches in real time.
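In spirit, the lookup then reduces to a nearest-neighbor search over the precomputed signatures. A brute-force toy version, with made-up sizes and a visual_neighbors helper invented for illustration, might look like this; serving queries over billions of tiles clearly needs a far more scalable index than a single array scan.

# Toy version of the lookup step; the index size and brute-force scan are
# illustrative, not how Descartes Labs actually serves queries at scale.
import numpy as np

rng = np.random.default_rng(0)
signatures = rng.integers(0, 2, size=(100_000, 512), dtype=np.uint8)  # pre-computed 512-bit signatures

def visual_neighbors(query: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k tiles whose signatures are closest to the query (Hamming distance)."""
    distances = np.count_nonzero(signatures != query, axis=1)
    return np.argsort(distances)[:k]

neighbors = visual_neighbors(signatures[42])  # tiles that look like tile number 42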

So what’s next?

“Our plan is to release additional machine learning capabilities within our platform over the next year.  The goal is to make the analysis of satellite imagery easier and easier,” Johnson says.

GeoVisual Search has taken a first step toward better understanding our planet through satellite imagery; its ultimate aim is to be able to search the globe for any object a user defines and track how that object changes over time.

This could help us, for instance, better understand our renewable energy resources and make policy recommendations to the government. Combined with other available data sources, a user will be able to pinpoint and map the location of every wind turbine or solar panel on the planet and analyze how they have grown, and at what rate, over the past 10 years.

As a next step, GeoVisual Search is looking to understand specific objects and count them accurately through time. “At that point, we would have turned satellite imagery into a searchable database, which opens up a whole new interface for dealing with planetary data.”

But for now, just CLICK HERE and have fun exploring our amazing planet!
