The next-generation geospatial application is expected to come not through a single technology, but from linking multiple technologies. Deep learning has become the most popular approach to developing artificial intelligence, and it empowers the geospatial ecosystem by providing real-time, near-human-level perception.
Identify objects and patterns from satellite images
Companies like DigitalGlobe and Airbus Defence and Space are already using artificial intelligence and deep learning to automatically identify objects and patterns in huge volumes of satellite imagery.
According to DigitalGlobe’s Big Data & AI team, algorithms are being trained to detect objects like airplanes, vehicles and even elephants, which requires massive amounts of processing to be effective. In its research, the team has seen impressive results from deep learning applied to image and video classification through open-source frameworks. DigitalGlobe’s Geospatial Big Data platform, GBDX, provides the computational power to apply deep learning to earth observation. The platform also has crowdsourcing capabilities to rapidly discover and validate objects in its imagery, expediting algorithm validation and training.
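A common first step in applying a detector to earth observation imagery is chipping: a scene is far too large to feed to a network whole, so it is split into fixed-size tiles that a trained model can score independently. The sketch below illustrates only this tiling step; the tile size, stride, and the mock scene are illustrative assumptions, not details of DigitalGlobe's pipeline.

```python
import numpy as np

def tile_image(image, tile_size=256, stride=256):
    """Split a large raster into fixed-size chips, each tagged with its
    top-left pixel offset so detections can be mapped back to the scene."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, stride):
        for x in range(0, w - tile_size + 1, stride):
            tiles.append(((y, x), image[y:y + tile_size, x:x + tile_size]))
    return tiles

# Example: a mock 1024x1024 single-band scene yields a 4x4 grid of chips.
scene = np.zeros((1024, 1024), dtype=np.uint8)
chips = tile_image(scene)
print(len(chips))  # 16
```

Keeping the pixel offset with each chip is what lets per-chip detections be stitched back into scene coordinates and, via the raster's georeferencing, into real-world locations.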
From objects to patterns
According to James Crawford, Founder and CEO of Orbital Insight, the real power of large volumes of satellite data lies not just in identifying objects, but in identifying patterns. Crawford believes deep learning and AI automate the processes that turn large amounts of data into scalable insights.
At Orbital Insight, Crawford’s team processes millions of satellite images. The team uses DigitalGlobe’s Cloud-based GBDX platform, which provides a pipeline for accessing and analysing geospatial Big Data. Simply by comparing current data with previously collected imagery, they can answer questions such as: did a Walmart store in a given locality do better than last sales season? Will the US corn yield be higher this year? Which Chinese cities are growing the fastest?
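The retail example reduces to a simple time-series comparison once objects have been counted: detect cars in a store's parking lot on comparable dates and compute the year-over-year change. The counts and dates below are invented for illustration, not Orbital Insight data.

```python
# Hypothetical car-detection counts for one store's parking lot on
# comparable holiday-season Saturdays, one year apart.
car_counts = {
    "2015-11-28": 412,
    "2016-11-26": 478,
}

def yoy_change(old, new):
    """Year-over-year percentage change in observed activity."""
    return (new - old) / old * 100.0

change = yoy_change(car_counts["2015-11-28"], car_counts["2016-11-26"])
print(f"{change:+.1f}% cars vs. last season")  # +16.0% cars vs. last season
```

Aggregated over thousands of stores, this kind of signal is what turns raw object detections into an economic indicator.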
Identifying objects from Street View data
Moving on from satellite data to another kind of data that is quite popular these days: 360-degree panoramic photographs and Street View datasets hold a wealth of information, mainly street signs and house numbers, and physical infrastructure such as road signs and lamp posts. But identifying these objects automatically is tedious, especially as the volume and frequency of data collection become unmanageable. AI and deep learning, powered by image and object recognition algorithms, come to the rescue.
Back in 2014, Google was able to identify and transcribe all the street numbers visible in its Street View imagery of France in less than an hour, thanks to a neural network containing 11 layers of neurons trained to spot numbers in images. The company uses the images to read house numbers and match them to their geolocations, physically locating the position of each building in its database.
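A network of this kind typically emits a probability distribution over digit classes for each position in the number, plus a "blank" class marking where the number ends; a decoder then turns those distributions into a string. The sketch below shows only that greedy decoding step, with mock one-hot network outputs standing in for real predictions; the 11-class layout (digits 0-9 plus blank) is an assumption for illustration.

```python
import numpy as np

def transcribe(digit_probs):
    """Greedy decode: take the argmax class at each digit position.
    Class 10 is a hypothetical 'blank' marking the end of the number."""
    digits = []
    for probs in digit_probs:
        k = int(np.argmax(probs))
        if k == 10:  # blank -> no more digits
            break
        digits.append(str(k))
    return "".join(digits)

# Mock network output: positions confidently predicting 4, 2, then blank.
one_hot = np.eye(11)
out = [one_hot[4], one_hot[2], one_hot[10]]
print(transcribe(out))  # "42"
```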
Identify geolocation of any photo on the planet
Google has unveiled an application that can estimate the geolocation of almost any photo taken on planet earth. The project, called PlaNet, deploys the power of machine learning by combining neural networks with mapping technology. The value of this information is humongous.
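PlaNet frames geolocation as a classification problem: the earth's surface is partitioned into cells, the network predicts which cell a photo belongs to, and the answer is reported as that cell's location. The sketch below uses a uniform 10-degree grid as a stand-in for PlaNet's adaptive partition, so the grid dimensions and cell id are illustrative assumptions.

```python
def cell_center(cell_id, lat_bins=18, lon_bins=36):
    """Map a predicted geographic-cell index back to the latitude and
    longitude of the cell's centre on a uniform 10-degree grid."""
    lat_step, lon_step = 180 / lat_bins, 360 / lon_bins
    row, col = divmod(cell_id, lon_bins)
    lat = 90 - (row + 0.5) * lat_step    # rows count down from the north pole
    lon = -180 + (col + 0.5) * lon_step  # columns count east from -180
    return lat, lon

# Suppose the classifier's top-scoring cell is id 337.
print(cell_center(337))  # (-5.0, -45.0)
```

Treating geolocation as classification rather than regression also lets the model express multimodal uncertainty, e.g. a beach photo that could plausibly be in several coastal regions.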
Reading the traffic signals
Traffic sign recognition is an important aspect of autonomous navigation. In practice, a large number of different sign classes needs to be recognized with very high accuracy. Traffic signs are designed to be easily readable for humans; for computer systems, however, classifying traffic signs still poses a challenging pattern recognition problem. Both image processing and machine learning algorithms are continuously refined to improve on this task, and it is now possible to train machines to identify traffic signs. The latest example is a benchmark database of German traffic signs used to train and evaluate deep learning models.
Driving autonomous cars
Machine learning has given us self-driving cars. From Google to Tesla, every autonomous car manufacturer relies on AI and deep learning. They are collaborating to develop sensor fusion solutions that build a full model of the environment surrounding the vehicle from vision, radar, and LiDAR inputs, and to establish a driving policy, including reinforcement learning algorithms that endow the vehicle system with the artificial intelligence required to safely negotiate complex driving situations. The future truly belongs to very detailed street-level data collected continuously across major cities using terrestrial methods, with this data becoming a training set for improving self-driving cars. Some car manufacturers are targeting self-driving cars on roads by 2021.
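At its simplest, sensor fusion means combining independent measurements of the same quantity, weighting each by how much it can be trusted. The sketch below fuses range estimates from camera, radar, and LiDAR by inverse-variance weighting, a minimal stand-in for the full environment models the article describes; the sensor readings and variances are invented for illustration.

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent range estimates.
    `estimates` maps sensor name -> (distance_m, variance_m2); more
    precise sensors (smaller variance) pull the result toward them."""
    num = sum(d / v for d, v in estimates.values())
    den = sum(1 / v for d, v in estimates.values())
    return num / den

readings = {
    "camera": (41.0, 4.0),   # vision: least precise range estimate
    "radar":  (39.5, 0.25),  # radar: precise range
    "lidar":  (39.8, 0.16),  # LiDAR: most precise
}
print(round(fuse(readings), 2))  # 39.71
```

The fused value lands close to the LiDAR and radar readings because their variances are small; the imprecise camera estimate contributes little, which is exactly the behaviour a fusion layer should have.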