With high resolution and large area coverage, combined with moderate resolution but high frequency, you can perform tipping and cueing applications that you won’t be able to do with either one on their own
DigitalGlobe's WorldView-4 launch around September will add to its arsenal of 30-cm imagery, by far the best available in the industry. Dr Walter Scott, Founder and Chief Technology Officer, talks about the new satellite, the commoditization of data, Big Data initiatives and much more…
DigitalGlobe is well-known for its technology innovations in the high-resolution space. What is the unique value proposition that DigitalGlobe brings to the earth observation industry?
One of the things that DigitalGlobe brings to the table is the quality of data that enables our customers to act with confidence on the ground. That is a combination of a variety of factors: our ability to offer high-resolution data that helps you understand what is happening on the ground, and our ability to make that data accessible in a timeframe where you can still act on it and have an impact. So, we make data available online in as little as 12-13 minutes after it has been collected, and typically in under two and a half hours. Increasingly, we are also making it available for analytics at extremely large scale by leveraging our platform technology.
Tell us about WorldView-4, which is coming up for launch later this year.
We expect to launch WorldView-4 later this summer, most likely in September. The launch will double the amount of 30-cm imagery capacity that DigitalGlobe currently provides. Achieving this sort of accuracy from hundreds of miles away in space, localizing a point on the ground to something smaller than a Chevy Suburban, is analogous to shooting the ear off a dime about a mile away. 30-cm resolution is by far the best available in the industry, and it comes with extremely high positional accuracy.
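The dime analogy holds up to a quick back-of-envelope check. This sketch assumes a roughly 617-km sun-synchronous orbit (a typical altitude for this class of satellite, not a figure stated in the interview) and simply projects the implied angular resolution onto a target one mile away:

```python
# WorldView-4's stated ground resolution, and an assumed orbital altitude
gsd_m = 0.30          # 30 cm ground sample distance
altitude_m = 617_000  # assumed ~617 km sun-synchronous orbit

# Angular resolution implied by resolving 30 cm from that altitude
angular_res_rad = gsd_m / altitude_m

# Project the same angle onto a target one mile away
mile_m = 1609.34
resolvable_at_mile_m = angular_res_rad * mile_m

print(f"angular resolution: {angular_res_rad:.2e} rad")
print(f"resolvable feature at 1 mile: {resolvable_at_mile_m * 1000:.2f} mm")
```

The result is a feature size well under a millimetre at one mile, which is indeed roughly the scale of the ear on a dime's portrait.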
What kind of applications do you think will need such accuracies?
There are two things: resolution and positional accuracy. Resolution is essential for knowing what it is you are looking at. Positional accuracy is important because, at the end of the day, if you don't know where something is, you don't really know much about it at all, whether it is an engineering application or a utility application. Positional accuracy allows DigitalGlobe to be the world standard against which you register your other datasets.
Could you elaborate on DigitalGlobe's geospatial Big Data platform?
We have worked with Amazon Web Services (AWS) to instantiate access to DigitalGlobe's library of imagery data, which goes back to 1999, as well as a growing number of other data sources, whether geotagged social media or third-party imagery data. Placing this data in the Amazon Cloud means that anybody who wants to perform analysis on it can do so. In fact, by placing the data in the Cloud, alongside the ability to compute on it, you no longer have the problem of moving data around. We have taken care of that for you. This is a subscription service: there are developer tiers, and tiers that take the product into actual operation.
EO can play a key role in solving several of society's challenges, whether it is disaster management, climate change or poverty. As a company with a reputation for innovative technologies, how do you bring this data to bear on the larger goals of society?
Our data is helping our customers save lives and resources: open-sourcing imagery after the earthquake in Ecuador to aid disaster response, helping free slaves, monitoring climate change, and mapping populations in the developing world for the delivery of vaccines or of Internet access. We see the ability to observe the entire planet as almost a 'macroscope', seeing things that are larger than the human eye can see, or even than the human brain can comprehend. That is where we are going to bring value in solving those problems in the world.
Are you also, as a corporate social responsibility, willing to open up this data for free?
We have a number of cases where we do provide our data for free. I mentioned the response to earthquakes. We are also making data available to support research, and recently announced tie-ups with NVIDIA and In-Q-Tel toward this. We have also, through the DigitalGlobe Foundation, made data available for a wide range of educational purposes.
However, at the end of the day we are also doing business here, so there needs to be a way to monetize the billions of dollars of assets that we have floating around in space. Giving it all away for free is not practical, but we do that in ways that we believe will have the biggest impact and still enable us to deliver an acceptable return for our shareowners.
From a business standpoint, we are seeing that pixel prices are hitting rock-bottom. How are you reorienting DigitalGlobe to maintain profitability?
Firstly, I think it is a mistake to say that pixel prices are hitting rock-bottom. You are seeing some degree of commoditization at the low end, but physics fundamentally says that if you want high resolution, you need big cameras. And we are not seeing commoditization at the high end, for the large-area, global, high-resolution coverage that we provide. Having said that, dealing with extremely large amounts of high-resolution data, say if you want to perform analytics at continent scale, is beyond what most IT departments can handle. But now, with the advent of the Cloud, it is possible to do all of that in the AWS environment without breaking the IT bank.
There has been a recent shift in DigitalGlobe’s strategy in moving toward small satellites as well. What is the rationale behind this decision and how do you balance these two aspects?
I see it as one plus one equals three. With high resolution and large-area coverage combined with moderate resolution but high frequency, you can perform tipping and cueing applications that you would not be able to do with either one on its own. As an example, if the large-area coverage identifies something of interest, it is at high enough resolution that you know what it is, and you decide to monitor it. The medium-resolution satellites then provide high-frequency coverage: you know something changed, but not what the change actually was. Now you have the ability to tip the high-resolution satellite to go take a closer look. That dramatically shortens the decision cycle and the speed with which you can detect meaningful changes in the world and act upon them.
The two types of datasets are highly complementary. You can't do with either one on its own what you can do with the combination of the two. Small satellites produce low-resolution data; there is just no way around that. You get high revisit rates, but you don't necessarily know what you are looking at unless you can cross-cue the high-resolution satellite to take a closer look. That is what enables you to act with confidence as opposed to speculation.
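The tip-and-cue workflow described above can be sketched as a simple decision loop: the high-revisit constellation flags changes, and anything strong enough to matter but too coarse to identify gets queued for a high-resolution look. All class names, fields and thresholds here are hypothetical, purely to illustrate the decision flow, not any actual DigitalGlobe tasking interface:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    """A change flagged by the medium-resolution, high-revisit constellation."""
    location: Tuple[float, float]  # (lat, lon)
    change_score: float            # strength of the detected change, 0..1
    identifiable: bool             # could the medium-res image tell us what it is?

@dataclass
class TaskingQueue:
    """Hypothetical queue of targets cued to the high-resolution satellite."""
    targets: List[Tuple[float, float]] = field(default_factory=list)

    def cue(self, detection: Detection) -> None:
        self.targets.append(detection.location)

def tip_and_cue(detections: List[Detection], threshold: float = 0.5) -> TaskingQueue:
    """Cue a high-resolution look only for strong changes medium-res can't identify."""
    queue = TaskingQueue()
    for det in detections:
        if det.change_score >= threshold and not det.identifiable:
            # "Something changed, but we don't know what": take a closer look.
            queue.cue(det)
    return queue

detections = [
    Detection((12.97, 77.59), 0.9, identifiable=False),  # strong, unknown -> cue
    Detection((12.98, 77.60), 0.8, identifiable=True),   # strong but already identified
    Detection((13.00, 77.61), 0.2, identifiable=False),  # too weak to bother
]
queue = tip_and_cue(detections)
print(queue.targets)  # only the first detection is cued
```

The point of the filter is exactly the complementarity in the answer: high revisit finds the change, high resolution resolves the ambiguity, and only ambiguous changes consume the scarce high-resolution capacity.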
We are seeing a lot of changes happening in the geospatial industry at this point. What, according to you, are the most disruptive business models that are driving this industry?
I think one is the democratization of access to data, which is enabled by the Cloud. As I was saying, it means that you don't need an IT department the size of Google's to deal with these large datasets. So, public Clouds have made a huge difference in making data accessible. Another trend is that, as a public Cloud user, you are swimming in the same ocean as everybody else. That means making data and applications cross-accessible becomes easier, and that speeds up collaboration. There is a phrase: 'it's wonderful to stand on the shoulders of giants'. It is even better if you can stand on the shoulders of many giants. And that is one of the things those ecosystems, enabled by the public Cloud, can offer.
Geospatial technology is increasingly finding relevance in solving society's problems. How are you looking at creating that kind of value?
In the past, if you wanted to be a producer of geospatial data, you had to be an expert in a number of disciplines. If you wanted to derive value from satellite imagery, you needed to understand remote sensing and data science, and those are relatively high hurdles that kept it inaccessible to a large population.
So, we have put a lot of effort into lowering those barriers by taking care of the undifferentiated geospatial heavy lifting. In the extreme case, we introduced something called the Maps API, in partnership with Mapbox, which enables a developer, with two lines of code, to have access to the world's most beautiful and most accurate basemap. That extends all the way up the technology stack to analyzing data at extremely large scale.
Where do you see the new business opportunities?
We certainly continue to see demand from our core customers, the governments and the energy sector. While the energy sector has been under tremendous commodity price pressure, we find that we are still adding value and are able to maintain our position even though prices have hit rock-bottom. Where we are beginning to see new value creation is in applications that were previously not possible. So, whether it is Spaceknow, which provides an index of construction activity available on a Bloomberg Terminal, or Inteloscope, which does forest inventorying at large scale, those are applications that would not have been possible before, and we are seeing quite strong growth in that area. A small base, but strong growth.