
A Mutual Explosion of Disciplines

Lawrie Jordan
Imagery Enterprise Solutions

The overlap between GIS and imagery technologies seems to have grown rapidly over the last two years. Can you elaborate on this?
Historically, GIS and imagery have occupied separate worlds. For a long time, the tools of digital cartography and imagery had no common framework in which to merge. We saw early on that GIS could be that framework as other enabling technologies emerged: wider bandwidth, faster CPUs and GPUs, bigger storage capacity, and so forth. And now those technologies are developed enough to handle today’s imagery demands and to make imagery fit comfortably in a unified environment like a GIS.

Imagery is becoming more prevalent on a very large scale, and that’s largely due to the explosion of sensor technology. As the sensors become better and less expensive, imagery output increases, and that means there’s that much more data that organisations must manage, analyse and serve. That can be scary without a platform specially designed to work with imagery data and extract the information value these datasets have.

And that’s really what we are doing at ESRI and what our imagery partners are doing with us. It’s a mutual relationship between two enabling technologies. It’s not only good for science, helping both disciplines grow and greatly expanding the range of solutions, but it’s also good for the imagery market.

What are the latest platform-specific innovations in image processing?
There’s a significant enrichment of the imagery tools in ArcGIS 10, some of which were there before but had never been exposed in a simple-to-use manner. ArcGIS 10 includes three major technology elements. One is the Image Analyst window, which exposes great functionality and is directly accessible in the main UI. The window also introduces some new technology that lets imagery specialists handle imagery as just another layer. On-the-fly processing enables these layers to create different image products instantaneously with no data duplication. We have, through the Image Analyst window, also accelerated the performance of display and roaming and things of that nature. So imagery is no longer that sort of bolted-on experience; it’s integral to what we do.

The second major innovation is called the mosaic dataset. The mosaic dataset solves a problem that’s prevalent in the imagery community. Typically, imagery professionals don’t work with one image but with collections of images. The challenge lies in handling them elegantly. The mosaic dataset addresses that by taking the metadata and putting it into the geodatabase along with information on how to process the imagery. At the same time, it leaves the imagery in its raw form and allows the system to do on-the-fly processing. So you can do on-the-fly orthorectification, band composites, things like computation of vegetation indices, and so on, all without having to replicate the data. The image data stays in its original form. That’s a savings in terms of both disk space and resources. It’s also a savings in computation, because you only compute what you need, not the whole dataset. The system is very dynamic, and new imagery or processing parameters can be added or changed at any time. Now you can treat whole collections like one data source that changes over time, just like the world does.
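That on-the-fly idea can be illustrated in a few lines. This is not the ArcGIS API, just a minimal numpy sketch, with hypothetical names, of the principle the mosaic dataset relies on: the raw bands stay untouched on disk, a product such as a vegetation index exists only as a processing recipe, and pixels are computed only for the window a user actually requests.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red)

class OnTheFlyLayer:
    """Hypothetical wrapper pairing raw bands with a processing function.
    Nothing is precomputed; each read() applies the recipe to just the
    requested window, so the source data is never duplicated."""

    def __init__(self, bands, func):
        self.bands = bands  # dict: band name -> 2-D array (the raw data)
        self.func = func    # processing recipe applied per request

    def read(self, row0, row1, col0, col1):
        window = {k: v[row0:row1, col0:col1] for k, v in self.bands.items()}
        return self.func(**window)

# A tiny two-band scene; NDVI is stored only as a recipe, never as pixels.
red = np.array([[50, 50], [100, 100]], dtype=np.uint16)
nir = np.array([[150, 150], [100, 300]], dtype=np.uint16)
layer = OnTheFlyLayer({"red": red, "nir": nir}, ndvi)
print(layer.read(0, 1, 0, 2))  # only the top row is ever computed
```

Swapping in a different recipe (a band composite, a stretch) changes the product without touching or copying the source rasters, which is the disk-space and computation savings described above.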

The next piece of that is dissemination, or being able to serve out imagery. And the server itself opens up the sharing of image resources as image services. The net result is that, as you serve this image resource out, users are going to be able to better understand how the world is changing, better exploit the imagery for near-real-time applications, and make better decisions in the long-term.

Once users see image services as a dynamic stack, imagery will become the primary source of new data for the vector datasets and other raster datasets. And in the long run, what we envision is that the abstraction of the world that we call GIS is going to use imagery as a measurement, to update it and keep it current and fresh.

If you look at the traditional GIS dataset without imagery, it’s usually very interesting and, of course, provides for very deep analysis, right? But when you turn that imagery layer on behind it, you see all the contextual relationships that are otherwise lost. If you can have that resource available to you within moments of its collection, now GIS begins to move out of this realm of long-term decisions or strategic decisions into tactical short-term decision making. And, again, that real-time or near-real time element I think is going to change GIS and its impact on the world.

Can you elaborate more on the sensors and their daily output? How often is the information collected so that we can exploit it?
If you go back 10 or 15 years, there were a few satellites or airborne sensors, and you hoped to get an image once every few days at best. If you look currently at the satellites in orbit or those in launch sequence, we’re approaching 50 or more satellites from different countries, plus numerous aerial sensors. The resolutions vary from hundreds of kilometers for weather and scientific satellites down to sub-meter resolution for some of the other satellites and highly detailed aerial imagery. And what we’re enjoying right now is that they look at the same geography on a daily basis, or possibly several times a day depending on which satellites you’re looking at. So your opportunities for collection increase rather rapidly. You now have the ability to almost stare at targets.

And that’s going to be dramatic when you couple that with the existing geographic information. So take, for instance, situations where you’re monitoring things like port security, and you want to know what the situation around piers is. You want to know what’s going on out in the harbour, and what impacts are being made as a result of natural or man-made events. Your ability to look at that multiple times a day is critical. And more importantly you don’t want the ‘dumb’ image. Today’s sensors take data-rich pictures, so you can put that into the context of GIS where you can begin to do immediate analysis. The Gulf of Mexico right now is a good example of why we need that.

What other technological factors drive the merger of GIS and imagery?
Imagery is a very data-intensive business. People think they use a lot of data until they get into imagery. It’s not unusual to see hundreds of terabytes of data as small, temporary data stores. So what was unimaginable 10 years ago is practical now, and we’re still pushing the limits in terms of data storage. The whole advancement of computer technology is part of it. Sensors have advanced rapidly too, and one of the key aspects of that is direct georeferencing. If you go back as little as 10 or 15 years, you had to use traditional photogrammetry: establish ground control points, measure them in the imagery, and compute the attitude and position of the image. Very labor intensive, very time-consuming.

With the advancement of combined inertial and GPS technology, you’re able to point these sensors accurately enough that you can get engineering-level quality out of some of the low-altitude sensors; and from space you’re able to actually get very near to target-level quality positioning. So that means now you can, in a geospatial context, use this imagery almost immediately as it comes off the sensor. And with a little bit of work afterwards, you can get the best accuracy we have ever enjoyed from this imagery.
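The geometry behind direct georeferencing can be sketched in its simplest form. This is a toy model assuming flat terrain and a single look angle, with hypothetical names throughout; a real pipeline uses the full IMU rotation matrices (roll, pitch, yaw), a camera model, and a terrain surface. But it shows the core idea: once GPS gives the sensor’s position and the inertial unit gives its orientation, each line of sight can be intersected with the ground with no ground control points at all.

```python
import math

def ground_point(x, y, z, heading_deg, tilt_deg):
    """Intersect a sensor's line of sight with flat terrain at z = 0.

    (x, y, z): sensor position from GPS, z = height above ground.
    heading_deg: look direction, clockwise from north (from the IMU).
    tilt_deg: angle off nadir; 0 means looking straight down.
    """
    slant = z * math.tan(math.radians(tilt_deg))  # horizontal offset on ground
    h = math.radians(heading_deg)
    return (x + slant * math.sin(h), y + slant * math.cos(h))

# Sensor 1000 m up, looking 45 degrees off nadir, due east:
east = ground_point(0.0, 0.0, 1000.0, 90.0, 45.0)
print(east)  # roughly (1000, 0): the pixel lands 1 km east of the sensor

# Looking straight down, the pixel is directly beneath the platform:
print(ground_point(5.0, 7.0, 500.0, 0.0, 0.0))
```

The labor-intensive step the interview describes, measuring ground control points to recover attitude and position, is exactly what the GPS/inertial inputs replace here.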

We’re getting a wider variety of sensors due to a lot of changes in sensor technology. We have infrared technology now that is low-cost and very high fidelity. We’ve got a variety of radar platforms collecting data. We have hyperspectral data and multispectral sensors — and of course it’s all digital, which means it’s all readily accessible. The last part I’ll get into is the fact that with digital sensors, your signal-to-noise ratio, or your colour depth, is tremendous. Anybody who’s used to using film and went to a digital camera saw a little bit of that. It’s even more profound when you get into these remote sensing systems.