Musings on High Resolution Remote Sensing

A. R. Dasgupta
Honorary Advisor
GIS Development

The availability of sub-metre resolution colour imagery from satellites, coupled with Internet-based services like Google Earth and Microsoft Virtual Earth, has generated enormous interest in remote sensing among the general public. This interest is more in the nature of a ‘wow’ factor. The ability to see one’s home or familiar landmarks in an image taken from hundreds of kilometres above the earth elicits wonder and awe. I remember exactly the same reaction when ISRO sent up its first remote sensing satellite, Bhaskara, way back in the 1980s. We could see the Bhakra Nangal reservoir and the vast expanse of the Indus even at the low resolution of one kilometre. Someone even claimed to have ‘seen’ the shadow of the Qutab Minar! Soon, however, wonder gave way to the question ‘so what?’ Clearly, there had to be a more productive use of the satellite images. As we set to work interpreting the imagery, we soon realised that we were only confirming what we already knew of the geography captured in the images. Finding something new was proving to be difficult. So whether it is going beyond seeing the Bhakra Nangal reservoir at one kilometre or beyond seeing the roof of my house at 0.6 metre, the problem of deriving value remains the same.

To add or derive value from a remotely sensed image we need to consider several factors, such as resolution, swath and signal-to-noise ratio. Spatial resolution always catches popular attention because of stories of military systems that allow the observer to read the number of stars on an officer’s epaulette! Resolution itself has three components: spectral, spatial and temporal. We also need to consider the coverage, or swath, and the signal-to-noise ratio of the data, which indicates its ability to detect small variations in image intensity. The accuracy of orbit and attitude determination affects the absolute and relative positional accuracy of the image. These characteristics of a remotely sensed image are the result of several compromises dictated by technological factors. As technology improves these compromises are reduced, but at some point natural factors will put a cap on what technology can achieve.
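As a rough illustration of these compromises, the sketch below (not from the original article; every sensor number in it is an assumption) shows how spatial resolution, swath width, number of bands and quantisation together drive the raw data rate a pushbroom imager has to handle: improving the resolution while keeping everything else fixed pushes the data rate up very quickly.

```python
# Illustrative sketch of the resolution/swath/band/quantisation trade-off.
# All sensor numbers are assumptions, not the specification of any real system.

def raw_data_rate_mbps(gsd_m: float, swath_km: float, n_bands: int,
                       bits_per_sample: int, ground_speed_km_s: float = 6.8) -> float:
    """Approximate raw data rate (Mbit/s) for a pushbroom imager.

    pixels across track = swath / GSD
    lines per second    = ground track speed / GSD
    """
    pixels_across = swath_km * 1000.0 / gsd_m
    lines_per_sec = ground_speed_km_s * 1000.0 / gsd_m
    return pixels_across * lines_per_sec * n_bands * bits_per_sample / 1e6

# A coarse, wide-swath sensor versus a sub-metre, narrow-swath one (assumed values):
print(f"{raw_data_rate_mbps(23.0, 140.0, 4, 10):,.0f} Mbit/s")   # roughly 72 Mbit/s
print(f"{raw_data_rate_mbps(0.6, 15.0, 4, 11):,.0f} Mbit/s")     # roughly 12,500 Mbit/s
```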

Ideally, a remote sensor should be able to deliver images of chosen areas at chosen times, with the desired spatial and spectral resolutions, quality and accuracy. This requires a stationary platform which can image on demand. Orbital mechanics dictate that a platform will be stationary above the earth only when its angular velocity matches that of the earth, and this happens at an orbital height of about 36,000 km. At this height we need a very powerful telescope to provide the desired spatial resolution, very sensitive detectors, and excellent attitude control and station keeping of the platform. The state of the art is imagery at one km spatial resolution every half an hour from a geostationary satellite. This may improve to hundreds of metres, but ultimately scintillation due to atmospheric turbulence, together with orbital and attitude perturbations, will limit the achievable spatial resolution to several tens of metres at best. Today, metre and sub-metre resolution imagery is available only from low earth orbiting satellites which are highly agile and use techniques like ‘step and stare’ to image designated targets. In the process they compromise on the time of imaging and the total coverage area.
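The orbital height and the resolution ceiling quoted above can be checked with a back-of-the-envelope calculation. The sketch below is illustrative only: the 1 m telescope aperture and the 680 km low earth orbit are assumptions, not references to any particular satellite.

```python
# Back-of-the-envelope check: geostationary altitude from Kepler's third law,
# and the diffraction-limited ground resolution from GEO versus LEO.
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # equatorial radius, m
SIDEREAL_DAY = 86164.1      # one sidereal day, s

# Kepler's third law: semi-major axis of an orbit whose period is one sidereal day
a = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
print(f"Geostationary altitude ~ {(a - R_EARTH) / 1000:.0f} km")          # ~35,786 km

def diffraction_limited_gsd(aperture_m, altitude_m, wavelength_m=550e-9):
    """Rayleigh criterion: ground sample distance ~ 1.22 * wavelength * height / aperture."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

print(f"GSD from GEO with a 1 m aperture ~ {diffraction_limited_gsd(1.0, a - R_EARTH):.0f} m")
print(f"GSD from a 680 km LEO, same aperture ~ {diffraction_limited_gsd(1.0, 680e3):.2f} m")
```

Even before atmospheric scintillation and platform perturbations are considered, a 1 m class telescope at geostationary height is diffraction-limited to a few tens of metres, whereas the same optics in low earth orbit resolves better than a metre, which is why sub-metre imagery comes only from agile low orbiting satellites.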

Do we always need high resolution imagery? In the early days of satellite remote sensing, geologists using low resolution imagery could trace long faults and fractures which they had missed in the smaller, higher resolution photographs from aerial surveys. This property of imaging large areas at a time, called a ‘synoptic’ view, was extolled as one of the advantages of remote sensing from space. Applications scientists appreciated the synoptic view from satellite-borne sensors and found that applications like flood mapping, crop inventory and drought monitoring could be implemented much better with imagery at resolutions of 70 to 20 metres.

One of the interesting observations relates to the use of statistical spectral classifiers. Low resolution imagery averages out local variations; hence spectrally based statistical segmentation techniques like the Maximum Likelihood Classifier can be used with a limited number of training sets. At medium resolutions the variability in the data increases, requiring more and more training sets. At sub-metre resolution, where individual objects become detectable, such classifiers are difficult to use. Interpreting such high resolution imagery requires techniques of image understanding rather than statistical pattern recognition. Image understanding software may be based on neural networks and other artificial intelligence techniques which can take into account the shapes and contexts of features and objects. It is interesting to note that such techniques were applied to low and medium resolution imagery to improve the classification accuracy of statistical classifiers, but with limited success. However, a human interpreter who can, by looking at an image, differentiate between a canal and a river is actually using shape and context information. Indeed, image understanding is used by the defence establishment to interpret high resolution surveillance imagery, and it is only a matter of time before these techniques move to the civilian arena, particularly since the dividing line between civilian and military imagery has blurred to the point of extinction.
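For readers unfamiliar with the Maximum Likelihood Classifier mentioned above, the following minimal sketch shows the idea: per-class Gaussian statistics are estimated from training pixels and each pixel is assigned to the class with the highest log-likelihood. The class names and band values are purely illustrative.

```python
# Minimal Maximum Likelihood (Gaussian) classifier for multispectral pixels.
# Class names and band statistics below are illustrative assumptions.
import numpy as np

def train_mlc(training):
    """Estimate per-class mean, inverse covariance and log-determinant from
    training pixels (each entry is an array of shape [n_pixels, n_bands])."""
    stats = {}
    for name, pixels in training.items():
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        stats[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_mlc(pixel, stats):
    """Assign the class whose Gaussian log-likelihood for this pixel is highest."""
    best_name, best_ll = None, -np.inf
    for name, (mean, cov_inv, log_det) in stats.items():
        d = pixel - mean
        ll = -0.5 * (log_det + d @ cov_inv @ d)
        if ll > best_ll:
            best_name, best_ll = name, ll
    return best_name

# Two classes trained on simulated 4-band pixels (assumed statistics):
rng = np.random.default_rng(0)
training = {
    "water":      rng.normal([30, 25, 15, 5],  3.0, size=(200, 4)),
    "vegetation": rng.normal([40, 35, 30, 90], 5.0, size=(200, 4)),
}
stats = train_mlc(training)
print(classify_mlc(np.array([32.0, 26.0, 16.0, 6.0]), stats))   # -> water
```

The number of training pixels needed to estimate each covariance matrix reliably grows with the variability in the data, which is why such classifiers become impractical as resolution increases and individual objects, rather than broad land-cover classes, dominate the scene.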

The ‘step and stare’ technique of imaging results in imagery with oblique viewing angles. In urban environments, tall structures appear with their apexes displaced away from the viewing direction, and features on the side facing away from the sensor are occluded. Any mapping in such situations will require stereo pairs in order to generate urban Digital Elevation Models and fill in the occluded areas. Stereo coverage will double the cost, but the urban DEM is a product very much in demand by the communications industry, among others, and its value will offset the increased cost of mapping. High resolution imagery has also brought in a number of new applications in the area of Location Based Services. The imagery can be used as a replacement for a map and, in conjunction with GPS, can be an excellent navigation tool. Such imagery used for LBS on a GPS-enabled 3G mobile phone can become a significant application.
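To put a number on the lean and occlusion described above: under a flat-terrain approximation, a structure of height h viewed at an off-nadir angle theta has its apex displaced on the ground by roughly h times tan(theta), with a strip of about the same width hidden behind it. The building height and viewing angle in the sketch below are assumed for illustration.

```python
# Relief displacement under oblique viewing, flat-terrain approximation.
# The building height and off-nadir angle are assumptions for illustration.
import math

def apex_displacement_m(height_m: float, off_nadir_deg: float) -> float:
    """Ground offset of a structure's apex: roughly height * tan(off-nadir angle).
    A strip of about the same width behind the structure is occluded."""
    return height_m * math.tan(math.radians(off_nadir_deg))

print(f"{apex_displacement_m(60.0, 25.0):.0f} m")   # a 60 m building leans ~28 m at 25 degrees off nadir
```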

To exploit these possibilities, the way imagery is delivered to the end user will also need to change. Google Maps and its clones have opened up a new approach where the end user accesses imagery freely, or at a very nominal cost, without having to own it. Most users overlook the fact that the volume of data grows as the square of the improvement in resolution. Data storage and preservation will become significant problems as data assets grow. By trading assets for access, users can be freed of this burden. This is what Spatial Data Infrastructure (SDI) is all about. SDI can be expanded to include access to imagery in the same manner as access to maps and other data.
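A quick worked example of the square-law point: for a fixed area, the pixel count, and hence the data volume, grows as the square of the improvement in ground resolution. The area, resolutions, band count and bit depth below are illustrative assumptions.

```python
# How data volume for a fixed area scales with ground resolution.
# Area, resolutions, band count and bit depth are illustrative assumptions.

def scene_size_gb(area_km2: float, gsd_m: float,
                  n_bands: int = 3, bits_per_sample: int = 8) -> float:
    pixels = area_km2 * 1e6 / gsd_m ** 2
    return pixels * n_bands * bits_per_sample / 8 / 1e9

for gsd in (23.0, 5.8, 1.0, 0.6):
    print(f"{gsd:>4} m GSD over 10,000 sq km: {scene_size_gb(10000, gsd):8.1f} GB")
```

Covering the same 10,000 square kilometres at 0.6 m instead of 23 m multiplies the data volume by more than a thousand, from a fraction of a gigabyte to tens of gigabytes, which is why trading ownership of imagery for access to it is so attractive.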

High resolution satellite imagery is one of five major technologies which will significantly contribute to SDI. The others are Web 2.0, Photogrammetry, GIS and GPS. These technologies will enable applications that will be of use to the common person. However, low and medium resolution satellite imagery will continue to play a significant role in applications like weather forecasting, crop estimation, disaster management and environmental monitoring where a synoptic view is necessary.

Two other imaging technologies are also capable of high resolution imaging. Synthetic Aperture Radar can provide metre-level spatial resolution, but interpreting the imagery requires special expertise. The other is hyperspectral imaging, which provides very high spectral resolution. Both these technologies have been overshadowed by the sudden explosion of sub-metre optical imaging. Will there be an application like Google Earth which uses these technologies and makes them accessible to the average, non-technical user?