
Orthorectification for Multi-Sensor Imagery Data Fusion

Mudit Mathur
Squadron Leader, Indian Air Force
[email protected]

Space-based observation provides repeated, unrestricted access to every corner of the globe, in full compliance with international law. This capability provides early warning of situations even before any action has to be taken to deal with them: risks can be assessed before they turn into threats. Image fusion has been the focus of considerable research attention in recent years, with a plethora of proposed algorithms drawing on numerous image processing and information fusion techniques. Yet no single optimal information fusion strategy, or spectral decomposition to precede it, can be defined for arbitrary multi-sensor data. The human brain routinely carries out information processing and fusion; the biological model is to gather observations from various similar or dissimilar sources and sensors, extract the required information (inferences), and combine or fuse these with the aim of obtaining an enhanced status and identity of a perceived object.

This paper reports on the experience gained using orthorectified images to integrate multi-sensor images for data fusion, in order to benefit from increased spatial, spectral and temporal resolution in addition to increased reliability and reduced ambiguity.

A digital orthophoto is simply a photographic map that can be used to measure true distances; it is an accurate representation of the earth’s surface. “Digital orthophoto: a raster photographic image that is combined with differential rectification to remove image displacements caused by camera tilt and terrain relief.” Orthoimagery serves as a seamless base map layer to which many other layers are registered; it can be combined with digital elevation data for 3-D modelling and for slope and terrain analyses, and can be easily mosaicked to create seamless images of larger areas.

Orthorectification Work Process

Orthorectification and stereo intersection are the two most important methods for preparing fundamental data for multi-sensor integration applications. Orthorectification transforms the central projection of the image into an orthogonal view of the ground with uniform scale, thereby removing the distorting effects of tilt, optical projection and terrain relief. To create a digital orthophoto, several fundamental inputs are necessary:

  • Aerial photos / satellite imagery with a high percentage of overlap, obtained by scanning aerial photo diapositives or negatives on an image-quality scanner, or directly from a satellite sensor
  • Aerotriangulation (A.T.) results / ephemeris: Ground Control Points (GCPs) are determined by conventional ground surveys, from published maps, by Global Positioning System (GPS) surveys, or by aerotriangulation. These points are taken at visible physical features on the landscape. Depending on the type of algorithmic correction to be used, a minimum of 3 to 5 good GCPs must be established. The relationship of the x, y photo coordinates to the real-world GCPs is then used to determine the algorithm for resampling the image. By using GCPs, the mathematical relationship between the real-world coordinates and the scanned aerial photograph is determined, and the digital image is resampled to create the rectified image
  • A digital elevation model (DEM): a regularly spaced grid of masspoints, each containing an x, y and z value. These elevations are collected from stereoscopic models by photogrammetric methods. A more robust digital terrain model (DTM) can also be used because it includes strategically placed masspoints, dense break lines and ridgelines.
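The GCP step above amounts to fitting a mathematical model between photo and ground coordinates. As a minimal sketch, the following estimates a first-order (affine) model from GCPs by least squares; the function name and the four sample GCPs are purely illustrative, not from any particular software package.

```python
import numpy as np

def fit_affine(px, py, gx, gy):
    """Least-squares affine model mapping image coords (px, py) to
    ground coords (gx, gy).  Needs at least 3 non-collinear GCPs,
    matching the 3-5 point minimum mentioned in the text."""
    A = np.column_stack([px, py, np.ones(len(px))])
    cx, _, _, _ = np.linalg.lstsq(A, gx, rcond=None)  # gx = a*px + b*py + c
    cy, _, _, _ = np.linalg.lstsq(A, gy, rcond=None)  # gy = d*px + e*py + f
    return cx, cy

# Four hypothetical GCPs from an exactly affine scene:
px = np.array([0.0, 100.0, 0.0, 100.0])
py = np.array([0.0, 0.0, 100.0, 100.0])
gx = 2.0 * px + 0.1 * py + 500.0
gy = -0.1 * px + 2.0 * py + 900.0

cx, cy = fit_affine(px, py, gx, gy)
```

With more GCPs than unknowns, the least-squares residuals also give a direct check on GCP quality before resampling.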

For orthorectification, the resampling of the digital image involves warping the image so that distance and area are uniform in relationship to real world measurements.
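The warping itself is usually done by inverse mapping: for every cell of the uniform output grid, the rectification model is evaluated to find the corresponding source pixel. The sketch below assumes a caller-supplied `ground_to_image` model (such as the affine fit described above) and uses nearest-neighbour sampling for brevity; production systems would use bilinear or cubic interpolation.

```python
import numpy as np

def resample_to_grid(image, ground_to_image, out_shape):
    """Inverse-mapping resampling: for every output ground cell, find the
    source pixel via the (hypothetical) ground_to_image model and copy it."""
    out = np.zeros(out_shape, dtype=image.dtype)
    rows, cols = np.indices(out_shape)
    src_r, src_c = ground_to_image(rows, cols)
    src_r = np.rint(src_r).astype(int)
    src_c = np.rint(src_c).astype(int)
    # Cells that map outside the source image stay zero (no data).
    valid = ((src_r >= 0) & (src_r < image.shape[0]) &
             (src_c >= 0) & (src_c < image.shape[1]))
    out[valid] = image[src_r[valid], src_c[valid]]
    return out

# Toy example: an output grid at half the source resolution.
src = np.arange(16, dtype=np.uint8).reshape(4, 4)
ortho = resample_to_grid(src, lambda r, c: (2 * r, 2 * c), (2, 2))
```

Inverse mapping guarantees that every output cell is filled exactly once, which is why it is preferred over forward-projecting source pixels.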

Depending on the needs of the aerial imagery in the GIS system, there are advantages and disadvantages to either method: GCP orthorectification is faster, while DEM-based orthorectification is more accurate.

Usage of Orthoimagery
Digital orthoimagery provides visual information for the following partial list of applications.

  • Internal Security
  • Surveying & Mapping
  • Emergency Management
  • In disaster management & Public Safety Planning, Response, & Mitigation
  • Environmental Management
  • Tax Mapping
  • Transportation Management
  • Operations & Planning
  • Utilities Management
  • Land Planning and Zoning
  • Drainage Planning & Management
  • Agriculture
  • Insurance
  • Planning & Regulation
  • Natural Resource Inventories and Assessments

Limitations of Digital Ortho-photos
Though orthoimagery addresses a plethora of issues in multi-sensor integration, it is limited in the following respects:

  • Expansion features, such as bridges, create problems in ordinary digital orthophotos. DTM data is captured at ground level, so bridges that are rectified with this data are “pulled down to the ground,” giving them a distorted appearance.
  • Elevated features (e.g., buildings, trees, power lines) also create a problem due to radial displacement. Distortion increases with distance from the centre of the aerial photograph, so features such as buildings lean noticeably. The amount a feature leans depends on the percentage of overlap in the aerial photography and on the height of that feature: the higher the percentage of overlap, the less features lean, because the amount of photography used from the outer edge is reduced.

    Fig. 1 Usage of ortho-imagery

    Fig. 2 Double mapping effects and True Orthophoto

  • In an urban landscape there are unavoidably hidden areas for which no information exists in the original aerial image. The presence of hidden areas results in double-mapping effects in the resulting traditional orthophoto images.
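The lean of elevated features described above is quantified by the classic photogrammetric relief-displacement approximation d = r·h / H, where r is the radial distance from the photo centre, h the feature height and H the flying height. A small worked sketch (the numbers are illustrative):

```python
def relief_displacement(radial_mm, height_m, flying_height_m):
    """Relief displacement d = r * h / H: image displacement grows with
    radial distance from the photo centre (r) and feature height (h), and
    shrinks with flying height (H) -- which is why high-overlap photography,
    using mostly the central image area, shows less lean."""
    return radial_mm * height_m / flying_height_m

# A 50 m building imaged 80 mm from the photo centre, flown at 2000 m:
d = relief_displacement(80.0, 50.0, 2000.0)  # 2.0 mm of lean on the photo
```

The same formula explains why the effect vanishes at nadir (r = 0) and why satellite imagery, with its very large H, suffers far less from it.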

This distortion can affect both the functional and the aesthetic qualities of a digital orthophoto. These problems are, however, less pronounced in satellite imagery.

“True Ortho” is the solution for double mapping. In recent years several orthographic rectification schemes compensating for double-mapping effects have been proposed for generating large-scale true orthophotos. These schemes use image-based hidden-area detection algorithms.

The remote sensing community has been more concerned with co-registration of images than with the comprehensive image rectification concerns of the photogrammetry community. Terrain effects were considered of minor impact by the remote sensing community until recently, when

  • Higher resolution systems became available,
  • A greater emphasis on satellite data integration with GIS for business applications occurred, and
  • Change detection and data fusion studies became more prevalent.

The emerging standard for remotely sensed imagery data transfer has identified the basic requirement for orthorectification processing, as well as adherence to map projection and datum accuracy standards and annotation. Many sensor systems (e.g. AVHRR, MODIS, GOES and Landsat) employ line-scan designs that view off-nadir by as much as 55 degrees, while other sensor systems with pushbroom imaging designs (e.g. ASTER and Hyperion) regularly acquire off-nadir views of as much as 25 degrees. The development of automatic orthorectification and mosaicking procedures has relied on key recent developments, the first being the general availability of DEMs with 1 arc-second posting (nominally 30 m) for much of the world’s landmass from the Shuttle Radar Topography Mission (SRTM). This permits the preparation of orthorectified satellite imagery using techniques similar to those developed by the photogrammetry community for aerial photographs. These developments provide the key datasets necessary to prepare a baseline image dataset to which all satellite imagery datasets having a pixel resolution of 10 m or greater can be automatically orthorectified to sub-pixel accuracy.

Image fusion aims at the integration of complementary data to enhance the information content of the imagery, i.e. to make the imagery more useful to a particular application. The definition of image combinations and techniques depends on the characteristics a dataset should have in order to serve the user. However, it is possible to summarize a general approach, which describes the overall processing chain needed in order to achieve image fusion. In the case of multi-sensor image data, the images have to be geometrically (orthometrically) and radiometrically corrected before being suitable for the fusion process, using collateral data such as atmospheric conditions, sensor viewing geometry, ground control points (GCPs), etc.

Fig. 3 Scaling and sensor location: final need for orthorectification

Fig. 4 Multi-source imagery (right: aerial, left: satellite Landsat)

Fig. 5 Orthorectified satellite imagery

An elementary pre-processing step is the accurate co-registration of the dataset, so that corresponding features coincide.
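One widely used automatic co-registration step is phase correlation, which recovers the translation between two overlapping images from the phase of their cross-power spectrum. The sketch below recovers integer shifts only; sub-pixel refinements exist but are omitted, and the function name is illustrative.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation between two equally
    sized images by phase correlation."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12          # keep phase only
    corr = np.fft.ifft2(F).real     # sharp peak at the translation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Toy check: shift a random image by (3, -5) with wrap-around and recover it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, (3, -5), axis=(0, 1))
shift = phase_correlation_shift(moved, ref)
```

Because the correlation surface is computed in the frequency domain, the cost is independent of the size of the shift, which suits large multi-sensor scenes.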

Depending on the processing stage at which data fusion takes place, three fusion levels are distinguished: pixel, feature and decision level. Image fusion mostly refers to pixel-based data fusion, where the input data are merged by applying a mathematical algorithm to the coinciding pixel values of the various input channels to form a new output image. Once the alignment of the dataset is established, it is possible to apply certain fusion techniques. The manifold fusion techniques can be grouped as follows.

Colour-related techniques: This group comprises methods that exploit the different possibilities of representing pixel values in colour spaces. An example is the Intensity (I) – Hue (H) – Saturation (S) colour transformation. If a multispectral image is transformed from RGB space into IHS, it is possible to integrate a fourth channel by exchanging it with one of the components obtained (I, H or S). There are many other techniques that follow this substitution principle, and other colour transformations also suit the fusion concept (e.g. RGB or luminance/chrominance – YIQ).
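The IHS substitution described above is often implemented in its "fast" linear form: with intensity defined as I = (R+G+B)/3, replacing I by the panchromatic band and inverting the transform reduces to adding (pan − I) to every band. A minimal sketch, assuming floating-point bands on a common grid:

```python
import numpy as np

def fast_ihs_fusion(rgb, pan):
    """Fast IHS-style fusion: the intensity component I = (R+G+B)/3 is
    replaced by the higher-resolution panchromatic band.  Adding (pan - I)
    to every band is algebraically equivalent to the forward transform,
    intensity substitution, and inverse transform for this linear I."""
    intensity = rgb.mean(axis=-1, keepdims=True)
    return rgb + (pan[..., np.newaxis] - intensity)

# If pan happens to equal the existing intensity, the image is unchanged:
rgb = np.array([[[0.2, 0.4, 0.6]]])
same = fast_ihs_fusion(rgb, rgb.mean(axis=-1))
```

The same substitution idea carries over to other colour spaces: any invertible transform with a separable brightness-like component can host the pan band.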

Statistical / numerical approaches (Pohl, 1996): The second group of fusion techniques deals, amongst others, with arithmetic combinations of image channels, Principal Component Analysis (PCA) and Regression Variable Substitution (RVS). Fusion by band combinations using arithmetic operators opens a wide range of possibilities to the remote sensing data user. Image addition or multiplication contributes to the enhancement of features, whilst channel subtraction and ratios allow the identification of changes. The Brovey transformation is a particular method of ratioing which preserves spectral values while increasing the spatial resolution. PCA and similar methods serve data-volume reduction, change detection or image enhancement. RVS is used to replace bands by linearly combining additional image channels with the dataset.
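The Brovey transformation mentioned above can be written in a few lines: each band is divided by the band sum and scaled by the panchromatic value, so the band proportions (the colour) are preserved while the pan band supplies the spatial detail. A minimal sketch:

```python
import numpy as np

def brovey(rgb, pan, eps=1e-12):
    """Brovey transform: out_band = band / (sum of bands) * pan.
    Band ratios (colour) are preserved; overall brightness and spatial
    detail come from the panchromatic band.  eps guards division by zero."""
    total = rgb.sum(axis=-1, keepdims=True) + eps
    return rgb / total * pan[..., np.newaxis]

# One pixel: bands (1, 2, 3) with pan = 12 -> (2, 4, 6), same 1:2:3 ratio.
out = brovey(np.array([[[1.0, 2.0, 3.0]]]), np.array([[12.0]]))
```

Because only the ratios are preserved, Brovey output is best used for visual interpretation rather than quantitative spectral analysis.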

The experience gained shows very clearly that a major element of the operational implementation of image fusion with respect to visual image interpretation is the interactive component. The fine-tuning of the image enhancement parameters, i.e. histogram value distribution, filters, assignment of colours, etc., influences the success of the fusion itself. Similar types of areas and datasets require similar values for a successful merge. Crucial to the overall achievement of image fusion is the adjustment of the colours in the final product. The use of filters for noise reduction or edge enhancement is a sensitive matter in visual image interpretation. Depending on the scale and the type of feature to be examined, filters can help in understanding the image; in some cases, however, they lead to a loss of detail which might be relevant to the application. Whether and when to apply filtering to VIR or SAR data has to be decided on a case-by-case basis. Change detection is vastly simplified, and SAR and multispectral imagery can be more easily and accurately interpreted.
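One of the histogram adjustments tuned interactively in this way is a simple percentile contrast stretch. The sketch below is an illustrative default, not any specific package's routine; the 2%/98% cut-offs are the kind of parameter an operator would adjust per scene.

```python
import numpy as np

def percentile_stretch(band, low=2.0, high=98.0):
    """Linear contrast stretch between the low/high percentiles.
    Values outside the range are clipped; output is scaled to [0, 1]."""
    lo, hi = np.percentile(band, [low, high])
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# A synthetic band with values 0..100 stretched to the unit range:
stretched = percentile_stretch(np.linspace(0.0, 100.0, 101))
```

Clipping the tails sacrifices a few extreme pixels to spread the bulk of the histogram across the display range, which is usually what visual interpretation needs.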

The perspective and terrain distortion that presents a significant obstacle to both human and machine correlation can be overcome by orthorectification, which provides a common perspective and thus both reduces mental labour and provides a practical first step for follow-on correlation algorithms. “This article is a collection of works done in the field of orthorectification and data fusion by various individuals and institutes. The author claims neither the concept nor its methodologies; however, a direct integration of isolated works in the field of orthorectification and data fusion has been done in this article.”

Fig. 6 Orthorectified Imagery cross platform

Fig. 7 Diagrammatic representation of Orthorectification to data fusion