Sensor Orientation and Ortho-Rectification of HR Satellite Images: Review and Application with FORMOSAT-2


Thierry Toutin
Natural Resources Canada,
Canada Centre for Remote Sensing
588 Booth Street, Ottawa,
Ontario, Canada K1A 0Y7
[email protected]

The sensor orientation and ortho-rectification of high-resolution (HR) remote sensing images have become key issues in multi-source, multi-format data integration, management and analysis for many geomatic and geoscientific applications. This paper first reviews the sources of geometric distortion and their modelling (physical and empirical) during sensor orientation and ortho-rectification. Examples are then presented with the new Taiwanese high-resolution sensor of FORMOSAT-2 to evaluate the impact of accurate sensor orientation on the different processing steps.

Why ortho-rectify remote sensing images? Raw images usually contain geometric distortions so significant that they cannot be used directly with map-based products in a geographic information system (GIS). Consequently, multi-source, multi-format, multi-date data integration for geomatic and geoscientific applications requires geometric and radiometric processing adapted to the nature and characteristics of the data, in order to retain the best information from each image in the composite ortho-rectified products. One must admit that the new input data, the methods and algorithms, the output processed data, and their analysis and interpretation have introduced new needs and requirements for geometric correction, owing to a drastic evolution with large scientific and technological improvements since the early days of Earth observation (EO) satellites.

In fact, each image acquisition system produces unique geometric distortions in its raw images, which vary considerably with different factors, mainly the platform (airborne versus satellite), its orbit, and the sensor (rotating or push-broom scanners, visible/infrared (VIR) or microwave, low to high resolution). Consequently, to integrate different Earth observation data, as well as cartographic vector data, into a GIS, the geometry of each raw image must be converted separately to an ortho-image, so that each component ortho-image of the data set can be registered, compared and combined into the end-user cartographic database.

These geometric distortions (Table 1), including the map deformations, are predictable or systematic and generally well understood (Light et al., 1980). Some, especially those related to the instrumentation, are generally corrected at ground receiving stations; others, such as those related to the atmosphere, are generally not taken into account and corrected because they are too specific (acquisition time and location, lack of information on the atmosphere, etc.). All the remaining geometric distortions thus require mathematical models and functions to perform the geometric corrections of the imagery (sensor orientation and ortho-rectification): either heuristic and probabilistic models or physical and deterministic models (Robertson, 2003; Toutin, 2004; Chen et al., 2006).
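As a minimal sketch of the heuristic approach, a 1st-order 2D polynomial (affine) correction can be fitted to ground control points (GCPs) by least squares. All coordinates and coefficients below are synthetic and purely illustrative; this is not the processing chain of any particular software.

```python
import numpy as np

def fit_affine(image_xy, ground_XY):
    """Least-squares fit of a 1st-order 2D polynomial (affine) model
    mapping ground map coordinates (X, Y) to image coordinates (x, y):
    x = a0 + a1*X + a2*Y, and similarly for y."""
    X, Y = ground_XY[:, 0], ground_XY[:, 1]
    A = np.column_stack([np.ones_like(X), X, Y])  # design matrix
    coef_x, *_ = np.linalg.lstsq(A, image_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, image_xy[:, 1], rcond=None)
    return coef_x, coef_y

# Synthetic GCPs: image coordinates generated from a known affine transform
ground = np.array([[0., 0.], [1000., 0.], [0., 1000.], [1000., 1000.]])
true_cx = np.array([10.0, 0.1, 0.02])
true_cy = np.array([20.0, -0.01, 0.1])
G = np.column_stack([np.ones(4), ground[:, 0], ground[:, 1]])
image = np.column_stack([G @ true_cx, G @ true_cy])

cx, cy = fit_affine(image, ground)  # recovers true_cx, true_cy
```

Such a model absorbs whatever distortions happen to be present at the GCPs without modelling their physical causes, which is precisely why its validity is limited to the conditions discussed below.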

Table 1. Description of sources of image geometric distortions.

Platform (spaceborne or airborne)
  • Variations of the osculatory orbit
  • Variations in platform attitude (low to high frequencies)

Sensor (VIR, SAR or HR)
  • Variations in sensor mechanics (scan rate, scanning velocity, etc.)
  • Lens distortions (focal, decentering, etc.)
  • Viewing/look angles
  • Panoramic effect with field of view (FOV)

Measuring instruments
  • Time variations or drift
  • Clock synchronicity

Atmosphere
  • Refraction and turbulence

Earth
  • Curvature, rotation, topographic effect

Map
  • Geoid to ellipsoid to map

The 3D rational function model (RFM), as recently re-introduced by Madani (1999) for HR sensors, can be considered a generalization of the heuristic and probabilistic models for EO sensor orientation. In fact, the RFM includes well-known solutions:

  • The traditional 2D/3D polynomial functions, such as the affine functions;
  • The direct linear transformation (DLT) as a perspective model; and
  • The collinearity equations being a 1st-order RFM without a priori information.
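The generalization above can be sketched in a few lines: each image coordinate is the ratio of two polynomials in the ground coordinates, and setting the denominator to a constant recovers the polynomial special cases. This is a 1st-order toy with invented coefficients (operational RFMs typically use 3rd-order polynomials with normalized coordinates), not the RPC format of any real sensor.

```python
import numpy as np

def rfm_coordinate(lat, lon, h, num_coef, den_coef):
    """Evaluate one image coordinate of a 1st-order rational function
    model: a ratio of two polynomials in the ground coordinates.
    With den_coef = [1, 0, 0, 0] the denominator is constant and the
    RFM degenerates to an affine (1st-order polynomial) model."""
    terms = np.array([1.0, lat, lon, h])
    return terms @ np.asarray(num_coef) / (terms @ np.asarray(den_coef))

# Illustrative coefficients (not from any real sensor's RPC file)
num = [0.5, 1.0, -0.3, 0.01]
den = [1.0, 0.0, 0.0, 0.0]  # constant denominator -> affine special case
row = rfm_coordinate(0.2, -0.1, 0.05, num, den)
```

In practice the coefficients are either supplied by the image vendor (terrain-independent) or estimated from many GCPs (terrain-dependent).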

All these solutions can be applied easily without knowledge of the imaging geometry, but they are useful only under specific imaging conditions (few terrain distortions), for restricted types of sensors (only satellite HR sensors with a small FOV), and after some geometric pre-processing (systematically georeferenced or map-oriented images) (Toutin, 2004; Jacobsen, 2006). They have never been used with HR radar or airborne images.

On the other hand, the physical and deterministic models fully reflect the physical reality of the viewing geometry: they should mathematically model all distortions of the platform (position, velocity, and attitude for VIR sensors), the sensor (lens, view direction, panoramic effect), the Earth (ellipsoid and relief), and the deformations of the cartographic projection. The final mathematical functions, which integrate the ephemeris and attitude data or the global positioning system (GPS) and inertial navigation system (INS) data, differ depending on the sensor, the platform and its acquisition geometry (instantaneous acquisition systems, rotating or oscillating scanning mirrors, push-broom scanners, radar). However, they are generally based on the collinearity condition for VIR sensors (Light et al., 1980) and on Doppler/range or radargrammetric equations for radar (Curlander, 1982; Leberl, 1990). In addition, other conditions can be added to take into account the knowledge and accuracy of the metadata: either pseudo-observations that assign relaxed a priori variances to the satellite's osculatory and sensor parameters, in relation to the expected accuracy of the orbit, or weighted constraint equations on the model parameters that enforce physical or geometrical characteristics of the modelling.
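The collinearity condition underlying the physical models for VIR sensors can be sketched as follows, for a single ground point and a simplified frame geometry (nadir-looking sensor, identity attitude matrix, all numeric values invented for illustration; a real push-broom model would apply this per scan line with time-varying orbit and attitude):

```python
import numpy as np

def collinearity_project(ground_pt, persp_center, R, focal):
    """Project a ground point into image space via the collinearity
    equations: x = -f * (r1 . d) / (r3 . d),  y = -f * (r2 . d) / (r3 . d),
    where d is the vector from the perspective centre to the ground point,
    R is the sensor attitude rotation matrix (rows r1, r2, r3), and f is
    the focal length."""
    d = np.asarray(ground_pt, dtype=float) - np.asarray(persp_center, dtype=float)
    u = R @ d  # ground vector expressed in the sensor frame
    x = -focal * u[0] / u[2]
    y = -focal * u[1] / u[2]
    return x, y

# Nadir-looking sensor at 700 km altitude, identity attitude, f = 0.6 m
R = np.eye(3)
x, y = collinearity_project([100.0, 50.0, 0.0], [0.0, 0.0, 700000.0], R, 0.6)
```

The pseudo-observations and weighted constraints mentioned above then enter the least-squares adjustment of such equations as extra observation equations on the orbit and attitude parameters.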