Prof. Karsten Jacobsen
University of Hannover
Currently three commercially operated very high resolution (VHR) optical satellites with a ground sampling distance (GSD) of 1 m or better are active: IKONOS, QuickBird and OrbView-3. The number of VHR satellites will grow very soon with the announced systems IRS Cartosat-2 from India, Kompsat-2 from South Korea, EROS-B from Israel and Pleiades from France, all having a GSD between 0.7 m and 1 m. In addition, the resolution will be improved down to 0.5 m by WorldView-1, WorldView-2 and OrbView-5 from the USA. The competition improves the ordering conditions, making data acquisition for geo-information products more economic. Not only the number of imaging satellites is important, but also their imaging capacity, which depends on the storage and downlink possibilities and, more importantly, on the agility of the satellite and a possible requirement for a slow-down mode. If the sampling rate is not sufficient, or more energy has to be collected because transfer delay and integration (TDI) – the electronic forward motion compensation – is missing, the satellite has to rotate during imaging to reduce the angular speed; this of course reduces the imaging capacity.
slow-down of imaging by permanent rotation of the view direction; slow-down factor = b / a
Nearly original space images and images projected to a plane with constant height, like IKONOS Geo and QuickBird OrthoReady (OR) Standard, are in use. The tendency goes to images projected to a plane, corresponding to SPOT level 1B. The scene orientation has to respect the image product. All imaging satellites are equipped with a positioning system such as GPS, together with gyros and star sensors for the attitude. Based on this, the full orientation of each image line can be determined. The available VHR sensors allow a standard deviation of the ground coordinates better than 10 m without control points. This may be sufficient for some purposes, but usually it has to be verified or improved. There are different orientation procedures in use:
- Rational polynomial coefficients (RPC) – the direct sensor orientation from the satellite vendors allows the determination of the relation between the image and the ground coordinates by one polynomial of the geographic ground coordinates X, Y and the height Z divided by another. Third-order polynomials are in use, so the orientation information can be expressed with 80 coefficients. This can be improved by means of control points – the bias-corrected RPC method.
- For the scene centre or the start of the scene, the view direction from the ground to the satellite is given in the header data distributed together with the images. Combined with the information about the satellite orbit and the imaging progress, this allows the geometric reconstruction of the imaging geometry for any ground point. Like the previously described method, it should be improved by means of control points. For original images the ephemeris is given, allowing a similar scene orientation.
- The field of view is very small, which also allows some approximations. With the 3D-affine transformation the mathematical model of a parallel projection is used. It does not use any of the available orientation information, so at least 4 well-distributed three-dimensional control points are required.
- The satellite images have a perspective geometry in the CCD-line direction and a close-to-parallel projection in the scan direction. With the direct linear transformation (DLT) a perspective model is used, also without pre-information about the orientation. For the 11 unknowns at least 6 well-distributed control points are required.
- A reduced number of RPC coefficients can be computed based on control points. Such terrain-dependent RPCs also do not use the existing orientation information. The minimal set of coefficients can be determined with at least 6 well-distributed three-dimensional control points.
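The two approximate models above can be sketched as linear least-squares fits. The following is a minimal illustration, not the software used for the tests; the point coordinates in the usage notes are assumed for demonstration only:

```python
import numpy as np

def fit_3d_affine(ground, image):
    """3D-affine model (parallel projection): x = a0 + a1*X + a2*Y + a3*Z and
    y = b0 + b1*X + b2*Y + b3*Z. 8 unknowns -> at least 4 control points,
    each contributing two observation equations."""
    A = np.hstack([np.ones((len(ground), 1)), ground])   # design matrix [1 X Y Z]
    ax, *_ = np.linalg.lstsq(A, image[:, 0], rcond=None)
    ay, *_ = np.linalg.lstsq(A, image[:, 1], rcond=None)
    return ax, ay

def fit_dlt(ground, image):
    """Direct linear transformation (perspective model), 11 unknowns ->
    at least 6 control points. Linearized as
    x*(L9*X + L10*Y + L11*Z + 1) = L1*X + L2*Y + L3*Z + L4 (same for y)."""
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(ground, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z]); rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z]); rhs.append(y)
    L, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return L

def dlt_project(L, X, Y, Z):
    """Ground-to-image mapping with the 11 DLT parameters."""
    den = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / den,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / den)
```

Both fits fail for degenerate control point distributions (e.g. all points in one plane), which is why the text stresses well-distributed three-dimensional control points.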
In the area of Zonguldak, Turkey, the orientation of the different VHR sensor images has been compared with the different methods and a varying number of control points. The terrain-dependent RPCs have only been used for a limited number of tests because the results at independent check points were so poor that only a warning can be given for this method. The quality of its results cannot be controlled with the residuals at the control points.
The orientation based on the vendors' RPCs and the geometric reconstruction can be made without control points. Without bias correction, a root mean square difference of 6.2 m against check points has been determined for IKONOS images, confirming the quality of the direct sensor orientation. The approximations DLT and 3D-affine transformation need at least 2 control points more than the theoretically required number to deliver acceptable results; with a high number of control points they also reached sub-pixel accuracy. With bias-corrected RPCs and geometric reconstruction, sub-pixel accuracy was reached on average starting from one control point, with slightly better results for the RPC solution. The inner accuracy of the IKONOS scenes is excellent, so a simple shift of the terrain-corrected coordinates gave even better results for both rigorous solutions than a 2D-affine transformation. The location of the control points in the scene turned out to be unimportant.
IKONOS Zonguldak: Results at independent check points for the different orientation methods as a function of the number of control points
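The bias-corrected RPC orientation described above can be sketched in two steps: evaluate the vendor RPCs as a ratio of cubic polynomials, then estimate a correction from control points. The monomial ordering and all numbers below are illustrative assumptions, not the vendor format:

```python
import numpy as np

def cubic_terms(X, Y, Z):
    """The 20 monomials of a third-order polynomial in the (normalized)
    ground coordinates; real RPC files define a fixed ordering."""
    return np.array([1, X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z,
                     X*Y*Z, X**3, X*Y*Y, X*Z*Z, X*X*Y, Y**3, Y*Z*Z, X*X*Z, Y*Y*Z, Z**3])

def rpc_project(num_l, den_l, num_s, den_s, X, Y, Z):
    """Image line/sample as one cubic polynomial divided by another:
    4 sets of 20 coefficients = the 80 RPC coefficients mentioned above."""
    t = cubic_terms(X, Y, Z)
    return np.array([num_l @ t / (den_l @ t), num_s @ t / (den_s @ t)])

def bias_shift(projected_gcp, measured_gcp):
    """Bias correction by a simple shift, as sufficient for IKONOS:
    the mean image-space offset at the control points (one point is enough)."""
    return np.mean(measured_gcp - projected_gcp, axis=0)
```

For scenes such as QuickBird, where the text recommends a 2D-affine correction instead of a shift, `bias_shift` would be replaced by a 6-parameter affine fit over at least 3 control points.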
In the same area a QuickBird scene has also been used, with similar results. The orientation with the 3D-affine transformation and with the DLT requires more and well-distributed control points, but did not reach the same accuracy at independent check points as the bias-corrected RPCs and the geometric reconstruction. This may be caused by the slow-down factor of 1.7 used by QuickBird. The inner scene accuracy of QuickBird in relation to its GSD of 0.62 m requires a 2D-affine transformation after terrain relief correction, so at least 3 control points should be used for bias-corrected RPCs and geometric reconstruction. Both methods have similar sub-pixel accuracy.
QuickBird Zonguldak: Results at independent check points for the different orientation methods as a function of the number of control points
OrbView-3 was only available as a slightly improved original sensor image. The scenes show very clear rotations during acquisition, so the first and the last lines are not parallel. This rules out the 3D-affine transformation and the DLT; they are limited to an accuracy of 5 m up to 20 m.
OrbView-3 is a less expensive satellite than IKONOS and QuickBird; it has no TDI, so staggered CCD-lines are used – a combination of two CCD-lines shifted against each other by half a pixel in the CCD-line direction. The pixels projected to the ground have a size of 2 m x 2 m, but they are over-sampled by 50%, leading to a GSD of 1 m. For this reason the image quality is slightly lower than for IKONOS, and the control point identification was limited to a standard deviation of 1 m. As for IKONOS and QuickBird, the direct sensor orientation without ground control was, at 9.3 m on average, below 10 m. With the same control points as for the scenes described before, it was not possible to reach sub-pixel accuracy with the bias-corrected RPC orientation. For the two scenes used, after terrain relief correction, average root mean square errors of the ground coordinates of 1.9 m were reached based on a shift to the control points, and 1.6 m based on an affine transformation. In relation to the projected pixel size of 2 m this is still sub-pixel accuracy, but not in relation to the 1 m GSD.
The accuracy of the scene orientation is important for the generation of digital elevation models (DEMs) and for GIS products. For DEMs the best possible accuracy is important; for GIS products it has to be seen in relation to the map scale, which is indirectly specified also for GIS products. As a rule of thumb, GIS data acquisition requires an image resolution of approximately 0.1 mm at map scale. So a map scale of 1 : 10 000 is possible with IKONOS and OrbView-3 scenes, and 1 : 6000 with QuickBird. A mapping accuracy of 0.25 mm in the map is sufficient, corresponding with the preceding rule to 2.5 pixels. This accuracy requirement has been reached with all three VHR sensors.
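The rule of thumb above can be written out directly; the 0.1 mm resolution rule and the 0.25 mm mapping accuracy are the values quoted in the text:

```python
def largest_map_scale_number(gsd_m, resolution_rule_mm=0.1):
    """Scale number M of the largest usable map scale 1 : M, from the rule
    that the image resolution should be ~0.1 mm at map scale:
    M = GSD / 0.1 mm."""
    return gsd_m / resolution_rule_mm * 1000.0

# IKONOS / OrbView-3 with 1 m GSD -> 1 : 10 000; QuickBird with 0.62 m -> ~1 : 6 000
ikonos_scale = largest_map_scale_number(1.0)     # 10000.0
quickbird_scale = largest_map_scale_number(0.62) # roughly 6200

# the mapping accuracy of 0.25 mm at map scale, expressed in pixels:
accuracy_px = 0.25 / 0.1                         # 2.5 pixels
```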
GIS DATA ACQUISITION
For GIS data acquisition the ground resolution, the radiometric quality, the sun elevation and the view direction of the scenes are important. In city areas a larger nadir angle may make it impossible to look down to the streets. A lower sun elevation causes shadows, worsening the street identification.
influence of the sun elevation on IKONOS image quality: sun elevation 63° versus sun elevation 41°
Of course the atmospheric conditions are also important: haze, cloud shadows and the haze around clouds reduce the image quality. Independent of this, there are still some clear differences between the three analyzed VHR sensors. The effective image resolution can be determined with an edge analysis, but this also depends on the contrast enhancement which has been applied in most cases. For all three sensor types the nominal GSD has been confirmed as the effective resolution, even if the details are not as clear in OrbView-3 images as in IKONOS.
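The edge analysis mentioned above can be sketched as follows: differentiating a 1-D grey value profile across a sharp edge yields the line-spread function, whose width in pixels is a crude measure of the effective resolution. This is a simplified illustration (no sub-pixel interpolation, clean profile assumed), not the procedure actually used for the tests:

```python
import numpy as np

def lsf_width_pixels(edge_profile):
    """Differentiate the edge-spread function to get the line-spread function
    and return its full width at half maximum in whole pixels. A value near 1
    means the effective resolution matches the nominal GSD; larger values
    indicate a blurred image."""
    lsf = np.abs(np.diff(np.asarray(edge_profile, dtype=float)))
    lsf /= lsf.max()                    # normalize the line-spread function
    above = np.where(lsf >= 0.5)[0]     # samples above half maximum
    return int(above[-1] - above[0] + 1)
```

A perfectly sharp step edge gives a width of 1 pixel, while a smeared edge profile gives a correspondingly larger width.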
1 m GSD, sun elevation 46°; 0.62 m GSD, sun elevation 65°; 1 m GSD, sun elevation 63°
The QuickBird GSD of 0.62 m shows a clear improvement of the object identification compared with the 1 m GSD of IKONOS. With QuickBird the details required for the German topographic map 1 : 5000 can be recognized; IKONOS scenes are more suitable for the map scale 1 : 10 000. The image quality of OrbView-3 is not as good as that of IKONOS, but it is still sufficient for mapping at this scale. More important is to have good images with a limited nadir angle and a sufficient sun elevation.
The panchromatic images have a four times higher geometric resolution than the multispectral ones. With the 2.4 m GSD of the QuickBird multispectral channels, individual buildings can be identified and mapped, but a comparison with the data acquisition from panchromatic images makes it obvious that the shape of the buildings cannot be digitized as accurately.
QuickBird multispectral / QuickBird panchromatic – green: from multispectral, red: from panchromatic
IKONOS pan-sharpened IKONOS panchromatic
The multispectral information can be combined with the panchromatic channel by pan-sharpening. A comparison of data acquisition with panchromatic and pan-sharpened images showed limited advantages for the pan-sharpened images – the objects can be identified and classified more easily. A few small buildings located in shadows were not recognized in the panchromatic scenes. Pan-sharpening with images from the same orbit is not possible for OrbView-3, because it cannot register panchromatic and multispectral images simultaneously.
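Pan-sharpening can be done in several ways; a Brovey-type ratio fusion is one common, minimal variant (not necessarily the method used for the scenes discussed here). Each multispectral band, resampled to the panchromatic grid, is scaled so that the band mean matches the panchromatic intensity:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-style pan-sharpening sketch.
    ms:  (bands, H, W) multispectral, already resampled to the pan grid
    pan: (H, W) panchromatic
    Each band is multiplied by pan / mean(ms bands), so the sharpened image
    keeps the band ratios (colour) while taking its detail from pan."""
    intensity = ms.mean(axis=0)
    return ms * (pan / np.maximum(intensity, 1e-6))  # guard against division by zero
```

The mean over the sharpened bands equals the panchromatic image, while the ratios between the bands, and with them the spectral classification, are preserved.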
planned (dashed ellipse) and unplanned (solid ellipse) areas in OrbView-3 image
The data acquisition depends quite strongly on the area. With wide roads, buildings that are not too high and a regular layout, the data acquisition is easier than in unplanned areas, where the varying building size and orientation and the irregular, unpaved roads make it considerably more difficult. In addition, unplanned areas may be located on inclined terrain, often in shadow.
DEM GENERATION

Digital elevation models (DEMs) can be generated from VHR satellite images. It is possible to take stereo pairs from the same orbit, but the required fast rotation from one view to the next reduces the possibility of acquiring other scenes. DigitalGlobe mentioned that the acquisition of a stereo model with QuickBird takes 9 times the capacity of a single image, while the price is only 2.3 times as high. For this reason it is not economic for the satellite vendors to acquire stereo models, and the number of same-orbit stereo models in the archives is limited. Images not taken on the same day may be influenced by changes of the objects and the shadows, making automatic image matching difficult up to impossible. With a QuickBird model having 10 days between both image acquisitions no problems occurred, but with IKONOS scenes taken 2 months apart the automatic image matching failed.
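The economic argument can be made explicit with the two factors quoted in the text: relative to a single scene, a stereo model yields only about a quarter of the revenue per unit of imaging capacity.

```python
# Worked arithmetic for the stereo-economics argument (figures as quoted
# in the text for QuickBird):
capacity_factor = 9.0   # a stereo model consumes 9x the capacity of one scene
price_factor = 2.3      # but is sold for only 2.3x the price

revenue_per_capacity = price_factor / capacity_factor  # ~0.26 of a mono scene
```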
By simple theory the vertical accuracy depends linearly on the height-to-base relation. This is the case for open and flat areas, but in cities a height-to-base relation of 1.0 is not usable: in one image the left side of a building may be seen and in the other the right side, which causes problems for the automatic image matching. In addition it is not possible to see down to street level in both images together.
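The simple theory can be written as sigma_Z = (h/b) * sigma_parallax. As a sketch, the parallax accuracy is assumed here as one pixel times the GSD; this assumption is for illustration only:

```python
def height_std_m(h_to_b, gsd_m, sigma_px=1.0):
    """Vertical standard deviation of a stereo model by the simple theory:
    sigma_Z = (height/base) * sigma_px * GSD.
    sigma_px (parallax accuracy in pixels) is an assumed value."""
    return h_to_b * sigma_px * gsd_m

# illustrative comparison for a 1 m GSD sensor:
# an h/b = 1.0 geometry vs. the h/b = 7.5 IKONOS model discussed below
# -> 1.0 m vs. 7.5 m theoretical vertical standard deviation
```

This linear relation is exactly the trade-off discussed next: a large height-to-base relation degrades the theoretical accuracy but improves the completeness of the matching in city areas.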
DEM based on IKONOS stereo model with height-to-base-relation 7.5
DEM after Median filter
Considerably better and more complete results have been achieved in city areas with a poor height-to-base relation like 7.5. It is not easy to find a compromise between the accuracy and the completeness of a DEM in city areas; the optimal height-to-base relation is approximately in the range of 3.
quality map of the automatic matching of the OrbView-3 model with h/b = 1.4 (white: r = 1.0, black: r = 0.6) and the poor frequency distribution of the correlation coefficients
With an OrbView-3 model taken from the same orbit with a height-to-base relation of 1.4, the results were not optimal: the DEM has large gaps and the correlation coefficients are poor.
CONCLUSION

The data acquisition for GIS using very high resolution space images is becoming more and more popular. With the increased number of satellites and good archives, it is now much easier to get optimal images than before. The scene orientation should be made with bias-corrected RPCs or with geometric reconstruction. The approximate orientation methods are not economic solutions and sometimes lead to poor results.
The data acquisition for GIS is easier with pan-sharpened images, but it can also be made with panchromatic scenes. The higher resolution of QuickBird is an advantage, but IKONOS and OrbView-3 are also suitable up to the map scale 1 : 10 000. Especially in city areas the view angle should not be too large and the sun elevation not too small. For DEM generation a height-to-base relation in the range of 3.0 is optimal in city areas. The scenes should come from the same orbit, or the time difference should not exceed approximately 10 days, to avoid changes of the objects and quite different shadows.
ACKNOWLEDGEMENTS

Parts of the presented results have been supported by TUBITAK, Turkey, and the Jülich Research Centre, Germany. Thanks go to Dr. Gürcan Büyüksalih for supporting the research.