
Comparative study of different fusion techniques of IRS satellite images Cartosat-1 and LISS-IV

Anil Z Chitade
Research Scholar
[email protected]

Dr. S.K. Katiyar
Associate Professor
[email protected]

Department of Civil Engg,
Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh, India

The information about events in time or location of objects in space is often obtained using multiple sensors. Fast integration or fusion of such digital images collected from multiple sources and from different remote sensors has become critical for the success of multiplatform remote sensing missions.

This paper presents a comparative study of various image fusion methods, analysed in terms of statistical parameters such as standard deviation, variance, coefficient of variation, and Pearson's skewness coefficient. In this research, the Indian Remote Sensing satellite Cartosat-1 panchromatic (PAN) image with a spatial resolution of 2.5 m and the LISS-IV multispectral (3-band) image with a spatial resolution of 5.8 m are used for fusion. After comparing all spectral and statistical parameters, the normalised difference vegetation index (NDVI) value of each fused image is evaluated, and the suitability of each fusion method for vegetation identification is assessed. Based on the statistical parameters and the NDVI values of the fused images, it is concluded that the Ehlers method is the most suitable where vegetation mapping is concerned.

Image fusion refers to techniques that integrate complementary information from multiple image sensors so that the resulting images are more suitable for human visual perception and for computer-aided processing tasks. The information about events in time or the location of objects in space is often obtained using multiple sensors, and the fast integration or fusion of digital images collected from multiple sources and different remote sensors has become critical for the success of multiplatform remote sensing missions. As image fusion techniques have developed quickly in recent years across applications such as remote sensing, medical imaging, digital camera vision, and military systems, methods that can assess or evaluate the performance of different fusion technologies objectively, systematically, and quantitatively have been recognised as an urgent requirement. Most Indian earth observation satellites, such as IRS-1C, IRS-1D, IRS-P5, and IRS-P6, provide panchromatic images at a higher spatial resolution than their multispectral mode. The difference in spatial resolution between the panchromatic and the multispectral mode can be measured by the ratio of their respective ground sampling distances (GSD) and may vary between 1:3 and 1:7. This ratio can get worse if data from different satellites are used. The objective of image fusion is to combine the high spatial and the multispectral information to form a fused multispectral image that retains the spatial information of the high resolution panchromatic image and the spectral characteristics of the lower resolution multispectral image (Ehlers, 2004). Generally, image fusion methods can be differentiated into three levels: pixel level (iconic), feature level (symbolic), and knowledge or decision level.
Of highest relevance for remote sensing are techniques for iconic image fusion, for which many different methods have been developed (Wald, 1997; Pohl, 1998; Zhang, 2004). Many researchers have focused on how to fuse high resolution panchromatic images with lower resolution multispectral data to obtain high resolution multispectral imagery while retaining the spectral characteristics of the multispectral data, and various image fusion methods have been developed. The most popular image fusion methods are Brovey, Ehlers, high pass filter (HPF), modified IHS, multiplicative, principal component, and discrete wavelet transform (DWT). Each image fusion method has its own suitability for specific feature extraction.

In this research paper, emphasis is given to identifying the most suitable method for vegetation identification, based on the image fusion algorithm and the NDVI of the fused image.

Data resources and study area
Investigations in the present work have been carried out for Bhopal city and adjoining areas in Madhya Pradesh State, India. For this research, data products of different spatial resolutions from Indian remote sensing satellite sensors, as listed in Table 1, have been used. In order to cover the maximum range of land-use features, the analysis has focused on the vegetation and urban land use of Bhopal city. All the work has been carried out using ERDAS Imagine v9.1, MATLAB 7.5, and MS Office 2007.

Table 1. Multispectral Remote Sensing Datasets for the Study Site

Image fusion techniques
For any kind of remote sensing image, spatial resolution and spectral resolution are contradictory factors: for a given signal to noise ratio, a higher spectral resolution is often achieved at the cost of a lower spatial resolution. Image fusion techniques are therefore useful for integrating a higher spectral resolution image with a higher spatial resolution image.

There are different algorithms for fusion of the images.

Brovey Transform. This method uses a ratio algorithm to combine the images. Since the Brovey transform is intended to produce RGB images, only three bands at a time should be merged from the input multispectral scene. The resulting merged image should then be displayed with bands 1, 2, 3 to RGB.
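The Brovey ratio described above can be sketched in a few lines of NumPy. This is an illustrative reading of the standard Brovey formula (each band scaled by pan divided by the band sum), not the exact ERDAS implementation; the function name, array layout, and the `eps` guard against division by zero are this sketch's own choices.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    """Brovey transform: scale each of the three MS bands by the
    ratio of the pan image to the sum of the MS bands.

    ms  : (3, H, W) float array, multispectral bands already
          resampled to the pan pixel size
    pan : (H, W) float array, panchromatic band
    """
    total = ms.sum(axis=0) + eps      # band sum; eps avoids divide-by-zero
    return ms * (pan / total)         # ratio broadcast over the 3 bands

# toy example: three identical unit bands, pan = 6
ms = np.ones((3, 2, 2))
pan = np.full((2, 2), 6.0)
fused = brovey_fuse(ms, pan)          # each band becomes 6/3 = 2
```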

Ehlers Fusion. This method enables the integration of imagery of different spatial resolutions (pixel sizes). Since higher resolution imagery is generally single band (for example Cartosat-1 panchromatic 2.5 m data), while multispectral imagery generally has lower resolution (for example LISS-IV MSS 5.8 m data), this technique is often used to produce high resolution, multispectral imagery. This improves the interpretability of the data by providing high resolution detail in colour.
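The published Ehlers method operates in IHS space with adaptive Fourier-domain filters (Ehlers, 2004); the simplified sketch below illustrates only its frequency-splitting core, under the assumption that colour lives in the low frequencies of the multispectral band and detail in the high frequencies of the pan band. The cutoff radius and the circular mask are choices of this sketch, not parameters of the actual method.

```python
import numpy as np

def fft_lowpass(img, cutoff):
    """Keep only spatial frequencies within `cutoff` of DC."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = (x**2 + y**2) <= cutoff**2          # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def ehlers_like_fuse(band, pan, cutoff=8):
    """Low frequencies (colour) from the MS band, high frequencies
    (spatial detail) from the pan image."""
    return fft_lowpass(band, cutoff) + (pan - fft_lowpass(pan, cutoff))
```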

Modified IHS. The Modified IHS (intensity, hue, saturation) resolution merge allows us to combine high resolution panchromatic data with lower resolution multispectral data, resulting in an output with both excellent detail and a realistic representation of the original multispectral scene colours.
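The intensity-substitution idea behind IHS merging can be sketched with the fast additive form, in which the mean of the three bands stands in for intensity and the pan-minus-intensity difference is added to every band. This is a simplification for illustration; the Modified IHS merge in ERDAS uses a more elaborate transform and spectral adjustment.

```python
import numpy as np

def ihs_like_fuse(ms, pan):
    """Fast additive IHS substitution: treat the band mean as the
    intensity component and replace it with the pan band by adding
    the difference to every band.

    ms : (3, H, W) float array, pan : (H, W) float array
    """
    intensity = ms.mean(axis=0)       # simple linear intensity
    return ms + (pan - intensity)     # hue/saturation preserved, detail added
```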

Multiplicative. This method applies a simple multiplicative algorithm that integrates the two raster images. It is computationally simple, is the fastest method, and requires the least system resources. However, the resulting merged image does not retain the radiometry of the input multispectral image. Instead, the intensity component is increased, making this technique good for highlighting urban features.
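A minimal sketch of the per-band product follows. The rescaling of the product back into an 8-bit display range is an assumption added here for usability, not part of the multiplicative algorithm itself.

```python
import numpy as np

def multiplicative_fuse(ms, pan):
    """Per-band product of MS and pan, rescaled to 0-255 so the
    result stays displayable as 8-bit imagery.

    ms : (bands, H, W) float array, pan : (H, W) float array
    """
    fused = ms * pan                              # simple multiplicative merge
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo + 1e-9) * 255.0
```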

Principal Component. This method calculates principal components, remaps the high resolution image into the data range of PC-1 and substitutes it for PC-1, then applies an inverse principal components transformation. The Principal Component method is best used in applications that require the original scene radiometry (colour balance) of the input multispectral image to be maintained as closely as possible in the output file.
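The substitution pipeline just described (forward PCA, pan matched to PC-1 and swapped in, inverse PCA) can be sketched as below. Matching the pan band to PC-1's mean and standard deviation before substitution is a common simplification of the histogram matching step; it is an assumption of this sketch.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Principal-component substitution fusion.

    ms : (bands, H, W) float array, pan : (H, W) float array
    """
    b, h, w = ms.shape
    X = ms.reshape(b, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean                                  # centre each band
    vals, vecs = np.linalg.eigh(np.cov(Xc))        # ascending eigenvalues
    vecs = vecs[:, ::-1]                           # put PC-1 first
    pcs = vecs.T @ Xc                              # forward transform
    p = pan.reshape(-1).astype(float)
    # stretch pan to PC-1's mean/std before substitution
    p = (p - p.mean()) / (p.std() + 1e-9) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                                     # substitute pan for PC-1
    return (vecs @ pcs + mean).reshape(b, h, w)    # inverse transform
```

Because only the first component is replaced and its mean is matched, the per-band means of the output stay close to those of the input, which is why this method preserves the original scene radiometry well.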

HPF (High Pass Filter). The high pass filter (HPF) add-back method extracts the high-frequency detail of the panchromatic image and adds it back to the multispectral bands. Developed from research into wavelet-based resolution merging, it yields results comparable to redundant wavelet methods but with much smaller computation time and data space requirements.
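The add-back idea can be sketched with a 3x3 box filter: the pan image minus its low-pass version is the high-frequency detail, and a weighted share of it is added to the resampled MS band. The kernel size and the weight `k` are simplifications; the actual HPF implementation chooses both from the resolution ratio of the inputs.

```python
import numpy as np

def hpf_fuse(band, pan, k=0.5):
    """HPF add-back: high-frequency pan detail added to an MS band.

    band, pan : (H, W) float arrays at the same pixel size
    k         : add-back weight (illustrative default)
    """
    # 3x3 box low-pass via nine shifted windows over an edge-padded copy
    padded = np.pad(pan, 1, mode='edge')
    low = sum(padded[i:i + pan.shape[0], j:j + pan.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    high = pan - low                  # high-frequency detail of the pan image
    return band + k * high
```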

Wavelet Transform. Wavelet-based processing is similar to Fourier transform analysis. The wavelet transform uses short, discrete “wavelets” instead of a long wave.

An image-to-image geometric registration process was used to register the LISS-IV image, taking Cartosat-1 as the reference image. Because remote sensing images always contain geometric distortions, an accurate geo-registration process, which determines the transformation providing the most accurate match between two or more images, is a necessary preliminary step for image fusion. Before rectification, the LISS-IV image was digitally enlarged by a factor of two in both directions to generate a pixel size similar to that of the Cartosat-1 PAN data. For ortho-rectification of the images, 21 ground control points (GCPs), well distributed over the study area, were collected using a DGPS survey. The images were then georeferenced using a 2nd order polynomial as the transformation function, with the root mean square (RMS) error at sub-pixel level. Nearest neighbour resampling was used in the image rectification process.
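The 2nd order polynomial transformation mentioned above is a least-squares fit of six coefficients per output axis from the matched GCPs. The sketch below shows that fit in NumPy; the function names and the toy control points are this sketch's own, and a real workflow would also report the RMS residual over check points.

```python
import numpy as np

def fit_poly2(src, dst):
    """Least-squares 2nd order polynomial mapping (x, y) -> (X, Y).

    src, dst : (N, 2) arrays of matched control points, N >= 6
    Returns a (6, 2) coefficient matrix, one column per output axis.
    """
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeff, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeff

def apply_poly2(coeff, pts):
    """Map points through a fitted 2nd order polynomial."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return A @ coeff
```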

All bands of the original image were used for the fusion process, with the output format selected as float single. This was done to ensure image conformity and avoid any loss of information. All image processing operations were performed using ERDAS Imagine v9.1. The second step was the integration/fusion of the geo-registered images. The fused images were then smoothed with a 3 by 3 low pass filter to eliminate the blockiness introduced by the 2x digital enlargement (Chavez et al., 1984; Chavez, 1986). The Cartosat-1 PAN image was the master image and LISS-IV the slave for co-registration.

After co-registration of both images at sub-pixel level, the various image fusion algorithms were applied to generate the fused outputs, shown in Fig 1c to 1i. Statistical analysis of each method was carried out separately, and for better identification of vegetation, the NDVI of each fused image was evaluated using the formula NDVI = (NIR - R) / (NIR + R), where NIR and R are the near-infrared and red band values.
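The NDVI formula above is a two-line band operation; the sketch below assumes float band arrays and adds a small `eps` to guard against zero denominators over water or shadow pixels.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised difference vegetation index:
    NDVI = (NIR - R) / (NIR + R), computed pixel-wise."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)   # in [-1, 1]; high over vegetation
```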

Fig 1: Original images and the fusion outputs of Cartosat-1 + LISS-IV with various fusion techniques
a. Original Cartosat-1
b. Original LISS-IV
c. Brovey
d. Ehlers
e. Principal Component
f. HPF
g. Multiplicative
h. DWT
i. Modified IHS

The spectral characteristics of the data sets generated by the above fusion methods with the principal component spectral transform algorithm are compared statistically with the spectral characteristics of the original LISS-IV data. The NDVI value of each fused image is then calculated.

Results and discussion
The statistical analysis of the fused images was done by calculating the parameters suggested by Chavez et al. and Ehlers; the results are given in Table 2. For all the images, we studied the statistical parameters of the histogram, especially the standard deviation, coefficient of variation, Pearson's coefficient of skewness, bias, and root mean square error (RMSE) of the images fused with each algorithm. The value of the standard deviation is correlated with the ability to recognise different entities, and this statistical control is necessary in order to examine spectral information preservation. The fused image (Cartosat-1 + LISS-IV) produced by the HPF method with the principal component spectral transform presents nearly the same minimum and maximum values as the original multispectral image for all bands, which indicates very little variation between the original multispectral image and the fused image produced by this method. The standard deviation values change only marginally; for example, the standard deviation of the first band decreases from 27.724 (original multispectral image) to 27.62 (fused image by HPF). The spatial resolution of the fused image with this combination is 2.5 m, the same as that of the original Cartosat-1 image. The comparative analysis of the various statistical parameters for the different fusion algorithms is shown in Chart 1 and Chart 2 respectively. The NDVI value of each fused image was evaluated using MATLAB 7.5 and ERDAS Imagine v9.1. The NDVI model and the output image generated by the Ehlers method are shown in Fig 2 and Fig 3.
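The parameters listed above are straightforward to compute per band. The sketch below assumes Pearson's second skewness coefficient, 3(mean - median)/std, as the skewness measure, and defines bias and RMSE on the difference between an original band and its fused counterpart; these are common readings of the paper's terms, not a reproduction of its exact Table 2 computation.

```python
import numpy as np

def band_stats(band):
    """Histogram statistics for one image band."""
    m = band.mean()
    sd = band.std(ddof=1)                 # sample standard deviation
    return {
        'mean': m,
        'std': sd,
        'variance': sd**2,
        'cv': sd / m,                                    # coefficient of variation
        'pearson_skew': 3 * (m - np.median(band)) / sd,  # Pearson's 2nd skewness
    }

def bias_rmse(original, fused):
    """Bias and RMSE between an original MS band and its fused version."""
    diff = fused.astype(float) - original.astype(float)
    return diff.mean(), np.sqrt((diff**2).mean())
```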

Fig 2: Model for calculating NDVI for the image fused by the Ehlers method

Fig 3: NDVI image generated from the Ehlers fused image

Table 2: Statistical Parameters of the Original and Fused Image with Different Image Fusion Techniques (Cartosat1 + Liss-IV)

Based on the above investigations, the following conclusions are drawn:

The Cartosat-1 and LISS-IV fused image produced by HPF with the principal component spectral transform presents nearly the same minimum and maximum values (0 and 255) as the original multispectral image for all bands, which indicates negligible variation between the original multispectral image and the fused image produced by this method. The standard deviation deviates only in the decimals.

After comparing the NDVI value of each output image generated by the fusion algorithms, it is concluded that the Ehlers method of image fusion with the principal component spectral transform gives the highest NDVI value and is therefore suggested as the suitable method for vegetation analysis.

All the fusion techniques improve the resolution and the spectral result. The HPF method preserves the statistical parameters of the original images almost exactly. The statistical analysis of the spectral characteristics of the data indicates that the results generated by HPF with the principal component spectral transform are the least distorted, the low frequency noise problem can be removed automatically, and colours are well preserved in the fused image, although some unrealistic artifacts in the spatial improvement are observed. For vegetation extraction, the HPF fused image was also found to perform well.


  • Xu, B., Chen, Z.: A Multisensor Image Fusion Algorithm Based on PCNN. In: Proceedings of the 5th World Congress on Intelligent Control and Automation, Hangzhou, PR China, 15-19 June 2004.
  • Jahne, B.: Digital Image Processing, 6th Edition, Springer International Edition, pp. 449-454.
  • Zhukov, B., et al.: Unmixing-based Multisensor Multiresolution Image Fusion. IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, No. 3, May 1999.
  • Pohl, C., Van Genderen, J.L.: Multisensor Image Fusion in Remote Sensing: Concepts, Methods, and Applications. International Journal of Remote Sensing, Vol. 19, No. 5, 1998, pp. 823-854.
  • Couloigner, I., et al.: Benefit of the Future SPOT-5 and of Data Fusion to Urban Roads Mapping. International Journal of Remote Sensing, Vol. 19, No. 8, pp. 1519-1532.
  • Ehlers, M.: Spectral Characteristics Preserving Image Fusion Based on Fourier Domain Filtering. In: Proc. SPIE, Vol. 5574, 2004, pp. 1-13.
  • Wang, H.-H., et al.: A Fusion Algorithm of Remote Sensing Image Based on Discrete Wavelet Packet. In: Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, China, 2-5 November 2003.
  • Pande, H., et al.: Effect of N-dimensional Data Sharpening on Scene Classification. Journal of Geomatics, Vol. 3, No. 2, October 2009, pp. 45-51.
  • Pellemans, H.J.M., et al.: Merging Multispectral and Panchromatic SPOT Images with Respect to the Radiometric Properties of the Sensor. Photogrammetric Engineering and Remote Sensing, Vol. 59, No. 1, January 1993, pp. 81-87.
  • Wald, L., Ranchin, T., Mangolini, M.: Fusion of Satellite Images of Different Spatial Resolutions: Assessing the Quality of Resulting Images. Photogrammetric Engineering and Remote Sensing, Vol. 63, No. 6, June 1997, pp. 691-699.
  • Wang, M., et al.: Fusion of Multispectral and Panchromatic Satellite Images Based on IHS and Curvelet Transformations. In: Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2-4 November 2007.
  • Chavez, P.S., Jr., et al.: Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic. Photogrammetric Engineering and Remote Sensing, Vol. 57, No. 3, March 1991, pp. 295-303.
  • Hou, R., et al.: A New Fusion Algorithm for MRI and Color Images Based on Mutual Information in Wavelet Domain. In: 2008 Congress on Image and Signal Processing.
  • Schowengerdt, R.A.: Remote Sensing: Models and Methods for Image Processing, 2nd Edition, Academic Press/Elsevier, pp. 357-387.
  • Wu, L., et al.: Remote Sensing Image Fusion Technique for Information Preservation. Geo-Spatial Information Science (Quarterly), Vol. 7, Issue 4, December 2004, pp. 274-278.
  • Zhang, X.-M., et al.: Contourlet-Based Fusion Algorithm and Its Optimization Objective Image Quality Metrics. In: Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2-4 November 2007.
  • Yang, Y.: A Novel Image Fusion Algorithm Based on IHS and Discrete Wavelet Transform. In: Proceedings of the IEEE International Conference on Automation and Logistics, Jinan, China, 18-21 August 2007.
  • Zhang, Y.: Understanding Image Fusion. Photogrammetric Engineering and Remote Sensing, Vol. 70, 2004, pp. 657-661.

The authors are especially thankful to Dr. R.P. Singh, Director, MANIT Bhopal (M.P.), for his kind permission to use the resources and satellite data of the Remote Sensing and GIS Centre of the Civil Engineering Department, Maulana Azad National Institute of Technology, Bhopal, and to Dr. S. Sriniwas Rao, Senior Scientist, National Remote Sensing Centre, Hyderabad (A.P.), for his valuable guidance in this research.