
Wavelet-based image fusion using “A trous” algorithm


Maryam Dehghani
M.Sc. Student, Geological Survey of Iran / K.N. Toosi University of Technology, Vali-Asr St., Tehran, Iran, P.C. 19697
Tel: +98 21 8789357, Fax: +98 21 877 9476
Email: [email protected]

Introduction:
Multiresolution analysis has recently become one of the accepted methods for analysing remotely sensed images. A new method based on the discrete wavelet transform is proposed here. In this paper we first introduce the standard methods used to fuse images, then describe the proposed method and the algorithm used to compute the discrete wavelet transform. Qualitative and quantitative results of this method are compared with those of the other methods. Finally, some important conclusions are stated briefly.

Standard fusion methods:

1- Principal Component method (PC)
In this approach we first compute the principal components of the multispectral image (band 1, band 2, band 3, i.e. R, G, B). The first principal component, which contains most of the information of the image, is then substituted by the panchromatic image. Finally the inverse PC transform is applied to the principal components to obtain the new R, G, B bands of the multispectral image.
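The procedure above can be sketched with NumPy as follows; the function name `pc_fusion` and the mean/std matching of the panchromatic band to the first principal component are illustrative choices, not details taken from the paper:

```python
import numpy as np

def pc_fusion(ms, pan):
    """PC-substitution fusion sketch: ms is (3, H, W) with the R, G, B bands;
    pan is (H, W), already resampled to the multispectral grid."""
    bands, h, w = ms.shape
    x = ms.reshape(bands, -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    # Eigen-decomposition of the band covariance gives the principal components
    vals, vecs = np.linalg.eigh(np.cov(xc))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # reorder so PC1 comes first
    pcs = vecs.T @ xc
    # Match the pan band to PC1's statistics, then substitute it for PC1
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    # Inverse PC transform back to R, G, B
    return (vecs @ pcs + mean).reshape(bands, h, w)
```

Because the transform is orthogonal, substituting PC1 by itself reproduces the original bands exactly, which is a convenient sanity check.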

2- Brovey method
Each fused band is the original band scaled by the ratio of the panchromatic image to the sum of the three multispectral bands:

R' = R × Pan / (R + G + B), G' = G × Pan / (R + G + B), B' = B × Pan / (R + G + B)
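The Brovey transform scales each band by Pan / (R + G + B); a minimal sketch (the `eps` guard against division by zero on black pixels is our addition):

```python
import numpy as np

def brovey(r, g, b, pan, eps=1e-12):
    """Brovey fusion: scale each band by pan over the sum of the three bands."""
    total = r + g + b + eps
    return r * pan / total, g * pan / total, b * pan / total
```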


3- HIS (LHS) method
This method is based on the transformation of the R, G, B multispectral channels into HIS (Hue-Intensity-Saturation) space. The Intensity component is the most important one; a common definition is the mean of the three bands:

I = (R + G + B) / 3

In this method the Intensity component is substituted by the panchromatic image and the inverse transformation (HIS to RGB) is then applied. To compare the results, all of these methods were used in addition to the proposed method based on wavelet decomposition.

Wavelet-based image fusion
Wavelet decomposition: wavelet decomposition is widely used in image processing. The wavelet transform produces versions of the image at different resolutions. A wavelet representation refers to both the spatial and the frequency domain: it can localise a function (here, the image) well in both spaces.

There are different algorithms for wavelet decomposition. One of them is the Mallat algorithm, which can use wavelet functions such as the Daubechies family (db1, db2, …). Here we use the “à trous” algorithm, which uses a dyadic wavelet to merge non-dyadic data in a simple and efficient procedure. In this algorithm the discrete wavelet transform is computed by successive convolutions with a filter.


To convolve the image with the filter we can use two methods:
  • Using the convolution theorem: transform both the image and the filter into the frequency domain using the (fast) 2D Fourier transform (FT or FFT), multiply the two transforms together, and finally apply the inverse 2D Fourier transform (IFT) to obtain the convolved image.
  • Using a convolution function directly. In each step j we obtain a smoothed version I_j of the image. The wavelet coefficient (plane) is defined as the difference between two successive smoothed versions:

w_j = I_{j-1} - I_j
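The equivalence of the two convolution routes can be checked numerically. The sketch below assumes circular (periodic) boundaries, so that the plain FFT product corresponds exactly to the spatial-domain sum; real à trous implementations often use mirror boundaries instead:

```python
import numpy as np

def conv_fft(img, kern):
    """Circular 2-D convolution via the convolution theorem:
    FFT both signals, multiply, inverse FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, s=img.shape)))

def conv_direct(img, kern):
    """The same circular convolution done directly in the spatial domain."""
    out = np.zeros_like(img, dtype=float)
    for i in range(kern.shape[0]):
        for j in range(kern.shape[1]):
            # out[n] += kern[i, j] * img[n - (i, j)], with wrap-around
            out += kern[i, j] * np.roll(img, shift=(i, j), axis=(0, 1))
    return out
```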
So if we decompose an image I into n wavelet planes, we can write

I = I_r + w_1 + w_2 + … + w_n

in which I_r is a residual image. In this approach all wavelet planes have the same number of pixels as the original image. There are two approaches to image fusion based on wavelet decomposition:
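A minimal sketch of the à trous decomposition, assuming the commonly used B3-spline kernel (1/16)[1, 4, 6, 4, 1] (the paper does not state its filter) and circular boundaries:

```python
import numpy as np

# Separable B3-spline smoothing kernel, a common choice for the a trous scheme
H = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0

def smooth(img, step):
    """Convolve with the B3 kernel dilated by `step` (the 'holes')."""
    out = np.zeros_like(img, dtype=float)
    for i, di in enumerate((-2, -1, 0, 1, 2)):
        for j, dj in enumerate((-2, -1, 0, 1, 2)):
            out += H[i, j] * np.roll(img, (di * step, dj * step), axis=(0, 1))
    return out

def a_trous(img, levels):
    """Decompose img into `levels` wavelet planes plus a residual image.
    Each plane is w_j = I_{j-1} - I_j, so every plane keeps the original
    pixel size and img == residual + sum of the planes."""
    planes, current = [], img.astype(float)
    for j in range(levels):
        smoothed = smooth(current, 2 ** j)   # dilation doubles at each level
        planes.append(current - smoothed)
        current = smoothed
    return planes, current                   # wavelet planes, residual I_r
```

Because the kernel sums to one, each wavelet plane has (near-)zero mean, which is the property the conclusion relies on for flux preservation.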

1- Substitution method (SUBRGB):
In this method, after computing the wavelet coefficients of the multispectral and panchromatic images, we substitute some wavelet coefficients of the multispectral image by the corresponding wavelet coefficients of the panchromatic image. Finally we apply the inverse wavelet transform.

2- Additive method:
This approach can be carried out in two ways.

  • AWRGB (using the R, G, B components): here we produce the wavelet planes of the panchromatic image only and add them to the R, G, B bands directly.
  • AWI (using the Intensity component): here, after computing the wavelet coefficients of the panchromatic image, we add them to the Intensity component extracted from the R, G, B bands. We then transform the HIS components (with the new Intensity) back into R, G, B.

There are different ways to derive the Intensity-Hue-Saturation components from the R, G, B bands. The algorithm used here to obtain the Intensity component is called “Hexcone”. The method is as follows:

Max = Maximum (R, G, B)
Min = Minimum (R, G, B)
Delta = Max - Min
Intensity = Max
If (Max ≠ 0) then "Saturation = Delta / Max"
If (Max = 0) then "Saturation = 0"
If (Saturation = 0) then "Hue = 0"
If (R = Max) then "Hue = (G - B) / Delta" (between yellow and magenta)
If (G = Max) then "Hue = 2 + (B - R) / Delta" (between cyan and yellow)
If (B = Max) then "Hue = 4 + (R - G) / Delta" (between magenta and cyan)
Hue = Hue * 60 (convert Hue to degrees)
If (Hue < 0) then "Hue = Hue + 360" (Hue must be positive)
If (Hue >= 360) then "Hue = Hue - 360"

To scale Hue and Saturation between 0 and 255:
Hue = Hue * (255/360)
Saturation = Saturation * 255

To do the inverse transform (HIS to RGB):
Hue = Hue * (360/255)
Saturation = Saturation / 255
If (Saturation = 0) then "(R, G, B) = (Intensity, Intensity, Intensity)"
If (Saturation > 0) then "Hue = Hue / 60"
J = floor (Hue)
F = Hue - J
P = Intensity * (1 - Saturation)
Q = Intensity * (1 - (Saturation * F))
T = Intensity * (1 - (Saturation * (1 - F)))
If (J = 0) then "(R, G, B) = (Intensity, T, P)"
If (J = 1) then "(R, G, B) = (Q, Intensity, P)"
If (J = 2) then "(R, G, B) = (P, Intensity, T)"
If (J = 3) then "(R, G, B) = (P, Q, Intensity)"
If (J = 4) then "(R, G, B) = (T, P, Intensity)"
If (J = 5) then "(R, G, B) = (Intensity, P, Q)"
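The hexcone model above is the same one implemented by Python's standard-library colorsys module (which scales hue to [0, 1) rather than degrees), so the forward conversion can be written directly and cross-checked against it:

```python
def rgb_to_his(r, g, b):
    """Hexcone conversion as in the listing above: returns
    (hue in degrees, saturation in [0, 1], intensity = max band)."""
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    intensity = mx
    saturation = 0.0 if mx == 0 else delta / mx
    if saturation == 0:
        return 0.0, 0.0, intensity      # hue is undefined for greys
    if r == mx:
        hue = (g - b) / delta           # between yellow and magenta
    elif g == mx:
        hue = 2 + (b - r) / delta       # between cyan and yellow
    else:
        hue = 4 + (r - g) / delta       # between magenta and cyan
    hue *= 60                           # convert to degrees
    if hue < 0:
        hue += 360
    return hue, saturation, intensity
```

The inverse listing mirrors `colorsys.hsv_to_rgb` in the same way.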

In the substitution method the wavelet coefficients of the multispectral image are discarded and completely substituted by the wavelet coefficients of the panchromatic image, while in the additive method most of the spatial information of both images (panchromatic and multispectral) is preserved. The difference between the two additive methods is that in the first (AWRGB) the panchromatic wavelet coefficients are added in equal amounts to the R, G, B bands, while in the second (AWI) the high-resolution information is added only to the Intensity component, which preserves the spectral information better.

Results:
Here we use the panchromatic and multispectral bands of the LANDSAT ETM satellite, at spatial resolutions of 15 m and 30 m respectively. The methods used in this paper are:

1- Additive wavelet method, which comprises two approaches:

  • AWRGB (using RGB components)
  • AWI (using Intensity component)

2- Substitution wavelet method (SUBRGB)

The standard methods used here are:

3- HIS (Hue-Intensity-Saturation) method

4- Brovey method

5- PC (Principal Component) method

For the implementations we used MATLAB (version 6.1) and the ERDAS IMAGINE 8.4 image processing software.

Since no multispectral image at 15 m resolution exists to compare the results against, we worked one resolution level down: the spatial resolutions of the panchromatic and multispectral bands were reduced to 30 m and 60 m respectively. The merged multispectral image therefore has a spatial resolution of 30 m and can be compared with the original 30 m multispectral image.

The steps for applying the different methods are as follows:

  1. Register the multispectral image to the panchromatic image. To do this we selected about 30 points in both images, and registration was achieved with sub-pixel RMSE (Root Mean Square Error). This step is very important and must be done very precisely, because wavelet-based image fusion is very sensitive to misregistration: even a small displacement between the two images degrades the quality of the result. For the registration, the PCI V8.2.0 software (OrthoEngine module) was used.
  2. Perform histogram matching between the two images. Since the panchromatic and multispectral images used here come from the same sensor, the atmospheric and illumination conditions are nearly identical for both, so this step can be omitted.
  3. At this point the panchromatic and multispectral images are prepared for merging.
  4. Apply the fusion methods stated above.
  5. Accuracy assessment: to quantify the behaviour of the fusion methods, we computed the correlation between each fused result and the original multispectral image. The correlation coefficient is computed as follows:

Corr(A, B) = Σ_i (A_i - mA)(B_i - mB) / sqrt( Σ_i (A_i - mA)² · Σ_i (B_i - mB)² )

where mA and mB stand for the mean values of the corresponding data sets. A high correlation shows that the spectral characteristics of the multispectral image have been preserved well. Table 1 shows the correlation coefficients between the different solutions and the original multispectral image at 30 m resolution.

Table 1: correlation coefficients computed between the different solutions and the original multispectral image at 30 m resolution.

Method Blue Green Red
AWI 0.8220 0.8189 0.8117
AWRGB 0.7950 0.7993 0.8046
SUBRGB 0.6899 0.6968 0.7046
Brovey 0.5988 0.6511 0.7011
PC 0.5533 0.5445 0.5663
HIS 0.6534 0.6431 0.6218
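The coefficient used in Table 1 is the standard Pearson correlation between a fused band and the corresponding original band; a minimal implementation:

```python
import numpy as np

def band_correlation(a, b):
    """Pearson correlation coefficient between two image bands."""
    a = a.ravel().astype(float) - a.mean()   # centre each band on its mean
    b = b.ravel().astype(float) - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))
```

A coefficient of 1 means the fused band is a perfect linear rescaling of the original, i.e. the spectral characteristics are fully preserved.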

As the correlation coefficients in Table 1 show, the wavelet-based fusion methods (AWRGB, AWI, SUBRGB) preserve the spectral characteristics of the image better than the others, since their correlations have the highest values. Comparing the resulting images visually also shows that the wavelet-based fusion methods perform better than the others (fig. 1-b).

Figure 1 shows the original R band as an example, together with the band merged using the AWI method and using a standard method (the Brovey method).



Fig. 1. (a) The original R band of ETM (low-resolution multispectral image). (b) Result of the fusion using the AWI method (additive wavelet on the Intensity component). (c) Result of the fusion using the Brovey method.
Conclusion:
The advantages of using wavelet-based methods are:
  • The spectral quality of the images is preserved better than with the other approaches (see Table 1).
  • In the additive wavelet-based methods the detail information of both images (panchromatic and multispectral) is used and none of it is discarded.
  • In image processing, working in the frequency and spatial domains together can be more efficient than working in only one of them, so wavelet-based methods, which use both spaces, are recommended.
  • Since the wavelet-based method uses multiresolution analysis, it is more useful than other frequency-domain transforms such as the Fourier transform.
  • In the HIS/LHS method the Intensity component is substituted by the panchromatic image completely, so the detail information in the Intensity component is discarded; in the additive wavelet-based method using Intensity (AWI), the high-resolution features not present in the multispectral image are added to the fused image by adding the panchromatic wavelet coefficients to the Intensity component.
  • Since the wavelet coefficients (except the residual image) have zero mean, the total flux of the multispectral image is preserved.
  • The “à trous” algorithm uses a dyadic wavelet to merge non-dyadic data in a simple and efficient procedure, which makes it preferable to other algorithms such as Mallat’s. When this algorithm is used to decompose the images, all wavelet planes, as well as the residual image, have the original image size, so the algorithm can merge non-dyadic images.

References :

  • R. C. Gonzalez and R. E. Woods, “Digital Image Processing”, Addison-Wesley.
  • J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà and R. Arbiol, “Multiresolution-Based Image Fusion with Additive Wavelet Decomposition”, IEEE Trans. on Geoscience and Remote Sensing, vol. 37, no. 3, May 1999.
  • M. Aziz Mohammadi, “Assessment of Image Fusion Methods Applied to SPOT (PAN & XS) Images Using Wavelet Decomposition”, M.Sc. Thesis, K.N. Toosi University of Technology, Iran.

Biography
Maryam Dehghani is an M.Sc. student in remote sensing at K. N. Toosi University of Technology. She received her B.Sc. degree in Geodesy and Geomatics Engineering from Elm-o-San’at University. Her areas of interest are digital image processing and wavelet transforms.