Supervised classification of multi-temporal Remote Sensing images

Chi-Farn Chen, Yueh-Tan Li
Center for Space and Remote Sensing Research
National Central University
Chungli, Taiwan
Tel: (886-)3-4227151-7624 Fax: (886-)3-4254908
E-mail:[email protected]

This paper presents a supervised approach for classifying multi-temporal remote sensing images. One major disadvantage of supervised classification of multi-temporal data is that training data must be selected for each image, even when the images cover the same area. An attempt to use the fuzzy training method to avoid the repeated selection of training data for each image is proposed here. Theoretically, the fuzzy training method is able to deal with the problem of mixed training classes; therefore, the classification map generated from the first-period image can automatically become the fuzzy training sites for the second-period image. The proposed approach is tested on a series of simulated multi-temporal images. The results indicate that the method presented here has great potential for extension to practical applications.

1. Introduction
In the processing of multi-temporal remote sensing images, accurate and convenient classification is among the more difficult tasks in practical applications (Baber, 1985). This study aims to use a supervised algorithm to classify multi-temporal images. A major procedure of any supervised algorithm is the collection of training data, which is relatively time-consuming and labor-intensive (Lillesand, 2000). Furthermore, the image responses may change due to variability in time and space within the multi-temporal images (Richards, 1993). Therefore, the problem that faces the supervised classification of multi-temporal images is that training data have to be selected repeatedly for each image within the multi-temporal data set (Schowengerdt, 1997). For this reason, the following concept is proposed: the training set is chosen and the classification is finished for the first-period image, and the training data of the following-period images are then generated automatically from the first-period classification map. The most difficult part of the study is that class positions, numbers, and contents may change in the following-period images. This generates a complicated mixture of training classes when the training data for the second-period image are selected automatically. This study uses the fuzzy training method (Wang, 1990) to overcome the mixing problem of training classes. Basically, the fuzzy approach allows heterogeneity to exist within the training sites, so the training data may contain mixed classes. Consequently, changes of class positions and contents in the second-period images can be handled by the characteristics of the fuzzy training data, while the detection of changes in class numbers remains a key problem in the automated selection of training data. This problem is studied, and solutions are obtained from a series of analyses of the fuzzy means and covariance matrices of the training data.
A series of simulated data sets is tested, and the results indicate that the proposed fuzzy training method has the potential to classify multi-temporal remote sensing images automatically.

2. Method
Section 2.1 below discusses the fuzzy training method, and Section 2.2 presents the multi-temporal supervised classification.

2.1 Fuzzy training
Basically, the fuzzy training method is the training procedure of supervised fuzzy classification. In this procedure, the conventional mean and covariance parameters of the training data are represented as a fuzzy set. The following two equations (Equ. 1 and Equ. 2) describe the fuzzy parameters of the training data:

µc* = [ Σ_{i=1..n} fc(xi) xi ] / [ Σ_{i=1..n} fc(xi) ]    (Equ. 1)

Σc* = [ Σ_{i=1..n} fc(xi) (xi - µc*)(xi - µc*)^T ] / [ Σ_{i=1..n} fc(xi) ]    (Equ. 2)

where µc* is the fuzzy mean of training class c, Σc* is the fuzzy covariance of training class c, xi is the vector value of pixel i, fc(xi) is the membership of pixel xi to training class c, and n is the total number of pixels in the training data. To compute the fuzzy mean (Equ. 1) and fuzzy covariance (Equ. 2) of every training class, the membership of pixel xi to training class c must be known first. In this study, the membership function is defined on the basis of the conventional maximum likelihood classification algorithm, evaluated with the fuzzy mean and fuzzy covariance.


Pc*(xi) = (2π)^(-n/2) |Σc*|^(-1/2) exp[ -(1/2)(xi - µc*)^T (Σc*)^(-1) (xi - µc*) ]

fc(xi) = Pc*(xi) / Σ_{j=1..m} Pj*(xi)

where fc(xi) is the membership of pixel xi to class c, Pc*(xi) is the maximum likelihood probability of pixel xi with respect to class c, m is the number of classes, and n is the number of bands. These equations are evaluated iteratively and ultimately produce the fuzzy mean and fuzzy covariance for each training class. The resulting membership values are then used to describe the mixing classes in every training site.
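As an illustrative sketch (our own Python/NumPy, not the authors' implementation; function names and the iteration strategy are assumptions), the fuzzy mean, fuzzy covariance, and maximum-likelihood membership described above can be computed as follows:

```python
import numpy as np

def fuzzy_stats(X, memberships):
    """Fuzzy mean (Equ. 1) and fuzzy covariance (Equ. 2) of one training class.

    X           : (n, d) array of pixel vectors x_i
    memberships : (n,) array of f_c(x_i) in [0, 1]
    """
    w = memberships / memberships.sum()      # normalized membership weights
    mu = w @ X                               # fuzzy mean
    diff = X - mu
    cov = (w[:, None] * diff).T @ diff       # fuzzy covariance
    return mu, cov

def memberships_from_likelihood(X, means, covs):
    """Membership of each pixel to each class: maximum likelihood
    probabilities, normalized so each pixel's memberships sum to 1."""
    n, d = X.shape
    m = len(means)
    p = np.empty((n, m))
    for c in range(m):
        diff = X - means[c]
        inv = np.linalg.inv(covs[c])
        det = np.linalg.det(covs[c])
        # Gaussian density with the (fuzzy) mean and covariance of class c
        expo = -0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff)
        p[:, c] = np.exp(expo) / np.sqrt(((2 * np.pi) ** d) * det)
    return p / p.sum(axis=1, keepdims=True)
```

Starting from conventional training statistics, alternating between the two functions refines the fuzzy parameters over a few iterations.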

3. Least-Squares Estimation
If the preceding functional and stochastic models are correct, a least-squares method yields the adjusted parameters/measurements that are the best linear unbiased estimates (Koch, 1999).

3.1 Parameter Corrections and Measurement Residuals
To begin with, the nonlinear equations (1-4) have to be linearized to form a system of error equations, whose expansion point is at the available measurements and parameter approximations:

Ax + Bv = l    (6)
where the n*n coefficient matrix B contains the partial derivatives of Eqs. (1-4) with respect to the n*1 measurement vector (…, ri, tj, …). The n*1 vector v stands for the measurement residuals (…, vri, vtj, …). Analogously, the n*u design matrix A contains the partial derivatives with respect to the u unknown parameters. The u*1 vector x represents the parameter corrections (dt, dMb, dak, dbk, dck) with k = 0, 1, 2, 3. The n*1 vector l is the reduced observation vector. The measurement error covariance matrix is denoted by s0^2 Q, where s0^2 is the a priori unit weight reference variance. A least-squares method, which requires a minimization of the quadratic form v^T Q^-1 v, produces the parameter corrections and the measurement residuals, as follows:

x = (A^T (B Q B^T)^-1 A)^-1 A^T (B Q B^T)^-1 l    (7a)

v = Q B^T (B Q B^T)^-1 (l - Ax)    (7b)
where the u*u scaled covariance matrix Qx and the n*n scaled covariance matrix Qv refer to the parameter correction vector x and the measurement residual vector v, respectively. The covariance matrices can be obtained by using the law of error propagation. Both Leick (1995) and Mikhail (1976) detail the derivation of Qx and Qv, so their explicit expressions are not repeated here. The v-vector quadratic form leads to the a posteriori estimate ŝ0^2 of the unit weight reference variance. The same quadratic form also leads to a chi-square (χ²) test statistic:

ŝ0^2 = v^T Q^-1 v / (n - u)    (8)

χ²(α/2; n-u) ≤ v^T Q^-1 v / s0^2 ≤ χ²(1-α/2; n-u)    (9)
where n-u represents the degrees of freedom, and α is a chosen significance level, e.g. 5%, used to create the lower and upper bounds of the inequality (9) in the course of a global model hypothesis test. If this test fails, an analyst can state, with a 1-α confidence level, that the functional and stochastic models (1-4) are not in order.
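As an illustrative numerical sketch (our own code with synthetic matrices, not the paper's radargrammetric system; SciPy is assumed available for the chi-square quantiles), the adjustment of Eqs. (6-7) and the global test of Eq. (9) can be written as:

```python
import numpy as np
from scipy.stats import chi2

def adjust(A, B, Q, l):
    """Least-squares solution of the error equations A x + B v = l,
    minimizing v^T Q^-1 v (Eqs. 6-7).  Returns x, v, and Qx."""
    M = B @ Q @ B.T                        # combined weight matrix B Q B^T
    Minv = np.linalg.inv(M)
    Qx = np.linalg.inv(A.T @ Minv @ A)     # scaled covariance of x
    x = Qx @ A.T @ Minv @ l                # parameter corrections
    v = Q @ B.T @ Minv @ (l - A @ x)       # measurement residuals
    return x, v, Qx

def global_model_test(v, Q, sigma0_sq, dof, alpha=0.05):
    """Two-sided chi-square test (Eq. 9) on v^T Q^-1 v / sigma0^2."""
    t = v @ np.linalg.solve(Q, v) / sigma0_sq
    return chi2.ppf(alpha / 2, dof) <= t <= chi2.ppf(1 - alpha / 2, dof)
```

With B = I and Q = I the adjustment reduces to ordinary least squares.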

3.2 Optimal Parameter Selection
Another statistical test can be utilized to validate parametric significance when trajectory polynomial modeling, such as described in Eq. (4), is involved. For any element x of the parameter correction vector x (7a), an F-distribution test statistic can be given as the term on the left-hand side of the following inequality (Zhong, 1997):

x^2 / (ŝ0^2 qx) ≤ F(1-α; 1, n-u)    (10)
where qx denotes the scaled variance of x. The F-distribution has (1, n-u) degrees of freedom. Upon choosing a significance level α, the upper critical value F(1-α; 1, n-u) is read from an F-distribution look-up table. If the test quantity fulfills the inequality (10), the parameter element x is considered insignificant. After its deletion, the new parameter set has u-1 elements. The measurement vector and its error covariance matrix remain unchanged.
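A minimal sketch of this check (the function name is ours, and selecting the smallest statistic among the candidates is one common choice rather than necessarily the authors' rule):

```python
import numpy as np
from scipy.stats import f as f_dist

def insignificant_parameter(x, Qx, s0_sq, dof, alpha=0.05):
    """Test every parameter element with the F statistic of Eq. (10),
    x_k^2 / (s0^2 * q_k), against F(1-alpha; 1, dof).  Returns the index
    of the least significant element, or None if all are significant."""
    q = np.diag(Qx)                      # scaled variances q_x
    t = x ** 2 / (s0_sq * q)             # F statistics, (1, dof) d.o.f.
    k = int(np.argmin(t))
    return k if t[k] < f_dist.ppf(1 - alpha, 1, dof) else None
```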

A repeated least-squares adjustment is then performed by using the algorithmic equations (6-7). New u-1 parameter corrections and new n measurement residuals are estimated. Their acceptance is based on the required global model test of Eq. (9). The following minimum criteria, all related to the quadratic form of the estimated measurement residuals (Zhong, 1997), serve as the optimization indices in order to distinguish between the old/previous and the new/current estimation results:

ŝ0^2 = v^T Q^-1 v / (n - u) → min    (11a)

v^T Q^-1 v → min    (11b)
where Eq. (11a) is identical to Eq. (8). If the new model performs better than the old one, the search for another possible insignificant parameter continues; that is, the significance test of Eq. (10) is invoked again. If, on the other hand, the previous model produces more optimal results than the current one, the old/previous functional model represents the sought-after model solution.
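The stage-by-stage procedure can be sketched as a backward-elimination loop. This simplified version (our own code) assumes an ordinary least-squares model (B = I, Q = I) and uses only the a posteriori variance of Eq. (11a) as the optimization index:

```python
import numpy as np
from scipy.stats import f as f_dist

def backward_eliminate(A, l, alpha=0.05):
    """Backward elimination sketch for an ordinary LS model A x = l + v.
    Columns of A are dropped one at a time while each candidate is
    insignificant (Eq. 10) and the a posteriori variance improves (11a)."""
    cols = list(range(A.shape[1]))

    def fit(idx):
        Ai = A[:, idx]
        Qx = np.linalg.inv(Ai.T @ Ai)
        x = Qx @ Ai.T @ l
        v = Ai @ x - l
        dof = A.shape[0] - len(idx)
        return x, np.diag(Qx), v @ v / dof, dof   # x, q_x, s0^2, d.o.f.

    x, q, s0, dof = fit(cols)
    while len(cols) > 1:
        t = x ** 2 / (s0 * q)                     # F statistics, Eq. (10)
        k = int(np.argmin(t))
        if t[k] >= f_dist.ppf(1 - alpha, 1, dof):
            break                                 # all parameters significant
        trial = cols[:k] + cols[k + 1:]
        x2, q2, s02, dof2 = fit(trial)
        if s02 >= s0:                             # Eq. (11a): no improvement
            break                                 # keep the previous model
        cols, x, q, s0, dof = trial, x2, q2, s02, dof2
    return cols, x, s0
```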

4. Experimentation And Analysis

4.1 Airborne Chaochou SAR-Image
The airborne SAR-image over a 5.0 km * 14.0 km area near the Chaochou town, Figure 3 on the last page, was a result of the Canadian CV-580 GlobeSAR campaign in Taiwan near the end of October 1993 (INTERA, 1994). The nominal flying height was 7.1 km above mean sea level, with the airplane cruising at ~120 m/s (240 knots). Both the ground-range resolution and the azimuth resolution were ~4.0 m.

Ground control/check point coordinates (Xi,Yi,Zi) were digitized/interpolated from the available 1:5000 topographic photo-maps. The corresponding image point line/pixel coordinates were measured, using the ERDAS/Imagine software utilities. All the coordinates were independently measured by three operators. The averaged coordinate measurements were accepted and prepared in an input data file. In our radargrammetric processing, the measurements were treated as being independent and identically distributed.

4.2 Significant Parameters
For the monoscopic Chaochou SAR-image, space resection deals with the determination of the radar antenna's orientation parameters. With regard to the range/Doppler and trajectory modeling equations (1-4), significant polynomial coefficients can be identified by following the stage-by-stage significance testing and optimality assessment algorithm of Eqs. (10-11). The results are given in Table 1, according to which the optimal set of parameters produced at the second stage was selected.

4.3 Planimetric Accuracy
Once the SAR-image orientation parameters are available, they can be used for each image point to determine its planimetric ground coordinates (Xi, Yi), where the Zi-coordinate is assumed to be known. This point-by-point space intersection is conducted for the 30 independent check points, leading to the accuracy results in Table 2. They make clear that a single-valued variable squint angle is more suitable than a constant zero squint. The root-mean-square errors also indicate that a tentative second-order polynomial modeling (5) of the squint parameter yields the highest accuracy level for planimetric point positioning with the airborne Chaochou SAR-image.
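The reported accuracies are root-mean-square errors over the check points; a minimal sketch (function name and array layout are our own):

```python
import numpy as np

def planimetric_rmse(est_xy, true_xy):
    """Root-mean-square Easting/Northing errors over check points.

    est_xy, true_xy : (k, 2) arrays of intersected vs. reference (X, Y)."""
    d = np.asarray(est_xy) - np.asarray(true_xy)
    return np.sqrt((d ** 2).mean(axis=0))     # (RMSE_X, RMSE_Y)
```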

Table 1. Iterative optimal determination of significant orientation parameters for the Chaochou SAR-image

                          Stage-1        Stage-2        Stage-3
Parameter set             a0, …, a3      a0, …, a3      a0, a1, a3
(besides t and M)         b0, …, b3      b0, …, b3      b0, …, b3
                          c0, …, c3      c0, c1, c3     c0, c1, c3
Parameter having a
maximum F-test statistic  c2             a2
Minimum criteria (11):
Optimization              No             Yes            No

5. Summary
The SAR-image range/Doppler equations are introduced so as to recognize the geometric squint parameter. Before embarking on a polynomial modeling of the squint angle, a least-squares estimation algorithm and a parametric significance-testing methodology are briefly given. They serve as a sufficient processing tool for obtaining an optimal set of radar orientation parameters. In studying the space resection/intersection of the airborne Chaochou SAR-image, a second-order polynomial description of the squint parameter yields an improved Easting coordinate accuracy of ±4.8 m and an improved Northing accuracy of ±5.3 m.

Table 2. Planimetric point accuracy in relation to the squint, t, parameter modeling

                                           Root-mean-square errors
Squint modeling                            X/Easting (m)   Y/Northing (m)
t = 0.0 deg (constant)                     ±6.2            ±7.1
t as a variable (= -0.38 deg)              ±5.4            ±6.1
t modeled by a 2nd-order polynomial (5):
  t0 = -0.053 deg
  t1 = 3.26×10^-3 deg/pixel
  t2 = 2.44×10^-6 deg/pixel^2              ±4.8            ±5.3

Based on the positive experimental outcome, some future SAR-image processing schemes are itemized here: (1) automated setting of a polynomial expansion order for the squint angle; (2) possibility of a first-order range-dependent modeling of the pixel-spacing parameter; (3) application of the proposed methodology to spaceborne Earth resources SAR imagery.

The writers are indebted to the Council of Agriculture for sponsoring the 1993 GlobeSAR campaign. Thanks also go to Mr. C.-T. Wang of the NSC Satellite Remote Sensing Laboratory for pre-processing the SAR image.


References

  • Curlander, J.C., Kwok, R., Pang, S.S., 1987. A post-processing system for automated rectification and registration of spaceborne SAR imagery. International Journal of Remote Sensing, 8(4), pp.621-638.
  • Dowman, I., 1992. The geometry of SAR images for geocoding and stereo applications. International Journal of Remote Sensing, 13(9), pp.1609-1617.
  • Gelautz, M., Frick, H., Raggam, J., Burgstaller, J., Leberl, F., 1998. SAR image simulation and analysis of alpine terrain. ISPRS Journal of Photogrammetry and Remote Sensing, 53(1), pp.17-38.
  • INTERA, 1994. GlobeSAR CV-580 campaign to Taiwan 1993 final report. Intera Information Technologies Ltd., Ontario, Canada, 56p.
  • Koch, K.R., 1999. Parameter Estimation and Hypothesis Testing in Linear Models. Springer-Verlag, Berlin.
  • Leberl, F., 1976. Imaging radar applications to mapping and charting. Photogrammetria, 32, pp.75-100.
  • Leberl, F., 1979. Accuracy analysis of stereo side-looking radar. Photogrammetric Engineering and Remote Sensing, 45(8), pp.1083-1096.
  • Lee, C., Theiss, H.J., Bethel, J.S., Mikhail, E.M., 2000. Rigorous mathematical modeling of airborne pushbroom imaging systems. Photogrammetric Engineering and Remote Sensing, 66(4), pp.385-392.
  • Leick, A., 1995. GPS Satellite Surveying. John Wiley & Sons, Inc., New York.
  • Mikhail, E.M., 1976. Observations and Least Squares. University Press of America, Lanham, Maryland.
  • Tannous, I., Pikeroen, B., 1994. Parametric modeling of spaceborne SAR image geometry. Application: SEASAT/SPOT image registration. Photogrammetric Engineering and Remote Sensing, 60(6), pp.755-766.
  • Toutin, Th., Gray, L., 2000. State-of-the-art of elevation extraction from satellite SAR data. ISPRS Journal of Photogrammetry and Remote Sensing, 55(1), pp.13-33.
  • Wu, J., Lin, D.-C., 2000. Radargrammetric parameter evaluation of an airborne SAR image. Photogrammetric Engineering and Remote Sensing, 66(1), pp. 41-47.
  • Zhong, D., 1997. Robust estimation and optimal selection of polynomial parameters for the interpolation of GPS geoid heights. Journal of Geodesy, 71(9), pp.552-561.

Figure 3. Airborne Chaochou SAR-image (C-band, HH-polarization, ten-look) in slant-range projection, on 30 October, 1993; 30 control points shown by (?), and 30 check points by (-); terrain heights varying between 4.0 m and 84.0 m