
Segmentation of High Resolution Imagery

B. Krishna Mohan
Centre of Studies in Resources Engineering
Indian Institute of Technology, Bombay
Powai, Mumbai 400076, India
Tel: +91-22-25767684, Fax: +91-22-25723190
Email: [email protected]

S. U. Kadam
Reliance Infocomm Ltd.
Dhirubhai Ambani Knowledge City
Navi Mumbai 400709, India
Tel: +91-22-30373333, Fax: +91 22 2762 4213

E. P. Rao
Department of Civil Engineering
Indian Institute of Technology, Bombay
Powai, Mumbai – 400076, India
Tel: +91-22-25767345, Fax: +91-22-25723480

Introduction
High resolution imagery from earth-resources satellites is revolutionizing the generation of geospatial information and the building of geographic databases. The amount of detail presented by 1-metre and sub-metre resolution imagery enables analysis and mapping of the terrain to a level not attempted before. Extraction of information from high resolution imagery is a challenging task, since conventional per-pixel methods are likely to fail: they cannot capture the increased natural variability in reflectance, and each landcover object comprises several spatially adjacent pixels. The methodology adopted in such a case therefore has to be region based or object oriented.

Another aspect of information extraction from remotely sensed images (indeed, from any images) is that the objects of interest exhibit tonal/textural structure at different scales. Scale here means that some objects contain a large amount of fine detail, while others are (intensity-wise) flat, with little variation across their spatial extent. Objects that contain a lot of important detail are therefore studied at a fine scale, while objects showing only noise-induced variations are often studied at a coarser scale. The scale-space approach to image analysis was introduced by Marr (1982) and has since been extensively investigated. One of the most recent examples is the eCognition™ software (Definiens-Imaging, 2003), in which the image is segmented based on scale, texture, shape and color (tone).
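As a simple illustration of the scale idea, the sketch below (a minimal Python example, assuming a single band already loaded as a NumPy array; the function name and the choice of sigmas are ours) smooths the same image with Gaussian kernels of increasing standard deviation: fine textural detail survives only at the small scales, while the large scales retain just the broad tonal structure.

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_stack(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return Gaussian-smoothed versions of `image` at several scales.

    Small sigmas preserve fine texture; large sigmas keep only the broad
    tonal structure of the scene.
    """
    image = image.astype(float)
    return {s: gaussian_filter(image, sigma=s) for s in sigmas}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    band = rng.random((256, 256))             # stand-in for a panchromatic band
    for s, smoothed in scale_space_stack(band).items():
        detail = band - smoothed              # what smoothing at this scale removes
        print(f"sigma={s}: std of removed detail = {detail.std():.3f}")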

Region Based Segmentation
Region-based segmentation partitions the image by identifying regions with a common property. It can be further divided into region growing and region split-and-merge approaches. This paper focuses only on region growing, a procedure that groups pixels or subregions into larger regions based on a predefined criterion. The general idea of region growing methods is to partition the picture into homogeneous regions by a process that starts with small regions and grows them according to some homogeneity criterion.
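As a minimal sketch of the region-growing idea described above (the seed choice, 4-connectivity and grey-level tolerance are simplifying assumptions; practical systems use richer homogeneity criteria), the Python function below grows a region from a seed pixel by absorbing neighbours whose grey value stays close to the running region mean.

import numpy as np
from collections import deque

def grow_region(image, seed, tol=10.0):
    """Grow a region from `seed` = (row, col): a 4-connected neighbour joins
    the region if its grey value is within `tol` of the current region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - region_sum / region_n) <= tol:
                    mask[rr, cc] = True
                    region_sum += float(image[rr, cc])
                    region_n += 1
                    frontier.append((rr, cc))
    return mask

Starting such a growth from many seeds (or from every unassigned pixel) and merging the resulting regions yields a partition of the image into homogeneous regions.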

Object-oriented image processing techniques rely on successful segmentation of image features based on contextual information such as texture, connectivity, and a multiresolution hierarchy. Cheng et al. (2001) define image segmentation as a process of dividing an image into different regions such that each region is homogeneous but the union of two different regions is not. Image segmentation methods fall into two main domains: knowledge-driven (top-down) methods and data-driven (bottom-up) methods (Gao Yan, 2003). See also Lobo (1997) for a philosophical discussion on the interpretation of objects in natural scenes, and Bock and Lessing (2000). Baatz and Schape (2000) discuss a fractal net evolution approach to image segmentation, wherein metrics are defined for merging regions with their neighbors.
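The merge decision used in such bottom-up approaches can be illustrated with a deliberately simplified, purely spectral heterogeneity criterion loosely in the spirit of Baatz and Schape (2000); this is not the eCognition implementation, and the function names and the scale parameter value below are assumptions of the sketch.

import numpy as np

def merge_cost(region_a, region_b):
    """Increase in area-weighted spectral heterogeneity (standard deviation)
    caused by merging two regions, each given as a 1-D array of pixel values."""
    merged = np.concatenate([region_a, region_b])
    h_merged = merged.size * merged.std()
    h_separate = region_a.size * region_a.std() + region_b.size * region_b.std()
    return h_merged - h_separate

def should_merge(region_a, region_b, scale_parameter=50.0):
    """Fuse two adjacent regions only if the heterogeneity increase is small;
    a larger scale parameter permits more merging and hence larger objects."""
    return merge_cost(region_a, region_b) < scale_parameter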

Multiscale Linear Feature Detection
One approach to linear feature extraction is to regard the image as a two-dimensional function z(x,y) of the spatial variables x and y, and to extract lines from this function using properties from differential geometry. Steger (1998) used Gaussian kernels in his method: he proposed an approach to line detection that uses an explicit model for lines, with line profile models of increasing sophistication. A scale-space analysis is carried out for each of the models, and this analysis is used to derive an algorithm with which lines and their widths can be extracted with subpixel accuracy. The algorithm uses a modification of the differential geometric approach to detect lines and their corresponding edges. Since Gaussian masks are used to estimate the derivatives of the image, the algorithm scales to lines of arbitrary width while always yielding a single response. The Gaussian smoothing kernel and its first and second derivatives are given by equations 1, 2 and 3:

g_σ(x) = exp(−x² / (2σ²)) / (√(2π) σ)   (1)

g′_σ(x) = −x exp(−x² / (2σ²)) / (√(2π) σ³)   (2)

g″_σ(x) = (x² − σ²) exp(−x² / (2σ²)) / (√(2π) σ⁵)   (3)

For discrete signals, the corresponding convolution masks are obtained by integrating these kernels over each pixel, i.e. the mask coefficient at position n is the integral of the respective kernel from n − 1/2 to n + 1/2.
For the proposed line detector, the parameters that are taken into consideration are the line width w and its contrast h. To convert thresholds on w and h into thresholds the operator can use, a value of σ should first be chosen such that the line is still detected at that scale; for the bar-shaped line profile this requires σ ≥ w/√3 (Steger, 1998).
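A minimal sketch of these kernels and of a simple line-strength measure is given below; for brevity it samples the continuous kernels of equations 1-3 instead of integrating them per pixel, and it computes only unrotated second-derivative responses rather than Steger's full sub-pixel detector, so it should be read as an illustration of the principle only.

import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernels(sigma, radius=None):
    """Sampled Gaussian smoothing kernel and its first and second derivatives
    (equations 1-3), truncated at roughly 3*sigma."""
    if radius is None:
        radius = int(np.ceil(3.0 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    g1 = -x / sigma**2 * g                 # first derivative, equation 2
    g2 = (x**2 - sigma**2) / sigma**4 * g  # second derivative, equation 3
    return g, g1, g2

def line_response(image, sigma):
    """Magnitude of the larger second-derivative-of-Gaussian response taken
    across rows and across columns; a line gives a strong response across
    its direction while yielding a single response per line."""
    g, _, g2 = gaussian_kernels(sigma)
    img = image.astype(float)
    rxx = convolve1d(convolve1d(img, g, axis=0), g2, axis=1)   # d2/dx2 (along columns)
    ryy = convolve1d(convolve1d(img, g2, axis=0), g, axis=1)   # d2/dy2 (along rows)
    return np.maximum(np.abs(rxx), np.abs(ryy))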


Scale-Space Tracking
The choice of the Gaussian operator involves a compromise: one has to balance losing genuine line pixels against eliminating noise pixels, without distorting the shape of the lines. On the one hand, using a large value of σ reduces the noise but loses genuine line pixels. On the other hand, using a small value of σ preserves local information about the lines, but noise and other unnecessary details are enhanced. To overcome this problem, Bergholm (1987) introduced the concept of coarse-to-fine feature tracking, known as edge focusing, and applied this principle to edge detection.

The steps in the line tracking algorithm based on Bergholm's approach are as follows (a code sketch is given after the list):

  • Create an initial coarse-scale line image using the Gaussian operator with an initial value σ = σ₀.
  • Choose a scale step S small enough that line points, with high probability, do not move by more than one pixel during the line focusing step.
  • Apply the line detector with σ = σ₀ − S at the pixels where lines were detected for σ = σ₀ and at the pixels in their immediate neighborhood. This means that at the finer scale, line detection is performed only in a thin region around the lines of the coarser scale image.
  • The line points at the previous level are discarded and only the finer scale line points are accepted.
  • Subsequent line points are detected in the same way, i.e., line detection at the new value σ = σ₀ − 2S is performed only in the border region around the lines obtained with σ = σ₀ − S. Note that the threshold is used only at σ = σ₀ and not at the finer scales.
  • Line focusing continues until the Gaussian smoothing is quite weak (for example σ = 0.6).
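A compact sketch of this coarse-to-fine focusing is given below; it stands in for the full Bergholm-style tracker using a simple second-derivative-of-Gaussian line strength, and the threshold value, scale step and the small floor applied at the finer scales are assumptions of the sketch (the scheme above applies its threshold only at the coarsest scale).

import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def line_strength(image, sigma):
    """Simple line-strength measure: the larger magnitude of the
    second-derivative-of-Gaussian responses along rows and columns."""
    img = image.astype(float)
    rxx = gaussian_filter(img, sigma, order=(0, 2))
    ryy = gaussian_filter(img, sigma, order=(2, 0))
    return np.maximum(np.abs(rxx), np.abs(ryy))

def coarse_to_fine_lines(image, sigma0=3.0, step=0.5, sigma_min=0.6,
                         threshold=5.0, fine_floor=0.5):
    """Coarse-to-fine line focusing: threshold a coarse-scale line map once,
    then refine it at successively finer scales, detecting only inside a
    one-pixel border region around the previously found lines."""
    lines = line_strength(image, sigma0) > threshold   # threshold used at sigma0 only
    sigma = sigma0 - step
    while sigma >= sigma_min:
        search_zone = binary_dilation(lines)           # lines + immediate neighbours
        response = line_strength(image, sigma)
        # previous-level points are discarded; only finer-scale points inside
        # the search zone are kept (a small floor replaces the threshold here)
        lines = search_zone & (response > fine_floor)
        sigma -= step
    return lines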

Some of the advantages of this technique are:

  • If weak lines at finer levels of resolution belong to a line segment that exists at the coarse level, then gaps may be filled in by the focusing procedure.
  • Weak lines at finer levels of resolution that do not belong to the coarse-level line segments will normally be neglected by the continuous coarse-to-fine tracking.

Watershed Transformation
A non-parametric method was first developed for contour extraction in grey-level images, which relied on defining the contours as watershed lines (Beucher, 1992). This method has been considerably improved with the tools of mathematical morphology using the watershed transformation. In its earlier form, the non-parametric method of contour extraction using the watershed transformation led to over-segmentation of images. Later, a strategy called marker-controlled segmentation was introduced: the user defines object markers and background markers so as to differentiate between the two, and the image is then segmented using the gradient image. The gradient image is often used in the watershed transformation, as the main criterion of segmentation is image homogeneity (Beucher, 1992).

Watershed analysis is well recognized as being useful for image segmentation and has been made computationally practical thanks to a fast technique presented by Vincent and Soille (1991). Watershed analysis uses an image’s gradient magnitude as input to subdivide the image into low-gradient catchment basins surrounded by high-gradient watershed lines. The catchment basins consist of locally homogeneous connected sets of pixels. The watershed lines are made up of connected pixels exhibiting local maxima in gradient magnitude; to achieve a final segmentation, these pixels are typically absorbed into adjacent catchment basins.
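A marker-controlled watershed sketch using scikit-image is given below, assuming a single-band image as a NumPy array; for simplicity the markers are seeded automatically in low-gradient areas via a quantile threshold (an assumption of this sketch), whereas the marker-controlled strategy described above would normally take user-defined object and background markers.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segmentation(image, marker_quantile=0.2):
    """Subdivide `image` into catchment basins of its gradient magnitude.

    Markers are placed in low-gradient (locally homogeneous) areas; the
    watershed then grows basins outward until they meet along the
    high-gradient watershed lines, which are absorbed into the basins."""
    gradient = sobel(image.astype(float))               # gradient magnitude image
    low_gradient = gradient < np.quantile(gradient, marker_quantile)
    markers, _ = ndi.label(low_gradient)                # one marker per homogeneous blob
    labels = watershed(gradient, markers)               # label image of catchment basins
    return labels

Seeding from markers rather than from every regional minimum of the gradient is what avoids the over-segmentation mentioned above.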

Results and Discussion
In this paper an attempt is made to study different image segmentation algorithms from the point of view of segmenting images of different spatial resolutions, including 5.8-metre IRS and 1-metre IKONOS images. The results of line detection using the IRS panchromatic image can be seen in Fig. 1 (a)-(d). The multiscale line tracking has been found to fill the gaps present in the line map at sigma=3 and to reduce the noise present at sigma=1. Similar results can be seen for the 1-metre IKONOS image in Fig. 1 (e)-(h). To compare the textural segmentation and the line detection methods, the extracted line network is overlaid on the textural classification, as can be seen in Fig. 2. The classes extracted using the textural classification include water, marshy land, open areas, densely built-up areas and sparsely built-up areas.


Fig 1(a) Original Image – 5.8 metre resolution

Fig 1(b) Line detection, sigma=1.0

Fig 1(c) Line detection, sigma=3.0

Fig 1(d) Multiscale coarse-fine line tracking

Fig 1(e) Original image – 1-metre resolution

Fig 1(f) Line detection using sigma=1.0

Fig 1(g) Line detection at sigma=3

Fig 1(h) Coarse-fine tracking

Fig 2(a) Landuse classification using textural features

Fig 2(b) Linear features overlaid on texture classification

To compare region segmentation of the 1-metre IKONOS imagery, the fractal-dimension texture image suggested by Sarkar and Chaudhary (1995), using differential box counting, was implemented; the results of clustering the original image and the texture image can be found in Fig. 3 (a)-(b). Again, the clusters indicate categories such as building tops, water, roads, open areas and vegetation. The image was also segmented using a region growing algorithm based on the morphological watershed transform (Beucher, 1992); this result can be seen in Fig. 3 (c). The region segmentation visually compares well with the other segmentation results. A detailed classification system is being developed that uses the watershed transform output as the first step, followed by computation of region statistics and region connectivity with neighboring regions, and finally region classification using neural networks, the maximum likelihood method, and fuzzy and hard clustering. Integration of edge/line and region information is important for fusing the information extracted from the image by different approaches, and this integration is also being built into the above system.
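For reference, a simplified differential box-counting sketch that produces a fractal-dimension texture band from sliding windows is shown below; the window size, box scales, grey-level range of 256 and plain least-squares fit are choices made for this illustration and not necessarily those of Sarkar and Chaudhary (1995) or of the implementation used here.

import numpy as np

def dbc_dimension(window, scales=(2, 3, 4, 6, 8), grey_levels=256):
    """Fractal dimension of a square grey-level window by differential box
    counting: fit log N(s) against log(M/s) over a few box sizes s."""
    m = window.shape[0]
    log_n, log_r = [], []
    for s in scales:
        box_height = s * grey_levels / m     # box height in grey-level units
        n_boxes = 0
        for i in range(0, m - s + 1, s):
            for j in range(0, m - s + 1, s):
                block = window[i:i + s, j:j + s]
                n_boxes += int(np.ceil((block.max() + 1) / box_height)
                               - np.floor(block.min() / box_height))
        log_n.append(np.log(n_boxes))
        log_r.append(np.log(m / s))
    slope, _ = np.polyfit(log_r, log_n, 1)   # the slope estimates the dimension
    return slope

def fractal_texture_image(image, win=17, step=8):
    """Slide a window across the image and map each position to its fractal
    dimension, giving a texture band that can be clustered with the tone."""
    rows = (image.shape[0] - win) // step + 1
    cols = (image.shape[1] - win) // step + 1
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = dbc_dimension(image[r * step:r * step + win,
                                            c * step:c * step + win])
    return out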


Fig. 3(a) Fractal feature using Box-counting

Fig. 3(b) Classification based on texture

Fig. 3(c) Watershed based Segmentation
Acknowledgements
This work is supported by a research grant from the Indian Space Research Organization under the ISRO-IIT(Bombay) Space Technology Cell. Contributions to the work reported here from the research associates J. Kannan and G. Bimal Raj, and the former students G. M. Bhat and M. Lakshmikant, are acknowledged. The encouragement received from the Head, CSRE, during the course of this work is gratefully acknowledged. The IKONOS image used in this academic research was downloaded from the website of Space Imaging Inc., USA (www.spaceimaging.com), who have been producing high quality, high resolution imagery for civilian applications since 1999.

References

  • Argenti, F., Alparone, L., and Benelli, G., Fast algorithms for texture analysis using cooccurrence matrices, IEE Proceedings-F, vol. 137, n. 6, Dec. 1990, pp. 443-448.
  • Baatz, M., and Schape, A., 2000, Multiresolution Segmentation: an optimization approach for high quality multi-scale image segmentation, URL:
  • Bergholm, F., 1987, Edge Focusing, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, pp. 726-741.
  • Beucher, S., 1992, The Watershed transformation applied to image segmentation, Scanning Microscopy International, suppl. 6, pp. 299-314.
  • Bock, M., and Lessing, R., 2000, Remote Sensing, Formation of Objects and Determination of Quality, Internationales Symposium “Informatik für den Umweltschutz” der Gesellschaft für Informatik (GI), Bonn, URL: https://enviroinfo.isep.at/UI%20200/BockM300700.el.hsp.pdf
  • Cheng, H. D., Jiang, X. H., Sun, Y., and Wang, J., 2001, Color image segmentation: advances and prospects, Pattern Recognition, Vol. 34, pp. 2259-2281.
  • Definiens-Imaging 2003, eCognition Software Version 3, https://www.definiens-imaging.com, Germany.
  • De Jong, S. M., and Burrough, P. A., 1995, A Fractal Approach to the Classification of Mediterranean Vegetation Types in Remotely Sensed Images, Photogrammetric Engineering and Remote Sensing, Vol. 61, No. 8, pp. 1041-1053.
  • Dengru, W., and Linders, J., 1999, A New Texture Approach to Discrimination of Forest Clear-cut, Canopy, and Burned Area Using Airborne C-Band SAR, IEEE Trans. on Geoscience and Remote Sensing, Vol. 37, No. 1, pp. 555-562.
  • Gao Yan, 2003, Pixel based and object oriented image analysis for coal fire research, unpublished M.Sc. Thesis, ITC, The Netherlands. (URL: https://www.itc.nl/library/Papers_2003/msc/ereg/gao_yan.pdf)
  • Geman, S., and Geman, D., 1984, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 6, No. 6, pp. 721-741.
  • Haralick, R. M., Shanmugam, K., and Dinstein, I., 1973, Textural features for image classification, IEEE Trans. on Systems, Man, and Cybernetics, Vol. 3, pp. 610-621.
  • Jain, A. K., and Farrokhnia, F., 1991, Unsupervised texture segmentation using Gabor filters, Pattern Recognition, Vol. 23, No. 12, pp. 1167-1186.
  • Keller, J., Crownover, R., and Chen, S., 1989, Texture Description and Segmentation through Fractal Geometry, Computer Vision, Graphics, and Image Processing, Vol. 45, pp. 150-166.
  • Lee De Cola, 1989, Fractal Analysis of a Classified Landsat Scene, Photogrammetric Engineering and Remote Sensing, Vol. 55, No. 5, pp. 601-610.
  • Lobo, A., 1997, Image Segmentation and Discriminant Analysis for the Identification of Land Cover Units in Ecology, IEEE Transactions on Geoscience and Remote Sensing, Vol. 35, No. 5, pp. 1-11.
  • Mandelbrot, B., 1982, The Fractal Geometry of Nature, Freeman Co., San Francisco.
  • Marr, D., 1982, Vision, Freeman Co., San Francisco.
  • Nina Siu-Ngan Lam, 1990, Description and Measurement of Landsat TM Images Using Fractals, Photogrammetric Engineering and Remote Sensing, Vol. 56, No. 2, pp. 187-195.
  • Pentland, A. P., 1984, Fractal Based Description of Natural Scenes, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.6, pp.661-674.
  • Ohanian, P. P., and Dubes, R. C., 1992, Performance evaluation for four classes of texture features, Pattern Recognition, Vol. 25, No. 8, pp. 819-833.
  • Santhosh Kumar, G., 1996, Textural Classification of Remotely Sensed Data using Fractal Methods, unpublished M.Tech Dissertation, IIT Bombay.
  • Sarkar, N., and Chaudhary, B. B., 1992, An Efficient Approach to Estimate Fractal Dimension of Images, Pattern Recognition, Vol.25, pp.1035-1041.
  • Sarkar, N., and Chaudhary, B. B., 1995, Texture Segmentation Using Fractal Dimension, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.17, pp.72-77.
  • Steger, C., 1998, An Unbiased Detector of Curvilinear Structures, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 2, pp. 113-125.
  • Vincent, L. and Soille, P., 1991, Watersheds in Digital Spaces: An Efficient Algorithm based on Immersion Simulations, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 6, pp. 583-589.
  • Weszka, J. S., 1976, A Comparative Study of Texture Measures for Terrain Classification, IEEE Trans. on Systems, Man, and Cybernetics, Vol. SMC-6, pp. 269-285.