
Automatic Change Detection of Buildings in Urban Area Based On High Resolution Satellite Imagery and Digital Maps

Farhad Samadzadegan
Dept. of Geomatics
Faculty of Engineering
University of Tehran
Tehran, Iran
[email protected]

Mana Nikfal
Dept. of Geomatics
Faculty of Engineering
University of Tehran
Tehran, Iran
[email protected]

Abstract
With the rapid growth of population, a reliable solution is essential to monitor changes and keep digital maps revised. The need for fast decision making and complete control over problems is one of the biggest concerns of planners, and it necessitates the use of up-to-date information. Most traditional pixel-based classifications are based on the digital number of each pixel. High spatial resolution images provide more details, such as color, size, shape and texture, so object-oriented processing has become of interest.

This paper focuses on an automatic change detection method based on object-based classification techniques. The first step of the object-oriented processing is segmentation. Next, object-based classification using spectral, textural and structural parameters is applied to classify the imagery. Lastly, structural rules, such as the ratio of areas and homogeneity parameters, are used to compare the classified image objects with the old digital map and detect the changes. The proposed methodology was tested on a 1:2000 scale digital map of the city of Qom in Iran, using QuickBird scenes. The visual investigation and the accuracy assessment of the obtained results demonstrate the high capability of this method.

1. Introduction
As the urban population is growing worldwide, a reliable analysis of urban land use is essential to develop a sustainable environment for citizens (Bailloeul et al., 2005). In practice, however, most processes for analyzing changes are manual methods, such as on-screen change detection, which are time consuming and expert dependent (Samadzadegan et al., 2004).

With the advent of very high resolution optical sensors on board observation satellites, detailed land use change mapping in urban areas is becoming affordable at large scale. One of the key applications of aerial and space image analysis is monitoring and keeping track of changes on the ground. This can serve various purposes, such as urban planning, agricultural analysis, environmental monitoring and military intelligence.
The most important objects in the field of change detection are buildings.

Change extraction of buildings is needed to revise building data effectively. The change may be a “shape change” of roofs and walls, a “texture change” of roofs and walls, or an “attribute change” such as the owner’s name.

However, remarkable changes of buildings cannot always be detected in dense urban areas, since most buildings are built to full lot size due to the very small lot areas.

In most cases, the information needed for image analysis and understanding is not represented in pixels but in meaningful image objects and their mutual relations. Therefore, partitioning images into sets of useful image objects is a fundamental procedure for successful image analysis or automatic image interpretation (Gorte 1998, Baatz and Schape 2000, Blaschke et al. 2001). In this sense, image segmentation is critical for subsequent image analysis and, further, for image understanding.

From the perspective of image engineering, the conventional per-pixel approach to remotely sensed imagery sits at the lowest level, the image-processing level. To extend and expand the application field of remote sensing, a shift from per-pixel image processing to object-based image analysis, with segmentation as its initial procedure, is recommended.

One motivation for the object-oriented approach is the fact that, in many cases, the expected result of most image analysis tasks is the extraction of real-world objects, proper in shape and proper in classification. This expectation cannot be fulfilled by traditional, pixel-based approaches (Baatz and Schape, 1999). Typically, they have considerable difficulties dealing with the rich information content of Very High Resolution (VHR) data, or even of moderate-resolution data such as Landsat TM or SPOT; they produce a characteristic, inconsistent salt-and-pepper classification, and they are far from capable of extracting objects of interest.

In comparison to pixels, image objects carry much more useful information. They can thus be characterized by far more properties, such as form, texture, neighborhood or context, than pure spectral or spectral-derivative information (Baatz and Schape, 2000). In this paper we propose a novel approach for detecting changes in the building layer of a large-scale land use digital map using a recent very high resolution image.

The proposed approach relies on three steps: first, segmentation of the image to produce image objects; second, object-based classification using suitable features; and finally, detection of the changes by comparing the classified objects with the old digital map, using rules that reduce vagueness and increase the reliability of the results.

2. Methodology
The approach used in this methodology for the image processing is straightforward and without complexity, preparing the images for detecting urban growth and subsequent mapping. Figure 1 shows the flowchart of the proposed algorithm.

Figure 1: Flowchart of the proposed algorithm

2.1 Preprocessing
Import imagery and check imagery for radiometric quality:
Each satellite image was imported and checked for radiometric quality, which is essential for visual interpretation. In order to use the objects in the digital map database as training areas for the determination of the class characteristics, the image data (raster) and the map database (vector) have to be co-registered.
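
For illustration only, the Python sketch below shows one way such a raster–vector link can be realized, burning co-registered map polygons into the image grid so they can serve as training areas. The file names are hypothetical, and the rasterio/geopandas route is our assumption, not the procedure used in the original work.

    # Illustrative sketch: rasterize co-registered 1999 map buildings onto the
    # image grid so they can act as training areas. File names are hypothetical.
    import rasterio
    from rasterio import features
    import geopandas as gpd

    with rasterio.open("qom_quickbird.tif") as src:        # hypothetical raster
        transform = src.transform                          # image georeferencing
        out_shape = (src.height, src.width)
        crs = src.crs

    buildings = gpd.read_file("qom_map_1999.shp")          # hypothetical vector map
    buildings = buildings.to_crs(crs)                      # co-register coordinate systems

    # 1 where a 1999 map building covers the pixel, 0 elsewhere
    training_mask = features.rasterize(
        ((geom, 1) for geom in buildings.geometry),
        out_shape=out_shape, transform=transform, fill=0, dtype="uint8")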

2.2 Image Segmentation
2.2.1 General Concept: Segmentation is the process of completely partitioning a scene into non-overlapping regions in scene space (Schiewe, 2001). Segmentation algorithms have been developed within pattern recognition and computer vision since the 1980s, with successful applications in disciplines like medicine or telecommunication engineering. However, due to the complexity of the underlying object models and the heterogeneity of the sensor data in use, their application in the fields of remote sensing and photogrammetry was limited to special-purpose implementations only.

With the advent of high-resolution as well as multi-source data, general interest in segmentation methods has become evident again.
Various methods exist for image segmentation, but the one chosen for this study is multi-resolution segmentation, as implemented in the eCognition software. It is a bottom-up region-merging technique, whereby each pixel is initially considered a separate object, and pairs of image objects are subsequently merged to form bigger segments (Darwish et al., 2003).

2.2.2 Homogeneity Criteria: To understand the factors that affect segmentation quality, two parameters have to be explained: (1) spectral heterogeneity, h_spectral, and (2) shape heterogeneity, h_shape (Baatz and Schape, 2000). h_spectral is a measure of the change in object heterogeneity resulting from the potential merge of two adjacent objects. Similarly, the overall shape heterogeneity, h_shape, is based on the change in object shape before and after the merge being considered. In this case, object shape is described in two ways: first, compactness and, second, smoothness.

Compactness is a function of the object perimeter and the number of pixels within the object, whereas smoothness is a function of the object perimeter and the perimeter of the object’s bounding box. Together, spectral and shape heterogeneity evaluate to a single value that is indicative of the overall heterogeneity change (Zhang and Maxwell, 2004). This value is the so-called ‘fusion’ value, f, for the potential merge between two objects and is given by:

f = w · h_shape + (1 − w) · h_spectral

where w is the user-assigned weight associated with shape heterogeneity (Definiens Imaging, 2004a). The merge between two objects will be considered only if the fusion value falls below a user-specified threshold referred to as the scale parameter. In this calculation the two shape components are:

h_cmpct = l / √n and h_smooth = l / b

where n is the object size, l the object perimeter and b the perimeter of the bounding box. If the smallest growth exceeds a heterogeneity tolerance defined by the user, the process stops (Burnett and Blaschke, 2003; van der Sande et al., 2003).
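
As a minimal sketch of this merge criterion (an illustration following the definitions above, not eCognition's internal code), the fusion value for a candidate merge can be computed as follows; the spectral term is reduced to a single band's size-weighted standard deviation for brevity.

    # Illustrative fusion-value computation for a potential merge of two image
    # objects, following the definitions above (n: size in pixels, l: perimeter,
    # b: bounding-box perimeter). Single spectral band for brevity.
    from dataclasses import dataclass
    from math import sqrt

    @dataclass
    class ImageObject:
        n: int        # object size in pixels
        l: float      # object perimeter
        b: float      # perimeter of the bounding box
        std: float    # spectral standard deviation of the object

    def h_shape(o, w_cmpct=0.5):
        compactness = o.l / sqrt(o.n)       # h_cmpct = l / sqrt(n)
        smoothness = o.l / o.b              # h_smooth = l / b
        return w_cmpct * compactness + (1 - w_cmpct) * smoothness

    def fusion_value(o1, o2, merged, w=0.7):
        # heterogeneity changes are weighted by object size (Baatz and Schape, 2000)
        d_spectral = merged.n * merged.std - (o1.n * o1.std + o2.n * o2.std)
        d_shape = merged.n * h_shape(merged) - (o1.n * h_shape(o1) + o2.n * h_shape(o2))
        return w * d_shape + (1 - w) * d_spectral

    # a merge is accepted only while fusion_value(o1, o2, merged) stays below
    # the user-specified scale parameter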

Segmentation is not an aim in itself. In the object-oriented approach to image analysis, the image objects resulting from a segmentation procedure are intended rather to be image object primitives, serving as information carriers and building blocks for further classification or other segmentation processes. In this sense, the best segmentation result is the one that provides optimal information for further processing (Hofmann, Puzicha and Buhmann, 1998).

One important aspect of understanding the content of an image is information about context. There are two types of contextual information: global context, which describes the situation of the image (basically time, sensor and location), and local context, which describes the relationships of objects to each other within a certain area of the image, usually neighborhood relationships (eCognition User Guide).

Scale also provides certain context information. To make the objects aware of their spatial context, it is necessary to link them; by linking the objects, a network of image objects is created. When scale is taken into account, differently sized image segmentations represent different scale levels, and linking the differently sized image objects hierarchically represents their (semantic) scale relationships. In this way, the linked image objects are able to communicate and tell each other their mutual relations: each object ‘knows’ its neighbors, its sub-objects and its super-objects. From a classification point of view, the objects’ non-intrinsic properties, such as neighborhood relationships or being a sub- or super-object, now become describable (see Fig. 2).
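
A minimal sketch of such a linked network is given below; the class and field names are ours, chosen for illustration rather than taken from eCognition.

    # Minimal sketch of a hierarchical image-object network: every object can
    # reach its same-level neighbors, its super-object one scale level up and
    # its sub-objects one level down. Names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class NetObject:
        object_id: int
        level: int                                        # scale level
        neighbors: list = field(default_factory=list)     # local context
        super_object: "NetObject | None" = None
        sub_objects: list = field(default_factory=list)

    def link_levels(parent: NetObject, child: NetObject):
        parent.sub_objects.append(child)
        child.super_object = parent

    # e.g. the share of a super-object covered by sub-objects of a given class
    # can now be queried through these links and used as a feature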

2.2.3 Evaluation of segmentation results: Segmentation methods do not always produce a perfect partition of the scene; they may deliver either too many small regions (over-segmentation) or too few large segments (under-segmentation). The first effect is normally a minor problem, because in the subsequent classification step neighboring segments can be attached to the same category a posteriori. Natural objects tend to be more strongly partitioned than regular artificial objects. Methods for the evaluation of segmentation results are discussed, for example, by Hoover et al. (1996) or Zhang (1996).

There are two kinds of criteria for evaluating segmentation results:
Qualitative criteria: a strong and experienced source for the evaluation of segmentation techniques is the human eye.
Quantitative criteria: the average heterogeneity of the image objects, weighted by their size in pixels, should be minimized; equivalently, the average heterogeneity of the pixels should be minimized, where each pixel is weighted with the heterogeneity of the image object to which it belongs.
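
The quantitative criterion can be stated compactly; the sketch below assumes per-object (size, heterogeneity) pairs are already available, however they were measured.

    # Quantitative criterion from above: average object heterogeneity weighted
    # by object size in pixels (the lower, the better the segmentation).
    def weighted_mean_heterogeneity(objects):
        total_pixels = sum(n for n, _ in objects)
        return sum(n * h for n, h in objects) / total_pixels

    # illustrative comparison of two candidate segmentations of one scene
    seg_a = [(120, 0.8), (300, 0.5), (80, 1.1)]   # (size, heterogeneity) pairs
    seg_b = [(500, 0.9)]
    best = min((seg_a, seg_b), key=weighted_mean_heterogeneity)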

Figure 2: Hierarchical network of image objects (Repaka et al., 2004)

2.3 Object-based Classification
Classification is the process of connecting the classes in a class hierarchy with the image objects in a scene. After the classification process, each image object is assigned to a certain (or no) class and is thus connected with the class hierarchy. As noted above, the image objects resulting from segmentation are primitives intended for further classification or other analysis. The variety of information gained from different sensors (especially in dense areas) decreases the efficiency of common classification techniques and makes the use of all spectral, textural and structural elements of an object useful. Four major groups of features can help to classify the objects. In brief, they are:

Layer Value: features that use the radiometric information of an object, such as mean, brightness, standard deviation and ratio.

Shape: features that use shape parameters to recognize and classify an object. Two important inputs for this task are the covariance matrix of the spatial distribution of the pixels and the bounding box.

Hierarchy: features that use the information of super- or sub-objects to decide which class description is valid for an object.

Texture: all features concerning texture are based on sub-object analysis. The texture features are divided into two groups: texture concerning the spectral information of the sub-objects and texture concerning the form of the sub-objects.
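
To make these feature groups concrete, the sketch below derives a few per-object measures (layer values and one covariance-based shape measure) from a label image with numpy and scikit-image; it covers only a small, illustrative subset of the features described.

    # Sketch: simple per-object features from a segmentation label image.
    # Only a small subset of the layer-value and shape features named above.
    import numpy as np
    from skimage.measure import regionprops

    def object_features(labels, band):
        feats = {}
        for region in regionprops(labels):
            coords = region.coords                      # (N, 2) pixel coordinates
            pixels = band[tuple(coords.T)]
            # shape: elongation from the covariance of the pixel distribution
            elongation = 1.0
            if len(coords) > 1:
                cov = np.cov(coords.astype(float).T)
                eig = np.sort(np.linalg.eigvalsh(cov))
                elongation = float(np.sqrt(eig[1] / max(eig[0], 1e-9)))
            feats[region.label] = {
                "mean": float(pixels.mean()),           # layer value: mean
                "std": float(pixels.std()),             # layer value: stdDev
                "area": int(region.area),               # shape: size
                "elongation": elongation,
            }
        return feats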

The classification can be performed using only one master class (e.g. building), or it may include other classes (water, roads, forest, grass, etc.), which might improve the classification result (Olsen et al., 2002).

2.4 Implementation
2.4.1 Study area and Data set: The proposed automatic change detection methodology was tested on a 1:2000 scale digital map and a pan-sharpened QuickBird scene of the city of Qom, Iran. The map was produced in 1999 from aerial photographs by the National Cartographic Centre (NCC) of Iran; the satellite imagery was acquired in 2005. During the six-year lapse between the generation of the digital map data and the QuickBird image acquisition, considerable changes occurred in the city (Fig. 3).

Figure 3: QuickBird pan-sharpened patch (a); corresponding 1:2000 map (b)

2.4.2 Experiments and results: The method proposed in this paper was implemented in the eCognition software, which uses the bottom-up region-merging segmentation technique described above (Darwish et al., 2003). In eCognition, the user can control the segmentation process with various parameters, the most important of which are the scale parameter and the composition of the homogeneity criterion. The scale parameter is “an abstract term which describes the maximum allowed heterogeneity for the resulting image objects”, i.e. the bigger the scale value, the bigger the segments (Hurskainen et al., 2004).

There are no clear guidelines for the choice of these parameters. After experimentation, the best results for this study were obtained when the scale, shape and smoothness parameters were given values of 30, 0.7 and 0.4, respectively. Because of the high heterogeneity of the objects in this area, the weight selected for the shape parameter was larger than that for spectral heterogeneity. The selection of segmentation parameters depends on the study area, the aim of segmentation, the heterogeneity of the objects, the resolution of the data, the density of neighboring objects, and so on.
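
eCognition's multiresolution segmentation is proprietary, so for readers who want to experiment with parameter effects, the sketch below substitutes the freely available Felzenszwalb graph-based segmentation from scikit-image, whose scale parameter plays a loosely analogous role (larger value, larger segments). This is a stand-in technique, not the algorithm used in this study, and the file name is hypothetical.

    # Stand-in experiment with a freely available segmenter: Felzenszwalb
    # graph-based segmentation (scikit-image). Its `scale` is loosely analogous
    # to eCognition's scale parameter: larger value, larger segments.
    from skimage.io import imread
    from skimage.segmentation import felzenszwalb

    image = imread("qom_patch.png")                 # hypothetical RGB patch
    for scale in (10, 30, 100):
        labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=20)
        print(scale, "->", labels.max() + 1, "segments")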

When the segmentation is done and the results are visually satisfactory, the next task is to build the class hierarchy. Two classifiers are implemented in the software: the first is a traditional nearest neighbor (NN) classifier, for which the user collects samples from the image to train the system; the second is based on fuzzy membership functions. Usually, if a class can be separated from the other classes by only one or a few features, the application of membership functions is recommended; otherwise, the nearest neighbor classifier is suggested.
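
The two classifier styles can be sketched on per-object feature vectors as follows; the feature values, class samples and membership bounds are invented for illustration and are not those of the study.

    # Sketch of the two classifier styles on per-object feature vectors.
    # Sample values and membership bounds are invented for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # nearest neighbor: trained from user-collected sample objects
    X_train = np.array([[0.21, 410.0], [0.25, 380.0],    # building samples
                        [0.05, 150.0], [0.08, 170.0]])   # road samples
    y_train = ["building", "building", "road", "road"]
    nn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
    print(nn.predict([[0.22, 400.0]]))                   # -> ['building']

    # fuzzy membership: enough when one feature separates a class, e.g. shadows
    # are dark, so a falling ramp on mean brightness can describe them
    def shadow_membership(brightness, low=0.0, high=90.0):
        return max(0.0, min(1.0, (high - brightness) / (high - low)))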

In this study, however, because of the spectral and structural heterogeneity of buildings, the NN classifier was mostly used. The class hierarchy is very simple, consisting of six super-classes: building, shadow, road, trees, greenland and bare land, with sub-classes for building, shadow and road. The “building” super-class has sub-classes for roofs with different spectral values (e.g. brown, white and asphalted roofs). The “road” super-class has sub-classes for dirt roads, freeways and main roads. Because of the large spectral and structural heterogeneity of buildings, the use of additional features for better classification is inevitable, so all three elements, structural, spectral and textural, were used to classify the imagery.
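
For clarity, the hierarchy just described can be written out as follows; the shadow sub-classes are not named in the text, so they are left unspecified.

    # The class hierarchy described above, written out as a nested mapping.
    class_hierarchy = {
        "building": ["brown roof", "white roof", "asphalted roof"],
        "shadow": [],          # sub-classes exist but are not named in the text
        "road": ["dirt road", "freeway", "main road"],
        "trees": [],
        "greenland": [],
        "bare land": [],
    }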

The accuracy assessment of the classification results confirms the high capability of the proposed method to classify the objects. User accuracies of 0.84 and 0.80 for the building and road classes, and an overall accuracy of 0.85, show a high level of distinction even between objects that are similar from the spectral point of view (such as roads and buildings with asphalted roofs). After the evaluation of the object shapes and classification, the building polygons of the digital map were overlaid on the image polygons classified as “building”. Because of some vagueness and ambiguity in the classification and the recognized objects, the use of structural rules is inevitable. This stage begins with a binary opening morphological operator; in this way only the objects of interest remain, and gaps or holes are excluded from the extracted regions.
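
As a hedged illustration of this cleaning step, binary opening can be applied to the classified building mask with SciPy; the structuring-element size is an assumption.

    # Sketch: binary morphological opening on the classified "building" mask,
    # removing small spurious regions so only objects of interest remain.
    import numpy as np
    from scipy.ndimage import binary_opening

    building_mask = np.zeros((512, 512), dtype=bool)   # stand-in classified mask
    structure = np.ones((3, 3), dtype=bool)            # assumed structuring element
    opened = binary_opening(building_mask, structure=structure)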

After applying this operator, an area-thresholding rule between the objects classified as “building” and the conjugate polygons of the digital map is used to decrease the vagueness. The ratio of the areas of each classified polygon and its conjugate digital map polygon is calculated; if this ratio does not satisfy the selected threshold (0.55), the polygon is marked as a “removed building”. In other words, the decision as to which class (removed or same) an object belongs is made by measuring the percentage of its area that is classified to the same object class as the object itself. Optionally, the form and the homogeneity of the correctly classified objects in a polygon are used; very small or narrow objects are evaluated less strictly than normal objects (Walter, 2000).
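
A minimal sketch of this area rule, assuming boolean masks on the image grid for each map polygon and for the classified building layer, is given below; 0.55 is the threshold quoted above.

    # Sketch of the area-thresholding rule: for each 1999 map building polygon,
    # measure the share of its area classified as "building" in the new image.
    import numpy as np

    def change_status(map_polygon_mask, classified_building_mask, threshold=0.55):
        polygon_area = map_polygon_mask.sum()
        if polygon_area == 0:
            return "removed"
        overlap = np.logical_and(map_polygon_mask, classified_building_mask).sum()
        return "same" if overlap / polygon_area >= threshold else "removed"

    # classified building regions matched by no map polygon are later labeled
    # "added", after the operator check described below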

A polygon that does not pass the homogeneity test should be checked by an operator, who decides whether it belongs to the “same” or the “removed” group of buildings. Once the removed and unchanged buildings have been decided, and owing to the good accuracy of the classification, the remaining buildings extracted from the classified imagery are assigned to the “added buildings” group.

The accuracy assessment and visual inspection of the obtained results demonstrate the high capability of the proposed method for change detection of buildings.

The technique recognized same, removed and added buildings with accuracies of about 0.99, 0.66 and 0.86, respectively.

Figure 4: Change detection results on overlaid polygons (a) and QuickBird imagery (b). Blue and yellow polygons indicate added and removed buildings; red polygons are overlaid from the map onto the imagery.

3. Conclusion
The results obtained by applying our proposed strategy to different kinds of objects establish the high capability of our change detection strategy. The method presented above has its strengths in simplicity, straightforwardness, cost-effectiveness and relative accuracy. We believe our proposed strategy demonstrates a promising and comprehensive solution to a complicated problem; however, we are still far from reaching a perfect solution for a fully automatic system.

4. References

  • Baatz, M. and A. Schape (2000). Multiresolution Segmentation – An Optimization Approach for High Quality Multi-Scale Image Segmentation. In: Angewandte Geographische Informationsverarbeitung XII, ed. J. Strobl et al., AGIT Symposium, Salzburg, Germany, pp. 12-23.
  • Baatz, M. und A. Sch