
Use of LiDAR in Extraction of 3D City Models in Urban Planning

Ajinkya Jadhav
Graduate in Geology
Pursuing M.Sc. in Geoinformatics
Symbiosis Institute of Geoinformatics, India

Deeksha Gambhir
Graduate in Geography
Post Graduate (Diploma) in Environmental
Pursuing M.Sc. in Geoinformatics
Symbiosis Institute of Geoinformatics, India

Abstract:
In this phase of rapid urbanization and industrialization there is a pressing need for proper town-planning systems. Many techniques are currently used for town planning, but one of the most accurate, fast and versatile measurement techniques is LiDAR. From LiDAR data, 3D city models can be derived. 3D city models of urban areas are essential for many applications, such as military operations, disaster management, mapping of buildings and their heights, simulation of new buildings, updating and maintaining cadastral data, change detection and virtual reality. In most of these cases the models of buildings, urban features, terrain surface and vegetation are the primary features of interest. This paper presents a study that aims to generate 3D city models from LiDAR data. The 3D models are prepared by acquiring laser-scanning data, joining the point clouds, modeling them and then transforming their coordinates. Since manual digitization and surface reconstruction are very costly and time consuming, the development of automated algorithms for feature extraction is of great importance. LiDAR (Light Detection and Ranging) is a relatively new technology for obtaining Digital Surface Models (DSMs) of the earth’s surface; when combined with digital orthophotos, it can be used to create highly detailed DSMs and eventually digital city models. In this paper, different methods of automatic feature extraction, either from LiDAR alone or from the combination of LiDAR and digital orthophotos, are evaluated and compared in a GIS environment.

INTRODUCTION:

URBAN PLANNING:
Urban, city, or town planning is the discipline of land use planning which explores several aspects of the built and social environments of municipalities and communities.
Urban areas in the developing world are under constant pressure from growing populations. An efficient urban information system is a vital prerequisite for planned development. The increasing demands of the urban planning and management sectors call for the coordinated application of Remote Sensing and Geographic Information Systems (GIS) for the sustainable development of urban areas. There is an urgent need to adopt a Remote Sensing and GIS approach in urban development and monitoring in order to implement pragmatic urban development plans. Such plans must incorporate an integrated approach of spatial modeling using Remote Sensing data, GIS databases and GPS solutions.

EXTRACTION OF BUILDINGS:
Building detection is the process of obtaining the approximate position and shape of a building, while building extraction is an area of image processing that uses algorithms to detect and isolate desired portions of an image. Extraction deals with the determination of the geometric and topological properties of the detected buildings; the extracted information includes the shape, size and roof type of the detected buildings, on the basis of which 3D city models are constructed. A 3D city model is a three-dimensional representation of a city or urban environment; a 3D terrain model is usually added to provide the landscape context for the buildings. Most GIS systems support this method, but the resulting 3D city model does not contain detailed information, such as facade geometry or textures, and the heights of the buildings may not be very accurate. To overcome the problem of inaccurate building heights, photogrammetry and laser-scanning methods have been developed to capture 3D city models from aerial images or airborne laser-scan data (LiDAR).
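To make the idea of a building-level city model concrete, the sketch below shows one possible data structure behind a simple block (LOD1) model: each detected building is stored as a 2D footprint plus a single height and roof type, which is enough to extrude a 3D block over the terrain. The class and field names are illustrative assumptions, not taken from any particular software.

```python
# Minimal sketch of a block (LOD1) building record: footprint + height.
from dataclasses import dataclass

@dataclass
class Building:
    footprint: list          # list of (x, y) vertices of the roof outline
    ground_elev: float       # terrain elevation at the building base (m)
    height: float            # building height above ground (m)
    roof_type: str = "flat"  # e.g. "flat" or "gabled", from roof-slope analysis

    def block_model(self):
        """Extrude the footprint into lower and upper rings of 3D vertices."""
        base = [(x, y, self.ground_elev) for x, y in self.footprint]
        top = [(x, y, self.ground_elev + self.height) for x, y in self.footprint]
        return base, top

# Example: a 10 m x 8 m flat-roofed building, 6 m tall, on ground at 560 m.
b = Building(footprint=[(0, 0), (10, 0), (10, 8), (0, 8)],
             ground_elev=560.0, height=6.0)
base, top = b.block_model()
```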

In recent years, active sensors have been developed that can measure 3D topography directly: Light Detection and Ranging (LiDAR). Airborne LiDAR has become an accurate, cost-effective alternative to conventional technologies for the creation of DSMs, at vertical accuracies of 15 to 100 centimeters.

LiDAR TECHNOLOGY:
Airborne laser scanning is a comparatively recent technology for capturing topographic data of the earth's surface. It became feasible through the availability of lasers with special attributes and the Global Positioning System (GPS). LiDAR has revolutionized the acquisition of digital elevation data for large-scale mapping applications.

A typical LiDAR system is operated from a plane, a helicopter or a satellite. The instrument rapidly transmits laser pulses which travel to the surface, where they are reflected. The return pulse is converted from photons to electrical impulses and collected by a high-speed data recorder. Since the speed of light is well known, the time interval from transmission to reception is easily converted to a range. These ranges are then converted to ground coordinates using positional information from ground and aircraft GPS receivers and the on-board Inertial Measurement Unit (IMU), which constantly records the attitude (pitch, roll and heading) of the aircraft. Because the system is active, data can be gathered during the day or at night. LiDAR systems collect positional (x, y) and elevation (z) data at pre-defined intervals; the resulting data form a very dense point cloud. The accuracy of LiDAR data is a function of the flying height, the laser beam diameter (system dependent), the quality of the GPS/IMU data and the post-processing procedures. Vertical accuracies of ±15 cm can be achieved.
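As a rough illustration of this ranging principle, the sketch below converts a pulse's two-way travel time into a range and, with a cross-track scan angle and the aircraft position from GPS, into a ground coordinate. It is a simplified, flat-earth sketch with hypothetical values; a real system applies the full IMU-derived attitude corrections (pitch, roll, heading) omitted here.

```python
# Simplified illustration of LiDAR ranging and georeferencing.
# Assumes a nadir-looking scanner tilted only by a cross-track scan angle;
# real systems apply full GPS/IMU attitude corrections.
import math

C = 299_792_458.0  # speed of light in m/s

def pulse_range(two_way_time_s: float) -> float:
    """Range to the target: the pulse travels out and back, hence the /2."""
    return C * two_way_time_s / 2.0

def ground_point(aircraft_xyz, scan_angle_deg, two_way_time_s):
    """Project the return onto the ground (flat-earth, attitude ignored)."""
    x0, y0, z0 = aircraft_xyz          # aircraft position from GPS
    r = pulse_range(two_way_time_s)
    a = math.radians(scan_angle_deg)   # cross-track scan angle
    return (x0 + r * math.sin(a),      # offset across the flight line
            y0,                        # along-track position unchanged here
            z0 - r * math.cos(a))      # elevation of the reflecting surface

# A pulse returning after ~6.67 microseconds corresponds to a ~1000 m range.
print(pulse_range(6.67e-6))                                   # ≈ 1000 m
print(ground_point((500.0, 200.0, 1000.0), 10.0, 6.67e-6))
```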


Figure 1. Airborne LiDAR system

However, traditional manual building extraction from raw imagery is highly labor-intensive, time consuming and very expensive. During the past two decades, many researchers in the photogrammetry, remote sensing and computer vision communities have worked on automatic or semi-automatic approaches to building extraction and reconstruction. For monocular images, shadow analysis is often used to estimate 3D information and assist building detection: 2D building roof hypotheses are generated from extracted linear features by perceptual grouping, and these hypotheses are then verified by 3D evidence consisting of shadows and walls. Building detection from a single monocular image is nevertheless extremely difficult, since it generally leads to ambiguous solutions.

The 2D building footprints are detected and extracted either from LiDAR data alone or from the combination of LiDAR and orthophoto data. Using photogrammetry, a 3D city model is produced from stereo aerial imagery in a semi-automatic way using special software tools, and the 3D data can be exported into several commercial 3D formats for visualisation. Laser-scanning data provides an accurate height model that supports semi-automatic extraction of buildings.

SEMIAUTOMATIC BUILDING EXTRACTION FROM LIDAR DATA
In this method, urban features are extracted from LiDAR data using a semi-automated process in LiDAR Analyst, which first derives the bare earth, slope and aspect. The DTM is then subtracted from the DSM, and the remaining data are classified into buildings, trees and other features (shrubs, cars, etc.) based on their shape and three-dimensional curvature. To extract buildings, one must specify a minimum cut-off area and a height threshold; an object above these thresholds is classified as a building. One must also specify a threshold for the roof slope in order to detect the roof edges, which represent the building outline. Most buildings are orthogonal in shape with flat or sloped roofs, while trees and shrubs are more dome-shaped with irregularities within them; texture variance therefore provides a good separation between buildings and trees. Both the building and the tree extraction processes require the LiDAR data and a numeric parameter representing a cut-off threshold on the 3D curvature of the shape: the higher the value, the more curved a shape must be in order to be returned. When extracting buildings, one should therefore specify a low texture-variance value, which returns fewer trees and more buildings. The returned buildings require post-processing and cleaning to best represent the original shape of the building.
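The sketch below illustrates the kind of rule-based separation described in this step, assuming the off-terrain LiDAR points have already been grouped into candidate objects. The thresholds and the roughness measure (residual spread about a fitted plane, used here as a stand-in for the 3D curvature / texture variance above) are illustrative assumptions, not the parameters used by LiDAR Analyst.

```python
# Sketch of rule-based separation of buildings from trees and other objects.
# Assumes candidate objects (clusters of off-terrain LiDAR points) are given.
import numpy as np

def classify_object(points_xyz: np.ndarray,
                    min_area_m2: float = 25.0,
                    min_height_m: float = 2.5,
                    max_roughness: float = 0.3) -> str:
    """Label a point cluster as 'building', 'tree', or 'other'."""
    xy = points_xyz[:, :2]
    z = points_xyz[:, 2]

    # Approximate footprint area from the cluster's bounding box, and use the
    # vertical extent of the cluster as a crude proxy for height above ground.
    area = np.ptp(xy[:, 0]) * np.ptp(xy[:, 1])
    height = np.ptp(z)

    # Fit a plane z = ax + by + c; the residual spread is a texture/roughness
    # proxy: planar or gently sloped roofs score low, irregular canopies high.
    A = np.c_[xy, np.ones(len(z))]
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    roughness = np.std(z - A @ coeffs)

    if area < min_area_m2 or height < min_height_m:
        return "other"            # cars, shrubs, small clutter
    return "building" if roughness <= max_roughness else "tree"
```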

Although LiDAR Analyst is very capable software for extracting buildings from high-resolution LiDAR data, it is less successful at building extraction from low-resolution data.

EXTRACTION OF BUILDING INFORMATION FROM NORMALIZED DSM (nDSM)


Figure. 2 Normalized Digital Surface Model

A last-pulse Digital Surface Model (DSM) is created from the original LiDAR points using inverse distance weighting, and a gradient image is created by differentiating the DSM. A coarse DTM is then created, and a rule-based algorithm is applied to detect large buildings in the data (Rottensteiner et al., 2003). A smaller structural element is used to create a finer DTM, but buildings detected in the previous step have their corresponding heights substituted from the DTM of that step. The process is repeated until a minimum size for the structural element is reached. To remove any residual artefacts caused by the smallest structural element, the final DTM is created by re-interpolating the surface while excluding the LiDAR points classified as “off-terrain”. A normalized DSM (nDSM), identifying areas that lie above the terrain, is then created as the difference model between the DSM and the DTM.

nDSM = DSM – DTM. Hence all objects in the nDSM stand on a ground elevation of zero (Vozikis, 2004).
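A minimal sketch of this step is given below, assuming the DSM is available as a gridded numpy array. It uses a single grey-scale morphological opening as a stand-in for the hierarchic filtering with structural elements described above, then forms nDSM = DSM – DTM and thresholds it; the window size and height threshold are illustrative values.

```python
# Sketch of deriving a DTM and the normalized DSM from a gridded DSM.
# A grey-scale opening with a window (the "structural element") larger than
# the biggest building strips off above-ground objects; the nDSM is then
# DSM - DTM, so terrain sits at zero and objects keep their height above ground.
import numpy as np
from scipy.ndimage import grey_opening

def normalized_dsm(dsm: np.ndarray, cell_size_m: float,
                   max_building_size_m: float = 60.0) -> np.ndarray:
    window = int(max_building_size_m / cell_size_m) | 1   # odd window size
    dtm = grey_opening(dsm, size=(window, window))        # approximate terrain
    return dsm - dtm                                      # heights above ground

# Pixels of the nDSM above a height threshold are candidate buildings/trees.
dsm = np.random.default_rng(0).uniform(560, 565, (200, 200))  # toy DSM grid
ndsm = normalized_dsm(dsm, cell_size_m=1.0)
above_ground_mask = ndsm > 2.5
```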

BUILDING EXTRACTION USING ORTHOPHOTOS AND LiDAR DATA
A pseudo band derived from the LiDAR data and the RGB bands from the orthophoto were used for classification and feature extraction with Feature Analyst, an image classifier developed by Visual Learning Systems (VLS).

It uses a method, called input representation, that takes into account the spectral characteristics of the pixels surrounding a known pixel.

The spatial context of a feature also matters: for example, concrete parking lots and concrete rooftops have similar spectral signatures, but their spatial context is different. A concrete parking lot may have gravel sidewalks, green grass, trees or other features around it. The orthophotos provide the spectral, spatial, edge, texture and context information, while the LiDAR data provides the elevation information. The building extraction was performed in a single process using this input representation. The extracted trees were masked while extracting the buildings; masking the trees ensures that large dry trees are not confused with and falsely extracted as buildings. Douglas-Peucker generalization and Bezier simplification methods were then used to simplify the building polygons.
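Two parts of this workflow translate directly into a short sketch: stacking the orthophoto bands with the LiDAR "pseudo" band so a classifier sees spectral and elevation information together, and generalizing an extracted footprint with Douglas-Peucker. It assumes the RGB bands and the nDSM are already co-registered numpy grids of the same shape; shapely's simplify implements Douglas-Peucker, and the Bezier simplification step is omitted here.

```python
# Sketch of (a) building the per-pixel feature stack (RGB + nDSM pseudo band)
# and (b) Douglas-Peucker generalization of an extracted building polygon.
import numpy as np
from shapely.geometry import Polygon

def feature_stack(red, green, blue, ndsm):
    """Per-pixel feature vectors: three RGB bands plus the nDSM pseudo band."""
    return np.dstack([red, green, blue, ndsm])   # shape (rows, cols, 4)

def generalize_footprint(vertices, tolerance_m=0.5):
    """Douglas-Peucker simplification of a raw, jagged footprint outline."""
    simplified = Polygon(vertices).simplify(tolerance_m, preserve_topology=True)
    return list(simplified.exterior.coords)

# Toy example: a nearly collinear vertex on the lower edge is removed.
raw = [(0, 0), (10, 0.1), (20, 0), (20, 15), (0, 15)]
print(generalize_footprint(raw))
```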


Figure 3. Building example: a) Aerial image. b) Raw LiDAR point triangulation. c) Final estimated surface.

CONCLUSION:
This review describes how the semi-automatic method of building extraction from LiDAR data is very simple, but it requires high-resolution LiDAR data; moreover, it is not capable of extracting smooth building edges and it fails to extract houses from low-resolution LiDAR data. The drawbacks of this first method motivated another way of extracting building footprints from LiDAR data. The second method, which extracts buildings by thresholding and masking the normalized DSM, is very successful in industrial areas, desert areas and other areas with little or no vegetation, and it is capable of extracting both buildings and houses, but it is less successful in areas where trees appear similar to buildings. A third method was therefore evaluated, in which spectral, spatial, texture, context and elevation information are exploited together to produce more accurate urban models. The quality of the feature extraction improved by integrating a “pseudo” band from the normalized DSM into the classification process along with the three RGB bands from the orthophoto. The object-oriented classification showed how the extraction of buildings from the normalized DSM improves when the spectral, edge, spatial and context information from the orthophotos is combined with the elevation information from the nDSM. This method improved building extraction and provided excellent results. LiDAR data was used in the final stage of the research to provide the building heights.

REFERENCES:

  1. Feature Extraction and 3D city modeling using airborne LiDAR and high-resolution digital orthophotos – A Comparative study (Sulafa Ibrahim)
  2. Urban Planning (Wikipedia)
  3. Constructing a GIS-based 3D urban model using LiDAR and aerial photographs
  4. Automated Building Extraction and Reconstruction from LIDAR Data
  5. Fusion of LIDAR Data and Aerial Imagery for Automatic Reconstruction of Building Surfaces