
Assessment of earthquake damages by image-based techniques
by Mehdi Rezaeian

Although natural disasters are inevitable and it is almost impossible to fully recoup the damage they cause, what humans can change is the way they respond to disasters. The whole disaster management procedure requires urgent development toward better, more elaborate and appropriate means of facing natural risks. This is already recognized and accepted as a high-priority task by many organizations, governments and companies all over the world. Prediction and preparedness, along with an effective disaster management program, can minimize the effect of damage. Thus, damage assessment has attracted significant attention among researchers and practitioners of disaster management.
Remote sensing techniques, by both space-borne and air-borne sensors, can make a very effective contribution, especially in the response and recovery phases of disaster management. A fast ascertainment of the affected buildings and people helps disaster managers allocate the limited search and rescue resources (personnel and equipment) to each individual collapsed building. The possibility of near real-time damage assessment could support rescue operations. Furthermore, the spatial distribution of damage is very important for the emergency teams deployed immediately after a catastrophe. In this study, we mainly focus on rapid damage assessment methods using remotely sensed imagery. Several kinds of remotely sensed data on damage may be available, such as aerial photography and digital imagery, satellite imagery, airborne laser scanning and synthetic aperture radar observations. Optical images can be interpreted visually, as they depict the ground surface as it appears to the human eye. Within this realm, studies are based on either mono-temporal analysis (only post-event images are available) or multi-temporal analysis (both pre- and post-event images are available). We present methods for local (building-level) damage classification using multi-temporal high-resolution optical images.
To evaluate the methods and verify numerical results, two datasets, from Kobe and Bam, were obtained from aerial images. The Kobe earthquake was a major earthquake in Japan; it was the first time that a densely populated modern city area was directly hit by very strong ground shaking. The Bam earthquake was, in terms of human loss, the worst to occur in Iranian history. We base our verification on visual inspection of the stereo images using a number of objective assessment criteria. Damage identification via space- or airborne images is restricted to certain structural types of visible damage, and the visibility of details depends on the viewing direction as well as the image resolution. In the Bam area, a multilevel damage scale (totally collapsed, partially collapsed, uncollapsed) appears adequate for representing the distribution of individual house damage. In Kobe, thanks to large-scale, high-quality color images, it is possible to extract more details of damaged buildings based on EMS98 and a damage catalogue.
The drawbacks of traditional photo interpretation techniques pertain first to the time and cost needed for manual processing of the data, and second to the difficulty of maintaining coherent interpretation criteria when large numbers of photo-interpreters work in parallel to cover wide areas in a short time. The major contribution of this dissertation is to find solutions for automating interpretation tasks entirely or partially. Damage interpretation is divided into four main tasks: identifying objects, collecting relevant attributes and evidence, detecting damaged objects, and classifying them into meaningful categories. This research is divided into two main parts. In part one (Chapters 5 and 6), it is assumed that the objects of interest (i.e. buildings) are already identified using auxiliary pre-event data such as building polygons or 3D models. In this part, we start with bi-level classification (i.e. collapsed and uncollapsed) and move toward multilevel classification. We present and develop damage classification methods utilizing only DSM features, only imagery features, and the integration of both. In the second part (Chapter 7), we document a novel methodology for assessing damage to man-made objects, aiming to extract an object and its damage at once. This method applies Bayesian networks to a multi-view and multi-modal description of the damaged object.
Digital surface models (DSMs) extracted from airborne stereo image pairs before and after the earthquake can be used to identify collapsed buildings. However, a simple pointwise comparison of automatically generated DSMs does not yield reliable evidence for detecting damaged points. The critical parameter is an optimum threshold value, which cannot be defined generally because of the stochastic behavior of the model in different areas. The hypothesis testing proposed in Chapter 5 shows that the normalized value of the “average height difference” (AHD) yields the best overall accuracies. In this method, undamaged sample buildings are used to estimate the mean and variance; these buildings must be distributed almost uniformly across a test area containing a variety of building shapes. The experiments are extended by replacing the pre-event DSM with a 3D model of the buildings extracted before the earthquake.
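As an illustration of the idea, not the dissertation's exact implementation, the normalized AHD test can be sketched as follows: calibrate the mean and standard deviation of per-building height differences on known-undamaged sample buildings, then score each candidate building by its z-score. Function names and the data layout are hypothetical.

```python
import numpy as np

def ahd(pre_dsm, post_dsm, footprint):
    """Average height difference (pre minus post) over a building footprint mask."""
    return float((pre_dsm - post_dsm)[footprint].mean())

def calibrate(pre_dsm, post_dsm, sample_footprints):
    """Estimate the AHD mean and standard deviation from undamaged sample buildings."""
    vals = [ahd(pre_dsm, post_dsm, f) for f in sample_footprints]
    return float(np.mean(vals)), float(np.std(vals))

def normalized_ahd(pre_dsm, post_dsm, footprint, mu, sigma):
    """z-score of a building's AHD against the undamaged-sample statistics;
    a large positive value indicates a likely collapse."""
    return (ahd(pre_dsm, post_dsm, footprint) - mu) / sigma
```

A building whose normalized AHD exceeds the rejection threshold of the hypothesis test would be flagged as collapsed; the threshold itself is calibrated per test area rather than fixed globally, in line with the chapter's argument.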
In the course of this research, several methods for extracting imagery features for debris detection are examined. First- and second-order statistical descriptors including standard deviation, entropy and homogeneity are evaluated. The assessments show that such descriptors, which measure image amplitude in terms of luminance or tristimulus values, are less sensitive to soft damage and suffer from the miscellaneous textures of high-resolution images in urban areas. We propose “regularity indices” to describe the appearance of a building as regular or irregular. Three classification methods, k-NN, Bayesian and SVM, are used and compared. The classification results are evaluated by cross-validation and by an independent visual interpretation test set. The support vector machine (SVM) classifier is a relatively new method that proved quite effective for damage detection. The integration of object-space (DSM) and image-space features is applied through classifiers that label buildings with one of three attributes (“uncollapsed”, “partially collapsed” and “totally collapsed”). Regularity indices combined with the normalized average height difference through a one-versus-one (OVO) SVM classifier show that using multiple features can help classify collapsed buildings automatically.
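To make the first-order descriptors concrete, here is a minimal, hypothetical sketch (not the dissertation's code): standard deviation and histogram entropy are computed per patch, and a tiny k-NN classifier separates smooth roof texture from debris-like clutter. All names and parameter values are illustrative.

```python
import numpy as np

def patch_features(patch, bins=16):
    """First-order descriptors of a grayscale patch in [0, 1]:
    standard deviation and Shannon entropy of the intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([patch.std(), -np.sum(p * np.log2(p))])

def knn_classify(x, X_train, y_train, k=3):
    """Majority vote among the k nearest training feature vectors."""
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    labels, counts = np.unique(y_train[order[:k]], return_counts=True)
    return labels[np.argmax(counts)]
```

An intact roof yields low standard deviation and low entropy, while debris texture yields high values of both, which is the separability these descriptors exploit; the abstract notes, however, that they degrade on the cluttered textures of real high-resolution urban scenes, motivating the regularity indices.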
In Chapter 7, we develop a system that automatically interprets data produced by aerial sensors before and after an earthquake in order to arrive at a detailed damage map quickly after the disaster. We assume no prior information about building positions is available; the possibility of near real-time damage assessment is examined and the results are compared. The proposed system applies image-understanding algorithms to recognize buildings prior to classifying the scene. Another aspect of the proposed system is handling uncertainty using Bayesian networks. The network provides pointwise analysis based on prior information from multi-image segments. For line detection and image segmentation, we design and develop a multi-stage line detection method, the Hierarchical Permissive Hough Transform (HPHT), a modified version of the Hough transform. This algorithm iteratively detects both obvious and obscure lines. To detect damaged points, a symmetric form of Bayesian network is suggested, comprising two parts for detection before and after the earthquake. Investigations with the Bam and Kobe datasets show that the proposed augmented Bayesian network improves the performance of the reasoning system. Within a building polygon, the presented method can detect and classify damaged points, allowing detailed information about collapsed buildings to be extracted. Empirical results show that the suggested approach is quite promising.
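HPHT itself is the author's method and is not reproduced here; the sketch below only illustrates the standard Hough transform that it builds on, voting edge points into a (θ, ρ) accumulator and reading off the strongest line. Function names and parameters are hypothetical.

```python
import numpy as np

def hough_accumulate(points, n_theta=180, rho_res=0.5, rho_max=100.0):
    """Vote 2-D edge points into a (theta, rho) accumulator,
    using the normal form x*cos(theta) + y*sin(theta) = rho."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in points:
        rho = x * cos_t + y * sin_t
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), idx] += 1
    return acc, thetas

def strongest_line(acc, thetas, rho_res=0.5, rho_max=100.0):
    """Return (theta, rho) of the accumulator peak."""
    t, r = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t], r * rho_res - rho_max
```

A multi-stage scheme in the spirit of the chapter would repeatedly take the peak, remove its supporting points, and relax the vote threshold on later passes so that fainter (obscure) lines are recovered after the obvious ones.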