To avoid the time-consuming, costly, and expert-dependent traditional assessment of earthquake-damaged structures, image-based automatic methods have been developed in recent years. Since automated recognition of structural elements is the basis on which these methods achieve automatic detection, this study proposes a method to automatically recognize the wall between windows from a single image. The method begins with line segment detection, followed by selection and linking to obtain longer line segments. The color features of the two sides of each long line segment are used to pick out line segments as candidate window edges and to label them. Finally, the image is segmented into several subimages, the window regions are located, and then the wall between the windows is located. Real images are used to verify the method. The results indicate that walls between windows can be successfully recognized.
An earthquake may leave thousands of structures with different levels of damage. The safety assessment of these structures is of great significance for victim accommodation, intensity evaluation, and emergency aid provision. The traditional strategy is for certified engineers or structural experts to carry out the assessment. To analyze the defects carefully, the inspectors must have direct access to the structures and, because the traffic system may be destroyed, often must move from one structure to another on foot. Therefore, the traditional method is time-consuming, costly, and expert-dependent [
To develop a real-time and cost-effective method, researchers have applied computer vision technology for the assessment of damaged structures [
Developing a recognition or detection method based on computer vision enables safety assessment to be carried out quickly. In the envisioned application, anyone who can take pictures with a camera, mobile phone, or other device can carry out the assessment task, with the images processed on local devices, on personal computers, or via cloud computing. With this intention, the method developed in this study is based on a single image, and there are no technical requirements for image acquisition.
Usually, there are few obvious and distinctive features on a wall itself, so this study describes how the windows next to the wall are used to locate it. Researchers have developed several methods for window detection from images [
The method first detects the line segments in the image using a state-of-the-art line detection method. The line segments are then linked by a linking strategy proposed by the authors. After selection of the long line segments, color features are calculated to find and label the candidate window edges. The image is then segmented into subimages by the candidate line segments, and each subimage is assessed to identify the window regions. Once the window regions are located, the wall between the windows is located.
The main purpose of the method described in this paper is to achieve automatic recognition. Therefore, some preconditions are set to simplify the problem. (
The main idea is to achieve automated recognition based on an obvious feature of the wall: that it is between windows. In other words, if there is a region for which both sides are windows, the region is recognized as a wall between windows. The method proposed in this paper mainly consists of three parts (Figure
The framework of the proposed methodology.
The first step to segment an image into regions is to find the edges of the regions. Considering the purpose of this paper, what we really need are lines to separate regions. Since line segment detection is an important and basic problem in many image processing-based applications, a number of methods have been proposed in the last few decades, from the typical traditional Hough transform [
The LSD produces accurate line segments and keeps the number of false detections low by efficiently combining gradient orientations with line validation according to the Helmholtz principle [
The EDLines method [
The CannyLines method was developed to extract more meaningful line segments robustly from images. The Helmholtz principle, combining both gradient orientation and magnitude information, is used for verification. The line segments detected by CannyLines are longer than those of the other two methods; however, its more complex pipeline makes it time-consuming.
Since this paper aims at real-time detection, a method that adapts to different images without parameter tuning and that offers good accuracy and high processing speed is the reasonable choice. Since all three methods are parameter-free and there is no large difference in their accuracy, this paper chooses the fastest of the three, EDLines, as the line segment detector.
Due to image imperfection (e.g., weak contrast, clutter, blur, and noise), problems still persist when applying the line segment detector to practical civil infrastructure images [
After selecting the vertical and horizontal line segments, a modified version of the algorithm proposed by [
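The orientation-based selection of near-horizontal and near-vertical segments can be sketched as follows; this is an illustrative Python sketch, and the 10° tolerance is an assumed value, not one taken from the paper.

```python
import math

def orientation_deg(seg):
    """Angle of a segment ((x1, y1), (x2, y2)) in [0, 180) degrees."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def select_axis_aligned(segments, tol_deg=10.0):
    """Keep only segments that are near-horizontal or near-vertical."""
    horizontal, vertical = [], []
    for seg in segments:
        a = orientation_deg(seg)
        if a <= tol_deg or a >= 180.0 - tol_deg:   # near 0 or 180 degrees
            horizontal.append(seg)
        elif abs(a - 90.0) <= tol_deg:             # near 90 degrees
            vertical.append(seg)
    return horizontal, vertical

segs = [((0, 0), (100, 3)),    # near-horizontal
        ((50, 0), (52, 80)),   # near-vertical
        ((0, 0), (60, 60))]    # diagonal, discarded
h, v = select_axis_aligned(segs)
print(len(h), len(v))  # 1 1
```

Taking the angle modulo 180° makes the test independent of the order in which the two endpoints are stored.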
Illustration of the angles constructed by the three pairs of endpoints.
If every two endpoints are connected as a line segment, the resulting segments can be matched into three pairs (AB and CD; AC and BD; AD and BC), with no consideration of their orientation or order. While the three angles (
The second criterion can be written as follows:
This means that if the smallest of the distances between every two end points of the two line segments is smaller than the tolerance, the two line segments (AB and CD) are considered to be “adjacent.” According to Kou et al. [
Illustration of two line segments being mostly parallel, close, and with no overlap but not meeting criteria (
But, obviously, this situation does not meet (
When two line segments are linkable, the top-most and bottom-most endpoints of the segments are used to construct a new line segment (Figure
Diagram of line linking. (a) Three short line segments 1, 2, and 3 are linked as a long line segment 4. (b) If 1 and 2 are linked as 5 (short dashed) and collinearity is then tested between 5 and 3, the linking result would be 6 (long dashed) instead of 4.
At this point, it is important that any new line segment not be tested against other line segments for collinearity; otherwise, a serious fault would occur (Figure
[Algorithm listing (linking), partially garbled during extraction; the recoverable steps are: for each remaining line segment, find the closest line segment; if the pair is linkable, delete both, find the top-most and bottom-most of their endpoints, and draw a new line segment between them.]
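The adjacency test and the merge step can be sketched in Python as below; the 8-pixel tolerance is an assumed value for illustration, and the full angle-based collinearity criteria are omitted for brevity.

```python
import math

def endpoint_gap(s, t):
    """Smallest distance between an endpoint of s and an endpoint of t."""
    return min(math.dist(p, q) for p in s for q in t)

def link(s, t):
    """Replace two linkable segments by one spanning segment, built from
    the top-most and bottom-most of the four endpoints (vertical case)."""
    pts = sorted(list(s) + list(t), key=lambda p: p[1])
    return (pts[0], pts[-1])

# Two nearly collinear vertical segments with a small gap between them.
s = ((100, 0), (101, 40))
t = ((101, 45), (100, 90))
if endpoint_gap(s, t) < 8.0:   # adjacency tolerance (assumed value)
    merged = link(s, t)
print(merged)  # ((100, 0), (100, 90))
```

As noted above, a segment produced by `link` should not itself be re-tested for collinearity against other segments.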
After linking, the long line segments, whose length is larger than a threshold, are picked out to be classified.
To determine which long line segments are candidates for window edges, it is necessary to inspect the regions on both sides of each line segment. Therefore, subimages of the two side regions are extracted from the image; their shape and size are chosen experimentally. Since only rectangular windows are considered in this paper, a rectangle is used as the shape of the side region, with its longer side parallel to the line segment and of the same length. The side ratio of the rectangle is set to 0.5.
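For a vertical candidate segment, the two side regions can be computed as follows; this is a sketch assuming image coordinates (x to the right, y downward), and the helper name is illustrative, not from the paper.

```python
def side_rectangles(x, y_top, y_bottom):
    """Axis-aligned left/right side regions of a vertical segment.
    Each rectangle's long side equals the segment, and the short side
    is 0.5 times the segment length (the side ratio from the text)."""
    length = y_bottom - y_top
    w = 0.5 * length
    left = (x - w, y_top, x, y_bottom)    # (x_min, y_min, x_max, y_max)
    right = (x, y_top, x + w, y_bottom)
    return left, right

print(side_rectangles(200, 50, 150))
# ((150.0, 50, 200, 150), (200, 50, 250.0, 150))
```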
According to human intuition, when a window is seen from outside, the edge of the window usually has two obvious features. (
The digital image is specified by the RGB color model, a mixture of three primary colors: red, green, and blue. After extracting the side subimages of a line segment, the histogram of each primary channel is calculated by dividing the range (0–255) into several bins
Taking
When all the long line segments in one image are processed as above, the average of all their contrast differences
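The per-channel histogram comparison can be sketched as below. The bin count and the use of an L1 histogram distance as the "contrast difference" are assumptions for illustration, since the paper's exact formula is not reproduced here.

```python
def histogram(channel, bins=8):
    """Normalized histogram of 0-255 intensity values."""
    counts = [0] * bins
    for v in channel:
        counts[min(v * bins // 256, bins - 1)] += 1
    n = len(channel)
    return [c / n for c in counts]

def contrast_difference(side_a, side_b, bins=8):
    """Sum of L1 histogram distances over the R, G, B channels.
    side_a / side_b: dicts mapping 'r', 'g', 'b' to flat pixel lists."""
    return sum(
        sum(abs(p - q) for p, q in zip(histogram(side_a[ch], bins),
                                       histogram(side_b[ch], bins)))
        for ch in ("r", "g", "b"))

bright = {ch: [220] * 100 for ch in ("r", "g", "b")}  # e.g. sky through glass
dark = {ch: [40] * 100 for ch in ("r", "g", "b")}     # e.g. shaded wall
same = {ch: [40] * 100 for ch in ("r", "g", "b")}
print(contrast_difference(bright, dark))  # 6.0 (maximal for 3 channels)
print(contrast_difference(dark, same))    # 0.0
```

A window edge separating two visually different regions yields a large difference, while a line inside a uniform wall yields a value near zero.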
After establishing the candidate edge line segments, the image is segmented into several regions by these line segments.
As the detected line segments are always shorter or longer than the true window edges, all candidate line segments are extended to the image border. Then, the intersection points between each pair of extended line segments, and between each line segment and the image border, are calculated. Since all the line segments have been labeled above, each intersection point is also labeled (Table
The window position related to the point.
Point label | Window position
---|---
1 | Upper left
2 | Upper right
3 | Lower left
4 | Lower right
Diagram of the point label.
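The intersection of two candidate edges, each extended to a full line, can be computed with the standard determinant formula; the sketch below is illustrative, with segments given as pairs of points.

```python
def line_intersection(s, t, eps=1e-9):
    """Intersection of the infinite lines through segments s and t,
    or None if the lines are (near) parallel."""
    (x1, y1), (x2, y2) = s
    (x3, y3), (x4, y4) = t
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < eps:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (px, py)

# A horizontal and a vertical candidate edge that do not touch as
# segments still intersect once extended.
print(line_intersection(((0, 100), (50, 100)), ((200, 0), (200, 30))))
# (200.0, 100.0)
```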
As mentioned above, the strategy for recognizing a wall between windows is to find a region whose two sides are both window regions. First, therefore, the window regions must be located. To achieve this, all the subimages produced during the image segmentation step are inspected against two constraints. (
During the inspection, any subimage that meets the two constraints is recognized as a window region, and its four vertices are recorded. Usually, there should be only two window regions. Vertices 3 and 1 from the left region and vertices 4 and 2 from the right are then used as the vertices of the wall-between-windows region.
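Once the two window regions are found, assembling the wall region between them can be sketched as below, assuming rectangles stored as (x_min, y_min, x_max, y_max); the helper is illustrative, not the paper's exact vertex bookkeeping.

```python
def wall_between(left_win, right_win):
    """Wall region between two window rectangles given as
    (x_min, y_min, x_max, y_max); assumes comparable vertical extents."""
    x_min = left_win[2]    # right edge of the left window
    x_max = right_win[0]   # left edge of the right window
    y_min = min(left_win[1], right_win[1])
    y_max = max(left_win[3], right_win[3])
    return (x_min, y_min, x_max, y_max)

print(wall_between((10, 50, 110, 250), (300, 55, 400, 255)))
# (110, 50, 300, 255)
```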
In order to evaluate the overall performance of the proposed method, the algorithm is implemented in Matlab 2014a. For the line segment detection step, the code provided by Topal and Akinlar [
The thresholds of the method are set experimentally as shown in Table
Threshold setting.
[Table garbled during extraction; the recoverable threshold values are 7, 11, and 0.05, set experimentally. The threshold names and the table footnote are not recoverable.]
An example of the detection result. (a) The original image. (b) Candidate line segments. (c) Regions segmented. (d) Window section located. (e) Wall between windows extraction.
Figure
An example of the detection result. (a) The original image. (b) Wall between windows extraction.
Precision and recall ratios are used to measure the detection performance of the method. They are calculated as follows:
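In standard form, with TP the number of true positives, FP the number of false positives, and FN the number of false negatives:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}
```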
High precision means that most detected walls between windows are actual walls between windows, whereas low precision means that few detected walls are actual walls. Similarly, high recall means that most actual walls between windows are correctly detected, whereas low recall means that few are. Together, the two ratios represent the quality of the method's detection results. A set of images captured by a mobile phone (Xiaomi 2) around the campus is used to test the method. Before testing, the images that do not meet the preconditions specified in this paper are removed manually, leaving 20 images for testing. The resolution of the images is
Testing results.
Image number | True positives | False positives | False negatives | Precision | Recall
---|---|---|---|---|---
20 | 12 | 6 | 2 | 66.67% | 85.71%
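The reported ratios can be reproduced from the table's counts, interpreting the three unlabeled columns as true positives, false positives, and false negatives (an interpretation consistent with the reported percentages):

```python
tp, fp, fn = 12, 6, 2   # counts from the testing-results table

precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"precision = {precision:.2%}")  # precision = 66.67%
print(f"recall = {recall:.2%}")        # recall = 85.71%
```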
According to the results, the quality of the method is acceptable. Although the precision is not very high, the recall reaches 85.71%. This means that only a few of the actual walls would be detected as other objects. However, the
Example for incorrect detection: (a) original image. (b) The incorrect detection result. (c) The candidate edge line segments of the image.
The average processing time per image is 8.3 s. Considering the purpose of the paper, this is acceptable. The time depends heavily on the complexity of the image: it increases with the number of line features and complex textures. For the image in Figure
The traditional method of assessing damage to a structural element after an earthquake is time-consuming, costly, and expert-dependent. Incorporating image technology into the assessment can make the task simpler and faster, and it is especially feasible when many people can take pictures of the damaged sites. Thanks to prior researchers who have advanced image processing methods, we can now extract more information from images. With the intention of simplifying the assessment, this paper develops a method based on a single image.
As there are few studies on structural element recognition, especially recognition of walls, this paper develops a method to automatically recognize walls between windows from a single image. The method first detects the line segments in the image, then picks out the horizontal (near-horizontal) and vertical (near-vertical) line segments and links them into longer line segments using several principles. The color features of the two sides of each line segment are calculated and used to select and label candidate window edges. After the image is segmented by the candidate line segments, the window regions are located, and then the wall between the windows is located.
Real images taken by mobile phone cameras are used to test the validity of the method. The results show that the method can detect the wall between windows when there is only one target in the single image.
Although the method targets images containing only one wall between windows, it can easily be adapted to two or more targets in a single image. Moreover, with only a simple adaptation of the region location strategy, other types of walls can be detected.
Since the precision of the method is not very high and the method relies largely on a feature-classifier strategy, future work will address this issue. As this paper aims at automatic assessment of structural elements, automatic retrieval of defect information on the wall will also be attempted in future work.
The authors declare that there are no conflicts of interest regarding the publication of this paper.
This work was supported by the Innovative Research Group Project (Grant No. 51421064) and the General Project (Grant No. 51578114) of the National Natural Science Foundation of China and by the Fundamental Research Funds for the Central Universities (DUT16TD03). The authors would like to thank their sponsors.