Application and Analysis of a Vision-Based Structure Model Displacement Measuring Method in a Cassette Structure Shaking Table Experiment

In the shaking table test of a large cassette structure, story drift is an essential set of experimental data. The traditional method of displacement measurement is limited by problems such as the need for full contact with the structure model to install sensors, a large installation workload, and susceptibility to environmental interference. Noncontact displacement measurement methods, such as optical measuring technology, can solve these problems and serve as an effective supplement to traditional displacement measurement in the shaking table test. This paper proposes a vision-based displacement measuring method. Predesigned artificial targets, which act as sensors, are installed on each floor of the cassette structure model. A high-speed industrial camera acquires a series of images of the artificial targets on the structure model during the shaking table test. A Python-OpenCV-based calculation program combining computer vision and machine vision is developed to extract the artificial targets from the acquired image series and calculate their displacement. The proposed method is applied in a shaking table test of a reduced-scale fifteen-floor reinforced concrete cassette structure model, in which a laser displacement meter and a seismic geophone are also applied for comparison. The experimental results acquired by the proposed method are compared with those acquired by the laser displacement meter and the seismic geophone. The average error of the story drift obtained by the proposed vision-based measurement method is within 5% and in good agreement with the laser displacement meter and the seismic geophone, which confirms the effectiveness of the proposed method.


Introduction
The cassette structure is a new type of space structure system, which was independently developed in China.
There is a growing research interest in the composition, characteristics, and performance of reinforced concrete cassette structures in high-rise buildings under earthquake action. The shaking table test is a good method to analyse the seismic behaviour and performance of the cassette structure in high-rise buildings. In the shaking table test, displacement measurement technology is one of the most important research fields of engineering detection. Displacement measurement methods can be roughly divided into two types, contact and noncontact [1]. The contact measurement method is mainly realized with classic traditional sensors, which mainly include the LVDT (linear variable differential transformer), the inertial sensor, and the wire-type displacement gauge. The noncontact measurement method uses various traditional noncontact displacement sensors or optical displacement measuring methods, which mainly include holographic interferometry [2], speckle interferometry [3], laser distance measurement, and vision-based measurement methods [4, 5]. The results of contact methods are easily affected by the structure model, especially by cracks and other damage to the structure under large earthquake action, which makes the displacement measurement requirements more stringent and the reliability of the contact-point connection particularly important. In addition, the installation workload of displacement gauges, seismic geophones, or other contact instruments becomes huge when there are many measuring points. On the contrary, the vision-based method, as a kind of noncontact measurement method, has no contact with the structure model and does not interfere with the movement of the specimen, which makes it more reliable [6].
Compared with other optical methods such as holographic and speckle interferometry, the vision-based measurement method has the advantages of simpler equipment, lower requirements for the measurement environment, and a wider measurement range [7]; it can replace traditional measurement methods in some situations or serve as an effective supplement to them.
Many researchers have tried to apply the vision-based measurement method in the shaking table test. Ji [8], based on the principles of computer vision and using camera parameter calibration, image tracking, and three-dimensional point reconstruction technology, established a structural dynamic displacement test method using a consumer camera. Wang Xiaoguang et al. [9] proposed a robust landmark matching algorithm and developed a three-dimensional full-field displacement measurement system for shaking table experiments based on the VS2010 development environment. Hyungchul Yoon et al. [10], using a consumer-grade camera, proposed a visual measurement method in which the measurement mark points are selected manually. Zhou Ying et al. [11] used a consumer-grade camera as the acquisition device and adopted a feature-based optical flow technique with point matching to realize motion tracking of the target and obtain its displacement time history. Han Jianping [12] compiled noncontact displacement measurement programs in MATLAB based on computer vision and performed displacement measurement in the shaking table test of a four-story reinforced concrete frame-filled wall structure model. Nevertheless, the vision-based method has seldom been applied in shaking table tests of huge structure models, especially of large cassette structures.
In the shaking table test of the large cassette structure, due to the complicated background of the artificial target, the recognition and location of the artificial target become exceedingly difficult. Therefore, for the shaking table test of the fifteen-floor reduced-scale structure model, this paper proposes a noncontact measurement method. A designed artificial target is installed on the structure as a "sensor," and a single high-speed industrial camera is applied to acquire the series of the target's images during the shaking table test. A calculation program is developed to extract the target from the complicated background and calculate the displacement of the artificial target. The measurement program is compiled in Python, and the OpenCV module is applied. The proposed method is applied in the measurement of story drift in the shaking table test, while a traditional displacement sensor is also applied as a comparison. The experimental results acquired by the proposed method and the traditional displacement sensor are compared to verify the effectiveness and precision of the proposed method.

Vision-Based Displacement Measurement Method
The technical route of the proposed vision-based displacement measurement method is shown in Figure 1.

Image Acquisition.
In this paper, the high-speed industrial camera HK-A4000-TC500 is applied, and a fixed-focus lens (80 mm) is used for image collection. The camera is connected through an optical fibre and a control box to a calculation server equipped with a large-capacity solid-state drive, as shown in Figure 2. To ensure the quality and speed of the acquired images, the camera directly outputs grayscale pictures, which is convenient for later image processing and avoids the errors due to grayscale conversion.

Camera Calibration.
To locate the artificial target in real-world coordinates, the corresponding relationship between the real-world coordinate system and the two-dimensional image coordinate system has to be determined. Therefore, a geometric model of camera imaging must be established, the parameters of which are the camera parameters. Camera calibration is the procedure of determining the camera parameters through experiments and calculations. In this test, considering that a high-resolution industrial camera is applied and only the middle area of the artificial target image is used for calculation, the effect of optical distortion can be ignored [13]. Only the scaling factor needs to be determined in the camera calibration.
SF (scaling factor) is the relationship between image space and physical space, and it is calculated as follows [14]:

SF = D_mm / D_pixel, (1)

where D_mm is the real edge length of the artificial target and D_pixel is its edge length in the image.
The edge length D_pixel of the artificial target in the image is acquired by performing subpixel corner-point recognition on the images of the high-precision artificial target. The real size of the artificial target is known as D_mm, as shown in Figure 3. In this paper, D_pixel = 61.82118944 pixel and D_mm = 220√2 mm. Therefore, the conversion coefficient obtained with Formula (1) is SF = 220√2 / 61.82118944 = 5.032691648 mm/pixel.
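As a minimal sketch, the conversion coefficient can be reproduced in a few lines of Python; the numerical values are those reported above, and the variable names are illustrative, not taken from the paper's program:

```python
import numpy as np

# Scaling factor SF relates physical space (mm) to image space (pixels):
# SF = D_mm / D_pixel, using the values reported in this paper.
D_mm = 220 * np.sqrt(2)      # real edge length of the artificial target, mm
D_pixel = 61.82118944        # edge length measured in the image, pixels

SF = D_mm / D_pixel          # conversion coefficient, mm per pixel
```

Any pixel displacement later extracted from the image series is multiplied by SF to obtain millimetres.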

Computer Vision-Based Artificial Target Recognition.
In this paper, the computer vision-based artificial target identification method is applied to extract the image area of the artificial targets, which is fundamental to the subsequent machine vision-based positioning of the artificial target points. The extraction procedure is shown in Figure 1 and mainly includes image filtering, edge detection, contour detection, and mask generation. The whole procedure is shown in Figure 4.

Image Filtering.
The image filter is only used in the computer vision-based artificial target extraction procedure. Filters such as Gaussian filtering [15] and median filtering lose some key edge information during the filtering procedure, which causes errors in the measurement results. To preserve the edge information, the edge-preserving filtering (EPF) method is adopted in this paper.

Morphological Operations.
After binarization, there are burrs and interference information on the edge of the image; some of this interference can be removed by multiple morphological dilation and erosion operations, as shown in Figure 4.

Edge Detection.
The improved Canny edge detection method is used to detect the edge of the image after the morphological operation, which is the foundation of the contour extraction in the next step. The improved Canny method first calculates the gradient magnitude and direction of each pixel in the image, as shown in equations (2) and (3), and performs threshold filtering on the results to obtain the image edge:

G = sqrt(G_x^2 + G_y^2), (2)

θ = arctan(G_y / G_x), (3)

where G_x and G_y are the gradients of one pixel in the x and y directions, and θ is the direction of the gradient. Then, through non-maximum suppression and the double-threshold method, as shown in Figure 5, the unnecessary edges are filtered out, and a more realistic image edge is obtained.

Contour Detection-Based Target Extraction.
A contour is a set of contour points of a connected region, as shown in Figure 6. The procedure of contour-based target extraction is to first obtain the contours of the picture by performing contour detection on the edge-detected image; the detected contours are then approximated according to the principle of minimum distance, through which some redundant contours are filtered out. Finally, the circumscribed rectangle of each obtained contour is calculated. The edge of the artificial target is a connected region; by performing contour detection on the edge-detected artificial target image, the contour of the artificial target and other redundant information are all detected. Because the contour of the artificial target is already an approximate square, its bounding rectangle is also a square and satisfies certain conditions, while other irregular bounding rectangles cannot meet these criteria. Filter conditions can be set according to this difference to remove the unwanted bounding rectangles and obtain the bounding rectangle that contains only the artificial target.
Finally, the bounding rectangles of the artificial targets are separated from the original image. According to the vertex coordinates and the width of the bounding rectangle, a mask can be made. The size of the mask is the same as that of the original image, but the inside of the rectangular area is set to 1, while the outside is set to 0. The images that contain only the artificial targets can be extracted by performing the intersection operation of the mask and the original image, as shown in Figure 4.

Machine Vision-Based Artificial Target Locating.
Two methods of artificial target locating are applied in this paper: the corner detection method and the template-based grayscale centroid method.

Pixel-Level Corner Detection.
After the abovementioned computer vision-based processing, the image of the artificial target is successfully separated from the background. Subpixel corner detection is then used to calculate the centre coordinates of the marked points. The main procedure is as follows: first, pixel-level corner detection [16], and then corner detection at the subpixel level near the desired corner. The pixel-level corner detection uses the Harris corner detection method; that is, a local window W(x, y) centred at (x, y) slides by (Δx, Δy) over the image, and the eigenvalues of the matrix M corresponding to each pixel are solved, where

M = Σ over W(x, y) of
    | I_x^2     I_x I_y |
    | I_x I_y   I_y^2   |,   (4)

I(x, y) is the grayscale value of point (x, y), and I_x, I_y are the partial derivatives of I(x, y) with respect to x and y. When the eigenvalues λ1, λ2 satisfy the conditions that (1) λ1, λ2 are large and (2) λ1 ≈ λ2, the corresponding pixel is a corner point.

Subpixel-Level Corner Detection.
Subpixel-level corner detection searches for the real corner point around the pixel-level corner detection point, as shown in Figure 7.
The subpixel corner detection solves an equation set under the conditions of the situation shown in Figure 6, which are as follows: (a) the image area near a point p is uniform, and its gradient is 0; (b) the gradient at a point p on the edge is orthogonal to the vector q − p along the edge direction. Assuming that the starting point q is near the actual subpixel corner, all vectors q − p that satisfy the above conditions are detected and satisfy equation (5):

∇I(p) · (q − p) = 0, (5)

where ∇I(p) is the gradient at p and q − p is the vector from p to q. Many sets of gradients and related vectors q − p can be found around the corner, and since the inner product of each q − p vector with the corresponding gradient is 0, the system of equations can be solved. The solution of the equation set is the subpixel-accuracy coordinate of the actual corner point q.

Grayscale Centroid Method Based on Template Matching.
If the extracted images of the circle-shaped artificial targets are directly calculated by the grayscale centroid method, the centroid position of the same artificial target will differ depending on the selected boundary of the artificial target, which causes errors in the displacement measurement results, as shown in Figure 8. Moreover, because the artificial target occupies an exceedingly small fraction of the whole picture, the positioning calculation cannot be performed directly. To solve this problem, the images of the artificial target area obtained by the computer vision method are further subjected to template matching to achieve a more accurate segmentation. Then, the grayscale centroid equation is used to calculate the centroid coordinates of each artificial target:

x_c = Σ x · I(x, y) / Σ I(x, y),  y_c = Σ y · I(x, y) / Σ I(x, y),

where the sums run over the pixels of the segmented target window.
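The grayscale centroid of an extracted target window can be sketched in a few lines; the synthetic patch and the names x_c, y_c are ours, standing in for a template-matched target window:

```python
import numpy as np

# Intensity-weighted centroid: x_c = Σ x·I / Σ I, y_c = Σ y·I / Σ I,
# computed over a template-matched target window.
patch = np.zeros((21, 21))
patch[8:13, 8:13] = 100          # target approximated by a uniform blob

ys, xs = np.indices(patch.shape)
total = patch.sum()
x_c = (xs * patch).sum() / total
y_c = (ys * patch).sum() / total
```

Because the window comes from template matching, the same physical target always yields the same boundary, so the centroid no longer drifts with the selected area.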

Displacement Calculation.
The displacement of the artificial target in the image coordinate system can be calculated from the front and rear images. Then, the actual displacement of the artificial target can be obtained by multiplying the displacement in the image coordinate system by the scaling factor, d_real = SF · d_pixel, which is the displacement of the point where the marker is installed on the structure.

The main parameters of the test are listed in Table 1, and the structural model in this paper is shown in Figure 9. The height of the structure model is 6.9375 m, the plan size of the structure model is 2.7 × 2.7 m, and the height-to-width ratio is 2.5. To avoid torsion of the model and the need to rotate the model because of different stiffnesses of the principal axes, the plan layout of the model is set to a square, and an orthogonal diagonal sandwich plate is used as the floor slab. The material of the structure model is microconcrete, which simulates the real concrete material; the design elastic modulus of the microconcrete is 1/5 of that of concrete C50. The reinforcement is simulated by galvanized iron wire, and the design yield strength is 300 MPa. The weight of the whole structure model is about 21 t, which meets the requirements of the shaking table. In order to obtain the seismic response of the structure under different ground motions, the El Centro #6 wave is selected in this experiment for analysis, and the working conditions are distinguished by the scaling factors 8 and 10.

Installation of the Displacement Sensor.
The arrangement of the artificial targets, the laser displacement meter, and the seismic geophone is shown in Figure 2. The laser displacement meter applied is the Keyence IL-600, and the seismic geophone applied is the 941B. The measuring range of the Keyence IL-600 is 200–1000 mm, its sampling frequency is 128 Hz, and its repeatability is 50 μm. The sensitivity of the 941B seismic geophone is 0.3 m/s², and its maximum range is 20 m/s².

Displacement Results.
In this paper, an industrial camera is used to acquire images of the artificial targets, and a measurement program compiled in the Python language is used to measure the x-axis displacement of the reduced-scale fifteen-floor cassette structure model in the shaking table test. Finally, the measurement results are compared with those of the traditional measurement sensors, the laser displacement meter and the seismic geophone.

Displacement Time-History Curves of Each Floor.
Due to space limitations, this article only gives the results of two working conditions: condition 21 (El Centro #6 wave, scaling factor of 10) and condition 20 (El Centro #6 wave, scaling factor of 8), as shown in Figures 10 and 11.

Error Analysis.
It can be seen from Figures 10 and 11 that the X-direction horizontal displacement curves of the measuring points obtained by the geophone, the laser displacement meter, and the visual image measurement method basically coincide. In the early period, when the structure began to vibrate, the agreement was generally good, but in the middle and later periods, the curve of the geophone gradually deviated from the curve of the vision-based method.

To quantify the agreement, the correlation coefficient ρ between the two displacement histories is calculated as

ρ = Σ_i (d_T(i) − d̄_T)(d_v(i) − d̄_v) / sqrt(Σ_i (d_T(i) − d̄_T)² · Σ_i (d_v(i) − d̄_v)²),

where d_T(i) and d_v(i) are the dynamic displacement values of the traditional displacement sensor and the visual method, respectively, and d̄_T and d̄_v are the averages of the two data sets. The value range of ρ is 0–1: 0 means no correlation, and 1 means a perfect match. Under working conditions 21 and 20, the horizontal maximum displacement, the error, and the correlation coefficient of the displacement data obtained by the proposed method and the laser displacement meter on each floor are shown in Tables 2 and 3.
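The correlation coefficient defined above can be computed directly; the sketch below uses synthetic displacement histories in place of the measured data:

```python
import numpy as np

# Synthetic stand-ins for the measured displacement histories:
# d_T from the "laser displacement meter", d_v from the "vision" method
# with small measurement noise added.
t = np.linspace(0, 10, 500)
d_T = np.sin(2 * np.pi * t)
d_v = d_T + np.random.default_rng(0).normal(0, 0.01, t.size)

# Correlation coefficient between the two displacement histories.
num = np.sum((d_T - d_T.mean()) * (d_v - d_v.mean()))
den = np.sqrt(np.sum((d_T - d_T.mean())**2) * np.sum((d_v - d_v.mean())**2))
rho = num / den
```

For well-matched signals such as these, ρ is very close to 1, mirroring the behaviour reported in Tables 2 and 3.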
It can be seen from Tables 2 and 3 that, under working conditions 21 and 20, the error of each floor is kept within 5%, which meets the needs of displacement measurement in the field of civil engineering. The correlation coefficients are very close to 1, which proves that the results obtained by the laser displacement meter are highly consistent with the results of the image measurement.

Conclusion
In order to measure the story drift of the reduced-scale fifteen-floor reinforced concrete cassette structure in the shaking table test, this paper proposed a vision-based displacement measurement method, which combines Python programming, computer vision, and machine vision algorithms. The noncontact vision-based measurement method consists of four parts: artificial target images acquired by an industrial camera, the extraction of the artificial targets by computer vision, the positioning of the artificial targets by machine vision, and the corresponding measurement and calculation programs compiled in Python. The effectiveness and accuracy of the method were proved by a series of structural model shaking table tests, and the following conclusions were obtained:

(1) Using the Python programming language, combined with related computer vision algorithms, the marked points installed on the structural model can be well extracted even under complex backgrounds, which is the foundation for the positioning of the marked points and for noncontact measurement.

(2) The average error of the horizontal displacement of each floor obtained by the proposed vision-based measurement method is within 5%, which is in good agreement with the laser displacement meter, and the correlation coefficient is much greater than 0.99.

(3) The effectiveness and accuracy of the proposed method are verified, and the method can be applied in later shaking table tests.

Data Availability
The XLSX data used to support the findings of this study may be accessed by emailing the corresponding author, Wang Yanhua, at 101010371@seu.edu.cn.

Conflicts of Interest
The authors declare that they have no conflicts of interest.