Advances in Instrumentation and Monitoring in Geotechnical Engineering


Geotechnical instrumentation to monitor the performance of earth and earth-supported structures is becoming increasingly popular. Verification of long-term performance, validation of new theories, construction control, warning against impending failures, quality assurance, and legal protection are some of the many reasons for geotechnical instrumentation. Instruments are used not only in field situations but in laboratories as well. With recent advances in materials and technology, and the need for more stringent performance control, there have been significant developments in instrumentation and monitoring techniques in the recent past.
We are thankful to Hindawi Publishing Corporation for inviting us to act as Guest Editors of this special issue, whose main focus is to document recent advances in instrumentation and monitoring techniques in geotechnical engineering. Authors were invited to disseminate their research findings and recent advances in this area. In addition, the authors of about twenty papers submitted to the 12th Conference of the International Association for Computer Methods and Advances in Geomechanics (IACMAG), held in Goa, India, from 1 to 6 October 2008 under the theme "Geomechanics in the Emerging Social and Technological Age," were invited to upgrade or modify their manuscripts to meet the requirements of the journal.
The paper titled "Real-time monitoring system and advanced characterization technique for civil infrastructure health monitoring" presents two successful field applications of the shape-acceleration array (SAA) system, at an active bridge realignment site in The Netherlands.
The paper titled "Nonparametric monitoring for geotechnical structures subject to long-term environmental change" presents a nonparametric, data-driven monitoring methodology for geotechnical structures subject to long-term environmental change. To validate this methodology, data from a full-scale retaining wall, monitored for three years, were used.
The paper titled "Field assessment and specification review for roller-integrated compaction" presents an overview of two technologies: compaction meter value (CMV) and machine drive power (MDP), an overview of factors influencing statistical correlations, modeling for visualization and characterization of spatial nonuniformity, and a brief review of the specifications being used by the professionals.
The paper titled "Experimental and numerical study of at-rest lateral earth pressure of overconsolidated sand" presents an interesting experimental and numerical investigation of the at-rest lateral earth pressure exerted by sandy soil adjacent to retaining walls.
The paper titled "Seeing through the ground: the potential of gravity gradient as a complementary technology" describes a multisensor device to locate buried services.
The paper titled "Stability evaluation of volcanic slope subjected to rainfall and freeze-thaw action based on field monitoring" aims at clarifying the aspects related to the stability of in situ volcanic slopes subjected to rainfall and freeze-thaw action.

Introduction
Restoring and improving urban infrastructure is recognized by the National Academy of Engineering as one of the fourteen grand challenges for engineering (NAE, [1]), and according to the 2009 ASCE Report Card for America's Infrastructure, the current condition of U.S. infrastructure is rated "D" [2]. Aging civil infrastructure in the US, including bridges, levees, and dams, calls for urgent measures focusing on maintenance, repair, and renovation. Geotechnical structures, compared to other types of civil infrastructure, are more vulnerable to natural and human-induced hazards. For example, landslides in the Pacific Coast, Rocky Mountain, Appalachian Mountain, Hawaii, and Puerto Rico regions cause 25 to 50 fatalities and direct/indirect economic losses of up to $3 billion per year [3].
Structural health monitoring (SHM) is an emerging technique for the assessment of structural condition, hazards, and risks, consisting of three major components: sensing and instrumentation, data communication and archiving, and data analysis and interpretation. With the advent of today's powerful digital media and the Internet, the needs of the first two components have been readily met in many cases, but serious technical challenges remain for the third: how can voluminous sensor data be processed to obtain the critical information needed for decision making? The research community is overwhelmed by the complex and extensive nature of field data associated with the various factors of geotechnical phenomena. Some important challenges in processing field measurements are as follows.
(1) How can performance-related information (e.g., the condition of drainage systems) be disentangled from the effects of various environmental factors (e.g., diurnal and seasonal temperature change)?
(2) Field measurements are expensive and technically difficult, especially when the monitoring is long term. How can one perform reliable estimation with limited sensor data without sacrificing accuracy?
(3) Extensive modeling effort is required in current structural health monitoring practice for geotechnical structures. How can one reduce the modeling effort for geotechnical structures, whose material and structural characteristics vary widely?
(4) How can one deal with unavoidable and unpredictable sensor/instrument network problems and the loss of subsets of sensor data, which are commonly encountered in field data collection?
This paper discusses a reliable monitoring methodology for geotechnical structures that are subject to long-term environmental change and for which only very limited sensor measurements are available. The objective of the methodology is to provide information on when, where, and with what confidence field engineers should be deployed to the monitoring site in response to potential hazards to structural performance. The methodology should be robust enough to deal with the unavoidable malfunctioning of instrumentation devices during data collection.
This paper is organized as follows: some definitions and dilemma in current monitoring practices are discussed in Section 2. Sensing and modeling strategies of monitoring for complex geotechnical systems are discussed in Section 3. Understanding system identification techniques is important to develop reliable monitoring methodology. Recent developments of modeling and system identification techniques have been discussed: parametric approaches in Section 4 and nonparametric approaches in Section 5. A case study was conducted to demonstrate how monitoring methodology developed by the authors can be applied to realistic problems. The analysis results for a full-scale retaining wall subject to long-term environmental change are discussed in Section 6.

Some Definitions and Dilemma in Current Monitoring Practices
Inverse analysis and system identification techniques are necessary tools to evaluate the current performance of civil infrastructure systems using field measurement data. A system in inverse analysis can be expressed with a cause-response model, which consists of the causative force, the system characteristic function, and the system response, as shown in Figure 1. The causative force is usually an external force (e.g., soil pressure), and the system response is usually the resulting deformation (e.g., displacement). The system characteristic function determines system properties with linear or nonlinear relationships between the system input and output, associated with the spatial and temporal variation of soil properties and highly variable soil conditions. When earth structures are exposed to significant environmental variation (e.g., temperature and precipitation), system identification becomes more complicated because the system response reflects the combined effects of loads and environmental factors. This is where the conventional parametric approaches of system identification become difficult to implement.
The nonparametric methods, on the other hand, are data-driven identification techniques that do not require a priori knowledge on physics of target systems. Consequently, without relying on idealization and simplification in modeling, the same data processing methodology is applicable to different structure types. The nonparametric methods are also advantageous in dealing with deteriorating structures since nonparametric models are more flexible in dealing with time-varying systems than the parametric ones, which are modeled with physical assumptions and would not be valid once target structures are damaged. So far, system identification of geotechnical structures is primarily done using the parametric methods. In long-term monitoring of geotechnical systems, however, there could be significant discrepancy between system behavior and corresponding models for two reasons. First, soil conditions are highly variable. Although high-fidelity models coupled with complex soil behavior are already available (e.g., coupled thermo-hydro-mechanical models), to collect all necessary sensor data for parametric identification is very expensive and it is usually not feasible. Due to insufficient data for sophisticated models, simpler models are often employed, which ignore many significant environmental factors. Consequently, parameter estimation becomes inaccurate due to oversimplification. Second, structures deteriorate over time. A common challenge in modeling deteriorating systems is that deterioration could result in not only changes in system parameter values but also transformation of the monitored system into different classes of nonlinear systems. Moreover, the characteristics of the damaged systems are usually unknown, so that the systems cannot be parametrically modeled prior to the occurrence of actual damage.
One drawback of existing nonparametric approaches is that physical interpretation of identification results is not as straightforward as in the parametric methods, whose system parameters possess physical meaning (e.g., Young's modulus). Although some nonparametric approaches have been used in geotechnical applications, obtaining important performance-related information for maintenance decision making has rarely been emphasized in this class of methods. For example, the nonparametric Artificial Neural Networks technique described in Section 5.3 has been employed as an alternative to parametric regression methods using soil constitutive models (e.g., elastoplastic models), described in Section 4.1, to identify the complex nonlinear stress-strain relationship of soil. When soil strength degrades, the nonparametric method, unlike the parametric methods, could detect the change in soil mechanical properties, but it would not be able to interpret from the identification results what type of physical change has occurred. In order to overcome the above dilemma in current monitoring practices, it is desirable to combine the advantages of both sides: the modeling flexibility of the nonparametric methods and the physical interpretability of the parametric methods.

Sensing and Modeling Strategies
To reduce the high costs of sensor data collection associated with the high degree of spatial and temporal variability of geotechnical structures, the selection of what is to be measured is a critical issue. Three options are possible in sensing: the causative forces, the environmental factors, and the system response in Figure 1. The system response is the preferred measurement, since the other two do not contain information about the system characteristics; the system response carries the most abundant information about the entire system, reflecting the combined effects of the causative forces, the environment, and the system characteristic function. Using data that contain the information of the system characteristics is particularly important when one deals with deteriorating structures. A challenge, however, in dealing with system response data is that raw sensor data are usually difficult to interpret directly due to the interrelated effects of the components in the system. Thus, disentanglement techniques are needed to decompose the data into more easily manageable and physically understandable forms.

Figure 1: A schematic of the cause-response system model consisting of the causative forces (e.g., thermal pressure, soil weight, service loads), the environmental effects (e.g., ambient temperature, rain, snow, humidity), the system characteristic function, and the system response [4]. Using output-only (or response-only) data in modeling, the proposed approach does not require defining explicit relationships between the system input, environmental effects, and the system output, which are required in conventional parametric approaches.
To explain modeling strategies, Figure 2 summarizes the differences in system identification between parametric and nonparametric methods.
In nonparametric methods, response-only (or output-only) data are processed to find mathematical relationships embedded in the data. In order to deal with complicated raw system response (or system output) data, disentanglement techniques are applied prior to modeling. Once the system response data are processed, additional data on the causative forces (or system input) and/or environmental factors can be used as a posteriori information for physical interpretation. In model construction, therefore, the monitoring methodology does not require explicit relationships between the system input, the environment, and the system output, which are generally not known in geotechnical applications.
The above sensing and modeling methodology has several important advantages over existing (parametric) approaches, particularly in monitoring applications.
(1) Oversimplification problems can be avoided, especially when actual systems are complex and data are insufficient for sophisticated (parametric) input-output models, since the modeling process is solely data driven using response-only data.
(2) Modeling time and effort can be reduced significantly by using the same data processing procedures for different structure types, since the proposed approach is not limited to a specific type of structure (i.e., the model is not based on physical assumptions). For the same reason, the same procedures can be used for different sensor types.
(3) The proposed approach is more advantageous than conventional parametric approaches in dealing with deteriorating structures, which are often associated with unknown time-varying system characteristics.

Review of Parametric Approaches
In this section, recent developments of the parametric approaches are reviewed to provide background on parametric modeling, estimation, and optimization techniques.

4.1. Modeling. Two parametric modeling approaches for geotechnical systems are discussed: soil constitutive models and coupled thermo-hydro-mechanical (THM) models.

Soil Constitutive Models.
There exist various soil constitutive models. In the elastic model, the simplest constitutive model, the strain sustained under an applied load is assumed to be reversible: if the load is removed, the material springs back to its undeformed condition. With elastoplastic models, the level of model complexity increases by adding the effects of irreversible plastic strains; the soil is assumed to sustain both elastic and plastic strain, so that if the load is removed, the elastic strain is recovered while the soil retains permanent plastic deformation. Consequently, a key issue in elastoplastic modeling lies in describing the material plasticity. One branch of plasticity modeling is based on the concept of perfect plasticity [5]. Examples include the Tresca and von Mises models for perfect plasticity in cohesive soils, and the Mohr-Coulomb, Drucker-Prager, Lade-Duncan, Matsuoka-Nakai, and Hoek-Brown models for perfect plasticity in frictional materials. Another branch of plasticity modeling adopts the concept of critical states. In this approach, the soil is characterized with three major parameters: the mean effective stress, the shear stress, and the soil volume (or void ratio) [6]. The original Cam clay model and the modified Cam clay model belong to this category. The original Cam clay model was developed by researchers at Cambridge University as the first critical-state model; it predicts unlimited soil deformations without change in stress or volume once the critical state is reached in soft soil [7]. The modified Cam clay model, also formulated in plasticity theory, assumes that the voids between the solid particles are filled with water only (i.e., the soil is fully saturated); when the soil is loaded, water is expelled from the voids between the solid particles and, consequently, significant irreversible plastic volume change occurs.
Some limitations of the Cam clay models are described in Yu [5]. General descriptions on soil constitutive models can be found in Yu [5], Ling et al. [8], and Hicher and Shao [9].
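As an illustration of the critical-state idea, the modified Cam clay yield surface can be evaluated directly from the parameters mentioned above. The sketch below uses one common sign convention (compression positive) and hypothetical parameter values not taken from this paper.

```python
# Modified Cam clay yield function in (p', q) stress space, written with
# one common convention: p = mean effective stress (compression positive),
# q = deviatoric (shear) stress, M = slope of the critical-state line,
# p_c = preconsolidation pressure (the size of the yield ellipse).
# Parameter values below are hypothetical.
def mcc_yield(p, q, M, p_c):
    """f < 0: elastic state; f = 0: on the yield surface."""
    return (q / M) ** 2 + p * (p - p_c)

# A lightly loaded state lies inside the ellipse (elastic)...
assert mcc_yield(p=50.0, q=10.0, M=1.2, p_c=200.0) < 0.0
# ...while the critical state (p = p_c/2, q = M*p) lies exactly on it.
assert abs(mcc_yield(p=100.0, q=120.0, M=1.2, p_c=200.0)) < 1e-9
```

The sign of f is what an elastoplastic stress-update algorithm checks at every load increment to decide whether plastic straining occurs.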
The THM models express the sophisticated coupled relationships of heat and moisture transfer in deformable, partially saturated soil [15]. The freezing process is influenced by the interactions between water, temperature, and stresses in the soil: water migrates toward freezing fronts, frozen soil can contain unfrozen water below the freezing temperature, and the glaciation of water is influenced by the state of stress [38]. The formulation usually involves interrelated PDEs of the thermoelasticity of solids (T-M; interaction between the stress/strain and temperature fields through thermal stress and expansion) and poroelasticity theory (H-M; interaction between the deformability and permeability of porous media). The conservation equations of mass, energy, and momentum are usually closed with Hooke's law of elasticity, Darcy's law of flow in porous media, and Fourier's law of heat conduction [39]. The effects of precipitation on the moisture content of soil were studied by Troendle and Reuss [40], D'Odorico et al. [41], and Longobardi [42]. For the numerical solution of the conservation equations, the finite element method (FEM) is usually employed [39, 43].
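Of the coupled processes above, the thermal component alone already shows how a conservation law plus Fourier's law yields a solvable field equation. The minimal sketch below advances a 1D soil temperature profile by explicit finite differences; the geometry, diffusivity, and boundary values are hypothetical, and a full THM analysis would couple this field with the hydraulic and mechanical ones, usually via FEM.

```python
# One explicit finite-difference step of the 1D heat equation
#   dT/dt = alpha * d2T/dx2   (Fourier conduction),
# with fixed end temperatures (Dirichlet boundaries). All values are
# hypothetical; this isolates the T field of a coupled THM analysis.
def heat_step(T, alpha, dx, dt):
    Tn = T[:]  # copy; boundary nodes stay fixed
    for i in range(1, len(T) - 1):
        Tn[i] = T[i] + alpha * dt / dx ** 2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return Tn

# Soil column initially at 10 C; the surface node is suddenly held at -5 C
# (onset of freezing weather). Stability requires alpha*dt/dx^2 <= 0.5.
T = [-5.0] + [10.0] * 9          # 10 nodes, dx = 0.05 m
for _ in range(200):             # 200 steps of dt = 600 s
    T = heat_step(T, alpha=1e-6, dx=0.05, dt=600.0)
```

After the loop the cold front has diffused downward, so nodes near the surface are cooler than deeper ones, which is the ingredient the freezing-front migration described above builds on.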

Parameter Estimation.
For parametric models, the cause-response system can be expressed as

ŷ_k = h_k(x_k, x_{k−1}, . . . , x_{k−l} | θ),
ỹ_k = ŷ_k + η_k + ε_k, (1)

where
ỹ_k: observed (or measured) system output at time step k, with dimension (1 × m), where m is the total number of observational points (the number of sensors in in situ measurements);
ŷ_k: estimated system output based on the employed geomaterial constitutive model; in geotechnical engineering, the finite element method (FEM) is commonly used for the numerical solution of the constitutive equations, thus yielding ŷ_k;
ȳ_k = ỹ_k − ŷ_k: residual between the observed output ỹ_k and the estimated output ŷ_k; the residual includes the modeling error η_k and the measurement error ε_k, which are combined and usually indistinguishable in field measurements; in many applications the residual is assumed to satisfy ȳ_k ∼ N(0, Σ_y), in which Σ_y is an (m × m) covariance matrix of ȳ_k;
h_k: system function for a given system parameter vector θ; in the most general case, h_k is a stochastic, time-varying, nonlinear dynamic function;
θ: (p × 1) system parameter vector to be estimated;
x: known system input vector with memory of the l-th order; for static systems, l = 0.
The goal of system identification is to find the "best" estimates of the system parameters θ that minimize the residual ȳ_k. Many optimal estimation algorithms are available for this purpose, and they are usually classified into two approaches: parameter estimation methods and state estimation methods. The parameter estimation methods (also referred to as variational methods in some geotechnical literature) are described in this section, and the state estimation methods (also referred to as sequential methods in some geotechnical literature) are described in Section 4.3.
In parameter estimation, the most general objective function can be expressed as

J(θ) = J_o(θ) + β J_p(θ), (2)

with

J_o(θ) = Σ_k ȳ_k W_o^{−1} ȳ_k^T, (3)

J_p(θ) = (θ − θ_p)^T W_p^{−1} (θ − θ_p), (4)

where J_o(θ): objective function for the observational (or measurement) information of the system output; J_p(θ): objective function for the prior information of the system parameters; β: a positive scalar parameter, which adjusts the significance (weighting) between the observational information J_o(θ) and the prior information J_p(θ); W_o: covariance matrix of the measurement error, whose dimension is (m × m); W_p: covariance matrix of the prior-information error of the system parameters, whose dimension is (p × p); θ_p: previously known mean of the system parameters θ. Three parameter estimation methods are usually employed in geotechnical applications: (1) least square estimation, (2) maximum likelihood estimation, and (3) Bayesian estimation.

The Least Square Estimation (LSE).
The objective function of the LSE corresponds to the case in which the adjusting scalar parameter β = 0 in (2) and the covariance matrix of the measurement error satisfies W_o^{−1} = I in (3), where I is an (m × m) identity matrix, thus resulting in

J(θ) = Σ_k ȳ_k ȳ_k^T. (5)

With the condition β = 0, no prior information on the system parameters is used during the parameter estimation. Some application examples of the LSE in geotechnical engineering include those reported in [55] and by Xiang et al. [56].
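For a model that is linear in its parameters, the LSE objective above has the familiar normal-equation minimizer. The following sketch uses a synthetic design matrix and synthetic "sensor data"; the two-parameter model is invented for illustration.

```python
import numpy as np

# Least square estimation for a model linear in its parameters,
# y_hat = X @ theta: with beta = 0 and W_o = I, minimizing the sum of
# squared residuals gives the normal-equation solution
#   theta = (X^T X)^{-1} X^T y.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5])                     # hypothetical parameters

X = np.column_stack([np.ones(50), np.linspace(0.0, 10.0, 50)])
y = X @ theta_true + 0.01 * rng.standard_normal(50)    # noisy observations

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)          # LSE estimate
residual = y - X @ theta_hat                           # noise-sized leftovers
```

With small measurement noise the estimate lands very close to the true parameters, which is the behavior the cited geotechnical applications rely on.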

The Maximum Likelihood Estimation (MLE).
In the MLE method, the observational information of the measurements is used, and the measurement data are weighted according to their significance (i.e., W_o^{−1} ≠ I), but no prior information on the system parameters is used in the parameter estimation (i.e., β = 0). Therefore, the LSE can be seen as a special case of the MLE. The objective function of the MLE is

J(θ) = Σ_k ȳ_k W_o^{−1} ȳ_k^T. (6)

Some examples of using the MLE for geotechnical engineering applications are Ledesma et al. [57], Honjo and Darmawan [58], Ledesma et al. [59, 60], and Gens et al. [61].

The Conventional Bayesian Estimation (BE) and Extended Bayesian Estimation (EBE).
In the BE method, the system parameters are estimated using both the observational information of the measurements and the prior information of the system parameters, with equal significance given to the two sources of information (i.e., β = 1):

J(θ) = J_o(θ) + J_p(θ). (7)

The objective function of the EBE is more general than that of the BE, with a nonunit positive scalar adjusting parameter β:

J(θ) = J_o(θ) + β J_p(θ). (8)

If the adjusting parameter β is small, the prior information θ_p contributes less to the parameter estimation of θ, and vice versa. Optimal values of the adjusting parameter β can be determined, for example, with the cross-validation method [62], the ridge regression method [63], and the Akaike Information Criterion (AIC) [64-66].
The conventional BE and EBE methods are more sophisticated than the other estimation methods, but the Bayesian methods require more information on both the observational measurements and the prior knowledge of the system parameters. Therefore, the availability of the necessary information is important when applying the Bayesian methods.
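For a linear-in-parameters model with identity weighting matrices, the EBE objective has a closed-form minimizer, which makes the role of β easy to see. The sketch below uses synthetic data and a deliberately wrong prior mean; it is an illustration of the structure of the estimator, not code from the paper.

```python
import numpy as np

# Extended Bayesian estimation for a linear model with W_o = W_p = I:
# minimizing J(theta) = ||y - X theta||^2 + beta * ||theta - theta_p||^2
# gives theta = (X^T X + beta I)^{-1} (X^T y + beta theta_p).
# beta = 0 recovers plain least squares; a large beta pulls the estimate
# toward the prior mean theta_p. All numbers are synthetic.
def ebe_estimate(X, y, theta_p, beta):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + beta * np.eye(p),
                           X.T @ y + beta * theta_p)

X = np.column_stack([np.ones(20), np.linspace(0.0, 5.0, 20)])
y = X @ np.array([1.0, 3.0])           # noise-free observations for clarity
theta_p = np.array([0.0, 0.0])         # deliberately wrong prior mean

lse = ebe_estimate(X, y, theta_p, beta=0.0)      # observational fit only
heavy = ebe_estimate(X, y, theta_p, beta=1e6)    # prior-dominated fit
```

With β = 0 the data alone determine the estimate; with a very large β the estimate collapses toward the (wrong) prior, which is exactly the trade-off the cross-validation and AIC selection of β is meant to balance.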

State Estimation.
In state estimation methods, the system is identified by estimating its state at each time step using so-called filters; hence, state estimation is also referred to as sequential estimation. Among the numerous types of filters, the Kalman filter-based algorithms are the most widely used in geotechnical applications, including (1) the linear Kalman filter method and (2) the extended Kalman filter method. Some application examples of the Kalman filter methods in geotechnical engineering are given in the work of Murakami and Hasegawa [68], Kim and Lee [69], and Zheng et al. [70]. More general descriptions and details concerning the Kalman filter can be found in Mendel [71].
The underlying system model of the linear Kalman filter is the linear dynamic state transition equation

z_k = A_k z_{k−1} + B_k x_k + w_k, (9)

where z_k: true internal state at time step k, evolved from the previous state z_{k−1}; x_k: known system input at time step k; w_k: process noise, a zero-mean multivariate normal stochastic process, w_k ∼ N(0, Σ_{w_k}); A_k: linear state transition matrix applied to the previous state z_{k−1}; B_k: input matrix applied to the current system input x_k. The observed state of the system output can be expressed as

y_k = C_k z_k + v_k, (10)

where y_k: observed system output; C_k: observation matrix, which maps the true state space of z_k into the observed space of y_k; v_k: observation noise, zero-mean Gaussian white noise with v_k ∼ N(0, Σ_{v_k}). Using this underlying system model, the estimate of the state and the error covariance matrix of the estimated state are updated as

z_{k|k} = z_{k|k−1} + K_k ȳ_k,
P_{k|k} = (I − K_k C_k) P_{k|k−1}, (11)

where z_{k|k}: updated state at time step k given observations up to and including time step k; P_{k|k}: updated error covariance matrix of z_{k|k}; z_{k|k−1}: predicted state at time step k given observations up to and including time step k − 1, z_{k|k−1} = A_k z_{k−1|k−1} + B_k x_k; P_{k|k−1}: predicted error covariance matrix of z_{k|k−1}, P_{k|k−1} = A_k P_{k−1|k−1} A_k^T + Σ_{w_k}; ȳ_k: measurement residual, ȳ_k = y_k − C_k z_{k|k−1}; S_k: residual covariance matrix, S_k = C_k P_{k|k−1} C_k^T + Σ_{v_k}; K_k: optimal Kalman gain, K_k = P_{k|k−1} C_k^T S_k^{−1}. The Kalman filter in (11) is the optimal estimator in the sense of the minimum mean-square error of z_k − z_{k|k}.
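A scalar instance of the predict-update recursion above (A = 1, B = 0, C = 1), tracking a slowly varying state such as a displacement from noisy readings, can be coded in a few lines; all values below are synthetic.

```python
import numpy as np

# Scalar Kalman filter with A = 1, B = 0, C = 1. q and r are the process
# and measurement noise variances; all numbers are synthetic.
def kalman_1d(measurements, z0, P0, q, r):
    z, P, est = z0, P0, []
    for y in measurements:
        z_pred, P_pred = z, P + q          # predict step
        S = P_pred + r                     # residual covariance
        K = P_pred / S                     # Kalman gain
        z = z_pred + K * (y - z_pred)      # state update
        P = (1.0 - K) * P_pred             # covariance update
        est.append(z)
    return est

rng = np.random.default_rng(1)
true_state = 5.0
ys = true_state + rng.standard_normal(200)         # measurement noise, r = 1
est = kalman_1d(ys, z0=0.0, P0=10.0, q=1e-5, r=1.0)
```

The filtered estimate converges toward the true state while the individual measurements keep scattering with unit variance; with a larger q the filter would instead track a drifting state more aggressively.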

Extended Kalman Filter (EKF).
In the EKF, the underlying linear dynamic models are extended to nonlinear models,

z_k = f(z_{k−1}, x_k) + w_k, (12)
y_k = h(z_k) + v_k, (13)

where f and h are nonlinear functions. Instead of A_k and C_k in the linear Kalman filter method, the EKF uses the Jacobian matrices ∂f/∂z and ∂h/∂z, evaluated at the current state estimate.
In summary, the system in the state estimation can be identified by estimating its state at each time step using filters. Using the Kalman filter methods, it is possible to incorporate prior information in the observation data during the state estimation. Since the underlying system model of the linear Kalman filter method is a linear dynamic system, this method is usually not applicable to nonlinear geotechnical systems. The extended Kalman filter method can be used to identify such nonlinear systems.

Optimization.
Once an objective function with respect to unknown system parameters is constructed as shown in Section 4.2, the solution procedure uses standard optimization techniques to find the optimal values of the system parameters. Numerous optimization algorithms have been developed and used for general purposes of optimization in every field of science and engineering. General descriptions of optimization algorithms can be found in Bertsekas [72].
In geotechnical applications, the aim of the optimization process is usually to calibrate geotechnical models by finding a set of optimal values of the model parameters. The optimal values can be found with various optimization algorithms by minimizing the residuals between the measurement data (usually obtained from field or laboratory testing) and the synthetic data (usually obtained from finite element analysis for the numerical solution of the geotechnical models). In many geotechnical applications, however, the optimization surface contains many local minima and is sometimes nonconvex due to the complexity of material behavior and the coupled effects of temperature, moisture, and loads.
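As a toy illustration of such calibration, the sketch below recovers a single parameter of a hypothetical model from synthetic "measurements" by minimizing the sum of squared residuals. The model, data, and grid are invented; the coarse global grid search is one simple safeguard against the local minima mentioned above, after which a gradient-based method could refine the best grid point.

```python
import math

# Toy calibration: the objective surface of g(theta, t) = sin(theta * t)
# is nonconvex in theta with many local minima, so a coarse global grid
# search precedes any local (gradient-based) refinement.
def model(theta, t):
    return math.sin(theta * t)           # hypothetical model response

data = [(t, model(0.5, t)) for t in range(10)]     # truth: theta = 0.5

def objective(theta):
    return sum((y - model(theta, t)) ** 2 for t, y in data)

grid = [i * 0.001 for i in range(3001)]            # theta in [0, 3]
theta_best = min(grid, key=objective)              # coarse global search
```

The grid search lands at the global minimum near the true parameter even though several local minima exist elsewhere in the interval.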

Review of Nonparametric Approaches
Nonparametric approaches have also been applied to a variety of geotechnical problems. In this section, recent developments of nonparametric data processing techniques for geotechnical systems are reviewed.

Time Series Analysis.
In time series analysis, the dynamic response of target systems can be analyzed with a discrete time series expansion model of the system input and output. One such model is the autoregressive moving average (ARMA) model, which can be formulated as

y_k = Σ_{i=1}^{na} a_i y_{k−i} + Σ_{i=0}^{nb} b_i x_{k−i} + e_k, (14)

where x_k: observed (or measured) system input at time step k; y_k: observed (or measured) system output at time step k; na: order of the autoregressive (AR) part Σ_{i=1}^{na} a_i y_{k−i}; nb: order of the moving average (MA) part Σ_{i=0}^{nb} b_i x_{k−i}; e_k: white, exogenous noise.
Using the ARMA model, the characteristics of the measured time histories of the system input and output can be determined by identifying the expansion coefficients (the a's and b's) from the measured data. The optimal coefficient values can be determined using various optimization algorithms, as discussed in Sections 4.2 and 4.4. A general description of time series analysis methods can be found in Box and Jenkins [82].
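As a minimal illustration, the coefficients of a synthetic output-only AR(2) process can be identified by linear least squares over lagged outputs; the coefficient values and noise level below are invented, and with input terms added the same regression would identify the b coefficients as well.

```python
import numpy as np

# Identifying the coefficients of a synthetic output-only AR(2) process,
#   y_k = a1*y_{k-1} + a2*y_{k-2} + e_k,
# by linear least squares over lagged outputs.
rng = np.random.default_rng(2)
a1_true, a2_true = 1.5, -0.7          # a stable, oscillatory AR(2)

y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1_true * y[k - 1] + a2_true * y[k - 2] + 0.1 * rng.standard_normal()

Y = np.column_stack([y[1:-1], y[:-2]])              # y_{k-1}, y_{k-2}
coeffs, *_ = np.linalg.lstsq(Y, y[2:], rcond=None)  # [a1_hat, a2_hat]
```

With a few hundred samples the recovered coefficients land close to the true values, and their location in the complex plane (via the characteristic roots) carries the modal frequency and damping information exploited in the cited studies.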
Some application examples of time series analysis methods for geotechnical systems include Glaser [83], Glaser and Leeds [84], Glaser and Baise [85], Baise et al. [86], and Glaser [87]. In Glaser and Baise [85], a technique for mapping the identified time series coefficients to relevant soil physical properties was discussed, which is considered a parametric approach in their paper. The empirical mode decomposition (EMD) and the Hilbert-Huang transform (HHT) are described in Huang et al. [88, 89] and Huang and Attoh-Okine [90]. One advantage of these techniques lies in dealing with long-term natural processes, which are commonly nonlinear and nonstationary. The EMD and HHT are widely used in various fields of science and engineering: meteorology and atmospheric physics [91-96], earthquake engineering, structural health monitoring (SHM), and control for civil structures [97-102].

Time-Frequency Analysis.
For an arbitrary time series x(t), an analytic signal z(t) can be obtained using the Hilbert transform. Let y(t) be the Hilbert transform of x(t),

y(t) = (1/π) P ∫_{−∞}^{∞} x(τ)/(t − τ) dτ, (15)

where P is the Cauchy principal value. The analytic signal is then

z(t) = x(t) + i y(t) = a(t) e^{iθ(t)}, (16)

where

a(t) = [x²(t) + y²(t)]^{1/2}, θ(t) = arctan(y(t)/x(t)). (17)

In (15), it should be noted that the Hilbert transform is the convolution of x(t) with 1/t, which emphasizes the local properties of x(t). In addition, (17) provides the best local fit of x(t) with the time-dependent functions a(t) and θ(t). Finally, the instantaneous frequency is defined as

ω(t) = dθ(t)/dt. (18)

In order to obtain physically meaningful instantaneous frequencies, Huang et al. [88] suggested the decomposition of a complex original time series into multiple so-called intrinsic mode functions (IMFs) that represent the oscillatory modes embedded in the original signal; the instantaneous frequencies are then determined for the decomposed IMFs. The signal x(t) can be expressed as the series of IMFs

x(t) = Σ_{k=1}^{m} IMF_k(t) + r(t), (19)

where IMF_k is the k-th intrinsic mode function, m is the number of IMFs, and r(t) is the residual.
The IMF is defined to have the properties of local zero means and the same numbers of zero crossings and extrema throughout the time series for the IMF to be only one mode of oscillation without complex riding waves. A difference from the Fourier-based signal processing methods is that the IMF is not restricted to be single banded and can be nonstationary. Several EMD algorithms have been developed using the so-called sifting process [104,105].
The HHT is a time-frequency analysis technique; combined with the EMD, a time-frequency plot can be obtained for each IMF to visualize frequency change over time. The HHT is similar to the wavelet transform (WT) as a nonstationary data processing technique, but the HHT is not limited by the underlying basis functions as the WT is.

Black-Box Methods.
One technical difficulty in the identification of complex (nonlinear) geotechnical systems is that the system characteristic function in Figure 1 is usually unknown beforehand, so it is not possible to establish exclusive relationships between the system input and the system output. This case is often encountered when the systems being identified are in the field and subject to various environmental effects, or when systems have evolved into a different class of nonlinearity after unpredictable structural damage. Black-box methods can be used when the physical relationships between the system input and the system output are unknown.
The artificial neural network (ANN) technique, inspired by biological neural networks, has been shown to be a powerful tool for developing model-free representations of nonlinear systems. ANNs consist of an interconnected group of artificial neurons that form the input layer, hidden layers, and output layer for arbitrary multi-input multi-output (MIMO) systems, as shown in Figure 3. Employing various optimization algorithms, the input-output relationships can be determined by finding the optimal values of the weights and biases of the artificial neurons. Detailed descriptions of the ANN method can be found in Fausett [106] and Gurney [107].
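A minimal forward pass of such a network can be sketched as follows. The layer sizes and random weights are arbitrary illustrations standing in for the values a training algorithm would find by minimizing input-output prediction error; this is not code from any of the cited studies:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a fully connected feedforward network:
    tanh activations in the hidden layers, linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)               # hidden layers
    return a @ weights[-1] + biases[-1]      # linear output layer

# A hypothetical 3-input, 2-output MIMO network with one hidden layer
# of 8 neurons (weights here are random placeholders, not trained).
rng = np.random.default_rng(0)
sizes = [3, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = mlp_forward(np.ones((5, 3)), weights, biases)  # batch of 5 samples
```

Training would then adjust `weights` and `biases` (e.g., by backpropagation) so that `mlp_forward` reproduces observed input-output pairs.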
The ANN techniques have been used in a wide range of geotechnical applications including pile capacity, settlement of foundations, characterization of soil properties and behavior, liquefaction, site characterization, earth retaining structures, slope stability, tunnels, and underground openings [103]. Some technical challenges for the ANN modeling in geotechnical engineering are discussed in Jaksa et al. [108].

Response-Only Models.
Response-only methods are defined as methods that use no system information in their data processing procedures. Blind source separation (BSS) is one of these. BSS is a multivariate, nonparametric technique that separates unknown system inputs (or "sources") based on observed system outputs (or "responses"), with little or no information about the system input or the system function. BSS includes several response-only techniques, such as the principal component analysis (PCA) for statistically uncorrelated multivariate system inputs and the independent component analysis (ICA) for statistically independent multivariate system inputs. General descriptions of the PCA and ICA methods can be found in Hyvärinen et al. [109].
The principal component analysis (PCA) method, also known as the proper orthogonal decomposition (POD) or the Karhunen-Loève transform, is a multivariate statistical technique [110]. Two algebraic solutions of the PCA are commonly used: (1) the eigenvector decomposition of the covariance matrix and (2) the singular value decomposition approach. The first solution is described in this section. For an (m × n) observation data set X = [x_1; …; x_m], where x_i is an (n × 1) vector associated with sensor i, the goal of the algebraic solution is to find the orthonormal matrix of principal components P, where

Y = PX,

which renders the covariance matrix C_Y diagonal. The covariance matrix can be determined from

C_Y = (1/n) Y Yᵀ = P [(1/n) X Xᵀ] Pᵀ,

such that choosing the rows of P as the eigenvectors of the covariance matrix of X diagonalizes C_Y. This relies on the eigendecomposition of a symmetric matrix,

A = V λ Vᵀ,

where A is an (m × m) symmetric matrix, V is the (m × m) matrix of eigenvectors arranged as columns, and λ is the (m × m) diagonal matrix of the eigenvalues. The PCA is limited by its global linearity because the PCA removes linear correlations among the observed data and is only sensitive to second-order statistics [111,112]. Some geotechnical applications of the PCA include Dai and Lee [113], Komac [114], and Folle et al. [115].
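The eigendecomposition solution can be sketched in a few lines. The data below are synthetic and only illustrate the key property that Y = PX has a diagonal covariance matrix; the channel construction is an arbitrary assumption for the example:

```python
import numpy as np

def pca_eig(X):
    """PCA via eigendecomposition of the covariance matrix.
    X is (m x n): m sensors (rows), n samples (columns). Returns the
    principal components as the rows of P, and the eigenvalues sorted
    in decreasing order, so Y = P @ Xc has a diagonal covariance."""
    Xc = X - X.mean(axis=1, keepdims=True)   # remove per-sensor means
    C = Xc @ Xc.T / Xc.shape[1]              # (m x m) covariance matrix
    lam, V = np.linalg.eigh(C)               # symmetric eigendecomposition
    order = np.argsort(lam)[::-1]
    return V[:, order].T, lam[order]

# Synthetic example: two strongly correlated channels plus one
# independent channel, so the first mode carries most of the energy.
rng = np.random.default_rng(1)
s = rng.normal(size=1000)
X = np.vstack([s,
               0.9 * s + 0.1 * rng.normal(size=1000),
               rng.normal(size=1000)])
P, lam = pca_eig(X)
Y = P @ (X - X.mean(axis=1, keepdims=True))
C_Y = Y @ Y.T / Y.shape[1]                   # diagonal up to round-off
```

The ratio of each eigenvalue to the eigenvalue sum gives the energy contribution of the corresponding mode, the quantity reported for the retaining-wall modes later in the paper.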

Case Study: Monitoring for Full-Scale Retaining Walls Subject to Long-Term Environmental Change
In order to demonstrate the benefits of the nonparametric methodologies discussed in Section 2, a case study was conducted using a full-scale reinforced concrete retaining wall 13.59 m in height. Because the wall stood only 9.5 m away from a high-rise residential apartment building, its collapse would result in a catastrophic disaster. The backfill soil characteristics were not known, and the soil behavior (e.g., pore water pressure or soil temperature) was not monitored. The material properties of the reinforced concrete were also unknown, and the plan of the retaining wall was not available. The retaining wall was monitored for three years with three tilt sensors located at the upper, middle, and lower parts of the wall (13.14 m, 6.55 m, and 1.68 m from the ground). At the same locations as the tilt gauges, the surface temperatures were also measured. Therefore, a total of six sensors (i.e., three tilt gauges and three surface temperature sensors) were used and wired to a data logger equipped with a digitizer and local storage. The sensor readings were sampled once every hour (1 sample/hr) for all channels. Consequently, given the lack of information in terms of measurement types, temporal and spatial resolution of the measurements, and knowledge of the monitored structure, conventional parametric identification approaches could not be used in this study. Furthermore, although wall surface temperature data were collected, only the tilt data were used in this analysis, to demonstrate that important performance-related information on the retaining wall can be obtained using response-only data, without relying on additional data about the causative force and environment in the data processing procedures.
As described in Section 3, since the inverse analysis using response-only data is not based on explicit input-output relationships of the system, which cannot be accurately determined given the limited information on structural characteristics and sensor measurements, the oversimplification problem often observed in conventional parametric approaches can be avoided. Environmental measurements are used only as a posteriori information for the physical interpretation of the inverse analysis results, which is commonly not straightforward in other nonparametric approaches. If this approach is successful, expensive data collection costs could also be reduced (Figure 4).

Figure 4: A full-scale retaining wall used in this study. The wall is an L-type cantilever reinforced concrete wall 13.59 m high and is subject to long-term environmental variations [4].

The tilt time histories measured from the retaining wall are shown in Figure 5. The slope is in microradians, and a plus sign denotes slope toward the apartment side. The slope signals at all three locations were significantly affected by seasonal and daily variation: decreasing during summer and increasing during winter, and decreasing during the day and increasing at night (the daily trends are not clearly visible in the figure due to scale). During this three-year monitoring period, the wall behavior was affected by temperature change in addition to rainfall and snowfall, freeze-thaw of the backfilled soil, soil-structure interaction, and so on. Figure 5 also shows that the collected sensor data are partially incomplete. The lower sensor failed in Q1 2006 (approximately after one year), and data were missing for all sensors for about three months in Q4 2006 due to instrument failure.
These unavoidable and unpredictable sensor and instrumentation problems are frequently encountered in long-term field measurements, and the proposed nonparametric methodology should be robust enough to handle them. The figure thus illustrates the scarcity of the available data relative to the complexity of the given problem, a situation commonly encountered in many geotechnical applications.
Three nonparametric data processing techniques were used: the empirical mode decomposition (EMD) and the Hilbert-Huang transform (HHT) for single-channel (or univariate) analysis, and the principal component analysis (PCA) for multichannel (or multivariate) analysis. A summary of the nonparametric data processing approaches is provided in Table 1.
A brief description of the EMD-HHT was given in Section 5.2, and the analysis procedures of the EMD-HHT are summarized in Figure 6.

Table 1: A summary of the nonparametric identification approaches employed in the case study for a full-scale retaining wall subjected to long-term environmental variations.

Empirical mode decomposition (EMD) — Univariate: (i) to decompose nonlinear and nonstationary environmental variations of daily, seasonal, and long-term trends from raw sensor measurements; (ii) to decompose complex raw measurements into simpler and physically "well-behaving" intrinsic mode functions for better understanding of the system.

Hilbert-Huang transform (HHT) — Univariate: (i) to obtain the instantaneous frequencies of nonlinear, nonstationary, time-varying systems; (ii) the obtained instantaneous frequencies can be used to detect "abnormal" changes in system characteristics over time.

Principal component analysis (PCA) — Multivariate: (i) to find interchannel relationships in multi-input data (note that the EMD and HHT are single-channel data processing techniques); (ii) to visualize the mode shapes of the system decomposed by the corresponding orthogonal principal components; (iii) to quantify the energy of interchannel motions for each mode shape and find the dominant one.

Due to the complexity of the geotechnical system coupled with long-term environmental variation, the raw sensor data shown in Figure 6 are usually too complicated to be interpreted for performance assessment. Thus, a daily trend was disentangled from the raw signal using the EMD, based on its period of one day, even with three months of missing data in the second year; a sample result is shown in Figure 6(b). The disentangled daily trend of the slope is mostly influenced by the daily fluctuation of the wall surface temperature (i.e., the wall inclined toward the apartment during the daytime and toward the backfill during the night). Once the daily trend was disentangled, the instantaneous frequency of the daily trend was obtained using the HHT, as shown in the time-frequency plot of Figure 6(c). The time history of the daily trend has a period of one day, and the corresponding instantaneous frequency has a baseline of one cycle per day, as shown in Figures 6(b) and 6(c). Occasional amplitude reductions are observed in the time history (e.g., 3/11, 3/15, 3/21, and 4/5 through 4/9) in Figure 6(b), and during these times the corresponding instantaneous frequencies become significantly larger than the baseline frequency. Hourly precipitation records collected separately at the weather station nearest to the wall site are plotted in Figure 6(d); the precipitation data were not used in our analysis. Interestingly, the comparison with the instantaneous frequency in Figure 6(c) shows that the peaks of the instantaneous frequency concur with precipitation events, and the frequency decreases back to the baseline (i.e., one cycle per day) when the precipitation stops.
These results demonstrate an important advantage of the nonparametric techniques over conventional parametric methods in monitoring applications. Without a priori information, physical assumptions, or oversimplification of the monitored structure, the daily trend can be disentangled from a complicated raw slope signal. With the occurrence of precipitation, the normal pattern in the slope signal (i.e., the system response in Figure 1) is "disturbed" due to the change of the structural characteristics with increased water content in the backfill (i.e., the system characteristic function). Consequently, the pattern of the disentangled daily trend is also disturbed in its amplitude and frequency. After the precipitation stops, the pattern in the raw slope time history returns to the normal condition, provided the drainage system is working and draining away the excess water in the soil, and so does the pattern of the disentangled daily trend. If, after the precipitation stops, the pattern of the disentangled signal did not return to normal (i.e., the instantaneous frequency in Figure 6(c) did not go back to the baseline frequency), it could be concluded that the drainage system is not working properly. A critical difference between using the raw and the processed signals is that the raw signal is too complicated for the precipitation effect to be recognized, because it is overshadowed by other dominant, non-performance-related effects such as temperature, as shown in Figure 6(a); the important drainage-related information can be extracted from the disentangled signal, as shown in Figure 6(c).
The principal component analysis (PCA) technique was used as a multi-sensor analysis method.
A brief description of the PCA was provided in Section 5.4. In order to find the optimal window size, the statistics of the first PCA mode shape, which is associated with the largest contribution to the energy of the total wall motion, were calculated. Figure 7 shows the mean values of the eigenvectors as dashed lines with one-standard-deviation (1σ) uncertainty in the shaded areas. The statistics were calculated with different window sizes (i.e., numbers of days) of up to 60 days; a window of one-day duration includes 24 data points for the given sampling rate of 1 sample/hr. Since the expectation of the PCA mode shape becomes statistically unbiased after 14 days (i.e., the mean and deviation values begin to saturate), a window size of two weeks was selected for the PCA in this study. Figure 8 shows the PCA mode shapes with one-standard-deviation (1σ) error bars. In the figure, the mode shapes of the wall slopes were converted to displacements using the known heights of the sensor locations. The μ and σ in parentheses are the mean and standard deviation of the eigenvalue corresponding to each mode, normalized by the sum of the eigenvalues of all modes.

Figure 6: (b) The daily trend was disentangled from the complex raw signal using the EMD. (c) The disentangled daily trend was processed using the HHT to obtain instantaneous frequencies over time; the baseline frequency remains at one cycle per day, but occasional peaks were observed. (d) Precipitation records measured separately at a weather station near the wall site were compared on the same time scale. Concurrence was observed between the peaks in the instantaneous frequencies and the precipitation records: the instantaneous frequency increased when precipitation began and decreased when it stopped, which implies that the drainage system is performing satisfactorily.

Although no physical characteristics information was used, Figures 8(a)-8(c) illustrate that the PCA mode shapes agree with the first, second, and third bending modes of a cantilever. The PCA eigenvalues show that the motion of the first mode is dominant: 97.3% of the entire motion energy, with a standard deviation of 2.1%. This dominant motion is clearly due to the significant daily and seasonal trends shown in Figure 5, which are mostly attributable to diurnal and seasonal temperature variation. For the purpose of structural health monitoring, this dominant low-order mode is less interesting, since the important information for condition assessment is performance related, not environment related. In addition, structural damage is usually a localized phenomenon, so higher modes would have better spatial resolution for detecting it. Figure 8(e) shows that an excessive amount of movement was observed after the failure of the bottom sensor, which is unusual for a cantilever-type wall structure. The mean contribution of the first mode to the total energy of the wall motion was reduced from 97.3% (with a standard deviation of 2.1%) to 82.3% (with a standard deviation of 14.3%), and that of the second mode increased from 2.3%.

Based on the single-channel and multichannel analysis results discussed in Section 6, the following important facts can be concluded for general monitoring applications of geotechnical structures. Using the disentangled signals (e.g., in the EMD-HHT in Figure 6), when and where the abnormal behaviors occur can also be determined.
(iii) Using the statistics (e.g., error bars) of the eigenvalues and eigenvectors of the PCA modes in Figure 8, the confidence levels for detecting abnormal behaviors can be quantified in combination with standard statistical hypothesis tests or classification techniques. It should be noted that since the PCA modes are statistically uncorrelated (or statistically independent for the independent component analysis), uncertainty quantification can be carried out with three one-dimensional integrals (one for each slope measurement) for the statistical tests, rather than a triple integral. For example, it was observed that the cross-correlation values of the PCA eigenvalues between different modes are low (less than 0.6404), as summarized in Table 2. This property is particularly important when a large number of sensors are used.

Summary and Conclusions
The modeling procedures of the nonparametric methods are data driven and not based on a priori physical knowledge of the monitored structure. Therefore, the methodology developed by the authors is not limited to a specific type of structure but could be applicable to a wide range of monitoring applications for different geotechnical structures. Given the diversity of the characteristics of geotechnical structures, the nonparametric methodology could significantly reduce the modeling effort in various monitoring applications, which has been a technical barrier for conventional parametric approaches.
Important performance-related information (e.g., effects of drainage or malfunctioning sensors) could be obtained using a very limited amount of response-only sensor data (i.e., three tilt time histories). The decomposition techniques used in this study could disentangle the response deformation data of a complex system subject to long-term environmental variations without information on the causative force, the environment, or the structural characteristics. For example, since the precipitation records were not used in the EMD-HHT, it was demonstrated that oversimplification problems could be avoided using response-only analysis techniques that are not based on exclusive input-output relationships. Therefore, the nonparametric methodology discussed in this paper could provide important information on when, where, and with what confidence engineers should be deployed to the site for potential performance hazards of monitored structures, using very little information and without sacrificing the accuracy of the inverse analysis. The common practical problems of unpredictable sensor and instrument network malfunctions could also be dealt with effectively by the nonparametric methodology.

Introduction
Roller-integrated compaction monitoring (RICM) technologies refer to sensor measurements integrated into compaction machines. Work in this area was initiated over 30 years ago in Europe for smooth drum rollers compacting granular soils and involved instrumenting the roller with an accelerometer and calculating the ratio of the amplitude of the first harmonic to that of the fundamental vibration frequency [1,2]. Modern sensor, computer, and global positioning system (GPS) technologies now make it possible to collect, transmit, and visualize a variety of RICM measurements in real time. As a quality assessment tool for compaction of earth materials, these technologies offer tremendous potential for controlling the construction process in the field to meet performance quality standards. Recent efforts in the United States (US) have focused attention on how RICM technologies can be used in road building [3-5] and on relating selected RICM parameters to mechanistic pavement design values.
Several manufacturers currently offer RICM technologies on smooth drum vibratory roller configurations for compaction of granular materials and asphalt, and on nonvibratory roller configurations for compaction of cohesive materials.
The current technologies calculate (1) an index value based on a ratio of selected frequency harmonics over a set time interval for vibratory compaction [1,2], (2) ground stiffness or dynamic elastic modulus based on a drum-ground interaction model for vibratory compaction [6-8], or (3) a measure of rolling resistance calculated from machine drive power (MDP) for vibratory and nonvibratory compaction [9]. When the accelerometer-based measurement system provides automatic feedback control of the roller vibration amplitude, frequency, and/or roller speed, it is referred to as "intelligent" compaction and offers the advantage of reducing the potential for drum "bouncing." The MDP approach has the advantage of working in both vibratory and static modes [9] and has its origin in the discipline of terramechanics. Recent findings from the Mars Exploration Rover (MER) mission demonstrated that the MDP approach can be applied to determine Martian regolith cohesion and friction angle by monitoring the electromechanical work expended [10]. Future RICM technologies may provide information on soil mineralogy and moisture content but are currently only a subject of research.
Regardless of the technology, by making the compaction machine a measuring device and ensuring that compaction requirements are met during construction, the compaction process can be better controlled to improve quality, reduce rework, maximize productivity, and minimize costs [11]. Recent advancements in global positioning systems (GPS) add a significant benefit: real-time spatial viewing of the RICM values. Some of these technologies have recently been implemented on full-scale pilot earthwork and asphalt construction projects in the US [12-18], and their use is anticipated to increase in the coming years. Effective implementation of this technology requires a proper understanding of the relationships between RICM values and traditional in situ point test compaction measurements (e.g., static or dynamic plate load test modulus, density, etc.). This builds confidence in the technology and provides insight into the key parameters affecting the machine measurement values.
The purpose of this paper is to provide (a) an overview of two technologies, compaction meter value (CMV) and machine drive power (MDP); (b) a summary of field evaluation studies; (c) an overview of factors influencing the statistical correlations; (d) modeling for visualization and characterization of spatial nonuniformity; and (e) a brief review of current specifications.

Overview of CMV and MDP Technologies
Compaction meter value (CMV) is a dimensionless compaction parameter developed by Geodynamik that depends on roller dimensions (i.e., drum diameter and weight) and roller operation parameters (e.g., frequency, amplitude, speed) and is determined from the dynamic roller response [19,20]. CMV is calculated using (1):

CMV = C · (A_2Ω / A_Ω), (1)

where C is a constant (taken as 300 for the results presented in this paper), A_2Ω is the acceleration amplitude of the first harmonic component of the vibration, and A_Ω is the acceleration amplitude of the fundamental component of the vibration [8]. According to Geodynamik [21], CMV at a given point indicates an average value over an area whose width equals the width of the drum and whose length equals the distance the roller travels in 0.5 seconds. At least two manufacturers have used the CMV technology as part of their RICM systems (Figure 1). The Geodynamik system also measures the resonant meter value (RMV), which provides an indication of the drum behavior (continuous contact, partial uplift, double jump, rocking motion, or chaotic motion). RMV is not discussed in detail here, but it is important to note that the drum behavior affects the CMV measurements [6]; therefore, CMV must be interpreted in conjunction with RMV [22].
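Equation (1) reduces to a one-line computation. The amplitudes below are arbitrary illustrative numbers, not measurements from any study cited here:

```python
def cmv(a_fundamental, a_first_harmonic, c=300.0):
    """CMV = C * A_2omega / A_omega (eq. 1), dimensionless."""
    return c * a_first_harmonic / a_fundamental

# Illustrative amplitudes only: a stronger first harmonic relative to
# the fundamental (a stiffer ground response) yields a higher CMV.
soft = cmv(a_fundamental=10.0, a_first_harmonic=0.2)
stiff = cmv(a_fundamental=10.0, a_first_harmonic=1.0)
```

In practice the two amplitudes come from the spectrum of the drum accelerometer signal over the 0.5 s averaging interval described above.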
Machine drive power (MDP) technology relates the mechanical performance of the roller during compaction to the properties of the compacted soil. The use of MDP as a measure of soil compaction is a concept that originated from the study of vehicle-terrain interaction [23]. The basic premise of determining soil compaction from changes in equipment response is that the efficiency of mechanical motion pertains not only to the mechanical system but also to the physical properties of the material being compacted. More detailed background information on the MDP system is provided in [9]. The basic formula for MDP is

MDP = P_g − WV(sin α + a/g) − (mV + b), (2)

where P_g = gross power needed to move the machine (kJ/s), W = roller weight (kN), a = machine acceleration (m/s²), g = acceleration of gravity (m/s²), α = slope angle (roller pitch from a sensor), V = roller velocity (m/s), and m (kJ/m) and b (kJ/s) = machine internal loss coefficients specific to a particular machine [9]. The second and third terms of (2) account for the machine power associated with sloping grade and internal machine loss, respectively. MDP is a relative value referenced to the material properties of the calibration surface, which is generally a hard compacted surface (MDP = 0 kJ/s). Positive MDP values therefore indicate material that is less compact than the calibration surface, while negative MDP values indicate material that is more compacted than the calibration surface (i.e., less roller drum sinkage).
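Equation (2) can be sketched directly. All numeric inputs below are hypothetical and serve only to show that MDP is zero on the calibration surface by construction and positive on softer ground:

```python
import math

def mdp(p_gross, weight, accel, slope, velocity, m_loss, b_loss, g=9.81):
    """MDP = Pg - W*V*(sin(alpha) + a/g) - (m*V + b)  (eq. 2), in kJ/s.
    weight in kN, velocity in m/s, slope angle (alpha) in radians."""
    return (p_gross
            - weight * velocity * (math.sin(slope) + accel / g)
            - (m_loss * velocity + b_loss))

# Hypothetical machine at constant speed on level grade: on the hard
# calibration surface the gross power just covers internal losses, so
# MDP = 0; softer ground demands more gross power, so MDP > 0.
calibration = mdp(p_gross=20.0, weight=100.0, accel=0.0, slope=0.0,
                  velocity=2.0, m_loss=5.0, b_loss=10.0)
soft_ground = mdp(p_gross=32.0, weight=100.0, accel=0.0, slope=0.0,
                  velocity=2.0, m_loss=5.0, b_loss=10.0)
```

The grade term WV sin α matters on slopes, and the aV/g term corrects for speed changes, which is why the pitch sensor and accelerometer readings enter the calculation.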
In some recent field studies [13], the MDP output has been scaled to MDP80 or MDP40, depending on the machine settings, with the values recalculated to range between 1 and 150 using (3) and (4), respectively.

Effective use of RICM technologies is aided by the integration of GPS position information and an on-board computer monitor (Figure 1), which displays the roller location, machine measurement values (i.e., CMV or MDP), vibration amplitude and frequency, and roller speed. Thus, the technology enables a roller operator to make judgments regarding the condition of the compacted fill material in real time. If real-time kinematic (RTK) GPS systems are used, those systems reportedly have position accuracies of about ±10 mm in the horizontal plane and ±20 mm in the vertical plane [24].

Field Evaluation of CMV and MDP Technologies
Field evaluation studies beginning in about 1980 have documented correlations between RICM measurements and various traditionally used point measurements. A summary of key findings from these different studies, the types of rollers used, and the materials tested is provided in Table 1. The in situ point measurements used in these studies include (a) the nuclear gauge (NG), electrical soil density gauge (SDG), water balloon method, sand cone replacement method, radio isotope method, "undisturbed" Shelby tube sampling, and drive core samples to determine moisture content and dry unit weight; (b) the light weight deflectometer (LWD), soil stiffness gauge (SSG), static plate load test (PLT), falling weight deflectometer (FWD), Briaud compaction device (BCD), dynamic seismic pavement analyzer (D-SPA), and Clegg hammer to determine stiffness or modulus; and (c) the dynamic cone penetrometer (DCP), cone penetration testing (CPT), "undisturbed" Shelby tube sampling, and rut depth measurements under heavy test rolling to determine shear strength or California bearing ratio (CBR).
Most of the field studies involved constructing and testing controlled field test sections for research purposes and correlation development, while a few studies were conducted on full-scale earthwork construction projects where RICM was implemented as part of the project specifications [12,13].
Based on the findings from a comprehensive correlation study conducted on 17 different soil types from multiple project sites as part of the National Cooperative Highway Research Program (NCHRP) 21-09 project [25], the factors that commonly affect the correlations are as follows: (1) heterogeneity in underlying layer support conditions, (2) high moisture content variation, (3) a narrow range of measurements, (4) machine operation setting variation (e.g., amplitude, frequency, speed, and roller "jumping"), (5) nonuniform drum/soil contact conditions, (6) uncertainty in spatial pairing of point measurements and roller MVs, (7) a limited number of measurements, (8) insufficient information to interpret the results, and (9) intrinsic measurement errors associated with the RICM and in situ point measurements.
In general, results from controlled field studies indicate that statistically valid simple linear or simple nonlinear correlations between RICM values and compaction layer point-MVs (e.g., modulus or density) are possible when the compaction layer is underlain by a relatively homogeneous and stiff/stable supporting layer. For example, Figure 2 presents simple linear regression relationships between CMV and in situ LWD modulus and dry density point-MVs obtained from a calibration test strip with plan dimensions of 30 m × 2 m. The test strip consisted of silty sand with gravel base material underlain by a very stiff fly ash stabilized subgrade layer. For this case, correlations between CMV and both LWD modulus and dry density measurements showed R² > 0.8.
In contrast, many field studies summarized in Table 1 indicate that modulus- or stiffness-based measurements (i.e., determined by FWD, LWD, PLT, etc.) generally correlate better with the RICM measurements than compaction layer dry unit weight or CBR measurements. This is illustrated in Figures 3 and 4. The data presented in Figure 3 were obtained from several calibration and production test areas with lean clay subgrade, recycled asphalt subbase, recycled concrete base, and crushed limestone base materials compacted with a vibratory smooth drum roller. The data presented in Figure 4 were obtained from several calibration and production test areas with lean clay subgrade compacted using a nonvibratory padfoot roller. The CBR measurements presented herein were obtained from DCP tests using empirical correlations between DCP index values and CBR [38]. Figures 3 and 4 clearly indicate that CMV correlates better with LWD modulus point-MVs than with dry unit weight or CBR point-MVs. One of the primary reasons for this is that modulus measurements represent a composite layered soil response under an applied load, which simulates vibratory drum-ground interaction, whereas density and CBR measurements are averages over the compaction layer and do not directly represent a composite layered response under loading. Although DCP-CBR measurements did not correlate well in the two cases presented in Figures 3 and 4, many field studies [13,25,34] have indicated that DCP tests are effective in detecting deeper "weak" areas (at depths > 300 mm) that are commonly identified by RICM measurements but not by point-MVs obtained at the surface. This is primarily because of the differences in measurement influence depths.
Accelerometer-based roller measurements have measurement influence depths ranging from 0.8 m to 1.5 m depending on soil layering, drum mass, and excitation force [25, 39-41], while machine drive power based measurements have influence depths ranging from 0.3 to 1.3 m depending on the heterogeneity in subsurface conditions [33]. On the other hand, most point-MVs have influence depths < 0.5 m [41]. Statistical multiple regression analysis techniques can be used to account for heterogeneity in the underlying layers where RICM or in situ point-MV measurements of the underlying layers are available, as demonstrated in [25]. High variability in soil properties across the drum width and in soil moisture content also contributes to scatter in the relationships. Averaging point measurements across the drum width and incorporating moisture content into the multiple regression analysis, when statistically significant, can help mitigate the scatter to some extent. An example of multiple regression analysis incorporating moisture content is shown in Figure 5, based on the data described in Figure 4. The results indicate that the correlation between MDP40 and LWD modulus improved from R² = 0.63 to 0.71 when moisture content was incorporated into the regression analysis. The MDP40 versus CBR dataset did not show much improvement when moisture content was incorporated, although the moisture term was found to be statistically significant (as assessed by t and P statistics).
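The kind of multiple regression described above can be sketched with ordinary least squares. The data here are synthetic and purely illustrative, not the project data behind Figures 4 and 5; the coefficient values are arbitrary assumptions:

```python
import numpy as np

def ols_r2(y, *predictors):
    """Ordinary least squares y = b0 + b1*x1 + ...; returns (beta, R^2)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic illustration: the measured modulus depends on both the
# roller MV and the moisture content, so adding moisture as a second
# regressor improves the fit.
rng = np.random.default_rng(2)
mv = rng.uniform(10.0, 40.0, 200)       # hypothetical roller MV
w = rng.uniform(8.0, 16.0, 200)         # hypothetical moisture content (%)
e_lwd = 2.0 * mv - 3.0 * w + rng.normal(0.0, 5.0, 200)

_, r2_simple = ols_r2(e_lwd, mv)        # roller MV alone
_, r2_multi = ols_r2(e_lwd, mv, w)      # roller MV plus moisture content
```

As in the field data, the moisture term should only be retained when it is statistically significant (e.g., judged by its t statistic), since adding regressors never decreases R² on the fitting data.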

Spatial Analysis of In Situ and RICM Measurements
RICM technologies offer a unique advantage of quantifying and characterizing the "nonuniformity" of compaction measurement values, a capability that should be of considerable interest to pavement engineers. Vennapusa et al. [22] demonstrated the use of variogram analysis in combination with conventional statistical analysis to effectively address the issue of nonuniformity in QC/QA during earthwork construction. A variogram is a plot of the average squared difference between data values as a function of separation (lag) distance and is a common tool used in geostatistical studies to describe spatial variation. Three important features of a variogram are the sill, range, and nugget. The sill is the plateau that the variogram reaches, the range is the distance at which the variogram reaches the sill, and the nugget is the vertical height of the discontinuity at the origin, which mostly represents sampling error or short-scale variation [42]. In a variogram model, a low sill and a longer range of influence represent the most uniform conditions, while the opposite represents an increasingly nonuniform condition.
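The experimental semivariogram defined above (average squared half-differences binned by lag distance) can be computed with a few lines of code. This sketch uses a hypothetical 1-D transect of spatially correlated values, not data from the paper:

```python
# Sketch: experimental semivariogram gamma(h) from spatially referenced values.
import numpy as np

def experimental_variogram(x, z, lag, n_lags):
    """Average of 0.5*(z_i - z_j)^2, binned by separation distance |x_j - x_i|."""
    gamma = np.zeros(n_lags)
    counts = np.zeros(n_lags, dtype=int)
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            h = abs(x[j] - x[i])
            k = int(h // lag)
            if k < n_lags:
                gamma[k] += 0.5 * (z[i] - z[j]) ** 2
                counts[k] += 1
    valid = counts > 0
    gamma[valid] /= counts[valid]
    return gamma, counts

rng = np.random.default_rng(0)
x = np.arange(0.0, 100.0, 1.0)                              # positions along a lane (m)
z = np.sin(x / 15.0) + 0.2 * rng.standard_normal(x.size)    # hypothetical correlated values

gamma, counts = experimental_variogram(x, z, lag=2.0, n_lags=20)
print(gamma[:5])
```

For spatially correlated data, gamma starts near the nugget at short lags and rises toward the sill as the lag distance approaches the range, which is exactly the behavior the sill/range/nugget description above refers to.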
To evaluate the application of spatial analysis, a test section was created for comparison of CMV and MDP with DCP index values. The comparisons are shown using theoretical and experimental variogram models, and kriged surface maps were generated for the in situ compaction measurements using the theoretical (exponential) variogram model. The theoretical variograms were fit to the experimental variograms by checking their goodness of fit using the modified Cressie approach suggested by Clark and Harper [43]; a lower Cressie goodness factor indicates a better fit. The study area comprised a compacted subgrade material (Edwards glacial till, USCS classification: CL) and a portion scarified to a depth of 200 mm in a "Z" shape. The scarified portion was prepared intentionally to represent a common condition in earthwork construction resulting from utility trench construction, where the backfill may not be as compact as the neighboring unexcavated materials. After subgrade preparation, the area was mapped using a smooth drum machine in seven lanes using a vibration amplitude of 2.1 mm and a frequency of 29 Hz. DCP tests were performed at 144 locations in the upper 200 mm (shown as gray circles in Figure 6) following the roller mapping passes. The DCP test locations were strategically spaced such that the boundaries of compacted and uncompacted areas were captured during kriging interpolation. CMV and MDP spatial data along with experimental and theoretical variogram models are shown in Figure 6. Log transformation of CMV was required to detrend the experimental variogram (detrending is explained in detail in Vennapusa et al. [22]). The kriged surface map and variogram model generated for the DCP index values are also presented in Figure 6. The univariate statistics (mean (μ), standard deviation (σ), and coefficient of variation (COV)) of the measurement values are also provided in the figure for reference.
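Fitting a theoretical exponential variogram to experimental points, as done in the study above, can be sketched with a simple grid search. The squared-misfit criterion here is a plain stand-in for the modified Cressie goodness-of-fit factor, and all experimental values are hypothetical:

```python
# Sketch: fitting an exponential variogram model to experimental points.
# gamma(h) = nugget + (sill - nugget) * (1 - exp(-3h/a)), with "a" the
# practical range (where ~95% of the sill is reached). Data are hypothetical.
import numpy as np

def exp_variogram(h, nugget, sill, a):
    """Exponential variogram model with total sill and practical range a."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / a))

# Hypothetical experimental semivariogram (lag distances in m, gamma values)
h_exp = np.array([2., 4., 6., 8., 10., 14., 18., 24., 30.])
g_exp = np.array([2.0, 3.0, 3.6, 4.1, 4.4, 4.7, 4.9, 5.0, 5.0])

# Coarse grid search over (nugget, sill, range) minimising the squared misfit
best = None
for c0 in np.linspace(0.0, 2.0, 21):
    for c in np.linspace(3.0, 7.0, 41):
        for a in np.linspace(2.0, 40.0, 77):
            err = float(((exp_variogram(h_exp, c0, c, a) - g_exp) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, c0, c, a)

err, nugget, sill, rng_a = best
print(f"nugget={nugget:.2f} sill={sill:.2f} range={rng_a:.1f} misfit={err:.4f}")
```

In practice a weighted criterion (weighting each lag by its pair count, as Cressie's statistic does) and a proper optimizer would replace the grid search; the point here is only the shape of the fitting problem.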
The compacted and uncompacted areas were generally well captured by the CMV/MDP and DCP index measurements; however, they were more clearly delineated by the DCP Index measurements.
Univariate statistics show that the COVs of the MDP (89%) and DCP index (86%) measurements are comparable and significantly higher than that of CMV (39%). Similarly, the exponential variogram models of MDP and DCP index exhibit significantly lower range values than that of CMV. This suggests that the MDP and DCP index measurements have less spatial continuity and higher variability than the CMV measurements. This higher variability is likely due to differences in the measurement influence depths, as discussed earlier in this paper. The DCP index values presented in Figure 6 are based on average values in the top 300 mm of the surface. In addition to using spatial analysis for visualization, analyses from some field studies by the authors [36] have indicated that experimental semivariograms of RICM values sometimes show nested structures with distinctly different long- and short-range components. These nested structures are very likely linked to spatial variation in the underlying layer support conditions. These observations are new, have not been fully evaluated, and warrant more research. Further, a field study by White et al. [35] reported that variograms developed for two different spatial areas with similar univariate statistics (i.e., mean and standard deviation) showed distinctly different shapes and spatial statistics, which illustrates the importance of spatial modeling for better characterization of "nonuniformity" than univariate statistics alone. This emphasizes the importance of addressing "nonuniformity" from a spatial perspective rather than a univariate statistics perspective.

Implementation of RICM Technology
A few countries and governmental agencies have developed specifications to facilitate implementation of RICM technologies into earthwork and hot mix asphalt (HMA) construction practices [44]. The International Society for Soil Mechanics and Geotechnical Engineering (ISSMGE) [39], Minnesota Department of Transportation (DOT) [45, 46], Austrian [47], German [48], and Swedish [49] specifications require performing either static or dynamic plate load tests on calibration strips to determine average target values (typically based on 3 to 5 measurements) and using these for QA in production areas. The ISSMGE, Austrian, and German specifications suggest performing at least three static plate load tests at locations of low, medium, and high degree of compaction during the calibration process. Further, they specify that linear regression relationships between roller compaction measurement values and plate load test results should achieve a minimum regression coefficient. Although it is not yet clear what the right number of test measurements is to develop a field calibration, the experience of the authors shows that increasing the number of measurements to 10-15 points substantially increases the statistical significance of the predictions. One of the major limitations of the existing RICM specifications is that the acceptance requirements (i.e., percent target value limits, acceptable variability, etc.) are technology specific and somewhat based on local experience. This limitation hinders widespread acceptance of these specifications into practice, as there are currently at least ten different RICM technologies. Significant efforts are being made in the US to develop widely acceptable and technology-independent specifications [3-5].
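The calibration-then-acceptance workflow these specifications describe can be illustrated in a few lines. This is a generic sketch, not any cited specification: the required modulus, the 90%-of-target acceptance limit, and all data values are hypothetical.

```python
# Sketch: deriving a roller target value (TV) from a calibration regression,
# then screening production readings against a percent-of-TV limit.
# All numbers and the 90% limit are hypothetical illustrations.
import numpy as np

# Calibration strip: paired roller MVs and plate-load moduli spanning
# low, medium, and high degrees of compaction
mv_cal = np.array([20., 24., 27., 31., 35., 38., 41., 45., 48., 52.])
e_plt  = np.array([28., 35., 40., 47., 55., 60., 66., 73., 78., 85.])  # MPa

slope, intercept = np.polyfit(mv_cal, e_plt, 1)   # linear calibration fit

E_REQUIRED = 60.0                                 # required modulus (MPa), hypothetical
tv = (E_REQUIRED - intercept) / slope             # roller MV target value

production_mv = np.array([36., 40., 33., 44., 39., 41.])
passing = production_mv >= 0.9 * tv               # hypothetical 90%-of-TV limit
print(f"TV = {tv:.1f}; {passing.sum()} of {passing.size} readings pass")
```

With 10 calibration points, as the authors recommend, the regression (and hence the target value) is far less sensitive to a single outlying plate load test than with the 3-5 points some specifications allow.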
Based on feedback obtained from various state and federal agency personnel in recent national-level workshops on this topic, White and Vennapusa [4] documented the key attributes of an RICM specification, beginning with (1) descriptions of the rollers and configurations. Several new and innovative specification concepts have been proposed by researchers in the past few years [4, 13, 25]. These concepts primarily vary in the required level of upfront calibration work and data analysis, which consequently leads to differences in the level of confidence in the quality of the completed work. A few of these concepts have been beta tested on demonstration-level projects [25] but have never been fully evaluated on full-scale projects to explore their limitations and advantages. More coordination between researchers and practitioners is needed to carefully evaluate these concepts and is a high-priority step toward successfully implementing RICM technologies in practice. Further, integrating the advanced analytical methods discussed in this paper (such as simple and multiple regression analysis to develop correlations and target values, and spatial analysis to address nonuniformity) into a specification, along with development of simple and ready-to-use software tools, is necessary to help advance the technology.

Concluding Remarks
RICM technologies with real-time display capabilities and 100% coverage of compaction data offer significant advantages to the earthwork construction industry. Integration of these technologies into practice requires a proper understanding of the correlations between RICM values and compaction measurements (e.g., density, modulus) and the factors that influence these correlations. This paper provided an overview of two technologies (i.e., CMV and MDP) and a review of field correlation studies documented in the literature. The literature review revealed correlations between RICM measurements and various in situ tests used to measure density, modulus or stiffness, shear strength, and CBR. Results from field studies indicated that simple linear or nonlinear correlations between any of these measurements are possible if the compaction layer is placed over a "stable" and "homogeneous" underlying layer. If the underlying layer is not stable or homogeneous, correlations are adversely affected. For those cases, in general, relationships between RICM and modulus-based measurements (e.g., LWD, FWD, or PLT modulus) are better than those between RICM and dry density or CBR measurements. Multiple regression analysis incorporating properties of the underlying layers and moisture content can be performed to improve correlations. Other important factors that affect the correlations include a narrow range of measurements, variations in machine settings, nonuniform conditions across the drum width, a limited number of measurements, and measurement error associated with both RICM and in situ point measurements.
Geostatistical spatial modeling techniques can be utilized for data visualization and characterization of spatial nonuniformity using spatially referenced RICM data. To demonstrate the application of spatial analysis for visualization, an example CMV and MDP data set over a spatial area was used in comparison with DCP index measurements. Analysis results indicated that the compacted and uncompacted areas over the spatial area were well captured by all the measurements. Some field studies documented in the literature indicate that geostatistical semivariograms of RICM measurements can be used for construction process control and also to analyze variations in the underlying support conditions. These observations are new, have not been fully evaluated, and warrant more research.
Review of current RICM specifications revealed a potential limitation in that the acceptance requirements (i.e., percent target value limits, acceptable variability, etc.) are technology specific and somewhat based on local experience. This limitation hinders widespread acceptance of these specifications into practice, as there are currently at least ten different RICM technologies. Several new specification concepts have recently been documented in the literature, with variations in the way calibration work is performed and acceptance requirements are established. These concepts require detailed field evaluation to explore their limitations and advantages. Integration of the advanced analytical methods discussed in this paper (such as simple and multiple regression analysis to develop correlations and target values, and spatial analysis to address nonuniformity) into a specification, along with development of simple and ready-to-use software tools, is necessary to help advance the technology.

Introduction
In Hokkaido, Japan, there are over 40 Quaternary volcanoes, and pyroclastic materials cover over 40% of its area. Significant volcanic activity occurred during the Neogene and Quaternary periods, and various pyroclastic materials such as volcanic ash, pumice, and scoria were formed during those eruptions. Such volcanic soils have been widely used as construction materials, especially in foundations and man-made geotechnical structures (e.g., embankments and cut slopes). However, research on volcanic coarse-grained soils from an engineering standpoint is extremely limited in comparison with that on cohesionless soils (Miura et al. [1]).
Recent earthquakes and heavy rainfalls in Hokkaido have caused serious damage to ground, natural slopes, cut slopes, and embankments composed of volcanic soils (e.g., JSSMFE [2], JSCE [3]), as seen in the slope failure of a residential embankment during the 1991 Kushiro-oki earthquake (JSSMFE [2]). Furthermore, cut slope failures attributed to freezing and thawing have also been observed along the Hokkaido expressway in the spring and summer seasons. Figure 1 shows the mechanism of frost heaving in a cut slope and the associated failure modes in cold regions. In cold regions such as Hokkaido, slopes freeze from the surface, with the formation of ice lenses, during the winter season (see Figure 1(a)). Thereafter, the frozen soil thaws gradually from the ground surface until the summer season. In this freezing-thawing sequence, the surface layer of a slope may exhibit moisture contents above the liquid limit of the soil owing to melting snow and thawing ice lenses. As a result, surface failure occurs at the boundary between the loose thawing soil and the frozen layer through water infiltration from both rainfall and snowmelt, because the frozen layer acts as an impermeable layer (see Figure 1(b)). Another type of failure, due to piping of groundwater, may also be observed in the spring season when pore water pressure exceeds the strength of the frozen layer (see Figure 1(c)). Additionally, hollows left by thawed ice lenses may generate structures in the previously frozen layer that are looser than before the freeze-thaw process (see Figure 1(d)). Because of this phenomenon, deeper slope failure may be induced from summer to autumn.
For the above reasons, natural disasters such as slope failures in cold regions are frequently induced in the snow-melting season and are deemed to be caused by both the increase in degree of saturation arising from thawing water and the change in the deformation-strength characteristics of the soil resulting from freeze-thaw action (e.g., [6] and Kitamura et al. [7]). In particular, Yagi et al. [5] indicated the importance of the amount of limited rainfall and proposed a failure prediction method based on field and experimental data for volcanic slopes. On the other hand, prediction methods for slope failure based on monitoring techniques, for instance prediction using satellite systems, have also been proposed (e.g., Kitamura [8]). Geotechnical problems involving both freeze-thaw and frost-heaving actions have been reported by many researchers (e.g., Aoyama et al. [9], Nishimura et al. [10], and Ishikawa et al. [11]). The mechanical behavior of frozen and thawed soils has been clarified through their efforts, and the importance of assessing such geotechnical problems has been pointed out.
Additionally, Harris and Davies [12] and Harris and Lewkowicz [13] have investigated the deformation behavior of slopes subjected to freezing and thawing with respect to slope stability. However, field, experimental, and analytical studies on slope stability during freezing-thawing sequences have been rather limited.
The authors have similarly investigated rainfall-induced failure of volcanic slopes subjected to freezing and thawing, and its mechanisms, through a series of model tests on slopes having several slope shapes and water contents (Kawamura et al. [14-17]). Rainfall intensities of 60 mm/hr, 80 mm/hr, and 100 mm/hr were accurately simulated through the use of spray nozzles. During the rainfall tests, pore water pressure behavior, deformation behavior, and variation in the degree of saturation were monitored, and the deformation of the model slopes was estimated by the particle image velocimetry (PIV) method. The effects of the geometric condition of a slope, rainfall condition, geomechanical condition, and freeze-thaw action on the failure mechanism were clarified in detail. Of particular significance was the finding that the slip line is induced around the depth of the frozen area and can be evaluated from the dilatancy behavior of soil attributed to freeze-thaw action.
The purposes of this paper are to elucidate aspects of soil behavior in volcanic slopes using various monitoring instruments and to propose a prediction method for rapid assessment of failure development in slopes. Field monitoring has continued from December 1, 2008 to date. Presented herein are reliable data collected over this period, although there were intervals in which monitoring was not carried out.

Location of Monitoring Site and Monitoring Instruments
The monitoring site is located at a cut slope along the Route; the slope is composed mainly of volcanic soil with silty soil. The monitoring site is shown in Figure 2.
In the present study, the following instruments were adopted to monitor soil behavior and temperature in the air and in the slope: (1) soil moisture meters (time domain reflectometry (TDR) type), (2) tensiometers, (3) thermocouple sensors, (4) a clinometer (multiple inclination transducers), (5) settlement gauges, (6) an anemovane, (7) a snow gauge, and (8) a rainfall gauge, as shown in Figures 3(a) and 3(b). These instruments were basically set up at 20 cm depth intervals. The specifications of the instruments are given in Table 1. The subscripts in the figures indicate the installation depths of the instruments. The symbols used in this study are also indicated in Table 1. Data were sampled at 10-minute intervals and recorded in a data logger; hourly data are presented herein. The instruments used in this study have been deployed at another cold-region site to investigate slope stability under freeze-thaw action, and their validity has been confirmed.
The index properties and grain size distributions of soil samples taken from the slope are shown in Figures 4 and 5, respectively. As shown in Figure 5, the natural water content w_N of the slope surface is almost the same as, or more than, the liquid limit w_L of the soil. Owing to this, part of the slope surface (a thickness of 20 cm) progressively eroded and flowed downward. The deeper parts of the slope were not destabilized because their natural water content is lower than the liquid limit. It has also been confirmed that there were no differences in index properties due to freeze-thaw actions over 2 years. Figure 6 also illustrates the changes in the amount of surface water (including water from underground) at the maximum point from April 1, 2009 to November 27, 2010, compared with the variation in rainfall. These figures demonstrate that the amount of surface water varies through the seasons and reaches its maximum value in the snow-melting season, although the peak time differs for each year. Figure 7 shows the topography of the slope crown, depicted in 3D based on surveying. It is obvious from this figure that rainfall and snowmelt runoff easily gather around the monitoring area. Accordingly, the high water content in the slope may derive from the topography and the seepage characteristics of the slope. Figure 8 depicts the changes in temperature (T_A: in the air, T_G: in the slope) during monitoring. The number of freeze-thaw cycles of the slope surface (T_G at 0 cm) was 44 from December 1, 2008 to April 1, 2009 and 48 from December 1, 2009 to April 1, 2010. As shown in Photo 1, which was taken in the winter of 2010, part of the surface was covered with snow and an ice layer; therefore, it can be said that this slope is located in a severe environment. On the other hand, Yamaki et al. [18] reported that the number of freeze-thaw cycles was 6 during the winter season (from December 8, 2007 to April 1, 2008) in Sapporo, Japan, which is near the data collection site for this study. In comparison with other places in cold regions, this area is thus severe from a geotechnical perspective. For this reason, field monitoring was carried out on a volcanic slope under severe environmental conditions to clarify the features of soil behavior and to propose an evaluation method for slope stability.

Aspects of In Situ Volcanic Slope Subjected to Freeze-Thaw Action and Rainfall

Figure 9 shows the relationship between pore pressure and temperature (T_G: in the slope). In the figure, the pore pressure P takes a positive value at a depth of 20 cm and is around 0 kPa at 60 cm, although these values vary through the seasons. Therefore, the behavior at the monitoring point in this slope can be evaluated and discussed mainly as saturated soil behavior. Figures 10(a) to 10(d) depict the changes in the consistency index, I_C = (w_L − w)/(w_L − w_P), at each depth, based on the index properties shown in Figure 5. The volumetric water content θ obtained using the soil moisture meters can be converted to water content w by the relation w = (ρ_w/ρ_d)θ, where ρ_w and ρ_d are the densities of water and dry soil, respectively. In this study, the in situ ρ_d was 0.915 g/cm³ according to the sampling data. In the figures, the values at the depth of 30 cm are around 0.5 or less. This implies that the slope may be destabilized by either the amount of rainfall or an increase of water from underground. It is also noteworthy that slope stability can be easily assessed using soil moisture and a simple index. Figure 11 illustrates the settlement behavior obtained using the settlement gauges. In the figure, a positive value means settlement toward the inside of the slope. As this figure indicates, significant changes were not recognized at any position during monitoring, except at the depth of 100 cm. Although the reason for the variation at the depth of 100 cm was not made clear, the surface appears to deform upward due to frost heaving. On the other hand, changes in displacement measured using the multiple inclination transducers were observed over the 2 years (see Figure 12). In particular, the displacement gradually increased year by year, and the value induced in the winter season was about seven times that in the summer.
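The conversion from TDR volumetric water content to gravimetric water content, and the consistency index computed from it, can be sketched as follows. The dry density ρ_d = 0.915 g/cm³ is the value quoted above; the Atterberg limits used here are hypothetical stand-ins:

```python
# Sketch: w = (rho_w / rho_d) * theta and I_C = (w_L - w) / (w_L - w_P),
# as described above. Atterberg limits below are hypothetical.
RHO_W = 1.000   # density of water (g/cm^3)
RHO_D = 0.915   # in situ dry density from sampling (g/cm^3), quoted in the text

def gravimetric_w(theta_percent):
    """Gravimetric water content (%) from TDR volumetric water content (%)."""
    return (RHO_W / RHO_D) * theta_percent

def consistency_index(w, w_l, w_p):
    """I_C near 1: stiff; values around 0.5 or less flag potential instability."""
    return (w_l - w) / (w_l - w_p)

theta = 38.0              # volumetric water content (%), e.g. the plateau noted later
w = gravimetric_w(theta)
ic = consistency_index(w, w_l=60.0, w_p=35.0)  # hypothetical liquid/plastic limits
print(f"w = {w:.1f}%, I_C = {ic:.2f}")
```

This is the same simple-index idea the text highlights: a soil moisture reading plus index properties gives an immediate, if approximate, stability indicator.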
The surveying results for the monitoring site are depicted in Figures 13(a) and 13(b). Surveying was performed at nine points on the slope, as shown in the figures, which present cross-sections and plan views. As shown, the slope surface deforms perpendicularly outward and upward in the winter season, and the direction of movement changes to a gravitational course in the snow-melting season.
Harris and Davies [12] explained that surface displacements during freezing and thawing sequences are composed of "frost creep" and "gelifluction," as shown in Figure 14. Frost creep denotes the mass movement that occurs when frozen soil thaws and subsides, with gravity-induced closure of the voids left by ice lenses, whereas gelifluction indicates mass movement in which thawing soils slip down the slope. A similar tendency has been obtained from the results of a series of model tests (Kawamura et al. [14, 15]). For this reason, monitoring the deformation of the slope from the winter season to the summer season is important for evaluating slope stability. Figures 15(a) and 15(b) show the changes in volumetric water content θ for the summer and winter seasons, respectively. It should be noted that the volumetric water content increases with increasing rainfall and then decreases with elapsed time over the summer season. In contrast, in the winter season the water content increases as the temperature in the slope decreases below 0 °C and conversely decreases as the temperature rises above 0 °C. This indicates that the changes in volumetric water content are induced during the seepage-drainage process. Regarding surface failure, one of its causes in cohesionless soils is said to be an increase of self-weight due to the expansion of an area with high water retention ability. In a previous study in which a series of rainfall model tests was performed on volcanic slopes (Kawamura et al. [14, 15]), the model slopes failed suddenly at the peak degree of saturation after the degree of saturation had gradually increased. After failure, the degree of saturation decreased, similar to the behavior revealed by the field monitoring (see Figures 15(a) and 15(b)).
Additionally, it is noteworthy that the volumetric water content is constant while deformation proceeds; for instance, it is around 38% in this slope, although the magnitude of deformation is very small. If it is assumed that the slope fails progressively through the integration of small displacements, this finding is significant for assessing slope stability in local areas. Hence, failure may be predicted if the water content at failure is defined. Figures 16(a) and 16(b) depict the typical changes in volumetric water content θ during the drainage process, based on the zone within the dotted line in Figure 15. In the figures, a fitted curve for the drainage process is also depicted as a solid line. As shown in Figure 16(b), variation in the data is observed; this variation was due to the effect of rainfall in the winter season, although those data are omitted here. It is conspicuous from the figures that the behavior of soil moisture is explained by a simple expression fitted by the least-squares method, where t and T are the elapsed time from the peak of θ and the period from the peak of θ to the end of the drainage process, and κ and α denote the peak value and the reduction ratio of volumetric water content obtained from the fitted curves, being 45.6 and 0.01 in the summer season and 45.3 and 0.006 in the winter season, respectively. It should be pointed out that the values for the summer and winter seasons are almost the same, although it is difficult to define the peak value in field data. Consequently, such a relation, if simply defined for each slope, may be useful for disaster mitigation. Further consideration will be required because the quantity of data is limited.
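Least-squares fitting of a drainage branch can be sketched as follows. The paper's exact expression is not reproduced in this extract, so an exponential-decay form θ(t) = κ·exp(−αt) is ASSUMED here purely for illustration; κ (peak value) and α (reduction ratio) play the roles described in the text, and the data are synthetic:

```python
# Sketch: fitting theta(t) = kappa * exp(-alpha * t) to a synthetic drainage
# record by linearised least squares. The functional form is an assumption
# for illustration only, not the paper's equation.
import numpy as np

t = np.arange(0.0, 200.0, 5.0)                            # hours after the theta peak
theta = 45.6 * np.exp(-0.01 * t) + 0.2 * np.sin(t / 7.0)  # synthetic record with noise

# Linearise: ln(theta) = ln(kappa) - alpha * t, then fit a straight line
coeffs = np.polyfit(t, np.log(np.clip(theta, 1e-9, None)), 1)
alpha_fit = -coeffs[0]
kappa_fit = float(np.exp(coeffs[1]))

print(f"kappa ≈ {kappa_fit:.1f}, alpha ≈ {alpha_fit:.4f}")
```

As the text notes, the practical difficulty is locating the peak of θ in noisy field data; the fit itself is straightforward once the drainage branch is isolated.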

A Prediction Method for Surface Failure of In Situ Volcanic Slopes Subjected to Rainfall and Freeze-Thaw Action
As mentioned above, except for cases of failure due to a rise in the groundwater level, grasping the variation in the degree of saturation (the difference in the water-holding ability of volcanic slopes) during the seepage-drainage process is significant for evaluating slope stability. The authors have proposed a prediction method for surface failure of volcanic slopes that considers the water retention characteristics (e.g., water content) and have revealed that a slip line is induced around the depth of the frozen area. A summary of these findings is provided in Kawamura et al. [14, 17]. Figure 17 shows the relationship between the water contents at the initial state, w_0, and at failure, w_f, based on a series of model test results. The details of the volcanic soils used in the previous study were reported by Miura et al. [1]. These typical volcanic soils of Hokkaido, Japan are referred to as the Kashiwabara and Touhoro volcanic soils. As shown in the figure, there are unique relationships between the two water contents for both types of volcanic soil. The increment of the water content at failure w_f from the initial line reaches a steady state for each material, although the relation varies according to freeze-thaw action. For instance, the following expression can also be obtained, where β and γ are coefficients whose values are given in Table 2. From Table 2, it should be noted that these parameters become almost the same for volcanic soils subjected to the freeze-thaw process. Consequently, it is possible to evaluate slope failure due to rainfall and the freeze-thaw process if such a relation can be obtained for the in situ slope. Corresponding monitoring data are shown in Figure 18, in which the maximum value for each layer (20 cm, 30 cm, 40 cm, 60 cm, 80 cm, and 100 cm) obtained by soil moisture meter 1 is plotted in the manner of Figure 17. In the present study, it is difficult to actually define slope failure for this site.
Therefore, the water content at failure was tentatively defined as the liquid limit to indicate slope instability, because the monitored slope deformed gradually around the liquid limit. The failure line was predicted based on the liquid limit and γ = 0.8 in (2), the value that was consistent for both volcanic soils. It is evident from the figure that the maximum value is within the range of both limits for each depth and lies slightly close to the prediction line based on the liquid limit. In particular, the value at 30 cm was on the failure line. According to Figure 10(a), this data point was collected on February 1, 2009. The reason for the high water content is that the depth of 30 cm is strongly affected not only by surface water due to thawing and melting snow but also by water from underground. From these results, (2) may explain the field data for volcanic slopes well and may be used to evaluate slope stability.
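The instability check described above can be sketched as a simple threshold test. Equation (2) is not reproduced in this extract, so a linear failure line w_f = β + γ·w_0 with γ = 0.8 (the value quoted above) is ASSUMED here purely for illustration, anchored so that w_f equals the liquid limit when w_0 does, in keeping with the text's tentative failure definition; all numeric values are hypothetical:

```python
# Sketch: flagging slope-instability risk by comparing a monitored maximum
# water content against an ASSUMED linear failure line w_f = beta + gamma*w0.
W_L = 60.0                 # liquid limit (%), hypothetical
GAMMA = 0.8                # slope of the assumed failure line (value from the text)
BETA = W_L - GAMMA * W_L   # anchors the line through (w_L, w_L)

def failure_water_content(w0):
    """Predicted water content at failure for a given initial water content."""
    return BETA + GAMMA * w0

def at_risk(w_max, w0):
    """True when the monitored maximum water content reaches the failure line."""
    return w_max >= failure_water_content(w0)

print(at_risk(w_max=58.0, w0=50.0))  # failure line at w0 = 50 is 52.0 -> True
```

The design choice here mirrors the paper's logic: once the site-specific coefficients are calibrated, the check reduces to a single comparison per soil-moisture reading, which suits real-time monitoring.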
Considering the results of this study, surface failure may be predicted if the depth of the frozen area and the water-holding capacity at failure in a slope are simply estimated by monitoring an index property such as water content. However, it is difficult to define slope failure accurately. In addition, the above results may change with variations in soil materials, and corresponding changes in the slip line are expected. In any case, further consideration will be required.

Conclusions
In consideration of the limited results of field monitoring, the following conclusions were derived.
(1) A slope subjected to freezing and thawing deforms perpendicularly and in an upward direction on its surface in the winter season, and its direction changes to a gravitational course in the snow-melting season.
(2) According to the data collected using the soil moisture meters, water content increased with increasing rainfall and then decreased with elapsed time in the summer season. On the other hand, it increased as the ground temperature decreased due to freezing and then decreased as the temperature increased during thawing. As a result, soil moisture can be evaluated through the seepage and drainage processes despite seasonal variations.
(3) The volumetric water content became constant when deformation in the slope was induced; for instance, it was around 38% in this case. Surface failure may be predicted if the depth of the frozen area and the water-holding capacity at failure in a slope are simply estimated by monitoring an index property such as water content.

Introduction
Earth pressure distribution behind retaining wall systems is a soil-structure interaction problem. Therefore, the earth pressure distribution at the back of the wall should be determined interactively with the deflection of the wall. However, this is not the case in current design practice. In practice, the hydrostatic earth pressure distribution behind the wall is adopted according to the at-rest, active, or passive earth pressure theories for both internal and external stability analyses. Furthermore, a triangular distribution is typically assumed for the lateral earth pressure under at-rest, active, or passive conditions. This assumption can be true for walls that are free to move laterally or rotate around the toe with sufficient movement to initiate the sliding wedge (i.e., the active or passive state). However, this is not the case for nonyielding walls, which do not develop the limiting static active or passive earth pressure because their movements are not sufficient to fully mobilize the shear strength of the backfill soil. Underground basement walls, tunnels, bridge abutments, culverts, and piles are typical examples of nonyielding structures in contact with soil. These structures usually undergo very small movements, insufficient to initiate the sliding wedge behind the wall and to relieve the pressure to its active or passive state. Examples of nonyielding walls are shown schematically in Figure 1. Compaction-induced earth pressure and the resulting stresses and deformations can be of serious concern in the design and analysis of these structures. This paper presents experimental and numerical models developed to study the vibratory-compaction-induced lateral stresses acting against vertical nondeflecting walls. The experimental model provided reliable quantitative results for the coefficient of earth pressure at rest (K_o). Tests were conducted using the shaking table facility at the Royal Military College of Canada (RMCC).
It should be emphasized that the stresses studied in this paper are static only. In other words, the shaking table was not excited dynamically during the measurement of the stresses mobilized behind the wall. The table was excited dynamically only during the construction stage, to achieve the maximum density and thereby study the mobilization of at-rest stresses behind nonyielding walls.

Literature Review
Using the so-called "local arching" effect of the soil, Terzaghi [1] explained the parabolic distribution of earth pressure behind a relatively flexible wall supported at two ends (Figure 1). Geotechnical practitioners have traditionally calculated the at-rest earth pressure coefficient, K o , against nonyielding walls using the 60-year-old formula of Jaky [2], which is widely accepted in the simplified form

K o = 1 − sin φ, (1)

where φ is the effective angle of internal friction of the soil. The measured values of K o observed in normally consolidated deposits agree well with the simplified Jaky equation (i.e., (1)), as reported by Schmidt [3], Sherif et al. [4], Al-Hussaini [5], and Mayne and Kulhawy [6]. Therefore, (1) is accepted in practice as the horizontal-to-vertical stress ratio in loose sand and normally consolidated soil [7]. When the backfill behind the wall is subjected to compaction effort or vibration, the magnitude of the at-rest stresses is expected to increase beyond the values calculated with (1). The coefficient of earth pressure at rest, K o , in a soil mass is influenced by various factors, particularly the previous stress history of the retained soil, which is represented by the overconsolidation ratio (OCR). Schnaid and Houlsby [8] reported values of K o in the range between 1 and 2 for overconsolidated deposits. Wroth [9] proposed an empirical relationship to calculate the coefficient of earth pressure at rest for overconsolidated sand as follows:

K o = OCR (1 − sin φ) − [μ/(1 − μ)] (OCR − 1). (2)

In (2), Poisson's ratio μ = 0.1 to 0.3 for loose sand and μ = 0.3 to 0.4 for dense sand. Mayne and Kulhawy [6] provided a summary of the effects of stress history on K o , including data compiled from over 170 different soils tested and reported by many researchers. They conducted a statistical analysis of these data and determined relationships between at-rest earth pressure and soil stress history.
Based on these results, Jaky's formula was found to be in close agreement with the data for normally consolidated soil but deviated significantly for overconsolidated soil. Mayne and Kulhawy [6] provided a relationship between K o and OCR that builds on Jaky's simplified formula as follows:

K o = (1 − sin φ) OCR^(sin φ). (3)

Cherubini et al. [10] found that values of K o calculated using (3) are 3.5% less than the average measured values, which is practically acceptable. Hanna and Al-Romhein [11] compared the theoretical values predicted by Wroth [9] and by Mayne and Kulhawy [6] with experimental values: the values of Wroth [9] were about 10% to 15% higher than the experimental values for OCR < 3.0, whereas they were 10% to 12% lower thereafter. Despite its practical significance and attractive simplicity, Jaky's formula and its derivative (i.e., (1) and (3)) make K o depend only on the soil internal friction angle, φ. However, Feda [12] proved theoretically that K o also depends on soil deformation, and he considered the neglect of soil deformation in Jaky's formula a major deficiency. Therefore, a more representative formula must account for the overconsolidation resulting from compaction and for the deformation of the soil-wall system.
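For quick comparison, (1)-(3) can be collected into a short helper. This is an illustrative sketch only; the friction angle, OCR, and Poisson's ratio below are assumed example values, not the test parameters of this study:

```python
import math

def k0_jaky(phi_deg):
    """Eq. (1): Jaky's at-rest coefficient for normally consolidated soil."""
    return 1.0 - math.sin(math.radians(phi_deg))

def k0_wroth(phi_deg, ocr, mu):
    """Eq. (2): Wroth's relation for overconsolidated sand (mu = Poisson's ratio)."""
    return ocr * k0_jaky(phi_deg) - (mu / (1.0 - mu)) * (ocr - 1.0)

def k0_mayne_kulhawy(phi_deg, ocr):
    """Eq. (3): K_o = (1 - sin phi) * OCR**(sin phi)."""
    return k0_jaky(phi_deg) * ocr ** math.sin(math.radians(phi_deg))

# Assumed example values: phi = 35 deg, OCR = 4, mu = 0.3
print(round(k0_jaky(35.0), 3))
print(round(k0_mayne_kulhawy(35.0, 4.0), 3))
print(round(k0_wroth(35.0, 4.0, 0.3), 3))
```

Note that at OCR = 1 both (2) and (3) reduce to Jaky's value, as expected.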
An important aspect of vibratory compaction, which is not generally appreciated, is the increase of the lateral stresses in the soil caused by the compaction. Sand backfills are usually normally consolidated prior to compaction, with an earth pressure coefficient (K o ) approximately equal to the values calculated with (1). Investigations by Schmertmann [13], Leonards and Frost [14], and Massarsch [15,16] have shown that subsequent compaction results in a significant increase of the horizontal stress in the soil. Furthermore, laterally constrained densification of normally consolidated sand by vibration under an effective overburden pressure was found to increase the coefficient of earth pressure at rest [6]. Peck and Mesri [17] evaluated the compaction-induced earth pressure theoretically. They found that the lateral earth pressure near the backfill surface was closer to passive conditions, whereas in the lower part it was related to normally consolidated at-rest conditions. Experimental CPT measurements by Massarsch and Fellenius [18] showed that the lateral earth pressure increases significantly as a result of vibratory compaction. Duncan and Seed [19] stated that the compaction of soil against nonyielding structures can significantly increase the near-surface residual lateral pressures to values greater than at-rest. However, the lateral pressures are generally smaller at depths below the backfill surface, apparently as a result of structural deflections. They concluded that the horizontal stress can exceed the vertical stress if a soil deposit is heavily compacted. In fact, Sherif et al. [7] concluded that horizontal stresses developed during compaction are usually locked in and do not disappear when the compaction effort is removed. This conclusion was confirmed by Duncan and Seed [19], who stated that about 40% to 90% of the lateral earth pressure induced during compaction may remain as residual pressure.
In previously compacted soils (soils with previously "locked-in" compaction stresses), additional compaction results in smaller increases in earth pressure during compaction than in uncompacted soils, and only a negligible fraction of these increases may be retained as residual earth pressure upon the completion of compaction [20].
Quantitative studies of the at-rest earth pressure distribution behind rigid retaining walls have been conducted by Mackey and Kirk [21], Sherif et al. [7], Fang and Ishibashi [22], and Fang et al. [23], using reduced-scale model tests. Clough and Duncan [24], Seed and Duncan [25], and Matsuzawa and Hazarika [26] used the finite element method (FEM) to investigate the earth pressure distribution on nonyielding walls. Despite these extensive earlier studies, there remain conflicting points regarding the magnitude and distribution of the static stresses exerted against nonyielding retaining walls. In addition, little information has been reported regarding the variation of the stress condition in the soil mass during the filling and compaction process. Also, the controversy over the point of application of the total static thrust exerted against retaining walls has not yet been resolved. This study is, therefore, undertaken to clarify and resolve the foregoing unknowns. An experimental investigation of the at-rest earth pressure of overconsolidated cohesionless soil acting on perfectly supported retaining walls was conducted. A scaled wall model with a vertical rigid facing, retaining horizontal backfill, was developed in the laboratory. The model was instrumented to measure the horizontal and vertical reactions at the top and bottom of the facing panel (Figures 2 and 3). The total earth force acting on the wall at different wall heights, and its point of application, were deduced from the measured forces. The wall footing support comprised frictionless linear bearings to decouple the horizontal and vertical wall forces [27,28]. Vertical and horizontal load cells were installed at the base of the facing panel to measure the forces transmitted to the footing (facing toe). A potentiometer-type displacement transducer located at mid-elevation of the wall facing recorded the lateral deflection of the facing panel.
Details of the experimental design and test configurations can be found in El-Emam and Bathurst [27]. The strong box side walls were constructed of 6 mm-thick Perspex covered on the inside with two layers of transparent polyethylene sheeting to minimize side wall friction.

Experimental Tests
Artificial silica-free synthetic olivine sand was used as the retained soil; its properties are summarized in Table 1. All tests in the current investigation were performed with the same soil volume and placement technique. The soil was placed in eight 0.125 m-thick lifts, and each lift was compacted by lightly shaking it with the shaking table. To bring each sand lift to its dense state, the shaking table box was vibrated at a frequency of 6 Hz for 5 seconds. Load cell readings were recorded after the compaction of each individual lift. This process was repeated until the model wall was fully constructed to its 1 m height. Once the model was fully constructed, it was shaken twice more using the same compaction effort (i.e., a frequency of 6 Hz for 5 seconds) in order to study the effect of repeated vibration on the at-rest earth pressure mobilized against nonyielding walls.

Numerical Model
The numerical simulations were carried out using the program FLAC [29]. The FLAC numerical grid for the simulation of the nonyielding wall tests is shown in Figure 4. The backfill soil was modeled as a purely frictional, elastic-plastic material with a Mohr-Coulomb failure criterion. This model allows elastic behavior up to yield (the Mohr-Coulomb yield point defined by the friction angle) and plastic flow after yield under constant stress. The soil model is described by constant values of shear and bulk elastic modulus for preyield behavior. Results of direct shear box tests on specimens of the same sand material have been reported by El-Emam and Bathurst [27,28,30] and are summarized in Table 1. They also carried out numerical simulations of the direct shear tests using the FLAC code to back-calculate the "true" peak plane strain friction angle of the soil and the modulus values. The peak plane strain friction angle from the shear box simulations was φ PS = 58°, which is consistent with the value predicted using the equation by Bolton [31] for converting the peak friction angle deduced from conventional direct shear box tests to the true plane strain friction angle of the soil. The high direct shear friction angle, and therefore high plane strain friction angle, is mainly due to the angularity of the soil particles: electron microscope photographs of the sand used in this study showed that the particles are sharp angular to subangular in shape. Soil properties for the backfill sand used in the numerical analyses are summarized in Table 1.
A no-slip boundary at the bottom of the sand backfill was assumed to simulate the rough boundary in the physical tests (i.e., a layer of sand glued to the bottom of the shaking table containing box). The vertical boundary at the right side of the model was designed as a rigid wall to simulate the back wall of the strong box in the shaking table tests. The model wall facing toe boundary condition was modelled with two-noded one-dimensional beam elements with three plastic hinges (Figure 4). Four-noded, linear elastic continuum zones were used to model the full-height rigid facing panel, the shaking table, and the far-end boundary. The facing thickness was 76 mm, as used in the physical models, with a unit weight of 17.24 kN/m 3 and linear elastic material properties. The material parameters adopted for the facing elements are shear modulus G w = 1000 MPa, bulk modulus K w = 1100 MPa, and unlimited failure stress. These values of shear and bulk modulus were chosen to ensure high rigidity of the facing panel.
The interface between the backfill soil and the facing panel was modelled using a thin (15 mm thick) soil column directly behind the facing panel (Figure 4). The soil-facing panel interface material properties were the same as the backfill properties except for the interface friction angle (δ). This value was computed from the measured toe loads in the physical test wall according to

δ i = tan −1 [(R Vi − W f )/R Hi ]. (4)

Here, R Vi and R Hi are the measured vertical and horizontal forces acting at the facing panel at different backfill heights H i , respectively, and W f is the weight of the facing panel.
The average back-calculated value of the interface friction angle was δ = 0°. However, a value of δ = 2° was used to maintain numerical stability. Experimental results by El-Mhaidib [32] showed that the interface friction angle between smooth steel and uniformly graded sand does not exceed δ = 2° for the level of normal stress applied in the current study. It should be noted that the numerical grid was constructed in layers to simulate the soil placement in the physical model.
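The toe-load back-calculation amounts to a one-line helper. This is a sketch assuming (4) has the equilibrium form tan δ i = (R Vi − W f )/R Hi; the force values below are hypothetical, chosen only to illustrate the smooth-wall case (the panel weight follows from the 76 mm × 1 m facing at 17.24 kN/m 3 ):

```python
import math

def interface_friction_deg(r_v, r_h, w_f):
    """Back-calculate the interface friction angle (degrees) from toe loads,
    assuming tan(delta_i) = (R_Vi - W_f) / R_Hi as in Eq. (4)."""
    return math.degrees(math.atan2(r_v - w_f, r_h))

W_F = 0.076 * 1.0 * 17.24  # facing self-weight per metre run, ~1.31 kN/m

# Smooth wall: the vertical toe load equals the panel weight, so delta = 0
print(round(interface_friction_deg(r_v=W_F, r_h=2.0, w_f=W_F), 2))
```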

Experimental Results
Directions and locations of the forces used for the static earth pressure analysis are shown in Figure 5. The horizontal and vertical reactions at the top and bottom of the facing panel were measured using load cells, and the total lateral earth force, R Hi , and its point of application, y i , were calculated from force and moment equilibrium of the facing panel. Shown in Figure 6(a) is the total horizontal force measured at the facing panel, R H , which is equivalent to the at-rest total lateral earth force applied at the back of the facing panel. According to Figure 6(a), the total earth force at the back of the facing panel, R Hi , increased nonlinearly as the backfill height increased. The variation of the vertical toe load with backfill height at different construction stages is shown in Figure 6(b) for the tested model wall.
The magnitude of the vertical toe load, R Vi , was generally equal to the self-weight of the facing panel, W f , at all construction stages. This result indicates that the facing panel is practically smooth and, therefore, that a zero down-drag force developed between the backfill soil and the facing panel. A slight reduction in the measured vertical load, R V , relative to the facing panel self-weight was noticed for wall heights larger than 0.8 m. This may be attributed to an uplift force developed due to the over-densification of the soil at larger heights. Finally, the value of the front vertical force, R VF , is significantly smaller than the value of the vertical force measured at the back of the base plate, R VB . Taken together, the data in Figure 6(b) lead to the conclusion that for smooth, vertical nonyielding walls, the vertical load developed at the footing is essentially equal to the self-weight of the facing panel, W f . The elevation of the resultant lateral earth force above the foundation of the nonyielding wall, normalized by the backfill height, is shown in Figure 6(c). The resultant elevation, y i , is an indication of the distribution of the lateral earth pressure over the backfill height, H i . Current design practice assumes a triangular distribution of the at-rest earth pressure over the backfill height; therefore, the point of application of the at-rest lateral earth force is usually assumed to be located at one-third of the backfill height (i.e., y i = H i /3) above the wall foundation. The results in Figure 6(c) indicate that the resultant earth force is located at approximately 0.4 H i for different backfill heights. This is a clear indication that the distribution of the at-rest lateral earth pressure deviates from the theoretically assumed triangular shape.
In this context, Terzaghi [1] reported that the distribution of the at-rest lateral earth pressure is closer to a parabolic shape, with a zero value at the backfill surface. The distribution of the at-rest earth pressure is studied here using the numerical model developed and verified in the current study. Figure 7 presents the theoretical values of the at-rest lateral earth force, P o , calculated with

P o = (1/2) K o γ H i 2 . (7)

In (7), γ = 15.7 kN/m 3 is the unit weight of the backfill soil, H i is the backfill height (Figure 5), and K o is the at-rest earth pressure coefficient calculated according to (3). The measured forces agree closely with the values calculated for the overconsolidated backfill. This agreement is clearer for backfill heights larger than 0.4 m, which is attributed to the greater densification of the sand at larger heights. The results reported in Figure 7 clearly show that the old Jaky formula (i.e., (1), [2]) largely underestimates the at-rest earth pressure coefficient for overconsolidated sand (i.e., OCR > 1). However, the equation suggested by Mayne and Kulhawy [6] (i.e., (3)) can be used to predict the values of the at-rest lateral earth pressure coefficient, provided that the overconsolidation ratio is determined accurately. Figure 8 presents the lateral earth force measured at the back of the facing panel normalized by the calculated lateral earth force. Equations (3) and (7) are used to calculate the lateral earth forces at different backfill heights and different overconsolidation ratios. The figure indicates that the traditional Jaky formula underestimates the lateral earth force by more than 60% of the measured value. As the sand overconsolidation ratio increases, the normalized earth force ratio, R Hi /P o , decreases. At an overconsolidation ratio OCR = 4, the normalized earth force ratio approaches unity, which indicates close agreement between the measured and calculated earth forces.
In conclusion, the overconsolidation ratio of sandy soil is an important parameter in determining the static lateral earth force developed against nonyielding walls.
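The normalization behind Figure 8 can be reproduced in outline. This is a sketch assuming the force form of (7), P o = 0.5 K o γ H 2 , with γ = 15.7 kN/m 3 from the tests; the friction angle and the "measured" force below are illustrative assumptions, not reported data:

```python
import math

GAMMA = 15.7  # backfill unit weight, kN/m^3 (Table 1)

def k0(phi_deg, ocr=1.0):
    """Mayne & Kulhawy coefficient, Eq. (3); OCR = 1 recovers Jaky's Eq. (1)."""
    s = math.sin(math.radians(phi_deg))
    return (1.0 - s) * ocr ** s

def p0(phi_deg, h, ocr=1.0):
    """Theoretical at-rest resultant force per metre of wall, Eq. (7)."""
    return 0.5 * k0(phi_deg, ocr) * GAMMA * h ** 2

# Hypothetical measured force for a 1 m backfill; the ratio R_H/P_o
# drops toward unity as the assumed OCR is raised, as in Figure 8.
r_h_measured = 7.4  # kN/m, illustrative only
for ocr in (1.0, 2.0, 4.0):
    print(ocr, round(r_h_measured / p0(35.0, 1.0, ocr), 2))
```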
The construction of the model wall was finalized with the compaction of the last soil lift using the same vibration procedure used for all soil lifts. The results presented in this paper were measured after the model was vibrated for the compaction of the last soil lift; this is considered the first time the model was vibrated in full (i.e., the end-of-construction vibration). The model wall was then vibrated two additional times in order to report the effect of further vibration on the wall response. Figure 9 presents the measured vertical and horizontal earth forces after each vibration of the model wall. It is clear that further vibration of the model wall has an insignificant effect on both the lateral earth force and its point of application. This may be due to the high overconsolidation ratio that the sand backfill had reached under the repeated vibration during the construction stages (i.e., OCR = 4). This high OCR is an indication of the high density of the sand; therefore, further compaction beyond this density produced little additional lateral earth force. Figure 9 also indicates that the vertical force at the bottom of the wall was slightly reduced with further vibration. This is attributed to a slight uplift force developed between the sand and the facing panel, which was measured experimentally by a load cell attached at the top of the facing panel (Figure 2(a)).

Comparison between Predictions and Measured Responses
Calibration of the numerical model focused on achieving good agreement between the calculated and measured horizontal wall forces at the top and bottom, the vertical force, and the location of the lateral earth force resultant at different construction stages. It should be noted that the soil backfill in the experimental model was constructed in 8 layers, and this was replicated in the numerical model. During the construction of the numerical model, two alternative options could be used to compact each sand layer. The first option was to vibrate each layer using the prespecified horizontal motion used in the experimental model. This method was found to be time consuming: the complete construction of the model took about 24 hours to execute on a personal computer. Alternatively, after the placement of each sand layer, a horizontal stress condition equivalent to K o was applied to that layer, and the model was brought to equilibrium under this stress condition before the next sand layer was placed. This method was used successfully by Seed and Duncan [25] in modeling the static compaction of a 2 m-high nonyielding wall. Here K o is the at-rest earth pressure coefficient calculated using (3), with the soil properties reported in Table 1. Figure 10 provides a summary of the top, bottom, and total horizontal wall forces versus backfill height for both the physical and numerical experiments conducted in this study. Figure 11(a) shows both the measured and numerical values of the vertical load at the footing of the wall, while Figure 11(b) shows the measured and predicted resultant elevation above the wall footing normalized by the backfill height. Also shown in Figure 11 are the weight of the wall facing (W f ) and the theoretical resultant elevation (y i ) for comparison.
It should be noted that each point of the experimental results presented in Figures 10 and 11 represents the measured response at the end of construction of each sand layer (i.e., sand placement and compaction), whereas the numerical results show both stages for each soil lift (see Figure 10(a)). The results presented in Figures 10 and 11 indicate good qualitative and quantitative agreement between the FLAC-calculated total wall forces and the experimental results. A slight overprediction of the top horizontal load and a slight underprediction of the bottom horizontal load can be noticed for backfill heights close to 1 m. However, the total horizontal force at the wall facing is well predicted at different backfill heights, as shown in Figure 10(c). Figure 11(a) shows that both the predicted and measured vertical loads at the footing of the wall are close to the self-weight of the wall facing (W f ). Also, the measured and predicted values of the elevation of the horizontal force resultant are in good agreement, indicating a resultant elevation larger than the theoretically assumed value (i.e., H/3). The values of the horizontal forces shown in Figure 10 were recorded numerically at the wall top and bottom to simulate the experimental setup and results. In addition, the at-rest lateral earth pressure was recorded numerically at different locations along the back of the wall and at different backfill heights. The recorded values of lateral earth pressure were used to back-calculate the horizontal earth force (P Ei ) and its vertical location above the bottom of the wall (y Ei ); Figure 5(b) shows the definition of both P Ei and y Ei . The back-calculated earth pressure resultant and its location above the footing are shown in Figure 12, together with the experimentally measured values. Also shown in Figure 12 are the values of the lateral earth force and its location above the wall toe predicted using (3) and (7).
It should be noted that the numerical earth pressure value for each backfill height is calculated as the average of the earth pressure values during the placement of the soil layer and during the compaction stage. Figure 12(a) shows very good agreement between the measured, numerical, and theoretically predicted values of the lateral earth force at the back of the wall. A slight underprediction of the earth force can be noticed at a backfill height equal to 0.6 m; the reason for this underprediction is not clear to the author. Both the numerical and experimental values of the resultant elevation are in good agreement for backfill heights larger than 0.5 m (Figure 12(b)). In addition, both the numerical and experimental values indicate that the earth pressure resultant elevation is larger than the 0.33 H assumed by theoretical methods. It should be noted that the numerical model slightly underpredicted the resultant elevation compared to the experimental model for backfill heights smaller than 0.5 m. This may be due to the perfect bond assumed between the backfill soil and the foundation base.
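The layer-by-layer K o initialization used as the faster construction option can be sketched outside FLAC as a simple overburden loop. The geometry (eight 0.125 m lifts) and unit weight come from the paper; the K o value is an assumed example for OCR = 4, and the code is an illustrative outline, not the FLAC input:

```python
GAMMA = 15.7   # backfill unit weight, kN/m^3 (Table 1)
LIFT = 0.125   # lift thickness, m (8 lifts -> 1 m wall)
K0 = 0.94      # assumed at-rest coefficient for OCR = 4 via Eq. (3)

def sigma_h_profile(n_lifts, lift=LIFT, gamma=GAMMA, k0=K0):
    """Horizontal stress applied at each lift midpoint after n_lifts are placed.

    Each new lift is equilibrated under sigma_h = K0 * sigma_v before the next
    lift is added, mimicking the staged-construction procedure in the text.
    Returns a list of (depth_below_surface_m, sigma_h_kPa) pairs.
    """
    profile = []
    for i in range(n_lifts):
        z = (i + 0.5) * lift   # midpoint depth of lift i below the final surface
        sigma_v = gamma * z    # vertical stress from overburden
        profile.append((z, round(k0 * sigma_v, 3)))
    return profile

# Full 1 m wall: sigma_h grows linearly with depth for a constant K0
for depth, sh in sigma_h_profile(8):
    print(depth, sh)
```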

Earth Pressure Distribution.
The earth pressure distributions on the wall at different backfill heights are shown in Figure 13. Also shown in this figure are the theoretical at-rest earth pressure distributions calculated using Jaky's formula (i.e., OCR = 1) and the Mayne and Kulhawy equation with OCR = 4. In addition, the passive earth pressure distribution is plotted in Figures 13(c)-13(f). It can be seen that the earth pressure distribution is triangular in shape for smaller backfill heights (i.e., H i < 0.5 m), Figures 13(a) and 13(b). As the backfill height increases above 0.5 m, the distribution is no longer hydrostatic. The results presented in Figure 13 show that an extra horizontal earth pressure, larger than that theoretically predicted by Jaky's formula, is induced by compaction. It is interesting to note that the lateral earth pressure distribution predicted near the top of the backfill was close to the passive earth pressure estimated with Rankine theory, especially for larger backfill heights. As the backfill height increases beyond 0.5 m, the earth pressure increases significantly at the top of the backfill compared to the bottom, due to the compaction effort. These results are in good agreement with the experimental results of Chen and Fang [33], which showed higher earth pressure at the top of a vibratory-compacted model wall than at the bottom. It can be concluded that the distribution of the earth pressure exerted by overconsolidated sand on nonyielding walls is neither hydrostatic nor consistent with the traditional Jaky formula.
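The three theoretical distributions compared in Figure 13 differ only in their earth pressure coefficient. A sketch, assuming an illustrative φ = 35° (not the reported soil) and Rankine's passive coefficient K p = tan 2 (45° + φ/2):

```python
import math

GAMMA = 15.7  # backfill unit weight, kN/m^3 (Table 1)

def coefficients(phi_deg, ocr):
    """Return (K_o Jaky, K_o Mayne-Kulhawy, K_p Rankine) for the given soil."""
    s = math.sin(math.radians(phi_deg))
    k_jaky = 1.0 - s                                          # Eq. (1), OCR = 1
    k_mk = k_jaky * ocr ** s                                  # Eq. (3)
    k_p = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2  # Rankine passive
    return k_jaky, k_mk, k_p

def sigma_h(z, coeff, gamma=GAMMA):
    """Hydrostatic-type lateral pressure (kPa) at depth z (m)."""
    return coeff * gamma * z

k_jaky, k_mk, k_p = coefficients(35.0, 4.0)
for z in (0.25, 0.5, 1.0):
    print(z, [round(sigma_h(z, k), 2) for k in (k_jaky, k_mk, k_p)])
```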

Conclusion and Recommendations
The current study presents an experimental and numerical investigation of the at-rest lateral earth pressure exerted by overconsolidated sandy soil on nonyielding walls. For this purpose, scaled model walls were constructed and specially instrumented to measure the lateral earth force. The sandy soil was compacted by vibration in order to increase its overconsolidation ratio. In addition, a numerical model was developed to simulate the nonyielding wall and validated using the measured wall responses. Based on the results presented in this study, the following points can be summarized.
(1) For nonyielding wall systems with a nearly smooth back, the vertical load transferred to the footing of the wall is approximately equal to the facing self-weight. This value is expected to be larger for walls with a rough back.
(2) The overconsolidation ratio of sandy soil increases with repeated vibratory compaction, and as a result the horizontal effective stress increases significantly.
(3) Jaky's formula significantly underestimates the at-rest lateral earth pressure coefficient for overconsolidated sand.
(4) The overconsolidation ratio of sandy soil is an important factor affecting the at-rest lateral earth force. Including a suitable overconsolidation ratio in the modified Jaky formula produces realistic at-rest earth pressure coefficients.
(5) The resultant of the at-rest lateral earth pressure was measured to be located at approximately 0.4 H (H is the backfill height) above the footing of the wall, which is higher than the 0.33 H assumed by classical earth pressure theory.
(6) The location of the earth pressure resultant measured in the current study indicates that the hydrostatic distribution for the at-rest condition assumed by the classical earth pressure theories is not valid for overconsolidated sand.
(7) The numerical model developed in this study predicts wall responses that agree well with the measured responses.
(8) The earth pressure distribution predicted numerically shows that the increase of the earth pressure due to vibration is more significant at the wall top than at the wall bottom.

Introduction
Most utility services, including electricity, water, gas, and telecommunications, are distributed using buried pipelines or conduits, or via directly buried cables, and the majority of this buried utility infrastructure lies beneath roads. Trenching is usually required whenever these assets need maintenance, repair, replacement, or extension, and this often causes disturbance (and sometimes damage) to other utility services, delays to traffic, and/or damage to the environment. Inaccurate location of buried pipes and cables results in far more excavations than would otherwise be necessary, thereby creating a nuisance and increasing the direct costs of maintenance to the service providers, while greatly increasing the costs to others, the most important being the enormous direct cost of traffic delays to business and the direct and indirect costs to private motorists. These "social costs" of congestion in the UK alone are estimated to be as high as £5.5 billion per annum [1], 5% (∼£275 million) of which is attributed to street works. There are also very considerable environmental "costs" due to traffic congestion, a significant proportion of the damage to the planet caused by motor transport deriving from vehicles that are delayed. Nevertheless, utility service providers, who are under enormous pressure from the regulators to improve performance in all sorts of ways and to minimize costs to customers, retain the open-cut approach and accept the inconvenience of "dry holes" (excavations that miss the target service) as a marginal cost addition. It is important that this attitude changes, but it will only do so when utility location technologies prove to be reliable, that is, produce accurate and comprehensive plan representations of the buried utility infrastructure, ideally with good depth approximations as well.

The UK Heritage.
Most of the UK's essential public service infrastructure was installed over the last two hundred years, to various levels of constructional quality and with different geographical referencing, depending on age and buried asset type. This picture is replicated worldwide, although the assets are often not as old. Nevertheless, the scope and extent of underground utility and local authority assets worldwide are massive and represent an enormous capital investment. For example, the lengths of the different UK assets were reported by Burtwell et al. [2] to be as follows:
(i) 275,000 km of gas mains;
(ii) 353,000 km of sewers;
(iii) 396,000 km of water mains;
(iv) 482,000 km of electricity cable;
(v) an estimated 2,000,000 km of telecommunication cables;
(vi) an estimated 500,000 km of highway drains and surface water sewers.
In addition, there are numerous other services, some of which are largely forgotten in this debate and others of which are country specific. The UK also has the following:
(i) traffic management cabling (lights, signs, etc.);
(ii) utility service connections to property;
(iii) services owned by Network Rail (the national railway infrastructure provider), including signalling, drainage, power and telecommunications, electrification, and plant;
(iv) nationally important oil pipelines.
Moreover, much of the UK's urban fabric is very old and dense; the streets are narrow, and the utility service infrastructure is largely buried beneath them. Records dating back 200 years are incomplete and of variable quality. It is therefore not surprising that the detection of these buried assets is a huge challenge.

Pressures on Utilities and Society.
Utility service providers are faced with the continuing need for high levels of access to an increasingly congested underground environment, with little real knowledge of it, and the associated costs of this activity are inevitably large. The UK Government's stated objective of generally available broadband access by 2005 added significantly to the amount of work in roads and footpaths over the last decade and resulted in the installation, at relatively shallow depths, of services that are very expensive indeed to repair; access to older, deeper utilities must now negotiate another hazard, with enormous penalties if third-party damage occurs. Moreover, the next thirty years will see gas main replacement programme activities in the UK at higher levels than ever before. This situation, though perhaps differing in detail, is being replicated worldwide. Growth in the economy, the introduction of competition into the utility services industry, and increasing customer demand for essential services have brought with them a greater number of excavations in the streets to supply these services. The increase in the number of utilities licensed to lay mains and cables beneath our streets brings with it an increased potential for conflict between the utility service providers, who have statutory rights to use the streets for the provision of essential services, the highway authorities and others who maintain the streets, and those who use the streets for transport purposes (and who are also the recipients of those services). Thus, the situation is complicated further.
As a society, the impact of utility work in roads and footpaths on people and the environment continues to grow, with increasing recognition of the need to mitigate its effects. This is evidenced in the UK by the Landfill Tax (the cost of disposing of waste from excavations continues to rise), the Aggregates Levy (adding to the cost of new aggregates for backfilling trenches), and the introduction of so-called "congestion tsars" in cities to ease traffic congestion, much of it due to roadworks. More specifically, taxes have been introduced on utility companies who occupy road space, to incentivize rapid utility works and ease traffic congestion. For example, lane rental charges and permit schemes were introduced in 2010 and have been trialled in London Boroughs and other Councils in the UK. These will also increase cost pressures if, as planned, they are implemented more widely across the UK. Initial results of the London permit scheme have shown greater collaboration between different utility providers, resulting in more "days saved" in carrying out the work [3], so the incentives to work more efficiently are having a positive effect. There is therefore an equally strong incentive for those providing the surveying tools needed to make street works more efficient.

The Consequences of Utility Network Maintenance.
The direct cost of trenching and reinstatement work in UK highways for utilities is in excess of £1 billion per year [4], part of which is attributable to dry holes and damage to third-party assets. In 1987 it was estimated that, each year, there were some 75,000 incidents of third-party damage to utility equipment, with an associated cost of £25 million [5]. By 2000, the UK water industry alone incurred £15 million in costs of repairing third-party damage [6]. As discussed earlier, growth in the use of fibre optic cable has posed a further risk: one particular incident of damage to a single cable cost £0.5 million to repair [7].
Large though they are, direct costs are significantly less than indirect costs. Those affected by the impact of utility road works include: (i) highway users, through the cost of congestion, delays, and accidents; (ii) business, through reduced output and turnover due to the delays caused by congestion and disruption of activities in the vicinity of the works; (iii) local communities, through reduced or lost access to amenities and premises, and overloaded diversionary routes; (iv) the environment, through damage to trees, increased pollution (noise, fumes, and visual), extended use of natural resources, and generation of waste; (v) third parties, through damage to property; (vi) highway authorities, who have to repair damaged pavements and deal with the consequences of the compromised life of road structures; (vii) utility companies, through adverse publicity, abortive costs, and the cost of repairing damage; and (viii) operatives working in the road, who are exposed to health and safety risks.
As mentioned previously, in total, these indirect costs in the UK are estimated to amount to as much as £5.5 billion [1]. Total direct and indirect costs to utilities, industry, society, and government will continue to rise unless better information, in the form of utility records, and more effective location technologies, capable both of proving the recorded information and of discovering those utility services that are not recorded, can be made available to those doing the work. Moreover, the rights to open excavation could be questioned if existing legislative measures for controlling road congestion and disruption to the public do not improve the current situation. This paper introduces two projects that aim to develop technologies to locate buried assets. If these assets were mapped with 100% accuracy, the number of dry holes could be reduced considerably (if not eliminated completely) and utility street works could be carried out far more efficiently. This has the potential to reduce direct, social, and environmental costs, and to improve the quality of life in our cities.

The Mapping the Underworld Project (MTU)
2.1. Initiation and MTU Phase 1. The MTU project (http://www.mappingtheunderworld.ac.uk/) is a 25-year initiative to improve the way utility companies operate in the street. Although conceived as far back as 1996, MTU was formally initiated at an "IDEAS factory", an innovative approach used by the UK Engineering and Physical Sciences Research Council (EPSRC) to facilitate multidisciplinary working; in this case, to identify complementary projects associated with the location of buried assets. MTU Phase 1, which was coordinated by the first three authors, was active from 2004 to 2008 and consisted of a £1.2 million programme bringing together the universities of Bath, Birmingham, Leeds, Nottingham, Oxford, Sheffield, and Southampton, together with a number of industrial stakeholders. One of the four core projects was a feasibility study to identify suitable sensing technologies which, when combined, could locate all buried utilities in all ground conditions without the need for proving excavations [8]. In addition, MTU Phase 1 included a project that sought to develop a surface-mounted mapping system, using geoscience techniques, to provide accurate 3D positional coordinates of the buried infrastructure, even when working in "urban canyons", and to represent them in an appropriate 3D electronic mapping system. This work was essential because there is often no clear view of the large section of the sky necessary to obtain an accurate Global Positioning System (GPS) position. This research succeeded in developing a reliable positioning system, integrating GPS and Inertial Navigation Systems (INS), with a precision of approximately one centimetre [9-12].
In the UK, as in many western countries, the utility industry is highly diverse, with many private companies operating in any given area. Each of these companies holds its own utility records in a number of formats, the records sometimes being incomplete and/or inaccurate, and sometimes not even in digital form. In order to obtain holistic information on all the buried assets in the ground, the individual records need to be combined. Therefore, MTU Phase 1 also investigated the construction of a unified database of all the location data from the various utility companies, hence providing a network for data sharing (e.g., see [13,14]).
The final project of MTU Phase 1 investigated enhanced methods for the detection of buried assets by developing new means of improving the visibility of underground pipes when surveyed from the ground surface using electromagnetic techniques. A series of "resonant labels" (or RFID tags) was developed. These are relatively simple metallic structures that can be encapsulated within a new pipe prior to installation or attached to an existing pipe that is being repaired [15,16]. They provide an effective means of reflecting electromagnetic signals at predetermined frequencies, in much the same way as a bicycle reflector provides enhanced visibility when illuminated by the lights of a car. The RFID tags are regarded as a cost-effective solution and are of particular interest for plastic pipes, which are suggested to be the most challenging type of pipe for Ground Penetrating Radar (GPR) to detect.
All of the above aspects of MTU Phase 1 received follow-on funding. The VISTA project (http://www.vistadtiproject.org) combined the research into the data integration and GPS positioning aspects of MTU, with a focus on visualizing integrated information on buried assets to reduce street works. The project culminated in field studies aiming to automatically combine data from individual utility records. Large-scale tests are currently underway within the London area, that is, the area bounded by the M25 orbital motorway. The RFID tag technology also received follow-on funding from EPSRC, under a scheme that aims to bring promising new technological advances arising from the research it funds closer to market.

The MTU Multisensor Location Project.
The MTU Phase 1 utility location feasibility study demonstrated that all four of the distinctly different geophysical technologies investigated, previously used in isolation to locate underground infrastructure, have the potential to be combined in a multisensor device and hence to fulfill the vision of MTU, that is, to achieve 100% detection without the need for proving excavations. This was taken forward by a grant worth ∼£3.5 million from EPSRC (2008-2012) to research in detail a multisensor device that can detect all buried pipes and cables (termed hereafter buried assets), building on the promising results of the feasibility study and using every possible advantage to see through the ground and focus on the targets. This work is being undertaken by the universities of Birmingham, Bath, Southampton, and Leeds. The four geophysical technologies being researched specifically with regard to their combination in a single, integrated device are GPR, vibro-acoustics, passive magnetic fields, and low-frequency electromagnetic fields.
GPR is one of the most common techniques currently utilised to locate buried utilities. Its principles are well established (see [17]), and its limitations are equally well known: it struggles for depth of penetration in saturated clay soils; it requires a good contrast between the target and the material in which it is buried (a void in a gas pipeline can often be more distinct than the pipe itself, for example); and it can struggle to see past overlying utilities when seeking deeper targets, or to distinguish between adjacent utilities where the buried assets are congested (cluttered). Two approaches are being adopted in the MTU project: the first uses the traditional technique of looking down through the ground, although utilising more advanced ideas such as swept-frequency Orthogonal Frequency-Division Multiplexing (OFDM) GPR [18], and the second uses a dual GPR system. The dual system has a transmitter and receiver installed on a robotic device within an existing pipe in the ground, such as a sewer, to "look" outwards, combined with a transmitter and receiver at the ground surface so that one-way travel of the signals (pipe to surface, and surface to pipe) can be accommodated. This removes the need for the signals to travel into the ground and back out again as reflections, thereby increasing the effective depth of the survey.
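As a rough illustration of why one-way travel matters, the sketch below (our own simplification, not an MTU design; the attenuation and dynamic-range figures are assumed purely for illustration) models signal loss as a fixed attenuation per metre and compares the depths at which a conventional two-way (reflection) survey and a one-way (pipe-to-surface) survey exhaust the system's loss budget:

```python
# Illustrative sketch: one-way vs two-way GPR range under a simple
# exponential (dB-per-metre) loss model. All numbers are assumed.
import math

C = 3e8  # speed of light in vacuum, m/s

def gpr_velocity(rel_permittivity):
    """Wave speed in the ground for a given relative permittivity."""
    return C / math.sqrt(rel_permittivity)

def max_depth(attenuation_db_per_m, dynamic_range_db, one_way=False):
    """Depth at which cumulative loss reaches the system dynamic range.

    Surface-only GPR pays the attenuation twice (down and back up);
    the dual in-pipe/surface arrangement pays it only once.
    """
    trips = 1 if one_way else 2
    return dynamic_range_db / (trips * attenuation_db_per_m)

# Assumed example: 20 dB/m attenuation, 100 dB system dynamic range.
two_way = max_depth(20, 100)                # 2.5 m
one_way = max_depth(20, 100, one_way=True)  # 5.0 m
```

Halving the path length doubles the depth at which the loss budget is exhausted, which is the essence of the dual-GPR arrangement described above.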
The vibro-acoustics technique offers a number of advantages for buried utility detection, such as the ability to find plastic pipes, and has the particular advantage of working best in saturated media, where GPR struggles due to high attenuation of its signals. Two approaches are being investigated: direct excitation of the buried asset via a manhole or valve, which can be used to locate the line of the pipe into the far distance as the waves are transmitted along the pipe and radiate up to the ground surface, where they are detected using an array of geophones or a scanning laser; and excitation of the ground with the aim of detecting the waves reflected from buried pipes using geophones or a laser, which has the potential to locate multiple buried objects [19,20].
The low-frequency electromagnetic field technique has been developed from first principles in this research. It has the potential to complement GPR by locating utilities that GPR has difficulty detecting. Examples include small-diameter plastic pipes and fibre optic cables, pipes that lie in the blind zone of GPR, and large, deeply buried assets, such as deep sewers, that lie beyond the range of traditional methods [21]. The passive magnetic fields (PMFs) technique utilises the flow of current within a buried AC power cable, which creates an associated oscillating magnetic field that the PMF sensor can detect [22]. Current flow within the power cable can also induce currents within neighbouring utility pipelines or ducts made from conducting materials, such as cast iron, and the PMF technique has the potential to detect these utilities as well.
Two prototype carts have been developed to date. Figure 1 shows one of the prototype carts being pushed along a test site with both a commercial GPR and low-frequency electromagnetic sensors attached. Furthermore, a number of positioning sensors are included on the cart, not only to give the absolute position of the cart using GPS, but also the relative positions of the individual sensors; this is absolutely vital for successful data integration. The other sensors are currently being tested separately, with the ultimate goal of combining them all on a single cart.
A further important aspect of the project is to investigate techniques for fusion of the data from these various sensors with existing utility records to develop a probability "map" of where the buried services are likely to be. This is essential so that the data are in a form that can be easily understood by the user [23]. Clearly the degree to which this aspect of the research can be advanced is dependent on the outcomes of the work on the sensors. Tests have been conducted over the last 12 months combining the different sensors on two test sites. Initial results of these tests are presented by Royal et al. [24], who also give further details on the latest advances regarding the individual sensing technologies.
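The probability-map idea can be sketched as a per-cell Bayesian update. The following is a minimal illustration of the principle only (not the MTU fusion algorithm), assuming conditionally independent sensor readings and hypothetical likelihood ratios:

```python
# Illustrative sketch: fusing independent sensor evidence with a prior
# from utility records into a posterior probability of "pipe present"
# for one cell of the probability map. Numbers below are hypothetical.
def fuse(prior, likelihood_ratios):
    """Posterior P(pipe present) for one grid cell.

    prior: P(pipe) taken from utility records.
    likelihood_ratios: P(reading | pipe) / P(reading | no pipe) for each
    sensor, assumed conditionally independent given the cell state.
    """
    odds = prior / (1.0 - prior)  # convert probability to odds
    for lr in likelihood_ratios:
        odds *= lr                # multiply in each sensor's evidence
    return odds / (1.0 + odds)    # convert back to probability

# Records give a 30% prior; GPR and vibro-acoustic returns each
# favour "pipe present" at 4:1.
posterior = fuse(0.30, [4.0, 4.0])
```

Combining a 30% prior with two sensors each favouring a pipe at 4:1 lifts the posterior to roughly 87%, showing how modest individual evidence can compound into a confident joint map.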
In addition, it is well understood that the ground conditions have an important influence on the ability of different sensor techniques to detect buried services. Another aspect of the research is consequently to produce a knowledge-based system (KBS) to aid in the application of the multisensor device and to improve survey operational protocols. This KBS will utilise information from a number of sources, such as the geological and geotechnical databases held by the British Geological Survey in the UK. It will also include techniques for converting geotechnical information into geophysical parameters more appropriate to the sensing techniques used to detect the buried services, much of the pioneering research for which has been conducted by the MTU team (see [25-28]). This KBS will help utility surveys in a number of ways: for example, it can provide an indication of the likely ground conditions to be expected on a site prior to the survey, helping the user decide which techniques are likely to work best, and it will enable the devices to be fine-tuned prior to a survey to maximise their ability to detect the buried services.
As part of the research programme a UK test facility for trialling location technologies and for training operators is also being investigated. A purpose-built facility has been planned and a UK contractor is hoping to build it this year. The facility is based around a number of different bays containing different ground conditions and pipe arrangements. Some bays are being kept simple for testing new technologies and others contain complex arrangements of buried services for more advanced testing and training of personnel. This will provide an ideal testing ground for the multisensor device when it is in its final prototype form and will help to establish site testing protocols such as sequencing of survey technologies.

Gravity Gradient Sensor (GG-TOP Project)
As indicated above, no single technology will be able to locate all buried assets in all ground conditions; the only way to achieve this goal is by combining a number of different sensing technologies. Although the four technologies being developed under MTU, and being combined on a multisensor cart, are envisaged to have the potential to locate all conventionally shallow-buried utilities, it is important to identify additional technologies that could increase confidence and extend the range of the MTU sensors. GG-TOP (2011-2015) is one such complementary research programme, which aims to explore technologies that seek to deliver a step change in gravity research and gravity gradient mapping. The GG-TOP novel generic technology base will rely on atom interferometry, which, as a disruptive quantum technology, has the potential to exceed conventional gravity gradient sensors by several orders of magnitude in sensitivity and to allow new flexible sensor schemes to suppress terrain, geological, and other noise sources, a research need expressed by Difrancesco et al. [29]. These improvements would open up new applications in underground mapping with a potentially enormous impact on industry (construction, buried and surface infrastructure maintenance) and society (by reducing traffic congestion and bringing more sustainable practices to our cities), as well as potentially helping to decipher history via deployment in archaeological settings and advancing fundamental science (testing our model of nature). Current gravimetric technology has been widely used in the fields of exploration, underwater navigation, and site investigation, but its potential is limited by the unacceptably large measurement time required to deliver anything approaching an acceptable degree of precision.
The limitation of current gravity technology is a diameter-to-depth ratio of approximately unity if time scales acceptable to current industry applications are adopted. The potential of the gravity gradient technology pursued in GG-TOP is to locate small underground features, detected as gravity anomalies, at both shallow and mid-range depths with a diameter-to-depth ratio of 1:100 (for example, a 100 mm object at a depth of up to 10 m) on the same time scale as current technology detects cavities whose dimensions are of the same order of magnitude as their depth (i.e., a diameter-to-depth ratio of 1). This would represent a major step change, which, in principle, presents no obstacle [30], and would serve to complement other surveying technologies. The sensitivity of the sensors is illustrated by the sensitivity cone in Figure 2: the deeper the target, the larger it has to be to be detected, while the corollary is that the sensor can detect small objects close to the surface. The importance of this observation lies in the fundamental approach: the MTU sensors rely on a progressive increase in "power", or "signal magnitude", to see deeper, and in increasing the "power" they fail to detect smaller anomalies that lie at small distances below the surface, whereas the gravity gradient sensor has no such "power" limitations; it detects the gravity field that reflects the whole of the buried subsurface. For this reason it is able, for example, to detect a gas-filled pipeline that is masked by overlying utility services simply by detecting, very sensitively, the differences in gravitational force at different distances above a reference frame.
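The inverse-square fall-off behind the sensitivity cone can be made concrete with a point-mass estimate. The sketch below is our own order-of-magnitude illustration, not a GG-TOP calculation; the soil density contrast is an assumed value:

```python
# Illustrative sketch: peak surface gravity anomaly of a buried spherical
# void, treated as a point mass. Shows why deeper targets must be larger.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly(diameter_m, depth_m, density_contrast=-1800.0):
    """Peak surface gravity anomaly (m/s^2) directly above the sphere.

    density_contrast: air-filled void in soil of ~1800 kg/m^3
    (assumed value for illustration).
    """
    volume = math.pi / 6.0 * diameter_m ** 3  # sphere volume
    return G * density_contrast * volume / depth_m ** 2

# The same 100 mm void at 1 m vs 10 m depth:
shallow = abs(sphere_anomaly(0.1, 1.0))
deep = abs(sphere_anomaly(0.1, 10.0))
ratio = shallow / deep  # signal weakens a hundredfold
```

Moving the same 100 mm void from 1 m to 10 m depth weakens its peak anomaly by a factor of one hundred, which is the cone-shaped trade-off between target size and depth described above.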
The GG-TOP project is a four-year, £2.4 million initiative funded by EPSRC, which commenced in July 2011. It comprises five work packages (WPs), with the application work package taking a central role (Figure 3). WP1 focuses on the development of the sensor technology, based on the well-established principle of atom interferometry [31], and includes both laboratory and field trials. The aim is to implement and evaluate novel atom interferometric gravity gradiometer schemes to bring about a step change in sensitivity and usability. In addition to the laboratory evaluation, a robust technology prototype is to be developed for initial "field/practicality" tests of the new ideas. In parallel, WP2 concentrates on understanding the sensor, in particular the limits to multiwave detection and the development of a sensor-noise model, building on the work of McGuirk et al. [32]. It will be vital to differentiate the signals received due to underground voids, pipes, cables, and other buried objects from those caused by other random noise picked up by the sensor. The aim is to provide a comprehensive MATLAB tool capturing the physics of the sensor to map all conceivable signal and
noise inputs to the sensor output. The key input parameters to these two work packages are the user requirements (e.g., the size of device acceptable for street surveys, the limits on acceptable measurement time to complement other surveying operations, and the resolution needed by those who are required to work on utility services in the street), coupled with data on the target applications from WP5 (i.e., the material properties of the various target assets and their contents). WP3 focuses on understanding the source, incorporating linear target simulations (pipes, cables, trenches, and ditches) and discrete target simulations (archaeological features such as graves, post holes, and buried artefacts). This work package targets the optimisation of information gained from gravity gradient measurements for the purpose of discriminating multiple overlaid infrastructure assets buried at different depths. In particular, the need for closely spaced arrays of sensors will be examined as a function of ground composition and the required spatial positioning accuracy. WP4 brings the outcomes of WP1, WP2, and WP3 together and concentrates on proof-of-concept trials using both an archaeological site of international importance (Stonehenge in the UK) and the MTU test facilities. Both sites will provide linear and discrete features, with the MTU test facilities providing a range of difficulties for buried asset location, from the simplest arrangement containing an isolated pipe to the most complicated with a series of stacked and crossing utilities. In addition, the impact of different soils can be tested in the MTU test facility. The aim of this work package is to fuse data from different sensors (including the MTU sensors) and provide an application-specific visual interface between the fundamental technology and the end user.

Figure 4: Schematic of the potential layout of the gravity gradient sensor (drop tubes, lasers and electronics, atom trapping and cooling chamber, and fibre couplers for the interferometry lasers). It is envisaged that the sensor will ultimately occupy a volume of 1 m³.
WP5 focuses on the evaluation of the new gravity gradient sensor technology with respect to different applications related to commercial potential (urban mapping, underwater navigation, subsea mapping, and archaeology) and fundamental physics. Importantly, the specifications for a new gravity sensor prototype for the different applications will be derived in this WP. If successful, this project has the potential to reach a large number of practitioners including archaeologists, urban planners, civil engineers, geologists, marine scientists and others. Future possibilities, though not the focus of the current project, include mineral exploitation, deep geological mapping and applications in space, that is, long-range targets.
The novelty of the GG-TOP project is that the gravity gradient is measured at a number of different locations within the sensor, and it is the difference in gravity gradient between these positions that is significant. Figure 4 shows a schematic of the potential layout of the gravity gradient sensor. It is envisaged that the sensor will occupy a volume of approximately 1 m³, making it easily transportable for site surveys. Although likely to be rather large in the initial prototype stage to facilitate wider research explorations, it could still be added to the MTU multisensor cart.

Conclusions
There is an urgent international need for a combined multisensor device for the complete remote location of buried utility services and other buried infrastructure, as evidenced by the international literature and by the enthusiastic support for the MTU project from UK, European, and North American organisations (e.g., the ORFEUS and DETECTINO European projects and the US Transportation Research Board). In the UK alone, the financial impact of the routine use of a comprehensive surveying device is likely to be very considerable indeed, perhaps reaching £50 million per annum as the MTU project matures, and is set to rise as congestion worsens. The argument here is that if street works account for ∼£275 million of the costs arising from congestion, then reducing road occupation by 20% would yield such a saving. The impact on street works of an accurate, comprehensive location technology would be twofold: operations would be far swifter and the enormous number of "dry holes" would be avoided, so road occupancy would be reduced; and a comprehensive underground map of the buried utilities in an area would permit greater use of trenchless technologies. This alone justifies the government support, via the UK EPSRC, for the project.
It was demonstrated in MTU Phase 1 that only the combination of different geophysical sensing techniques has the potential to locate all the buried utilities in all ground conditions without the need for proving excavations. This paper has summarised the findings of MTU Phase 1 and has described two current large UK research projects aiming to develop novel sensor technologies. It has also demonstrated the need to consider user requirements and applications in any such developments. The importance and challenge of data fusion have been highlighted, as each sensor has its own positioning and data format. The exciting approach, and wide variety of potential applications, of the gravity gradient sensor make it a valuable potential complementary technology to those being researched under the MTU initiative. If successful, the gravity gradient sensor would deliver a step change in surveying for geotechnical applications, as well as extending the capabilities of utility surveys to far greater depths than currently possible. As such, it is likely to find its use widely advocated for civil engineering projects of all types and sizes.

Introduction
The health and state of the aging and overburdened civil infrastructure in the United States has been subjected to renewed scrutiny over the last few years. The American Society of Civil Engineers reports that this state threatens the economy and quality of life in every state, city and town in the nation. As one example, the United States Army Corps of Engineers noted in early 2007 that nearly 150 United States levees pose an unacceptable risk of failing during a major flood [1].
Additionally, losses associated with failures of soil systems continue to grow in the United States and elsewhere in view of increased development in hazard-prone areas. The control and mitigation of the effects of these failures requires a better understanding of the field response of soil systems. In order to overcome these problems, the performance of these systems needs to be reliably predicted, and such predictions can be used to improve design and develop efficient remediation measures. The use of advanced in situ monitoring devices of soil systems, such as the shape acceleration array (SAA) system described in this paper, and the development of effective system identification and model calibration is essential to achieve these goals.
Soil and soil-structure systems are massive semi-infinite systems that have spatially varying parameters and state variables. These systems exhibit a broad range of complex response patterns when subjected to extreme loading conditions [2-4]. Accurate prediction of site response is essential in hazard analyses, health monitoring, and design of civil infrastructure systems. These predictions require the availability of calibrated and validated computational models [5]. Soil sample experiments (e.g., triaxial tests) have been widely used to evaluate the mechanical properties and calibrate constitutive relations of geotechnical systems. Nevertheless, because of limitations in reproducing in situ stress and pore-fluid conditions, the consensus is that these experiments may not fully reflect reality. Thus, fundamental differences still separate geotechnical engineering science and practice [6]. Peck [7] states that these differences stem from the fact that science relies on laboratory soil sample tests, while practice is rooted in field performance data and associated empirical studies. Consequently, some practitioners remain skeptical about models developed by geotechnical engineering scientists, for the obvious reason that very few models have been properly calibrated with field performance.
The answer to this challenge partly resides in the development of tools for short- and long-term health monitoring of existing civil infrastructure, along with data reduction tools for system identification and inverse problems. The knowledge gained from this monitoring and analysis would aid in planning for maintenance and rehabilitation of these infrastructure systems and would improve their design, construction, operation, and longevity. Critical soil-structure elements of the civil infrastructure which are important to monitor include bridge foundations, abutments, and support systems; retained, reinforced, or stabilized rock and earthen embankments and levees; slopes and mechanically stabilized earth (MSE) walls; and tunnels and tunnel linings. This paper presents a newly developed sensor array and local system identification technique. The array is capable of measuring in situ deformations and accelerations to a depth of one hundred meters and is essentially an in-place inclinometer coupled with accelerometers. The frequency and spatial abundance of data made available by this new sensor array enables tools for the continuous health monitoring of critical infrastructure under a broad range of static and dynamic loading conditions.
The concept of the presented MEMS-based, in-place inclinometer-accelerometer instrumentation system is centered on measurements of angles relative to gravity, using triaxial MEMS (microelectromechanical systems) accelerometers, which are then used to evaluate inclinations (i.e., deformations). The same MEMS accelerometers also provide signals proportional to vibration during earthquakes or construction activities. Three accelerometers are contained in each 30 cm (1 ft) long rigid segment, measuring the x, y, and z components of tilt and vibration. The rigid segments are connected by composite joints that are designed to prevent torsion but allow flexibility in two degrees of freedom. These rigid segments and flexible joints are combined to form a sensor array. The system, called the shape acceleration array (SAA), is capable of measuring three-dimensional (3D) ground deformations at 30 cm (1 ft) intervals and 3D acceleration at 2.4 m (8 ft) intervals to a depth of 100 m (330 ft). The system accuracy of the SAA is ±1.5 mm per 30 m, an empirically derived specification from a large number of datasets. More detailed information on the design of the SAA is available in [8,9].
The following sections present (1) a brief description of the SAA technology; (2) a case history of the application of the SAA system, both vertically and horizontally, at a bridge replacement site in New York; (3) a case history of the application of the SAA system at a full-scale levee testing facility in the Netherlands; and (4) a newly developed local system identification (SI) technique to analyze the response of active soil systems using the dense measurements provided by a network of SAAs. The developed SAA and local SI technique constitute a major step toward establishing long-term monitoring and analysis tools capable of providing a realistic picture of the large deformation response and impending failure of soil and soil-structure systems.

Sensor Description
The SAA system uses temperature-calibrated MEMS accelerometers within 30 cm (1 ft) long rigid segments connected by composite joints that prevent torsion but allow flexibility in two degrees of freedom. The SAAs are factory calibrated and completely sealed, requiring no field assembly or calibration. Because each segment of the SAA contains three orthogonal sensors, arrays can be installed vertically or horizontally, as shown below in the New York State Department of Transportation (NYSDOT) bridge replacement case history. The intended array orientation does not need to be specified prior to installation; the orientation can be selected in the software. Each sensor has an output that is the sine of the angle of tilt over a range of 360 degrees. The sensor arrays are transported to the jobsite on an 86 cm (34 in) diameter reel (see Figure 1) and can be lowered into vertical, or pushed into horizontal, 25 mm (1 in) casing. The initial shape of the installation, or the absolute deviation of the installation from a virtual vertical or horizontal line, can be immediately viewed on a computer. An SAA is modeled as a virtual multisegment line in the software, with x, y, and z data representing the vertices of this polyline. In the case of near-vertical installations, the vertices correspond to the joint centers of the array in 3D. For near-horizontal installations, the vertices show vertical deformation only versus horizontal position [8,9].
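Since each sensor reports the sine of its tilt and the segment length is fixed, the polyline reconstruction can be sketched as a simple cumulative sum. The function below is our illustrative reading of that description (an assumption, not the vendor's algorithm), for one horizontal axis of a near-vertical installation:

```python
# Illustrative sketch: rebuilding one axis of the SAA's virtual polyline
# from per-segment tilt readings (each reading is sin(tilt angle)).
def polyline_offsets(sin_tilts, segment_len=0.305):
    """Cumulative horizontal offsets (m) of joint centres, base fixed at 0.

    sin_tilts: sine-of-tilt reading from each 30 cm (0.305 m) segment,
    ordered from the base of the array upwards.
    """
    offsets = [0.0]
    for s in sin_tilts:
        # Each rigid segment contributes L * sin(theta) of lateral offset.
        offsets.append(offsets[-1] + segment_len * s)
    return offsets

# Three segments tilted by the same small amount: offsets grow linearly.
offs = polyline_offsets([0.01, 0.01, 0.01])
```

Tilting three successive segments by the same small amount produces joint offsets that grow linearly along the array, mirroring how an in-place inclinometer integrates tilt into displacement.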
Wireless SAA data transmission is made possible by the use of an on-site data acquisition system, called a wireless earth station. As with traditional probe and in-place inclinometers, data from the SAA represent deviations from a starting condition or initial reading. These data are sent wirelessly, over a cellular telephone network, to an automated server, where data are made available to users through proprietary viewing software and an internet connection. Long-term automated monitoring using SAAs typically collects data once or a few times a day, but this collection frequency can be respecified remotely by the user and changed at any time, through the same wireless interface used to receive the data. The SAA system is capable of collecting data at a sampling rate of up to 128 Hz, which makes it suitable for dynamic and seismic measurement. Each array is equipped with a trigger sensor that automatically switches the SAA from the slow to the fast sampling rate in the case of a seismic event. Limiting the use of fast sampling rates to specific dynamic events significantly reduces power consumption as well as data storage and transmission requirements.
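The trigger behavior described above can be sketched as simple latching logic: a threshold on recent peak acceleration switches logging from the slow background rate to the fast dynamic rate for a hold window. This is a generic illustration; the threshold, rates, and hold time are invented and do not come from the SAA documentation.

```python
SLOW_HZ = 1.0 / 3600.0   # hourly background sampling (illustrative)
FAST_HZ = 128.0          # fast dynamic rate, per the text
TRIGGER_G = 0.005        # hypothetical trigger threshold in g

def sampling_rate(recent_peak_accel_g, fast_until, now):
    """Return (rate, fast_until), latching FAST for a hold window after a trigger.

    recent_peak_accel_g: peak acceleration seen since the last check.
    fast_until: time until which fast sampling is latched.
    now: current time in seconds.
    """
    HOLD_S = 60.0  # hypothetical hold window after a trigger
    if recent_peak_accel_g >= TRIGGER_G:
        fast_until = now + HOLD_S  # (re)start the fast-sampling window
    rate = FAST_HZ if now < fast_until else SLOW_HZ
    return rate, fast_until

# A seismic event exceeding the threshold switches the array to 128 Hz:
rate, until = sampling_rate(0.02, 0.0, 100.0)
```

Latching the fast rate only around trigger events is what yields the power and storage savings the text describes.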
The following section presents data collected during a full-scale lateral spreading experiment conducted in a laminar container at the University at Buffalo. The laminar container is 5 m (16.4 ft) long, 2.75 m (9.0 ft) wide, and 6 m (19.7 ft) high and is capable of holding 150 tons of sand; see Figure 2 [10]. The results from two SAAs installed in this experiment provide an example of the range and type of data that can be collected by this system.
After this laminar container was instrumented and filled with loose sand and water, two 100-ton hydraulic actuators were used to induce a predetermined motion with a 2 Hz frequency at the base of the box. The resultant soil liquefaction and lateral spreading were monitored using accelerometers within the soil deposit and on the ring laminates, potentiometers (displacement transducers) on the laminates, pore pressure transducers, and two SAAs within the soil deposit. Each of the SAAs was 7 m (23.0 ft) long and contained 24 3D sensing elements. The acceleration and lateral displacement data from the SAA, compared with the ring accelerometer and potentiometer data, respectively, are presented in Figure 3.
These data were collected during a sloping-ground test in which the base of the box was inclined 2°.
At the end of the input shaking event, nearly the whole soil deposit was liquefied, and the ground surface displacement at the top of the laminar container had reached 32 cm, as seen in Figure 3. Some discrepancies are observed between the SAA data and the ring accelerometer data after 6 s, which is when the soil deposit began to liquefy. As the soil liquefied, the upper part of the SAA moved downslope with respect to the bottom of the array, thus the accelerometers were tilted with respect to their initial condition. This resulted in a slight DC component bias in the SAA acceleration readings. By filtering this low-frequency component, the acceleration readings from both types of instrumentation would match even more closely. Since this was a dynamic test, the dynamic component of the displacement was removed by filtering to obtain the results presented in Figure 3. This full-scale lateral spreading experiment provides a unique example of the simultaneous acceleration and permanent lateral displacement data captured by the SAA system. For more information on this full-scale experiment, see [11].
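The removal of the slow, tilt-induced bias from the acceleration records can be illustrated with a simple first-order high-pass filter. This is a generic sketch, not the filtering actually used in the study; the sampling rate matches the SAA's 128 Hz capability, while the cutoff frequency is an invented example value.

```python
import math

def highpass(signal, fs_hz, cutoff_hz):
    """First-order high-pass filter: attenuates a slow DC-like bias
    (such as the tilt-induced component described above) while passing
    the dynamic shaking content."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = rc / (rc + dt)
    out = [0.0]
    for i in range(1, len(signal)):
        # Recursive form: y[i] = alpha * (y[i-1] + x[i] - x[i-1])
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out

# A step offset (a sudden DC bias) passes through briefly, then decays to zero:
filtered = highpass([0.0] + [0.1] * 255, fs_hz=128.0, cutoff_hz=0.5)
```

The step response shows the key property: the transient (dynamic) part of the bias is preserved at onset, while the sustained offset is driven toward zero, which is why filtering makes the SAA and ring accelerometer records match more closely.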

SAA Field Installation at NYSDOT Bridge Replacement Site
The SAA system was installed at a NYSDOT bridge replacement site over the Champlain Canal in upstate New York; see Figure 4. A brief site history and a description of the installation process at the NYSDOT site are provided below, along with a comparison between the vertical and horizontal SAA systems and traditional instrumentation, including a slope inclinometer and settlement plates. In Figure 4, SP denotes settlement plates, SAAH the horizontal SAA, SAAV the vertical SAA, and PVD prefabricated vertical drains. The instrumentation plan for this site included the use of two 32 m (104 ft) long SAAs. One SAA was oriented horizontally and the other vertically to monitor the settlement and the lateral displacement, respectively, of a thirty-six-meter-deep soft clay deposit. Based on soil strength and consolidation testing performed on undisturbed boring samples, it was decided to employ prefabricated vertical drains (PVDs) and surcharge fills to accelerate the consolidation and strength gain of the clay layer prior to driving piles for the bridge.
The vertical SAA installed at this site was 32 m (104 ft) long, in order to extend below the very soft silty clay layer. The SAA was installed in a vertical borehole located approximately 3 m (9.8 ft) from the edge of the Champlain Canal and approximately 2.5 m (8.2 ft) from a traditional inclinometer casing, in the area between the surcharge fill and the canal; see Figure 5. A 50 mm (2.0 in) diameter polyvinyl chloride (PVC) well casing, grouted into place using the same weak grout mix used for the inclinometer casing, housed the vertical SAA. To enable future retrieval of the SAA, silica sand was used to fill the annulus between the 25.4 mm (1.0 in) approximate diameter sensor array and the inner wall of the casing; the sand would later be jetted out with water to free the instrument. The fine sand backfill was placed by pouring from the top of the casing. The recommended installation method for the SAA now consists of direct insertion into a 25 mm (1 in) inner diameter casing, which is grouted into place prior to the array installation [12]. This recommended method had not yet been developed at the time of this installation. As a consequence, spurious displacements appeared in the readings, resulting from movement of the sand backfill rather than actual lateral movement of the clay deposit.
Beginning in April 2007, a 4.5 m (14.8 ft) high, geosynthetic reinforced earth wall was constructed on the east bank of the Champlain Canal to mimic the load of the proposed bridge abutment, upon which an additional 1.5 m (4.9 ft) of fill was placed. With the surcharge in place, ground displacements began to accumulate and the lateral displacement of the foundation soils could be discerned. The zone of lateral squeeze can be seen in Figure 6 with displacements approaching 20 mm (0.79 in), from 3 to 5 m (9.8 to 16.4 ft) depth after April 2007. Figure 6 shows a comparison between the displacement measurements from a traditional inclinometer and the vertical SAA system for a three-month period of monitoring following the surcharge fill placement; that is, May 2007 is used as the zero reading. The trends from both methods of instrumentation are similar. The right side of Figure 6 shows the continuous displacement profile from the SAA system software for the four-month monitoring period after surcharge fill placement. Total displacements measured by both systems were less than 18 mm (0.71 in), but the general trends are discernible.
The horizontal SAA was installed after the PVDs had been driven, just prior to the construction of the surcharge embankment, approximately 5 m (17.5 ft) east of the westmost extent of the embankment and approximately 0.3 m (1 ft) west of a row of PVDs. The array was pushed into ten sections of 25.4 mm (1 in) diameter PVC conduit, which had been glued together with PVC cement prior to the array insertion. Cable-pulling lubricant was used to assist the array insertion, and the 32 m (104 ft) length was inserted into the full length of PVC conduit with relative ease, despite having to install the array against a slight upward grade. The array-conduit assembly was placed in a small trench, approximately 0.3 m (1 ft) deep, within a previously placed gravel drainage layer. The displaced drainage material was backfilled around the conduit. The initial position of the horizontal SAA was obtained by laptop connection within minutes of the installation. The earth station for wireless data collection was installed a few days later, coinciding with the start of the embankment construction. The horizontal SAA transmitted wireless data every four hours, after an initial evaluation period during which data were collected every hour. Figure 7 shows the settlement profile from the horizontal SAA and a row of settlement plates (SP1, SP2, and SP3). This figure includes the horizontal SAA settlement data shown as a contour plot through February 2008, at which time the array was extracted prior to the pile installation at the site. The settlement plate profile is only provided through August 2007 in Figure 7, though it can be seen that the shapes and values of the profiles from both methods of instrumentation are quite similar. It can be seen from the time history plots of displacement in Figure 8 that the settlement plates (SP1, SP2, and SP3) experienced greater total settlement, approximately 280 mm (11.0 in) versus the 225 mm (8.9 in) maximum observed SAA settlement.
This difference is attributable to the fact that the settlement plates were located approximately 4 m (13.1 ft) east of the horizontal SAA, a location bearing more of the surcharge load. The x-values shown for the SAA and the SPs correspond to the position of the measurement on Figure 7, measured from the cable end of the SAA. Although the traditional site instrumentation was not ideally located for direct comparison with the vertical and horizontal SAA readings, this project demonstrates the usefulness of SAAs for construction monitoring. The information provided by these two SAA systems helped NYSDOT engineers evaluate the effectiveness of the geotechnical treatments utilized at this site, namely, surcharge loading and PVDs. Information from the horizontal installation, especially, helped engineers make decisions about the surcharge waiting period during construction. Specifically, the settlement profile beneath the embankment and the lateral squeeze of the underlying soft clay layer were available in real time. Had it been necessary, the construction schedule at this site might have been accelerated based on interpretation of the real-time settlement and rate of settlement information provided by the horizontal SAA. At the end of monitoring, both SAAs were successfully retrieved for reuse on other projects. The same methodologies applied at this site could be used for longer-term monitoring of foundation soils of permanent structures.

SAA Field Installation at IJkdijk
The IJkdijk (Dutch for "calibration levee") is a test site in The Netherlands for inspection and monitoring technologies for levees. The objectives of this site are two-fold: first to develop and validate new sensor techniques, and second to perform full-scale failure experiments on levees to understand their fundamental behavior. This should increase the quality of the levee inspection process and the safety assessment of levees. The final goal is to develop tools to respond to flood threats in a timely manner with appropriate measures.
The first task of this project was a full-scale consolidation test on an instrumented levee. Uncertainties regarding the bulk properties of a peat layer in the subsoil, based solely on laboratory testing, necessitated this field test to determine the permeability and the strength parameters in situ. In this full-scale test, one vertical SAA and one horizontal SAA were used as experimental instrumentation. Details of this test are given in [13]. In view of the accurate measurements obtained from the SAAs during this consolidation test, the SAA became the reference system for the evaluation of other deformation measurement systems in subsequent tests. The following presents the design and execution of the first large levee stability test.
The levee for the first production stability test at IJkdijk was constructed with a height of 6 m (19.7 ft). The levee core is sand, with a thick clay cover. This is the usual configuration of new levees in The Netherlands. For this full-scale testing, using sand inside is an advantage since the levee can be filled with water, which reduces strength and increases the load on the subsoil. An aerial view of the levee on the second day of the test is shown in Figure 9.
To enable the calibration of the new techniques and to evaluate the test in general, reference monitoring systems including three vertical SAAs were installed in this stability test. Based on successful early field tests, the SAA system was deemed suitable as the reference system for monitoring the levee deformation. A cross-section of the levee showing all installed systems is shown in Figure 10. Some of the systems were installed along the length of the levee, but most of them were concentrated in three cross-sections, one in the middle and two 35 m (114.8 ft) away from the middle. To avoid damage from postconstruction installation, all tubes and buried cables were installed before and during the construction of the levee.
The loading sequence to bring the levee to failure is indicated in Figure 11 and consisted of six stages. First, the bathtub on the wet side was filled, followed by an excavation of 1 m (3.3 ft) on the other side. Second, the excavation was enlarged down to the sand base; in Figure 9, this phase had just started. Third, the sand core was filled to 2/3 of its height with water. The fourth step was to drain the excavation. In the fifth step, the containers on the crest were filled with water, and finally, in the sixth step, the sand core was filled completely, thus completing this sequence of internal and external loading. The full-scale stability test began on September 25, 2008. As planned, the test started with the filling of the bathtub, closely followed by the shallow excavation. The second phase of the test, that is, deepening and widening of the excavation (Figure 9), was completed on the second day of the test. On the third day of the test, the filling of the sand core of the levee from within, through the built-in infiltration tubes, commenced. Because of the apparent variation in permeability, the pore pressures in the sand core increased rather irregularly. Figure 12 shows measurements from one of the vertical SAAs, located well within the part of the levee that failed, with readings at 0.305 m (1 ft) intervals with depth.
The SAA was installed well below the slip plane. It can be seen that, when the excavation was made, the levee was still moving because of consolidation of the peat layer resulting from the construction. The excavation caused an increase in deformations, which slowed down during the first night of the test. The enlargement of the excavation caused a large increase in deformations, which again slowed down during the night. During the filling of the sand core, the deformations increased strongly until the clearly visible failure occurred, that is, the 27 Sep 16:04 data line in Figure 12. The failure caused such large movements, more than 3 m (9.8 ft), that the SAA was drawn out of its end anchor, producing an apparent subsoil failure depth of 5.25 m (17.2 ft) (Figure 12) that is incompatible with other findings, which showed failure occurring at around 3.5 m (11.5 ft) depth. However, the SAA continued to provide data through this large deformation and was retrieved for use in the next test. Figure 13 presents a photo of the levee immediately after it failed. The SAA-measured deformations were confirmed by post-test surveying measurements.
An additional excavation was made next to the middle instrumented cross-section, a few days after the failure. This showed that large cracks had appeared in the peat, which were filled by clay from the original surface layer. These cracks enabled the transport of infiltrated water down to a thin layer of about 5 to 15 cm (2.0 to 5.9 in) of sand between the peat layer and a thin, impermeable layer on top of the base sand, which appeared to be present only under a part of the levee, including the part which failed. Clear signs of sliding along this more or less horizontal sand layer were found as far as the forensic excavation was possible.
Figure 13: Levee after failure.
Although comparison plots are not available between the SAA and traditional displacement monitoring systems due to difficulties with the traditional system, this project demonstrates the usefulness of the SAA system for real-time monitoring of levees. The IJkdijk project has identified real-time information about the status of the water system and levees as an important precondition for large-scale water management systems.

Field Instrumentation Strategy.
Identification and calibration of a soil model solely using records of a surface motion, or even the motion provided by one vertical array (e.g., [15]), is a challenging task. This is especially the case if the system response is essentially multidimensional and marked by the development of large local deformations or interaction with structural elements. Thus, identification and model calibration activities using field data remain relatively scarce in view of a historical lack of appropriate data. The limited number of sensors commonly employed to monitor field sites often leads to open-ended, indeterminate calibration and identification problems. Such problems require advanced three-dimensional instrument configurations, along with data reduction techniques, that go beyond the usual simple approaches. However, such instrumentation has been limited by prohibitive costs. The low cost of the SAA system provides a unique opportunity to monitor the response of complex soil and soil-structure systems using three-dimensional configurations.
In fact, the SAA is enabling a new strategy for monitoring the static and dynamic response of soil and soil-structure systems. This array allows easy three-dimensional instrumentation of new and existing geotechnical systems with a dense network of accelerometers and deformation sensors. In view of their small size, these sensors may be installed at virtually any location within a system and along its boundaries without compromising the system's structural integrity. Figure 14 presents a sketch rendering this vision, using a number of SAAs installed to monitor level ground, a slope, and soil-pile interaction at a bridge abutment site where the soil slides and deforms due to extreme loading (traffic loads, earthquakes, rainfall, etc.). Such comprehensive, dense instrumentation enables a new and more efficient local identification methodology, as described below.

Local System Identification (SI) Algorithm.
The local identification technique capitalizes on the dense measurements provided by the SAA. This algorithm (Figure 15) consists of the following steps: (1) evaluation of strain tensor time histories using the static and dynamic motions recorded by a cluster of closely spaced sensors, (2) estimation of the corresponding stress tensors utilizing a preselected class of constitutive models of soil response, (3) computation of the deformations or accelerations associated with the estimated stress tensors using the equilibrium equations, and (4) calibration and evaluation of an optimal model of soil response. This approach focuses on the analysis of local soil dynamic characteristics and properties without interfering with the boundary conditions or adjacent response mechanisms [15,16].
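Steps (1) and (2) of the local SI procedure can be sketched in one dimension: shear strain obtained by finite differences of displacements recorded at closely spaced sensors, followed by stress estimation from a preselected constitutive model. This is a simplified illustration only; the actual technique works with strain and stress tensors and the equilibrium equations in 2D/3D, a linear elastic model stands in here for the more general model classes, and the sensor spacing, modulus, and displacement values are invented.

```python
def shear_strain(displacements, spacing_m):
    """Step (1), 1D version: engineering shear strain between adjacent
    sensors, gamma_i = (u[i+1] - u[i]) / dz."""
    return [(displacements[i + 1] - displacements[i]) / spacing_m
            for i in range(len(displacements) - 1)]

def elastic_shear_stress(strains, g_modulus_pa):
    """Step (2), with a linear elastic stand-in model: tau = G * gamma."""
    return [g_modulus_pa * g for g in strains]

# Hypothetical lateral displacements (m) at three sensors, 0.305 m apart:
u = [0.000, 0.002, 0.005]
gamma = shear_strain(u, spacing_m=0.305)
tau = elastic_shear_stress(gamma, g_modulus_pa=50e6)  # assumed G = 50 MPa
```

Steps (3) and (4) would then compare the accelerations implied by these stresses (via equilibrium) against the recorded motions and adjust the model parameters until an optimal match is found.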

Proof-of-Concept Using Two-Dimensional (2D) Soil Systems
The capabilities of the SI algorithm were assessed using a number of computer simulations, along with analyses of centrifuge test data of small-scale soil systems, with sensor configurations that mimic those enabled by the SAA. The performed simulations addressed the identification of the complex response of a soil system behind a retaining wall, as shown in Figure 16. These simulations showed that the local SI technique provides an effective means to analyze the constitutive behavior of complex, massive soil and soil-structure systems at specific locations, independently of adjacent response mechanisms or material properties. For the 2D problem of Figure 16, the motion recorded by a 3 × 3 (or larger) cluster of accelerometers and inclinometers may be used to identify low- and large-strain dynamic properties of the soil comprised within the instrumented zone, independently of adjacent soil (even for a complex multilayered site). For instance, a subset of 5 × 5 accelerometers of the soil system shown in Figure 16 was efficiently used to identify the low-strain shear modulus, G0, of an intricate zone of this system [17]. The centrifuge tests were conducted under a 50 g gravity field for the clay-soil retaining-structure system shown in Figure 17 [16]. One-dimensional lateral shaking was imparted along the model base. The 2D response of the clay soil was monitored at 15 locations behind the retaining structure using a 5 × 3 array of traditional accelerometers. The recorded accelerations provided ample experimental data to locally assess the constitutive stress-strain relationship of the clay layer using the SI algorithm. A multisurface plasticity technique was used to idealize the nonlinear and path-dependent stress-strain behavior of the clayey soil. The identified accelerations at location (4, 2) are shown in Figure 18, along with the corresponding shear modulus variation with strain amplitude.
Good agreement was obtained between computed and recorded accelerations at this and other locations. (Figure 16: Sample results of a numerical simulation conducted to show the potential of using shape acceleration arrays (SAAs) to identify locally the low-strain mechanical properties of multilayered sites or other complex soil systems [14].) The modified one-dimensional stress-strain analysis takes into account the impact of lateral normal stresses, rather than relying only on a shear beam idealization [14,16].
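The shear modulus variation with strain amplitude mentioned above is commonly idealized with a hyperbolic degradation curve. The sketch below is a generic illustration of such a curve, not the multisurface plasticity model used in the study; G0 and the reference strain are invented example values.

```python
def hyperbolic_shear_modulus(g0_pa, gamma, gamma_ref):
    """Secant shear modulus from a hyperbolic degradation curve:
    G(gamma) = G0 / (1 + |gamma| / gamma_ref).

    g0_pa: low-strain (small-strain) shear modulus, G0.
    gamma: shear strain amplitude.
    gamma_ref: reference strain at which G has dropped to G0 / 2.
    """
    return g0_pa / (1.0 + abs(gamma) / gamma_ref)

# Assumed G0 = 60 MPa, reference strain 1e-4: the modulus halves at the
# reference strain and approaches G0 as the strain amplitude vanishes.
g_small = hyperbolic_shear_modulus(60e6, 1e-6, 1e-4)
g_large = hyperbolic_shear_modulus(60e6, 1e-3, 1e-4)
```

Fitting the identified G values at different strain amplitudes to a curve of this kind is one standard way to summarize the modulus reduction behavior that local SI recovers from the recorded motions.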
Three-Dimensional Site Characterization.
The newly developed SAA and local identification approach are currently being used to develop an effective new approach for site characterization. A network of SAAs has been installed (spring 2009) at the Wildlife Refuge free field site in California. This network has a three-dimensional configuration. Such a configuration will enable the development of improved tools to (1) characterize the 3D response of field sites and other geotechnical systems, (2) accurately evaluate the in situ small-strain and nonlinear mechanical properties of these systems, and (3) calibrate soil models. More specifically, the set of installed arrays will be used to fully characterize and identify the soil mass comprised within the sensors. The data provided by the SAAs and the associated data reduction tools will produce significantly more and better information than current soil sample experiments, with the added benefits that this information reflects in situ conditions, that it covers continuous soil strata from the ground surface down to 100 m (328 ft) depth, and that the issues of soil sample disturbance and size are circumvented.

Conclusions
This paper presented two successful field applications of the shape acceleration array (SAA) system, at an active bridge replacement site on a 30 m deposit of very soft clay and at a full-scale levee testing facility in The Netherlands, which demonstrate how this system can be utilized for real-time health monitoring of civil infrastructure. A new local identification technique to characterize the response and assess the properties of soil and soil-structure systems was also presented. The developed identification technique provides an effective tool to locally analyze and assess the static and

Introduction
Resilient modulus, MR, generally corresponds to the degree to which a material recovers from external shock or disturbance. This property is in effect an estimate of the material's modulus of elasticity, E. For a slowly applied load, the slope of the stress-strain curve in the linearly elastic region yields E, whereas for rapidly applied loads (e.g., the loads experienced by pavements) it yields MR. The resilient modulus can be expressed as MR = σ/εr, where σ is the applied stress and εr is the recoverable axial strain. MR describes the mechanical response of a pavement base or subgrade to the applied cyclic (traffic) load and hence is considered an essential parameter for pavement design. Knowing the resilient modulus of the subgrade soil and the pavement material, the structural behavior of the pavement under traffic loading can be ascertained.
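As a quick numerical illustration of this definition (resilient modulus as applied stress over recoverable axial strain), the following sketch computes MR from hypothetical cyclic-load values; the numbers are invented for illustration.

```python
def resilient_modulus(applied_stress_kpa, recoverable_strain):
    """MR = sigma / epsilon_r, following the definition in the text.

    applied_stress_kpa: cyclic (deviatoric) stress applied to the specimen, kPa.
    recoverable_strain: recoverable (resilient) axial strain, dimensionless.
    """
    return applied_stress_kpa / recoverable_strain

# Assumed values: 100 kPa applied stress, 0.1% recoverable strain,
# giving roughly 100,000 kPa (100 MPa).
mr_kpa = resilient_modulus(100.0, 0.001)
```

Note that only the recoverable part of the strain enters the definition; the permanent (plastic) deformation accumulated over load cycles is excluded, which is what distinguishes MR from a simple secant modulus.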
However, obtaining MR is a difficult task, and it can only be determined by laboratory testing of the material [1-3]. The Long Term Pavement Performance (LTPP) Protocol P46 is widely used [1,4,5] for determining MR, which in turn requires dynamic triaxial testing on cylindrical cores. Several other (modified) methodologies, such as the National Cooperative Highway Research Program (NCHRP) 1-28A method and the Federal Highway Administration (FHWA) method, are also employed for determining MR. Various empirical relationships, correlating MR with other material properties (namely, California Bearing Ratio, CBR; Limerock Bearing Ratio, LBR; R-value; and Soil Support Value, SSV), can also be employed to estimate MR. However, these relationships show wide variation between estimated and experimental results [4-6]. In addition to the material properties, the MR value depends on many testing parameters, such as the specimen preparation technique, loading amplitude, sequence of loading cycles, and confining pressure. However, earlier researchers have paid little attention to corroborating laboratory results vis-à-vis field conditions. This necessitates the development of a methodology that would yield MR in a convenient way without compromising field conditions. Under these circumstances, application of a nondestructive methodology based on the propagation of mechanical waves appears to be a better choice [1,7]. In recent years, it has been found that some nondestructive testing methods (namely, the laser technique, ground-penetrating radar, falling weight deflectometers, mini- or portable lightweight cone penetrometers, GeoGauge, and infrared and seismic technologies) can be successfully employed for the prediction of MR and for the purposes of quality control and acceptance of flexible pavement construction [8].
However, some researchers [6, 9-11] have found that MR determined from laboratory testing differs from that obtained from nondestructive-testing-based analyses.
With this in view, attempts were made to determine the resilient modulus of asphaltic concrete cores by employing piezoceramic elements and electronic circuitry developed by researchers at the Indian Institute of Technology Bombay, India [12,13]. In addition, complete characterization of these cores was carried out as part of the proposed method for determining MR. The results obtained from this method were then compared with those obtained from triaxial loading tests, and it was concluded that piezoceramic elements can be successfully employed for determining the resilient modulus in pavement design.

Characterization of Asphaltic Concrete Cores.
DAC (Dense Asphaltic Concrete) and SDAC (Semi-Dense Asphaltic Concrete) cylindrical core samples for this study were obtained from the airfield pavements of the two runways of an airport in India. These cores were extracted from the wearing and binder courses of the pavements of these runways. Density-void analysis, Marshall stability, and flow value tests were carried out on these cores as per ASTM D6927 [14], and the results are presented in Table 1.
The Marshall stability values of the DAC and SDAC specimens, when tested at 60°C, were found to be 765 kg and 725 kg, respectively. The flow value of the SDAC specimens was found to be higher than that of the DAC samples. The average bulk densities of the DAC and SDAC specimens were found to be 2.36 and 2.33 g/cc, respectively. The stiffness modulus of the mix was determined from the parameters of the mix (namely, density, air voids, aggregate voids filled with bitumen, and bitumen content), the properties of the bitumen (namely, penetration, softening point, temperature susceptibility, penetration index, and specific gravity), and the properties of the aggregates (namely, specific gravity) by using the Shell nomograms [15], as listed in Table 1. The gradation curves for the samples are depicted in Figure 1.

Measurement of Shear and Compression Wave Velocities.
To determine the shear and compression wave velocities (Vs and Vp, resp.), a simple and cost-effective bender element setup developed by the authors [12,13] was employed. Signal interpretation and analysis of the results were done in accordance with the information available in the literature [16-18]. A block diagram of the test setup for measuring Vs and Vp in the cylindrical asphaltic concrete cores is depicted in Figure 2. As shown in the figure, piezoceramic elements (a transmitter-receiver pair) are fitted on the two ends of the specimen. The transmitter is excited with a single-cycle sine wave of a certain amplitude, generated by a function generator. The receiver is connected to filter/amplifier circuitry, which in turn is connected to a digital oscilloscope that also receives a direct sine wave or a step signal from the function generator.
The bender elements used in this study were procured from the Centre for Offshore Foundation Systems, The University of Western Australia. These elements are constructed by bonding two piezoceramic plates together in such a way that a voltage applied to their faces causes one face to expand while the other contracts, causing the entire element to bend; conversely, bending the element generates a voltage. As depicted in Figure 3, the receiver and transmitter bender elements consist of series and parallel bimorph configurations, respectively. The bender elements in Figures 3(a) and 3(b) were subsequently used as extender elements, thus producing Vp, by interchanging the wiring configurations and the direction of polarization, as shown in Figures 4(a) and 4(b).
To determine the time delay introduced in the measurements by the electronics, ceramics, and coating materials of the bender elements, the complete system was calibrated. This was achieved by placing the tips of the two bender elements in direct contact with each other and measuring the calibration time tc between the electrical pulse sent to the transmitter and that received by the receiver. The magnitude of tc was found to be very small (about 5 μs). In addition, Vs was measured for an aluminum rod (160 mm × 25 mm × 25 mm), a thermocol (expanded polystyrene) cylinder (82 mm diameter and 62 mm length), and an M-30 grade concrete specimen (50 mm diameter and 67 mm length) [11]. To achieve this, thin slits (about 1.6 mm wide and 11 mm long) were created at the centre of each of the two planes perpendicular to the length of the aluminum bar or concrete block; the bender elements were then fitted into these parallel slits. For these materials, Vs was found to be 3217 m/s, 280 m/s, and 1500 m/s, respectively, which match the values reported in the literature [19] very well. Moreover, Vs and Vp were measured on some standard materials. Using ν = (r² − 2)/(2(r² − 1)), where r is the ratio between Vp and Vs [20,21], Poisson's ratio, ν, computed for rubber, stainless steel, and cork was found to be 0.5, 0.29, and 0, respectively, matching well with results in the literature [22-24]. Later, Vs and Vp in the DAC and SDAC specimens were measured. A typical waveform obtained for the DAC specimen is depicted in Figure 5.
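The relation between Poisson's ratio and the velocity ratio r = Vp/Vs, together with the standard isotropic elastic relation E = 2ρVs²(1 + ν), can be sketched numerically. This assumes an isotropic, linear elastic medium; the density and velocity values in the usage example are illustrative, not measurements from the study.

```python
def poissons_ratio(vp, vs):
    """nu = (r^2 - 2) / (2 (r^2 - 1)), with r = Vp / Vs (isotropic elasticity)."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

def youngs_modulus(rho_kg_m3, vp, vs):
    """E = 2 rho Vs^2 (1 + nu), the standard isotropic elastic relation."""
    return 2.0 * rho_kg_m3 * vs ** 2 * (1.0 + poissons_ratio(vp, vs))

# Illustrative values: density 1000 kg/m3, Vp = 200 m/s, Vs = 100 m/s
# give nu = 1/3, consistent with the limits noted in the text
# (nu -> 0.5 for very large r, nu = 0 when r = sqrt(2)).
nu = poissons_ratio(200.0, 100.0)
e_pa = youngs_modulus(1000.0, 200.0, 100.0)
```

The two limiting cases reproduce the observations reported above: a nearly incompressible material like rubber (very large r) gives ν close to 0.5, while cork (r ≈ √2) gives ν near zero.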

Loading Test.
A master loader system (HM 3000) made by Humboldt, USA, was used for determining MR. This setup provides microprocessor-based stepper motor speed control and consists of an analogue-to-digital converter with real-time data acquisition; the motor speed can be selected between 0 and 75 mm/min, with an RS-232 interface. Load was applied to the specimens with the help of a computer-controlled, user-defined test setup program. Before loading the specimens, the strain rate was set to 25 mm/min, and the stop condition was set to "load exceeding 18 kN," based on the possible elastic modulus values for these samples. Deformation of the specimens was recorded every 1 s using a Linear Variable Differential Transformer (LVDT) connected to the computer-controlled, user-defined setup. Each load cycle was followed by a 10 s lag during which unloading was done. A total of 35 loading cycles was applied to each specimen.
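A sketch of how MR might be reduced from one load cycle of such a test: the applied stress is the peak load over the specimen cross-section, and the recoverable strain is the rebound measured by the LVDT during unloading, divided by the specimen height. This is a hypothetical data-reduction example; the specimen dimensions, load, and rebound values are invented, and the actual reduction procedure used in the study may differ.

```python
import math

def resilient_modulus_from_cycle(peak_load_kn, rebound_mm, diameter_mm, height_mm):
    """MR (in MPa) for one load cycle: applied stress / recoverable axial strain.

    peak_load_kn: peak axial load in the cycle, kN.
    rebound_mm: recoverable (rebound) deformation during unloading, mm.
    diameter_mm, height_mm: cylindrical specimen dimensions, mm.
    """
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    stress_mpa = peak_load_kn * 1000.0 / area_mm2   # N / mm^2 = MPa
    recoverable_strain = rebound_mm / height_mm
    return stress_mpa / recoverable_strain

# Invented cycle data: 18 kN peak load, 0.5 mm rebound,
# 100 mm diameter, 70 mm tall core.
mr_mpa = resilient_modulus_from_cycle(18.0, 0.5, 100.0, 70.0)
```

In practice the early conditioning cycles are usually discarded and MR is reported as an average over the later, stabilized cycles, since the recoverable deformation settles down as the specimen shakes out its initial seating effects.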