SBS Content Detection for Modified Asphalt Using Deep Neural Network

This study proposes a prediction model for accurately detecting the styrene-butadiene-styrene (SBS) content in modified asphalt using a deep neural network (DNN). Traditional methods for evaluating the SBS content are inaccurate and complicated because they are prone to errors introduced by manual computation. Feature data for the SBS content are derived from spectra obtained by Fourier-transform infrared spectroscopy tests. After the DNN is designed, the preprocessed feature data are used as training and testing data and fed into the DNN via a feature matrix. Furthermore, comparative studies are conducted to verify the accuracy of the proposed model. Results show that the mean square error decreased by 68% for the DNN with noise and dimension reduction. The DNN-based prediction model yields correlation coefficients between the target value and the mean predicted value of 0.9978 and 0.9992 for training and testing samples, respectively, indicating its remarkable accuracy and applicability after training. Compared with the standard curve method and the random forest method, the precision of the DNN is greater than 98% under the same test conditions, achieving the best prediction performance.


Introduction
Asphalt pavement is widely used owing to its remarkable durability. However, several factors, such as increased traffic and extreme weather conditions, pose a great threat to pavement longevity and give rise to pavement distress, such as permanent deformation, fatigue cracking, and thermal cracking. As a triblock copolymer, styrene-butadiene-styrene (SBS) is the most popular polymer used for modifying asphalt; it is of great significance for improving the resistance of asphalt pavement against rutting deformation at high temperatures and cracking at low temperatures. Additionally, it improves the antifatigue and antiaging properties of the pavement [1][2][3]. It has been found that the properties of modified asphalt are related to the amount of SBS, which is closely linked to the microstructure of the SBS-modified asphalt [4][5][6]. However, owing to the high cost of SBS modifiers, an adequate content of SBS in modified asphalt is not always guaranteed during production. Therefore, an effective method should be proposed for quantifying the content of SBS in modified asphalt. Such a method is also of great importance for regulating the supply market of asphalt binders and guaranteeing the reliability of asphalt pavement.
Traditional methods for evaluating the content of the SBS modifier in asphalt depend on the storage stability test, which measures the physical properties of the modified asphalt, such as penetration, ductility, softening point, and viscosity [7]. Nevertheless, this method has some disadvantages: it is time-consuming, inaccurate, and unrepeatable [8,9]. Although fluorescence microscopy can be employed to observe the dispersion of the SBS modifier in modified asphalt with different contents, it is quite challenging to detect the SBS content quantitatively [10]. In a previous study, an electrochemical technique was developed to evaluate the storage stability of SBS-modified asphalt [11]; however, this technology needs improvement before it can quantify the SBS content accurately. Several researchers have suggested that an accurate and effective method for quantifying the SBS content in modified asphalt could be based on the Fourier-transform infrared spectroscopy (FTIR) test [12][13][14]. The 966 and 699 cm−1 bands are adopted to identify the C=C bond in butadiene and to determine the existence of styrene, respectively [15]. Absorbance peak and area show a significant correlation with the SBS content, which can be used for quantifying the SBS content in modified asphalt accurately [12]. A simple single-variable linear regression between the SBS content and the absorption peak properties is used to estimate the SBS content of an unknown sample. These peak properties are identified and extracted manually to calculate the SBS content from the infrared spectrum. Obviously, these estimations are prone to errors during spectra processing, and an enormous computational effort is required to obtain accurate predictions.
However, considering that the infrared spectrum comprises all absorption peaks of an asphalt sample, predicting the SBS content via deep learning based on infrared spectra is a promising novel approach.
Neural networks have significant advantages in the field of object recognition and evaluation and have been applied widely in civil engineering [16,17]. They have been successfully employed to detect concrete cracks [18][19][20] and structural damage [21,22]. Besides this outstanding recognition and detection capability, the accuracy of neural network prediction models has been of great interest in recent years. A deep neural network (DNN) with multiple hidden layers can be obtained after its model is trained completely using a considerable amount of data. A DNN can learn sufficient and complicated information, which is significantly important for improving the accuracy of prediction or classification. It comprises input and output layers and, in most cases, contains more than three hidden layers. A considerable amount of data is input and used to train and improve the model. Finally, an accurate prediction model is generated by a deep learning method. It has been found that neural networks are more effective in predicting the fatigue life of asphalt mixtures than traditional statistics-based prediction models [23]. Moreover, it was observed that neural networks have great potential for predicting the compressive and tensile strength of high-performance concrete [24]. Additionally, a temperature prediction model for icy pavement in winter was obtained using an improved backpropagation neural network, and its applicability was validated for a given time period [25]. The DNN method thus has a remarkable advantage in providing an accurate prediction model.
This study aims to employ the DNN method for detecting the SBS content in modified asphalt accurately. We prepared SBS-modified asphalt samples uniformly with different SBS modifier contents and scanned these samples with an FTIR spectrometer to generate spectra containing information about the SBS content. A feature matrix composed of absorbance data, extracted from the different spectra, was used to let the DNN learn the SBS content. Therefore, herein, we designed a DNN-based architecture comprising an input layer, an output layer, and 11 hidden layers. Furthermore, the feature matrix data were preprocessed to improve the learning speed and accuracy of the DNN-based prediction model. Then, the 483 × 512 matrix of feature data derived from 483 samples was used to train the DNN, and another 126 × 512 matrix of feature data was used as testing samples. Using various efficient learning techniques, we trained and tested the DNN to confirm its accuracy and performance. Finally, we conducted a comparative study and observed that the proposed DNN-based architecture outperforms its counterparts.

Raw Materials.
Herein, we used four types of neat asphalt binders (70#/90# in terms of penetration), whose properties are listed in Table 1. For producing SBS-modified asphalt, we adopted three types of SBS modifiers, namely, LG501 and LG411 produced in Korea and LCY3501 produced in Taiwan, and mixed them with neat asphalt. Table 2 lists the properties of those SBS modifiers. As a cross-linking agent, we utilized industrial sulfur (0.2 wt% of neat asphalt) to improve the storage stability of modified asphalt.

Preparation of SBS-Modified Asphalt.
We employed the following procedure to prepare SBS-modified asphalt via melt blending [26]. First, the neat asphalt was heated to 135°C and then mixed with the SBS modifier in various proportions (by weight of the neat asphalt binder). Second, the mixture was sheared for 15 min at 180°C using a shear mixer at a rate of 3000 r/min and then sheared continuously for 45 min at 6000 r/min. Third, a cross-linking agent was added into the blend, which was sheared for 45 min at 180°C at 7000 r/min. Furthermore, the SBS-modified asphalt was moved into an oven and kept there for 15 min to allow better swelling development of the SBS modifier in the asphalt. For comparison, we also processed unmodified asphalt by the same procedure.

Validation of SBS Modifier Dispersion Uniformity.
When the FTIR spectrometer scans a small quantity of the modified asphalt, i.e., approximately 1 g, the dispersion uniformity of the SBS modifier has a significant influence on the FTIR spectra [27]. Therefore, the dispersion uniformity of the SBS modifier in modified asphalt should be validated before the FTIR test. The SBS modifier and neat asphalt naturally have different excitation responses under the irradiation of high-energy beams. Yellow light is reemitted when the SBS modifier is irradiated with blue excitation light, and the SBS modifier appears much brighter than the neat asphalt under the fluorescence microscope [5,10]. These responses can be observed using a fluorescence microscope equipped with a digital camera, as shown in Figure 1. Asphalt samples with different SBS contents were prepared to observe and analyze the SBS modifier dispersion uniformity. We chose blue light for the excitation, and the magnification of the objective lens in the microscope was ×40. Figure 2 shows the morphology of the SBS modifier (LG501) at different contents in modified asphalt.
From Figure 2, it can be seen that SBS modifiers are homogeneously dispersed in asphalt in a filamentous structural form. Using the fluorescence microscope, a dense structure but no agglomeration was observed as the SBS content increased, which indicates that the asphalt samples are produced uniformly to be used for scanning by the FTIR spectrometer.

FTIR Test.
FTIR spectrum data for asphalt samples were obtained using the Controls Cary 630 FTIR spectrometer with a sampling accessory and zinc selenide attenuated total reflectance (ZnSe ATR), as shown in Figure 3(a). Figure 3 also shows a self-developed sampling mold that matches the ATR. As illustrated in Figure 3(c), this self-developed mold can form seven replicates for each sample, which simplifies the sampling process, improves the sampling efficiency, and also guarantees the testing accuracy. The temperature was set at 24.4°C for the FTIR test, with a resolution of 3.726 cm−1. The wavenumber accuracy is better than 0.005, and the signal-to-noise ratio is higher than 5000. Thirty-two scans within the wavenumber range of 4000-650 cm−1 were obtained and averaged.
One spectrum of SBS-modified asphalt comprises 900 absorbance data points in this study. With a consistent abscissa, the ordinate differs for each sample with a different content of the SBS modifier. As illustrated in Figure 4, there are eight obvious spectral bands in the FTIR spectra of neat and modified asphalt. Furthermore, as summarized in Table 3, the spectral bands at different wavenumbers reflect the absorbance of various molecular structures. From Figure 4, it can be seen that there are peaks at the 966 and 699 cm−1 bands for the SBS-modified asphalt, which correspond to the C=C bond in butadiene and to styrene, both attributable to the SBS modifier.
These two peaks cannot be found at the 966 and 699 cm−1 bands in the spectrum of neat asphalt. Therefore, the properties of these peaks can be used to predict the SBS content. It is worth noting that each spectrum comprises the absorbance peaks of an asphalt sample with a certain SBS content. Thus, modified asphalts with different SBS contents yield different spectra, which carry information about the SBS content that can be revealed accurately by a DNN.

Research Approaches
Each spectrum comprises 900 absorbance values, which were used to analyze the structural information of an asphalt sample using the deep learning method. Figure 4 shows that there is a remarkable difference among asphalt samples with different SBS contents, which indicates that there are some regular differences among the absorbance values of each spectrum that are connected to the SBS content. The data extracted from different spectra represent the SBS content and can be utilized to train the DNN. The DNN comprises an input layer, several hidden layers, and an output layer. Furthermore, there are abundant neurons in the input layer and in each hidden layer. The relation between the output and input values of each neuron is determined by weights and biases, which are continuously corrected during the training process. Then, the DNN with its parameters can be obtained and tested to confirm its prediction accuracy. If the accuracy is not as expected, the DNN is restructured and trained all over again. Figure 5 shows the development process of the DNN.

Generation of the Database for DNN
3.1.1. Noise Reduction. As mentioned in Section 2.2, eight kinds of asphalt samples were scanned by the FTIR spectrometer, and 30 duplicate samples were taken for each asphalt. Then, the data of 630 spectra were used as input data for the DNN. However, the systematic and artificial errors resulting from signal noise should be taken into account. In the smoothing pretreatment for spectral analysis of the SBS content by FTIR, the Savitzky-Golay filter is frequently used to reduce noise interference [31]. Herein, the moving window method based on least squares theory is employed to filter out the noise while limiting the effect of the smoothing preprocessing on the underlying information. A filter window contains 2m + 1 wavenumbers, where m is an integer greater than one, arranged symmetrically in the window.
Absorbance values corresponding to these wavenumbers were fitted by the following polynomial:

z = a_0 + a_1 x + a_2 x^2 + ... + a_{k−1} x^{k−1}, (1)

where z is the fitted value, x is the absorbance value, and a_i (i = 0, 1, ..., k − 1) are the fitting parameters. There are 2m + 1 such equations for a filter window, and the resulting linear system (2) can be expressed in matrix form as

Z = XA + E. (3)

The column vector A comprises the fitting parameters and is obtained by least squares theory:

A = (X^T X)^{−1} X^T Z. (4)

Thus, after filtering, Ẑ is given by

Ẑ = XA = X (X^T X)^{−1} X^T Z, (5)

where Z and Ẑ are the matrices of eigenvalues of the spectrum data before and after filtering, respectively, X is the matrix of absorbance values in the filter window, and E is a column vector comprising the 2m + 1 errors between the fitted and real values. A smooth spectrum is obtained by moving the filter window across the spectrum; finally, after noise reduction, the 609 spectra used in the subsequent analysis were obtained.

[Table 3: Characteristics of molecular structures at different wavenumbers [28][29][30].]
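As an illustration of this smoothing step, the fragment below applies a Savitzky-Golay filter (via SciPy's savgol_filter) to a synthetic noisy spectrum; the window length and polynomial order are assumptions for the sketch, not the values used in this study.

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative sketch of the Savitzky-Golay smoothing step described above.
# The window length (2m + 1 = 11) and polynomial order are assumptions for
# this sketch, not the values used in the study; the spectrum is synthetic.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(650, 4000, 900)             # 900 absorbance points per spectrum
clean = np.exp(-((wavenumbers - 966.0) / 30.0) ** 2)  # synthetic peak near 966 cm-1
noisy = clean + rng.normal(0.0, 0.02, wavenumbers.size)

# Local least-squares fit in a moving window (m = 5, cubic polynomial)
smooth = savgol_filter(noisy, window_length=11, polyorder=3)

residual_before = np.mean((noisy - clean) ** 2)
residual_after = np.mean((smooth - clean) ** 2)
print(residual_after < residual_before)  # smoothing reduces the noise error
```

The same moving-window least-squares fit can also be implemented directly from equations (1)-(5); savgol_filter simply packages that computation.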

Principal Component Analysis.
As the input data, the 609 × 900 matrix of feature data was considerably hard to use for training the DNN. To improve the training speed, principal component analysis (PCA) was conducted herein as a dimension reduction method. In the dimension reduction, the reduced level was not selected arbitrarily; instead, it was confirmed that the data of minimal dimension represent more than 95% of the information of the original data. By PCA computation, the principal components were obtained to retain the most essential characteristics of the input data. Singular value decomposition (SVD) was used to transform the input data into a set of eigenvalues and corresponding linearly independent eigenvectors. The eigenvectors refer to the characteristics of the input data, while the eigenvalues refer to the degree of each characteristic [19]. The feature matrix A represents the input data and can be decomposed as the product of three matrices:

A = U Σ V^T, (6)

where U is an m × m matrix, Σ is an m × n matrix, and V^T is an n × n matrix. All elements of Σ are zero except those on its diagonal, which are the singular values σ_i. Furthermore, u_i and v_i are eigenvectors of AA^T and A^T A, respectively, satisfying

(AA^T) u_i = λ_i u_i, (7)

where λ_i is an eigenvalue, i = 1, 2, ..., m; then U = (u_1, u_2, ..., u_m); and

(A^T A) v_i = λ_i v_i, (8)

where λ_i is an eigenvalue, i = 1, 2, ..., n; then V = (v_1, v_2, ..., v_n). The following formula gives the relation between σ_i and λ_i:

σ_i = sqrt(λ_i), (9)

where i = 1, 2, ..., min(m, n) and σ_1 > σ_2 > ... > σ_r > .... Generally, the singular values decrease rapidly, and 99% of the sum of all singular values often comes from the first 10% (or even 1%) of singular values in the sequence. Then, Σ is truncated so that the sum of the singular values ranging from σ_1 to σ_r is 99% of the total.
Thus, the feature matrix A can be approximated as

A ≈ Σ_{i=1}^{r} σ_i u_i v_i^T, (10)

and dimension reduction is realized by SVD as expressed by the following equation:

A_r = A V_r, (11)

where r is the dimension after the SVD calculation and V_r comprises the first r columns of V. This indicates that the major features of matrix A can be expressed by the principal components, which comprise the eigenvectors corresponding to the first r nonzero singular values. Furthermore, the n-dimensional data can be reduced to r-dimensional data after the PCA calculation. To speed up the DNN and ensure that the low-dimensional data represent more than 95% of the characteristics of the original data, 512-dimensional data were finally obtained after repeated PCA calculations.
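The dimension reduction described above can be sketched with a truncated SVD; the data below are synthetic (a low-rank matrix mirroring the 609 × 900 case), and the 95% retention criterion follows the text.

```python
import numpy as np

# Sketch of dimension reduction by truncated SVD as described above. The data
# are synthetic; the matrix sizes mirror the 609 x 900 case in the text.
rng = np.random.default_rng(1)
A = rng.normal(size=(609, 50)) @ rng.normal(size=(50, 900))   # rank <= 50

U, s, Vt = np.linalg.svd(A, full_matrices=False)              # A = U @ diag(s) @ Vt

# keep the smallest r whose singular values carry >= 95% of the total sum
cum = np.cumsum(s) / np.sum(s)
r = int(np.searchsorted(cum, 0.95)) + 1

A_reduced = A @ Vt[:r].T   # project each spectrum onto the first r components
print(A_reduced.shape)     # (609, r) with r <= 50
```

In the study the criterion was applied repeatedly until a 512-dimensional representation was reached; here r is whatever the synthetic data happen to require.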

Data Classification.
After data preprocessing, the 609 × 512 matrix of feature data was used as the database for the DNN. It contains all the characteristics of the spectra reflecting the SBS contents, and each spectrum comprises absorbance data of dimension 512. For training and testing the DNN, 80% and 20% of the total spectra data were utilized, respectively. As listed in Table 4, 80% and 20% of the spectra for each SBS content were randomly selected as training and testing data, respectively. The DNN needs a specific label for the data of each spectrum. Therefore, the features of a spectrum, expressed by the eigenvector X, are the input data fed into the DNN, and the corresponding vector, labeled Y, represents the SBS content and is regarded as the output value.
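A minimal sketch of this stratified 80/20 split could look like the following; the features and SBS content label values here are synthetic and illustrative, not the study's sample counts per content level.

```python
import numpy as np

# Sketch of the stratified 80/20 split described above: for each SBS content
# level, 80% of its spectra go to training and 20% to testing. The features
# and the label values here are synthetic and illustrative.
rng = np.random.default_rng(2)
X = rng.normal(size=(609, 512))                    # preprocessed feature matrix
y = rng.choice([0.0, 3.6, 4.6, 5.6], size=609)     # example SBS content labels

train_idx, test_idx = [], []
for level in np.unique(y):
    idx = rng.permutation(np.where(y == level)[0])
    cut = int(0.8 * idx.size)                      # 80% of this content level
    train_idx.extend(idx[:cut])
    test_idx.extend(idx[cut:])

X_train, Y_train = X[train_idx], y[train_idx]
X_test, Y_test = X[test_idx], y[test_idx]
print(len(train_idx), len(test_idx))
```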

Architecture of DNN.
The purpose of designing a DNN is to build a complete architecture that connects all neurons in a regular and reasonable mode such that information can be transmitted correctly between the input and output data. Figure 6 shows the architecture of the DNN, which comprises an input layer, an output layer, and 11 hidden layers arranged as layers 2-12. This dramatically enhances the depth of learning and improves the accuracy of the model expression. Regarding the hidden layers, the numbers of neuron nodes for layers 2-12 are 512, 256, 256, 128, 128, 64, 64, 32, 32, 16, and 16, respectively. Although this setting improves computer processing and accelerates optimization, there is no connection between neurons in the same layer. Nevertheless, they are fully connected with every neuron in the next adjacent layer. The eigenvector X, as expressed by equation (13), is imported into the input layer, which contains 512 neurons. Equation (14) describes an element of X. The output value is obtained by equation (15) based on its corresponding input value. It should be noted that the neurons have different weights and biases in the DNN; these weights and biases, given by equations (16) and (17), were continuously updated during DNN training. Thus, we have

X = (X_1, X_2, ..., X_m), (13)

X_i = (x_1, x_2, ..., x_n)^T, (14)

x_j^(l+1) = f( Σ_i W_ij^(l) x_i^(l) + b_j^(l) ), (15)

W^(l) = ( W_ij^(l) ), (16)

b^(l) = ( b_j^(l) ), (17)

[Figure 6: Architecture of DNN.]

Advances in Materials Science and Engineering
where n is the data dimension and m is the number of samples; X_i is the eigenvector of spectrum data for an SBS-modified asphalt sample and Y is the output eigenvector of SBS content; furthermore, l is the number of the hidden layer, ranging from 1 to 12, W_ij^(l) represents the weight from the ith neuron in layer l to the jth neuron in layer l + 1, and b_j^(l) represents the corresponding bias.
We applied a nonlinear activation function in all deep learning processes, which increased the speed of convergence. When the sigmoid function is used as the nonlinear activation function, the gradient easily vanishes under backpropagation for deep networks; gradual transformation and information loss have long been recognized as causes of interruption during the training of deep networks. Additionally, the sigmoid activation function requires several computations. In contrast, a rectified linear unit (ReLU) makes the output values 0 for a part of the neurons, which reduces the interdependence of parameters in the DNN, reduces the probability of overfitting [32,33], and lowers the computational cost. Therefore, we utilized the ReLU as the activation function herein. The data were passed through the DNN using this nonlinear activation function, which is expressed by equation (18). Figure 7 depicts the schema of the ReLU activation function. The gradients of the ReLU are always zeros and ones, which facilitates faster computations and achieves better accuracy [18]. Thus, we have

f(x) = max(0, x), (18)

where x and f(x) are the input and output values, respectively.
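To make the wiring concrete, the following numpy sketch performs a forward pass through the stated layer widths with ReLU activations; the random weights and the linear output layer are assumptions of this sketch, since in the study the parameters are learned by training.

```python
import numpy as np

# A numpy sketch of a forward pass through the layer widths stated above
# (input of 512 neurons, hidden layers of 512-...-16 neurons, one output).
# The random weights and the linear output layer are placeholders.
sizes = [512, 512, 256, 256, 128, 128, 64, 64, 32, 32, 16, 16, 1]
rng = np.random.default_rng(3)
params = [(rng.normal(0.0, 0.05, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def relu(x):
    # f(x) = max(0, x), applied elementwise
    return np.maximum(0.0, x)

def forward(x, params):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:    # ReLU in hidden layers only
            x = relu(x)
    return x

spectrum_features = rng.normal(size=(1, 512))  # one preprocessed spectrum
prediction = forward(spectrum_features, params)
print(prediction.shape)                        # (1, 1): one predicted SBS content
```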

Cost Function.
The cost function is defined to measure the similarity between the predicted and target values. Weights and biases are adjusted during DNN training to minimize the cost function. Cross entropy is often applied in logistic regression, whereas the mean square error (MSE) is commonly used in regression problems. Herein, we employed the MSE as the cost function, as expressed by the following formula:

J = (1/n) Σ_{i=1}^{n} ( Ŷ^(i) − y^(i) )^2, (19)

where Ŷ^(i) is the predicted value, y^(i) is the target value, and n is the number of samples.
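The cost function can be written compactly as a small helper; the sample values below are illustrative.

```python
import numpy as np

# The MSE cost of equation (19): J = (1/n) * sum_i (y_pred_i - y_true_i)^2.
def mse(y_pred, y_true):
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean((y_pred - y_true) ** 2))

print(mse([3.5, 4.7], [3.6, 4.6]))  # ~0.01 (illustrative values)
```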

Learning Method.
The learning method played a fundamental role in DNN training; precisely, it confirmed the weights and biases of the DNN. The preprocessed data were fed into the DNN as the vector X. Moreover, they were trained by unsupervised learning, which is a feature learning in forward propagation. We used initial values of weights and biases to train the first layer and generate feature data that were used to train the second layer.
Then, the produced feature data with the weights and biases were utilized to train the next layer. Features with the corresponding weights and biases for each layer were obtained after the last layer was trained completely. Next, we conducted learning with the output labels to tune the parameters, including the weights and biases, of all layers of the DNN. Minibatch stochastic gradient descent was implemented to update the parameters at each iteration based on the errors of the cost function. We set the batch size to 16. In comparison with adaptive learning rate algorithms, such as the adaptive gradient (AdaGrad) or RMSProp algorithms, the Adam algorithm was found to have a faster convergence rate [34]. It was more efficient in addressing problems such as a low learning rate, slow convergence, and large variance of updated parameters. The following equations express the learning method for each iteration:

m_W^(l) = β_1 m_W^(l) + (1 − β_1) ∂J(W, b)/∂W^(l), (20)

v_W^(l) = β_2 v_W^(l) + (1 − β_2) ( ∂J(W, b)/∂W^(l) )^2, (21)

W^(l) = W^(l) − η m_W^(l) / ( sqrt(v_W^(l)) + ε ), (22)

with analogous updates for the biases b^(l), where m_W^(l) and v_W^(l) are the exponential decay mean value and noncentral variance of ∂J(W, b)/∂W, respectively, m_b^(l) and v_b^(l) are the exponential decay mean value and noncentral variance of ∂J(W, b)/∂b, respectively, β_1 is 0.9, β_2 is 0.9999, ε is 10^−8, and η is 0.001.
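A single Adam update with the hyperparameters stated above can be sketched as follows; the gradient here is a toy stand-in, and the bias-correction terms are part of the standard Adam algorithm.

```python
import numpy as np

# One Adam update step, using the stated hyperparameters
# (beta1 = 0.9, beta2 = 0.9999, eps = 1e-8, eta = 0.001).
beta1, beta2, eps, eta = 0.9, 0.9999, 1e-8, 0.001

def adam_step(W, grad, m, v, t):
    m = beta1 * m + (1.0 - beta1) * grad          # exponential decay mean
    v = beta2 * v + (1.0 - beta2) * grad ** 2     # noncentral variance
    m_hat = m / (1.0 - beta1 ** t)                # bias correction (standard Adam)
    v_hat = v / (1.0 - beta2 ** t)
    W = W - eta * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
    return W, m, v

W, m, v = np.array([0.5]), np.zeros(1), np.zeros(1)
W, m, v = adam_step(W, grad=np.array([2.0]), m=m, v=v, t=1)
print(W)  # moved opposite the gradient by roughly eta
```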

Initialization, Regularization, and Normalization.
In the training of a DNN, several challenges abound, including divergence, overfitting, and vanishing gradients. To train the DNN successfully, careful initialization, effective regularization, and well-designed normalization should be taken into account.
Error gradients are backpropagated across the layers of the DNN with gradient descent and are used to modify the parameters. During this propagation, the gradients are frequently reduced significantly, which leads to the vanishing gradient problem. In this situation, the parameters are barely updated by backpropagation. To prevent vanishing gradients with the ReLU activation, we utilized Xavier initialization for the weights of the DNN. W can be drawn from the uniform distribution U(−r, r), where r is expressed by the following equation:

r = sqrt( 6 / (n_input + n_output) ),

where n_input and n_output denote the number of connected neurons in the input and output layers, respectively. To reduce the risk of overfitting during DNN training, the dropout technique is often used to regularize the DNN.
This technique randomly omits a hidden neuron with a probability of 0.5, which means that 50% of the neurons in the hidden layers are randomly omitted at each iteration. In both forward and backward propagation, the omitted neurons are not activated. Furthermore, with the dropout technique, a different subnetwork is trained at each iteration because different neurons are omitted each time, and the overfitting tendencies of these different subnetworks largely offset each other. The dropout technique forces the weight updates to be more independent rather than relying on interactions of hidden nodes with a fixed relation. Consequently, the weights move more effectively toward the target.
Batch normalization is a widely used method to normalize the inputs of the hidden layers. The solution domain of the optimization becomes smoother because batch normalization fixes the distribution of inputs for each layer. This method makes the gradients more predictable and stable. Therefore, a larger learning rate can be employed and a faster convergence rate achieved owing to batch normalization. Although the responses or gradients may be inappropriate for deeper networks, batch normalization can bring them into a reasonable range, which prevents vanishing gradients. The following equation expresses the relation between the feature X^(i) and the normalized eigenvalue Z^(i):

Z^(i) = γ (X^(i) − μ^(i)) / sqrt( (σ^(i))^2 + ζ ) + β,

where μ^(i) and σ^(i) are the mean and standard deviation of the batch data, respectively, γ and β are the scale and shift factors learned during DNN training, respectively, and ζ is a smoothing factor, an infinitesimal number that prevents division by zero.
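The three techniques above (Xavier initialization, dropout, and batch normalization) can be sketched as follows; the layer sizes, the 0.5 dropout rate, and the minibatch are illustrative.

```python
import numpy as np

# Sketches of Xavier initialization, dropout, and batch normalization as
# described above; sizes and the minibatch are illustrative.
rng = np.random.default_rng(4)

# Xavier initialization: W ~ U(-r, r) with r = sqrt(6 / (n_input + n_output))
n_input, n_output = 512, 256
r = np.sqrt(6.0 / (n_input + n_output))
W = rng.uniform(-r, r, size=(n_input, n_output))

# Dropout: each hidden neuron is omitted with probability 0.5 during training
# (inverted dropout rescales the survivors to keep the expected activation)
def dropout(x, p=0.5):
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Batch normalization: standardize each feature over the batch, then apply
# the learned scale gamma and shift beta; zeta avoids division by zero
def batch_norm(X, gamma=1.0, beta=0.0, zeta=1e-5):
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return gamma * (X - mu) / np.sqrt(sigma ** 2 + zeta) + beta

batch = rng.normal(5.0, 2.0, size=(16, 8))           # one minibatch of 16 samples
Z = batch_norm(batch)
print(np.allclose(Z.mean(axis=0), 0.0, atol=1e-6))   # normalized to ~zero mean
```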

Network Speed.
The process mentioned above was realized on the same software and hardware platform, whose processor is an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and whose video card is an NVIDIA GeForce GTX 1070 8G GPU. The DNN was trained and accelerated using the CPU and GPU, respectively. Owing to this acceleration, the forward and backward times are 10.25 s and 28.56 s, respectively.

DNN Test.
The purpose of this test is to analyze the practicability and accuracy of the prediction model. We utilized the untrained test samples listed in Table 4 to test the DNN.
The MSEs of the test should be less than the prediction error of the trained samples, which indicates that the DNN predicts the SBS content in the modified asphalt efficiently. Otherwise, the network structure and its parameters are readjusted until the requirements are met.

Effect of Data Preprocessing on Precision.
Herein, precision refers to the MSE between the predicted and target values, as given by equation (19). To analyze the effect of data preprocessing on the precision of DNN models, we applied different preprocessing methods: no preprocessing, only dimension reduction, only noise reduction, and both noise and dimension reduction. Table 5 lists the results.
From Table 5, it can be seen that the MSE values are reduced for DNN models with data preprocessing. For dimension reduction and noise reduction alone, the MSE value decreased by 8% and 65%, respectively. This indicates that noise reduction is highly significant in improving the precision and reducing the error of the prediction model. Although dimension reduction does not obviously reduce the error, it efficiently improves the training speed of deep learning. Furthermore, the MSE value decreased by 68% when both noise and dimension reduction were utilized, which enables a DNN model with remarkable accuracy and robustness.

Iteration Times.
The number of iterations influences the accuracy of the prediction model. Figure 8 shows that the MSE value decreases with an increasing number of iterations. A small MSE value is achieved after 1850 iterations, which is attributed to extensive fine-tuning of the weights and biases. When the number of iterations reaches 2160, the MSE value stabilizes at 0.047. Beyond 2160 iterations, there is only a slight fluctuation in the MSE value. Since a considerably higher number of iterations would slow the learning, the optimum number of iterations is confirmed as 2160.

Performance of the DNN.
After data preprocessing, we utilized the 483 × 512 matrix of feature data derived from 483 spectra as training data for the DNN, whereas 126 samples, forming the 126 × 512 matrix of feature data, were used as testing samples. Furthermore, 2160 iterations were conducted herein. Figures 9 and 10 show the training and testing results, respectively. The abscissa stands for the sample ID of the different asphalt samples, and the ordinate represents the SBS content.
From Figure 9, it can be seen that the training values output by DNN are close to the target values of the training samples.
The MSE value of 0.047 was achieved after the 483 samples were used to train completely. Figure 10 also shows an outstanding performance of the DNN: the testing values are very close to the target values, and an MSE value of 0.032 was obtained from the testing results.
To confirm the precision of the DNN model, the mean predicted value was calculated and compared with the target value for each SBS content. Figure 11 shows the correlation between the target value and the mean predicted value calculated by the proposed DNN model. The mean predicted value has a significant relation with the target value for each SBS content, which comprises the parallel samples listed in Table 4. The correlation coefficient between the target value and the mean predicted value is 0.9978 for training samples and 0.9992 for testing samples. This shows that the DNN model has outstanding accuracy and applicability after training. Therefore, the SBS content of unknown samples can be predicted correctly with enough parallel samples by this DNN model.

Comparative Studies.
To compare the accuracy of the DNN with that of existing methods, we selected three different SBS contents, i.e., 3.6%, 4.6%, and 5.6%, and two well-known methods, namely, the standard curve method (SCM) [12] and the random forest method (RFM) [35]. For SCM, a linear relation between the SBS content and the absorbance area ratios (699 cm−1/1377 cm−1 and 966 cm−1/1377 cm−1) is established with modified asphalt samples of different SBS contents. For RFM, the training samples are classified into several subsets, which are used to establish prediction models by the bagging method. Then, the final prediction model, composed of those prediction models in a specific way, is used to predict the SBS content of an unknown sample.
To maintain the same conditions as in the DNN training, we used 23 samples for each SBS content to establish the prediction models. Furthermore, we utilized the correlation coefficient and MSE to evaluate the accuracy of the different models. Moreover, to evaluate the prediction precision for each SBS content, we introduced P_value, defined as follows:

P_value = (1 − |mean predicted value − target value| / target value) × 100%. (23)
Table 6 lists the results of the different prediction models. It can be seen that the MSE values are 0.231, 0.078, and 0.049 for SCM, RFM, and DNN, respectively. The simple linear prediction model shows poor accuracy, whereas the DNN exhibits the best prediction performance. P_value shows the precision of the mean predicted value derived from the parallel samples. The precisions of SCM are less than 94% and those of RFM are less than 96%, whereas the precisions of DNN are all greater than 98% under the same test conditions, which indicates that the predicted value obtained by the DNN is much closer to the actual SBS content in the modified asphalt. The DNN therefore displays the most outstanding prediction accuracy for the SBS content of unknown samples and performs better than the other two prediction methods. A DNN with strong generalization ability can be trained owing to its excellent learning of the sample features. Besides, the DNN has significant advantages in dealing with multivariable and comprehensive prediction problems. The DNN prediction model can reduce the complexity of existing measurement methods and improve their prediction accuracy stably and efficiently.
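For clarity, the precision metric of equation (23) can be computed as below; the mean predicted value used here is a hypothetical example, not a result from the study.

```python
# Sketch of the precision metric P_value of equation (23); the mean predicted
# value below is a hypothetical example, not a result from the study.
def p_value(mean_predicted, target):
    return (1.0 - abs(mean_predicted - target) / target) * 100.0

print(round(p_value(4.55, 4.6), 2))  # 98.91, i.e., precision greater than 98%
```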

Conclusions
In this study, SBS content in modified asphalt was detected by the DNN prediction model based on the deep learning method. According to the results and discussion, the following conclusions can be drawn: (1) Absorbance data derived from spectra can be utilized to train and test the DNN model, which can successfully recognize the SBS and its content in modified asphalt.
(2) Data preprocessing, including noise and dimension reduction, can improve the accuracy and learning speed of the DNN model. The MSE value decreased by 68% for the DNN with preprocessed data.
(3) Using various efficient learning techniques, including minibatch gradient descent, the cost function, initialization, normalization, and dropout, the MSE value stabilized at 0.047 and 0.032 after 2160 iterations for the training and testing results, respectively. The correlation coefficient between the target and the mean predicted value is 0.9978 for training samples and 0.9992 for testing samples, which indicates that the DNN model has excellent accuracy and applicability after training with 23 duplicates.
(4) In comparison with SCM and RFM, DNN displayed the most outstanding predicting capability. Precisions of DNN are all greater than 98% for the same test conditions.

Data Availability
We confirmed that the data submitted in this manuscript are available. All the data provided in the manuscript were obtained from the experiments performed at the School of Highway of Chang'an University.

Conflicts of Interest
The authors declare that there are no conflicts of interest.