Parameter Estimation for a Class of Lifetime Models



Introduction
In the study of fatigue lifetime evaluation of rubber materials, the accelerated aging test is widely used as an effective procedure for obtaining data on performance indicators (P), aging time (τ), and aging temperature (T). In order to investigate the relationships among them, Dakin [1,2] proposed the kinetic equation for aging; that is,

P = B e^{-K τ^α},  (1)

where P is the performance indicator of the rubber, τ is the aging time, K is an aging rate constant depending on the temperature T, B is a constant, and α is a constant in (0, 1).
Mott and Roland [3] and Wise et al. [4] showed that the K in (1) can be expressed in the Arrhenius form. In this paper, we also adopt the convention that the K for rubber can be described by the Arrhenius type

K = a e^{-b/T},  (2)

where T is the aging temperature and a and b are constants. Combining (2) and (1), we obtain the model

P = B exp(-a e^{-b/T} τ^α),  (3)

which is called the P–τ–T bivariate nonlinear regression model in this paper. Here, B, a, b, and α are the model parameters. In the past, one (see, e.g., [5-7]) usually split (3) into (1) and (2) to estimate the parameters in (3). The constant α is determined, to two decimal places, by the successive approximation method, which minimizes

Q(α) = Σ_i Σ_j (P_ij − P̂_ij)^2,  (4)

where P_ij and P̂_ij denote the experimental measurement and the predicted value of the performance indicator when the aging temperature index is i and the experiment serial number is j, respectively. When α is assigned a value, (1) can be converted into the following linear form through the logarithm transformation

Y = c + d X,

where Y = ln P, c = ln B, d = −K, and X = τ^α.
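For illustration, the successive-approximation search for α can be sketched in Python rather than the MatLab/SPSS tools used in the paper; the model form P = B exp(−a e^{−b/T} τ^α) follows the reconstruction above, and the function names (`model`, `rss_for_alpha`, `search_alpha`, `fit`) are hypothetical:

```python
import numpy as np

def model(tau, T, B, a, b, alpha):
    """P-tau-T model (3): P = B * exp(-a * exp(-b/T) * tau**alpha)."""
    K = a * np.exp(-b / T)                # Arrhenius rate constant, eq. (2)
    return B * np.exp(-K * tau ** alpha)  # Dakin kinetic equation, eq. (1)

def rss_for_alpha(alpha, tau, T, P, fit):
    """Residual sum of squares (4) for a candidate alpha; `fit` returns
    predicted values of P given alpha (e.g. via the two log-linear fits)."""
    return np.sum((P - fit(alpha, tau, T)) ** 2)

def search_alpha(tau, T, P, fit):
    """Successive approximation: pick alpha in (0, 1) to two decimal places."""
    grid = np.round(np.arange(0.01, 1.0, 0.01), 2)
    return min(grid, key=lambda al: rss_for_alpha(al, tau, T, P, fit))
```

Here `fit` stands for whatever predictor is refitted at each trial value of α; minimizing (4) over a 0.01-spaced grid realizes the "two decimal places" criterion.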
The values of c and d are determined by the least squares method for each aging temperature T_i, giving estimates ĉ_i and d̂_i. The B̂ is given by B̂ = (1/n) Σ_i B̂_i, where B̂_i = e^{ĉ_i} and n is the maximum index of the aging temperature.

Abstract and Applied Analysis
The K̂_i is given by K̂_i = −d̂_i and is used as a known value in (2). Similarly, (2) can be converted into the following linear form through the logarithm transformation

Y′ = c′ + d′ X′,

where Y′ = ln K, c′ = ln a, d′ = −b, and X′ = T^{−1}.
The values of c′ and d′ are also determined by the least squares method. The estimates of a and b are then given by â = e^{ĉ′} and b̂ = −d̂′. At last, the final estimates of the parameters are substituted into (3) to form the regression forecast model. However, the above-mentioned two-step procedure (TSM) has the following limitations.
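As a concrete sketch of the whole two-step procedure for a fixed α, again in Python in place of the paper's SPSS/MatLab workflow and assuming the reconstructed model form P = B exp(−a e^{−b/T} τ^α); the name `tsm_fit` and the data layout are illustrative:

```python
import numpy as np

def tsm_fit(tau, T_levels, P, alpha):
    """Two-step method for fixed alpha.
    tau:      (m,) aging times, shared by all temperature levels
    T_levels: (n,) aging temperatures
    P:        (n, m) measured performance indicators
    Returns (B_hat, a_hat, b_hat)."""
    X = tau ** alpha
    lnB_i, K_i = [], []
    for i, _ in enumerate(T_levels):
        # Step 1: ln P = ln B - K * tau**alpha, fitted per temperature level.
        d, c = np.polyfit(X, np.log(P[i]), 1)  # slope d = -K_i, intercept c = ln B_i
        lnB_i.append(c)
        K_i.append(-d)
    B_hat = np.mean(np.exp(lnB_i))             # B taken as the average of the B_i
    # Step 2: ln K = ln a - b / T, fitted across temperatures (Arrhenius, eq. (2)).
    dp, cp = np.polyfit(1.0 / np.asarray(T_levels), np.log(K_i), 1)
    return B_hat, np.exp(cp), -dp              # a = e^{c'}, b = -d'
```

The second step reuses the K̂_i from the first step as known values, which is exactly the error-propagation weakness discussed below.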
First, the estimates of the parameters in (1) and (2) obtained by the logarithmic regression are generally not the least squares solutions for the original (untransformed) variables [8].
Second, substituting the estimates K̂_i obtained from (1) into (2) may lead to large errors. This is because the estimates of the parameters a and b in (3) rely heavily on the precision of K̂_i: a small change in K̂_i leads to a considerable change in the values of â and b̂. Furthermore, TSM is a tedious calculation method.
Finally, the parameter B in (3) is taken to be the average of the B̂_i, and the goodness of this choice needs verification.
In view of these limitations, the purpose of this paper is to adopt the Marquardt method (MM) to estimate the four parameters in (3).
Consider a general nonlinear regression model y = f(x, β) with parameters β = (β_1, β_2, …, β_r). Substituting the i-th set of observations x_i of the independent variables into the model (9), we see that y_i = f(x_i, β) + ε_i. Since x_1, x_2, …, x_n are known values, f(x_i, β) is a function of β_1, β_2, …, β_r alone. For a given initial value β^(0) = (β_1^(0), β_2^(0), …, β_r^(0)), we expand f(x_i, β) using Taylor's formula at β^(0) and omit the quadratic and higher-order terms. The expansion is as follows:

f(x_i, β) ≈ f(x_i, β^(0)) + Σ_{j=1}^r D_ij (β_j − β_j^(0)),  (11)

where D_ij denotes the partial derivative ∂f(x_i, β)/∂β_j evaluated at β^(0). All the quantities in (11) except the parameters β_1, β_2, …, β_r are known. It is clear that the right-hand side of (11) is a linear function of β_1, β_2, …, β_r. Thus, we apply the least squares method to (11) and set

Q = Σ_{i=1}^n [y_i − f(x_i, β^(0)) − Σ_{j=1}^r D_ij (β_j − β_j^(0))]^2 + d Σ_{j=1}^r (β_j − β_j^(0))^2,  (12)

where d ≥ 0 is called the damping factor. When d = 0, this method of linearization becomes the Gauss-Newton method [9], which is a special case of MM; moreover, the selection of initial iteration values for the Gauss-Newton method is harder than that for MM. In order to minimize Q, the first partial derivatives of Q with respect to β_1, β_2, …, β_r must be zero; that is,

∂Q/∂β_j = 0,  j = 1, 2, …, r.  (13)

The equality (13) can be turned into the following form:

(a_11 + d)(β_1 − β_1^(0)) + a_12 (β_2 − β_2^(0)) + ⋯ + a_1r (β_r − β_r^(0)) = a_1y,
a_21 (β_1 − β_1^(0)) + (a_22 + d)(β_2 − β_2^(0)) + ⋯ + a_2r (β_r − β_r^(0)) = a_2y,
⋮
a_r1 (β_1 − β_1^(0)) + a_r2 (β_2 − β_2^(0)) + ⋯ + (a_rr + d)(β_r − β_r^(0)) = a_ry,  (14)

where a_jk = Σ_{i=1}^n D_ij D_ik and a_jy = Σ_{i=1}^n D_ij [y_i − f(x_i, β^(0))]. Obviously, the solution of (14) depends on the initial values β^(0) and on the damping factor d: by (14), the larger the value of d is, the smaller the absolute values of β_j − β_j^(0) are. Therefore, the value of d should not be too large; otherwise, the number of iterations will increase. The criterion for selecting the value of d is whether the residual sum of squares is decreasing.

We now apply MM to (3). There are two independent variables (aging time τ and aging temperature T) and four unknown parameters (B, a, b, and α) in (3). The steps for solving the nonlinear equations for the four parameters are as follows.
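In matrix form, with J the n × r matrix of the derivatives D_ij and r the residual vector with entries y_i − f(x_i, β^(0)), the system (14) reads (JᵀJ + dI)Δβ = Jᵀr. A minimal Python sketch of one damped solve (illustrative, not the authors' code):

```python
import numpy as np

def marquardt_step(J, r, d):
    """Solve the damped normal equations (14):
        (J^T J + d I) * delta = J^T r,
    where J[i, j] = df/dbeta_j at beta^(0) for observation i,
    r[i] = y_i - f(x_i, beta^(0)), and d >= 0 is the damping factor.
    d = 0 reduces to the Gauss-Newton step."""
    A = J.T @ J + d * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ r)
```

As the damping factor grows, the identity term dominates and the step Δβ shrinks toward zero, which is the shrinkage property noted above.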

Steps for Calculating the Parameters in (3)
(a) Calculate the partial derivatives of P in (3) with respect to B, a, b, and α, respectively; we obtain

∂P/∂B = exp(−a e^{−b/T} τ^α),
∂P/∂a = −B τ^α e^{−b/T} exp(−a e^{−b/T} τ^α),
∂P/∂b = (a B τ^α / T) e^{−b/T} exp(−a e^{−b/T} τ^α),
∂P/∂α = −a B τ^α (ln τ) e^{−b/T} exp(−a e^{−b/T} τ^α).

(b) Select the initial iteration values of the parameters; that is, β^(0) = (B^(0), a^(0), b^(0), α^(0)). Whether the selection of the initial values is appropriate determines both the amount of calculation and the convergence of the iteration process. This paper uses TSM to estimate the initial values of the parameters from the aging data in the related paper [6]. These values are also taken as the initial values of the parameters in the random simulation in Section 3.

(c) Carry out the iteration: compute the residual sum of squares Q^(0) at β^(0) and choose an initial damping factor d^(0). (i) Set d = d^(0)/10, solve (14) to obtain β^(1), and calculate the residual sum of squares Q^(1). (ii) If Q^(1) < Q^(0), the second iteration is done. But if Q^(1) ≥ Q^(0), set d = d^(0), recalculate β^(1), and recalculate the residual sum of squares Q^(1). (iii) If Q^(1) < Q^(0), the second iteration is done. But if Q^(1) ≥ Q^(0), set d = 10 d^(0), and recalculate β^(1) and Q^(1). Repeat this adjustment until the residual sum of squares decreases.
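A hypothetical Python sketch of step (a) and of the damping-factor adjustment in step (c), assuming the reconstructed model form; the names `jacobian` and `adjust_damping` are illustrative:

```python
import numpy as np

def jacobian(tau, T, B, a, b, alpha):
    """Partial derivatives of P = B*exp(-a*e^{-b/T}*tau^alpha)
    with respect to (B, a, b, alpha) -- step (a). tau must be > 0."""
    E = np.exp(-b / T)
    P = B * np.exp(-a * E * tau ** alpha)
    dB = P / B
    da = -P * E * tau ** alpha
    db = P * a * E * tau ** alpha / T
    dalpha = -P * a * E * tau ** alpha * np.log(tau)
    return np.column_stack([dB, da, db, dalpha]), P

def adjust_damping(Q_trial, Q_prev, d_prev):
    """Steps (i)-(iii): try d/10, then d, then 10*d, accepting the first
    damping factor for which the residual sum of squares decreases.
    Q_trial(d) returns the residual sum of squares after a step with d."""
    for d in (d_prev / 10.0, d_prev, 10.0 * d_prev):
        if Q_trial(d) < Q_prev:
            return d
    return None  # no decrease found; enlarge d further or stop
```

The Jacobian rows feed directly into the a_jk and a_jy coefficients of (14).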

Random Simulation and Data Processing.

Random simulation is a method that uses random numbers to conduct a computer simulation. The sample observations obtained by random sampling are used to estimate the parameters of the models (see, e.g., [10, 11]). This paper uses MatLab programming to simulate the data. Follow the steps below.
(3) Assume that the aging time τ follows the uniform distribution on the interval (1, 100) and generate random numbers from (1, 100) as the values of τ. The number of simulations is N.
(4) After obtaining all the simulated values, insert them into the model and add to P a random error following the uniform distribution on (−0.1, 0.1). In this way, we eventually simulate N sets of subsamples.
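Steps (3) and (4) can be sketched as follows (NumPy in place of the paper's MatLab code; the function name, the temperature levels, and any parameter values used with it are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(N, B, a, b, alpha, T_levels):
    """Steps (3)-(4): draw aging times tau ~ U(1, 100), evaluate the
    model P = B*exp(-a*e^{-b/T}*tau^alpha), and perturb P with
    additive noise ~ U(-0.1, 0.1)."""
    tau = rng.uniform(1.0, 100.0, size=N)
    T = rng.choice(T_levels, size=N)
    P = B * np.exp(-a * np.exp(-b / T) * tau ** alpha)
    return tau, T, P + rng.uniform(-0.1, 0.1, size=N)
```

Each returned triple (τ, T, P) is one simulated observation; N such draws give the subsamples used for fitting.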
According to the simulated data, we first use MatLab programming to determine that the approximate value of α is 0.80 and then calculate the values of B, K, a, and b by TSM using the SPSS software. The results are displayed in Table 2.
Then we estimate the parameters in the bivariate nonlinear model (3) by MM using SPSS software (the initial values of the parameters here are the same as those used in random simulation). The results are displayed in Table 3.

Result Analysis.
(1) In regression analysis, the coefficient of determination R^2 = 1 − (residual sum of squares)/(total sum of squares of deviations) is a statistic that measures the goodness of fit of the model under consideration. Specifically, the coefficient of determination is a statistical measure of how well the regression line fits the real data points: the closer R^2 is to 1, the closer the practical observations are to the fitted line and the better the goodness of fit of the model is. From Tables 2 and 3, it can be seen that the R^2 of MM is larger than that of TSM, which indicates that the prediction model of MM is more suitable for fitting the simulated data.
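The coefficient of determination used here is, as an illustrative Python snippet:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: R^2 = 1 - RSS / TSS."""
    rss = np.sum((y - y_hat) ** 2)                # residual sum of squares
    tss = np.sum((y - np.mean(y)) ** 2)           # total sum of squares of deviations
    return 1.0 - rss / tss
```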
(2) Comparing the estimates of the parameters obtained by the two methods, we easily find that the estimates obtained by MM are closer to the initial values, which indicates that using MM to estimate the parameters in the P–τ–T model is more reasonable.
(3) We also compare the residual sums of squares. The residual sum of squares for MM is 0.325, while that for TSM is 0.641; the former is only about half of the latter. Obviously, the prediction error of the P–τ–T model fitted by MM is smaller, and the precision of its fitted equation is higher.

Conclusion
In this paper, we demonstrate, by theoretical analysis and random simulation, that MM is more suitable than TSM for estimating the parameters of the aging lifetime model. Our method not only avoids a great deal of tedious calculation in TSM but also introduces the damping factor, which loosens the restriction on selecting the initial values. Furthermore, compared with TSM, MM greatly decreases the fitting error between the predicted values and the practical observations, yielding the best-fit parameters. In addition, the model estimated by MM has higher fitting precision than that estimated by TSM.
We note that the parameter estimation in this paper can also be used in the prediction of the lifetime of other materials, such as composite materials (see [12]).