Multiple Linear Regressions by Maximizing the Likelihood under Assumption of Generalized Gauss-Laplace Distribution of the Error

Multiple linear regression analysis is widely used to link an outcome with predictors in order to better understand the behaviour of the outcome of interest. Usually, under the assumption that the errors follow a normal distribution, the coefficients of the model are estimated by minimizing the sum of squared deviations. A new approach based on maximum likelihood estimation is proposed for finding the coefficients of linear models with two predictors without any restrictive assumption on the distribution of the errors. The algorithm was developed, implemented, and tested as proof of concept on ten sets of compounds by investigating the link between activity/property (as outcome) and structural information encoded by molecular descriptors (as predictors). The results on real data demonstrated that in all investigated cases the power of the error differs from the convenient value of two when the generalized Gauss-Laplace distribution is used to relax the restrictive assumption of normally distributed errors. The generalized Gauss-Laplace distribution of the error could therefore not be rejected, while the hypothesis that the power of the error is itself normally distributed also failed to be rejected.


Introduction
The first report on multiple linear regression appeared in 1885 [1] and was detailed in 1886 [2]. The classical treatment of multiple regression was built on the product-moment method introduced in 1846 [3] and later connected with the optimal correlation [4].
In his first published paper, Fisher introduced the method of likelihood maximization [5], later used in conjunction with Pearson's correlation [6], a paper which started a contentious debate between the method of central moments and the method of likelihood estimation [7], replied to in [8], and finally linked with the partial correlation coefficients [9].
A multiple linear regression model involves more than two variables, one (Y) being assumed dependent and the others (X1, X2, ..., Xk) being assumed independent, and is considered here as a continuation of a previous study [10]. The most important assumption is that the data are paired; that is, a natural association between the values of the variables exists. Such an association is obtained when, for instance, a multiple linear regression is constructed involving a measured property/activity (Y) for a series of compounds for which other measured properties/activities or structure-based descriptors are available (X1, X2, ..., Xk), the natural association being in this case the (chemical) compound responsible for that property/activity/descriptor value.
The least squares method is the standard approach for regression analysis, the method being credited to Legendre [11] (for a debate about its inventor, see [12]); it also (implicitly) assumes that the error is normally distributed.
By iteratively applying a local quadratic approximation to the likelihood (through the Fisher information [13]), the least squares method was used in [14] to fit a generalized linear model as a way of unifying classical, logistic, and Poisson (linear) regression: the least squares problem is iteratively reweighted on the way to the maximum likelihood estimate of the model parameters.
The generalized Gauss-Laplace distribution is the natural extension [15] of Gauss's [16] and Laplace's [17] symmetric distributions. It is a three-parameter distribution (location, scale, and shape); parameter estimation via maximum likelihood and the method of moments has been reported in [18], concluding that the estimates do not have a closed form and must be obtained numerically.
A more general result regarding maximum likelihood estimation can be found in [19], but it unfortunately provides only sufficient conditions under which the maximum likelihood estimate exists and is unique, without the converse (namely, there may also exist conditions other than the given ones in which the maximum likelihood estimate exists and is unique). Moreover, for numerical estimates, uniqueness is hard to discuss.
The problem of estimating the parameters of a multiple linear regression under the assumption of a generalized Gauss-Laplace distribution of the error is a hard problem which can be solved only numerically, and it involves an optimization problem with k + 3 constraints (the vanishing partial derivatives), where k is the number of unknown (to be determined) coefficients of the multiple linear regression. In this paper a mathematical and a numerical treatment of the problem is proposed.
In order to provide a proof of concept for the proposed method of relaxing the distribution of the error when linear regression is used to link chemical information with biological measurements, ten previously reported datasets were considered, all with a significant role in human medicine or ecology.

Mathematical Treatment
One may define the generalized Gauss-Laplace (GL) distribution as

GL(x; μ, σ, κ) = κ / (2σΓ(1/κ)) · exp(−(|x − μ| / σ)^κ)  (1)

where Γ(⋅) is the Gamma function and μ (location), σ (scale), and κ (shape) are the parameters of the distribution; κ = 2 recovers the Gauss (normal) case and κ = 1 the Laplace case.
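As a quick numerical check of (1), the short Python sketch below (the function name gl_pdf is mine, not the paper's) recovers the Laplace density at κ = 1 and the Gauss density (with variance σ²/2 in this parameterization) at κ = 2:

```python
import math

def gl_pdf(x, mu, sigma, kappa):
    # Generalized Gauss-Laplace density from (1):
    # kappa = 2 -> Gauss (normal), kappa = 1 -> Laplace.
    norm = kappa / (2.0 * sigma * math.gamma(1.0 / kappa))
    return norm * math.exp(-((abs(x - mu) / sigma) ** kappa))

# Laplace at the mode: 1/(2*sigma) = 0.5 for sigma = 1
print(gl_pdf(0.0, 0.0, 1.0, 1.0))
```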
This definition will be used here for the Gauss-Laplace distribution to relax the normality constraint on the distribution of the error (ε).

Statement of the Problem. Multiple linear regression under the assumption of the GL distribution (see (1)) for the error ε, where the coefficients (a_i)_{1≤i≤k}, σ, and κ are to be determined from sampled data, is stated in the following equation:

Y = Σ_{1≤i≤k} a_i X_i + ε, ε ~ GL(ε; 0, σ, κ)  (2)

where ε = Y − Σ_{1≤i≤k} a_i X_i (and ε̂ ≈ 0 and μ = 0). The case with intercept (Y ≈ Ŷ = a_0 + Σ_{1≤i≤k} a_i X_i) is reduced to the case without intercept by increasing by one the number of independent variables (a_{k+1} ← a_0; X_{k+1} ← 1; and k ← k + 1, so that Y ≈ Ŷ = Σ_{1≤i≤k+1} a_i X_i) and therefore will not be mentioned further. The substitution given in (2) transforms the distribution from a univariate to a multivariate one and can be characterized mathematically by a series of properties, such as those given in [29] (results applicable after recentring at μ = 0).
Let us take a sample of n paired measurements (y_j, (x_{i,j})_{1≤i≤k})_{1≤j≤n}, where n is the number of paired measurements and k is the number of independent measures. The likelihood for the sample is

L = Π_{1≤j≤n} GL(ε_j; 0, σ, κ).  (3)

Doing the substitution given in (2) and expressing its parameters in full, the likelihood function from (3) becomes

L(a_1, ..., a_k, σ, κ) = Π_{1≤j≤n} [κ / (2σΓ(1/κ))] · exp(−(|y_j − Σ_{1≤i≤k} a_i x_{i,j}| / σ)^κ).  (4)

The likelihood is at maximum when all its partial derivatives are zero:

∂L/∂a_1 = 0, ..., ∂L/∂a_k = 0, ∂L/∂σ = 0, ∂L/∂κ = 0.  (5)

Please note that the value of σ cancelling ∂L/∂σ depends only on κ and (a_i)_{1≤i≤k}; solving ∂L/∂σ = 0 gives

σ = ((κ/n) · Σ_{1≤j≤n} |y_j − Σ_{1≤i≤k} a_i x_{i,j}|^κ)^{1/κ}.  (6)

Therefore, let LMLRGLS be the log-likelihood having this constraint substituted in. After some calculations, the expression of LMLRGLS is

LMLRGLS((a_i)_{1≤i≤k}, κ) = n·ln(κ) − n·ln(2σ) − n·ln Γ(1/κ) − n/κ,  (7)

where σ is given by (6) and

S = Σ_{1≤j≤n} |y_j − Σ_{1≤i≤k} a_i x_{i,j}|^κ.  (8)

On the other hand, σ depends only on κ and (a_i)_{1≤i≤k}, and therefore, when the derivatives of S relative to a_1, ..., a_k are zero, the derivatives of the maximum likelihood estimation function (either of LMLRGL and LMLRGLS) are also zero. Taking the partial derivatives of S, with the above substitution, the following equations result:

Σ_{1≤j≤n} d(y_j, κ) · x_{i,j} · (y_j − Σ_{1≤l≤k} a_l x_{l,j}) = 0, for 1 ≤ i ≤ k,  (9)

where d(y_j, κ) = |y_j − Σ_{1≤i≤k} a_i x_{i,j}|^{κ−2}. At this point only the expression of the likelihood function (see (7) and (8)) must be included in the new statement of the problem (see (9)) in order to keep in full the derivative constraints of the initial problem (see (4) and (5)). There is no obvious further reduction of the problem. However, revising the results obtained to this point, the cancelling of the likelihood derivative relative to σ was incorporated at the beginning of the simplification (see (6)), while the cancelling of the likelihood derivatives relative to the regression coefficients a_1, ..., a_k is equivalent to the equations given above for the regression coefficients (see (9)). On the other hand, (7)-(9) facilitate an iterative solution of the problem.
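Equations (6) and (7) can be checked numerically. The sketch below (function names are mine) computes the closed-form scale from (6) and the concentrated log-likelihood from (7) for a given residual vector and power κ:

```python
import math

def sigma_hat(residuals, kappa):
    # Scale cancelling the derivative wrt sigma, as in (6):
    # sigma^kappa = (kappa/n) * sum |e_j|^kappa
    n = len(residuals)
    return ((kappa / n) * sum(abs(e) ** kappa for e in residuals)) ** (1.0 / kappa)

def lmlrgls(residuals, kappa):
    # Concentrated log-likelihood (7); the last term is n/kappa because
    # sum (|e_j|/sigma)^kappa collapses to n/kappa at sigma = sigma_hat.
    n = len(residuals)
    s = sigma_hat(residuals, kappa)
    return n * (math.log(kappa) - math.log(2.0 * s)
                - math.lgamma(1.0 / kappa)) - n / kappa
```

Because only |e_j| enters (6)-(8), the concentrated likelihood is invariant to a sign flip of all residuals, which is a convenient sanity check.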

Fixed-Point Theory for Iterating to the Optimal Solution
A convenient notation was used in (9) to suggest the further treatment of the problem. Actually, Fisher and Mackenzie were the first to propose such a numerical treatment in statistics [30]. It is based on the assumption that, near the optimal solution, an iterative evaluation of the coefficients (here κ and (a_i)_{1≤i≤k}) using their previous values (hidden in (9) inside the function d(y_j, κ)) leads to the optimum. The optimum is reached when no significant change in their values occurs from one step to the next, and, at that point, the d(y_j, κ) function acts as the argument of a contraction mapping [31]. There are some inconveniences for a smooth application of fixed-point theory. One of them is that obtaining the maximum of the LMLRGLS function (see (7), obtained for known values of the coefficients (a_i)_{1≤i≤k}, where (∂/∂κ)LMLRGLS vanishes) is not a simple problem; from its explicit expression it is expected to have more than one local maximum. Fortunately, some clues exist, such as the domain of κ (ranging from 0) and the expectation on the power of the error (here, let us say that κ is expected to lie between 0.1 and 10, and is very unlikely but possible to lie between 0.01 and 100; outside this range the precision of the computations also often fails). But the biggest inconvenience is that (9) is not a single equation but a system of equations, and here we may only provide different strategies of iteration, hoping that at least one of them provides the contraction mapping.
Namely, we may:
(i) start from some initial values of the regression coefficients (a_{i,0})_{1≤i≤k} and of the power of the error κ_0;
(ii) use the initial values to obtain the likelihood function LMLRGLS (from (7)) as a function depending only on κ; it requires only the evaluation of S (see (8));
(iii) find the maximum (let this be κ_new) of the LMLRGLS function from (7) (where its derivative is 0 and the point is a global extreme);
(iv) prepare the start of a loop on t by setting it to 0 (t ← 0);
(v) it is possible, especially at the beginning of the iteration (when t = 0), that κ_new and κ_t are largely different from each other; a major change in κ will accelerate the convergence but will also increase the likelihood of divergence; therefore, use the new (κ_new) and old (κ_t) values to temper the change in κ, as in κ_{t+1} ← α·κ_new + (1 − α)·κ_t, with a small 0 < α < 1 to be determined;
(vi) do a loop (t ← t + 1) using the new value of κ (namely, κ_{t+1}) to calculate the new values of the coefficients ((a_{i,t+1})_{1≤i≤k} with (9)) and, using the new values of the coefficients, calculate the new value of κ (turn back to finding the maximum from (7));
(vii) repeat until ((a_{i,t+1})_{1≤i≤k}) and (κ_{t+1}) no longer change during iteration.
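The steps (i)-(vii) can be sketched in Python as a proof of concept (naming is mine; the κ-maximization of step (iii) is done here by a simple grid search over a plausible range, and the smoothing factor α is the free parameter of step (v)):

```python
import numpy as np
from math import lgamma, log

def lmlrgls(res, kappa):
    # Concentrated log-likelihood of (7), with sigma from (6) substituted in.
    n = res.size
    s = ((kappa / n) * np.sum(np.abs(res) ** kappa)) ** (1.0 / kappa)
    return n * (log(kappa) - log(2.0 * s) - lgamma(1.0 / kappa)) - n / kappa

def fit_mlr_gl(X, y, alpha=0.1, n_iter=200):
    a = np.linalg.lstsq(X, y, rcond=None)[0]    # (i) start from OLS coefficients
    kappa = 2.0                                 # (i) start from normal errors
    grid = np.linspace(0.5, 5.0, 91)            # plausible range for kappa
    for _ in range(n_iter):                     # (iv), (vi), (vii)
        res = y - X @ a
        k_new = max(grid, key=lambda k: lmlrgls(res, k))     # (iii) 1-D maximum
        kappa = alpha * k_new + (1.0 - alpha) * kappa        # (v) smoothed update
        w = np.maximum(np.abs(res), 1e-12) ** (kappa - 2.0)  # weights d from (9)
        a = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))  # (vi)
    return a, kappa
```

With α = 0.1 this matches the exponential smoothing of κ mentioned in the text; whether the resulting map actually contracts still depends on the data, as discussed above.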
Upon arriving at the stationary point, all criteria for the maximization of the likelihood are fulfilled; namely, the equations corresponding to the cancellation of all the derivatives are satisfied. The great advantage of the proposed method is that it reduces the problem of finding the maximum of a function of k + 3 variables to finding the maximum of a function of one variable (κ), in a repeated process, of course. The disadvantage is that the evolution is through a contracting functional whose contraction cannot be assured at all times. This is the reason why there are different strategies for finding such a contracting functional (see example 6.1 in [32] for the construction of a contracting functional from resampling).
Algorithm 2: Calculate σ at some step t from (7).
Some calculations are the same regardless of the strategy used and are given next as Algorithms 1-3.
One strategy is to use the equations from the cancellation of the regression coefficients' derivatives (see (9)) to iterate the values of the coefficients one at a time, while another is to treat (9) as a system of equations and solve it as a whole (Algorithm 4).
Another choice that must be specified is whether (7)-(9) are used simply to iterate to new values or whether (9) should be used in an inner loop to converge to new estimates of the coefficients associated with each new κ (Algorithm 5). The working assumption is that the errors are close to normally distributed (κ = 2) and that the optimal solution of (5) is near this.
The 2 × 2 contingency of strategies given above was tested on sampled data (see Section 4), and the pair (Algorithms 4 and 5) turned out to be the only one providing a contraction functional. Thus, for convenience, the working algorithm is given in full (see Algorithm 6) and was used to obtain the results given in the next section.
In order to assure the numerical stability of the calculations, Algorithm 6 was used with a fixed and reasonable convergence threshold of 10^−5, and, in order to assure a smooth convergence, the new estimate of the error's power (κ_new) was blended by exponential smoothing (a technique commonly applied to time series [33]): κ_t ← 0.1·κ_new + 0.9·κ_{t−1}.
Therefore, in all scenarios, the initial (starting) values of the estimates to be determined are the ones given by the classical multiple linear regression model, as presented in the following: (a_{i,0})_{1≤i≤k} = MLR(Y, X_1, ..., X_k) and κ_0 = 2, where MLR(Y, X_1, ..., X_k) uses the classical strategy of ordinary least squares ((XᵀX)^{−1}Xᵀy) to find the parameters.
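For illustration, the OLS starting values can be computed as follows (a minimal sketch; it solves the normal equations (XᵀX)a = Xᵀy rather than forming an explicit inverse, and handles the intercept by the constant-column trick described earlier):

```python
import numpy as np

def mlr_ols(X, y):
    # Classical start values: a = (X^T X)^{-1} X^T y, solved rather than inverted.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Intercept: append a constant column X_{k+1} = 1 instead of a separate a_0.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_with_intercept = np.column_stack([X, np.ones(len(X))])
```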

Case Study
Ten datasets of chemical compounds of different sample sizes (Table 1), along with their measured outcome activity, were considered to illustrate Algorithms 1-6. For all datasets, the experimental values of the dependent variable (Y) and of the previously reported selected independent variables (under the assumption of the normal distribution of the error) in multiple linear regressions with two (k = 2) independent variables are given in Table 2.
Different descriptors (independent variables) were used to explain the activity/property of interest in the models presented in Table 2.
Algorithm 4: Block solve providing (a_{i,t+1})_{1≤i≤k} at some step t with (9).
Algorithm 5: Double loop with (9) for (7) and (8).
Table 3: Differences between values of coefficients obtained by the classical linear regression approach compared to the proposed approach.
All sets subjected to analysis converged while maximizing the likelihood, and Table 3 provides the differences between the values obtained by the classical MLR approach and the values obtained by the proposed approach (MLR-MLE-GL).
The results presented in Table 3 reveal different estimates for the coefficients under the assumption of the more general generalized Gauss-Laplace distribution of the error. In 6 out of 10 cases, the power of the error proved to be higher than the convenient value of 2, the highest value being observed for set 10 (κ = 2.937). Conversely, the power of the error proved to be almost half of the expected value for set 5 (κ = 1.058). The values of the coefficients obtained by the classical MLR approach and by the proposed approach were close to each other in two cases (set 4 and set 8). With one exception, represented by set 2, the sum of the absolute differences of a_0, a_1, and a_2 was less than 1. The values obtained for the population standard deviation by the two investigated methods proved to be the closest to each other, with the highest difference of 0.49310 observed for set 10.
A question (hypothesis) can be raised about the power of the error: whether its distribution can be assumed normal. This hypothesis (that the distribution of the power of the error can be assumed normal) can be tested on the results, even if the sample is small (10 cases), to provide an answer. The tendency to have a mean of two in convergence is clear (mean κ = 2.06 over the 10 cases), and the hypothesis of normality cannot be rejected: the Anderson-Darling statistic indicates that only 14.72% of random samples are in better agreement with the normal distribution (p_to-reject = 0.8528 > 0.05), while the Kolmogorov-Smirnov statistic indicates only 28.7% (p_to-reject = 0.713 > 0.05).
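Both normality tests can be run in Python with scipy.stats. The vector below is a hypothetical stand-in: the text reports only κ = 2.937 (set 10), κ = 1.058 (set 5), and the mean of 2.06, not all ten fitted powers, so these values are for illustration only:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the ten fitted error powers (illustration only;
# only two individual values and the mean are reported in the text).
kappas = np.array([2.1, 1.8, 2.4, 2.0, 1.058, 2.2, 1.9, 2.5, 1.7, 2.937])

ad = stats.anderson(kappas, dist='norm')   # Anderson-Darling against normality
ks = stats.kstest(kappas, 'norm', args=(kappas.mean(), kappas.std(ddof=1)))
print(ad.statistic, ks.pvalue)
```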

Conclusions
The proposed algorithm (Algorithm 6 in this paper) was found to provide an appropriate contraction mapping for the maximum likelihood estimation of the multiple linear regression parameters under the generalized Gauss-Laplace assumption for the measurement errors. The analysis conducted on 10 samples demonstrated that, in general, it is not appropriate to assume that the measurement error is normally distributed, and, when possible, a deeper treatment of the distribution of the error needs to be conducted. From the sample of 10 cases, the analysis of the distribution of the error showed that the normal distribution of the power of the error could not be rejected, its mean being very likely equal to two.