Application of the LINEX Loss Function with a Fundamental Derivation of the Liu Estimator

For a variety of well-known approaches, optimal predictors and estimators are derived with respect to the asymmetric LINEX loss function. This research discusses applications of an iteratively feasible minimum mean squared error estimator of the regression disturbance variance under the LINEX loss function, an asymmetric generalisation of the quadratic loss function. Under the LINEX loss function, we also examine the risk performance of the feasible almost unbiased generalised Liu estimator and the feasible generalised Liu estimator. When the variance σ² is known, we obtain all admissible linear estimators within the class of linear estimators, and we do the same when σ² is unknown. The proposed Liu estimators are invariant under location transformations. The estimators' biases and risks are derived and compared. We use an asymmetric loss function, the LINEX loss function, to compute the exact risks of several error variance estimators. The conclusions recommend the use of δP(σ), which is simple to apply and minimax.


Introduction
The practical applications of statistics have gained new emphasis [1][2][3][4]. In this context, consider the following. A farmer not only needs to choose, from a list of 'k' fertilisers, the fertiliser that produces the greatest mean yield but also needs an estimate of the mean for the fertiliser he chooses. A physician needs not only to choose among 'k' distinct drugs, quantify their efficiency, and select the most efficient one but also to evaluate the chosen drug's efficiency using the same data. See [5] for more information on this subject, including discussions and applications [6]. Symmetric quadratic loss functions have frequently been employed in assessing the risk functions of estimators. It is worth noting that nearly all biased-estimator research employs the mean square error (MSE), or equivalently the symmetric quadratic loss, as the basis for evaluating estimator performance. The use of symmetric loss functions is well known to be inappropriate in several situations, especially when positive and negative errors have different consequences. Varian [7] developed the asymmetric LINEX (linear exponential) loss function, which is quite valuable. Since then [8], the properties of the LINEX loss function have been thoroughly explored, and various studies on its usage have been conducted. The loss incurred in estimating θ by θ̂ is [9]

L(Δ) = b(e^{aΔ} − aΔ − 1), (1)

where a ≠ 0, b > 0, and Δ = (θ̂ − θ)/θ is the relative estimation error in using θ̂ to estimate θ. Because the relative estimation error is unit-free, this form is frequently used. We assume (without loss of generality) that b = 1 in this research. The sign of the shape parameter a indicates the direction of asymmetry: we take a > 0 (a < 0) if overestimation is more (less) serious than underestimation.
The magnitude of a determines the degree of asymmetry. For small values of |a|, L(Δ) ≈ (1/2)b a²Δ², which is equivalent to a squared error loss (SEL). As a result, the LINEX loss function can be regarded as an asymmetric generalisation of the squared error loss function. The LINEX loss function has been investigated by a number of researchers in several problems of interest; for instance, see [10][11][12][13][14]. Under the LINEX loss function, Ohtani [15] investigated the risk of the feasible generalised ridge regression (FGRR) estimator. Ohtani [16] demonstrated that the FGRR estimator can dominate the ordinary least squares (OLS) estimator whenever a is large and positive. Using the asymmetric LINEX loss function, Wan [17] investigated the properties of the feasible almost unbiased generalised ridge regression (FAUGRR) estimator. A positive estimation error is considered more serious than a negative one if the parameter a is positive, and vice versa. When multicollinearity is a challenge, one option is to employ the Liu estimators suggested by [18] (see also [19]). The optimal biasing parameters involve unknown quantities, which can be replaced by their sample estimates; the resulting estimators are referred to as feasible Liu estimators. Akdeniz and Kaçiranlar [19] derived the exact MSE of the feasible generalised Liu estimator. Since the use of symmetric loss functions can be problematic in certain practical scenarios, estimation under asymmetric loss functions has recently received a great deal of attention (see, for example, [8]). The following important asymmetric LINEX loss function was developed by Varian [7]:

L(Δ) = b(e^{aΔ} − aΔ − 1), a ≠ 0, b > 0, (2)

where the constants a and b are known. Using this loss function, Zellner [8, 20] established that the ordinary sample mean is inadmissible for estimating a normal mean (in the situation where the variance is known).
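As a numerical check (our own sketch, not part of the paper), the LINEX loss L(Δ) = b(e^{aΔ} − aΔ − 1), its asymmetry for a > 0, and its small-|a| squared-error approximation can be verified directly; the values of a, b, and Δ below are arbitrary:

```python
import numpy as np

def linex_loss(delta, a, b=1.0):
    """LINEX loss L(Delta) = b*(exp(a*Delta) - a*Delta - 1)."""
    return b * (np.exp(a * delta) - a * delta - 1.0)

# Asymmetry: with a > 0, overestimation (Delta > 0) is penalised far
# more heavily than underestimation of the same magnitude.
print(linex_loss(1.0, a=2.0))    # overestimation
print(linex_loss(-1.0, a=2.0))   # underestimation

# For small |a| the loss approaches the scaled squared error
# (1/2)*b*a^2*Delta^2.
a, delta = 0.01, 0.5
print(linex_loss(delta, a), 0.5 * a**2 * delta**2)
```

With a = 2 the loss at Δ = +1 is several times the loss at Δ = −1, while for a = 0.01 the LINEX value and the quadratic approximation agree to many decimal places.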
Rojo [21] extended Zellner's findings by considering the admissibility of linear functions of the sample mean under the LINEX loss function (2). Bolfarine [22] studied estimation problems for the finite population total using the LINEX loss function, where θ in (2) denotes the population total. He presented Bayes estimators of the population total and addressed the admissibility of certain derived estimators. The goal of this study is to determine whether linear estimators of a linear function of a finite population's characteristic values are admissible under the LINEX loss function. Assume that the finite population values Y_1, . . . , Y_N are a random sample drawn from the superpopulation model [23]

Y_k = b_k β + ε_k, k = 1, . . . , N,

where a_k > 0 and b_k are given constants, β is an unknown parameter, ε_k is normally distributed with mean zero and variance a_k σ², and ε_1, . . . , ε_N are mutually independent. Cassel et al. examined this model in depth and found it highly useful; it was also addressed by [24]. Under this superpopulation framework, we investigate the estimation of the linear function ∑_{k=1}^N p_k Y_k under the LINEX loss functions (2) and (3). We assume that the sample y_k, k ∈ s, is drawn by a sampling design p that does not depend on the data (i.e., p(s) satisfies p(s) > 0 and ∑_{s∈S} p(s) = 1, where S is a class of subsets of {1, . . . , N}). We find all admissible linear estimators of ∑_{k=1}^N p_k Y_k in the scenario where σ² is known. Because σ² is frequently unknown in practical problems, we also analyse the admissibility of linear estimators in that case, obtaining all admissible linear estimators of ∑_{k=1}^N p_k Y_k within the class of linear estimators.
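A small simulation sketch (our own illustration, under our reading of the garbled superpopulation display as Y_k = b_k β + ε_k with Var(ε_k) = a_k σ²; all numeric values, the weights p_k, and the WLS plug-in estimator are hypothetical choices, not the paper's admissible estimator):

```python
import numpy as np

rng = np.random.default_rng(5)

# Superpopulation: Y_k = b_k*beta + eps_k, E[eps_k] = 0,
# Var(eps_k) = a_k*sigma^2; target is the linear function sum_k p_k Y_k.
N, n, beta, sigma = 200, 40, 2.0, 1.0
a_coef = rng.uniform(0.5, 2.0, N)   # given a_k > 0
b_coef = rng.uniform(1.0, 3.0, N)   # given b_k
p = np.ones(N)                      # p_k = 1 gives the population total
Y = b_coef * beta + rng.normal(0.0, sigma * np.sqrt(a_coef))
target = p @ Y

# One simple linear estimator: fit beta by weighted least squares on a
# simple random sample s, then predict the unsampled units.
s = rng.choice(N, size=n, replace=False)
w = 1.0 / a_coef[s]                 # WLS weights proportional to 1/Var
beta_hat = np.sum(w * b_coef[s] * Y[s]) / np.sum(w * b_coef[s] ** 2)
mask = np.ones(N, bool); mask[s] = False
estimate = p[s] @ Y[s] + p[mask] @ (b_coef[mask] * beta_hat)
print(target, estimate)
```

The observed part of the sum is kept exactly and only the unobserved part is predicted, which is the usual prediction-theoretic decomposition in this superpopulation setting.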
Somewhat unexpectedly, the necessary and sufficient conditions for a linear estimator to be admissible under the LINEX loss function differ substantially between the cases of known and unknown σ², at least within the class of linear estimators; this is unlike the squared error loss (SEL) case. The reasons the researcher chooses the linear function ∑_{k=1}^N p_k Y_k are as follows: (i) it contains the usual case of E(ε²_k) = σ² a_k^g (g ≥ 0 a fixed constant) through transformations; (ii) in certain practical situations, the linear function ∑_{k=1}^N p_k Y_k itself must be estimated [25]. We take b = 1 in the LINEX loss functions because the value of b has no effect on admissibility.

LINEX Loss Functions
Zellner studied the LINEX loss functions in his research work [8]; the derivation is as follows. Let Δ = θ̂ − θ denote the scalar estimation error in using θ̂ to estimate θ. Varian [7] proposed the convex loss function

L(Δ) = b e^{aΔ} − cΔ − b. (4)

Clearly L(0) = 0. Moreover, we need ab = c for the minimum to occur at Δ = 0; therefore, (4) can be rewritten as

L(Δ) = b(e^{aΔ} − aΔ − 1). (5)

In (5) there are two parameters, a and b, with b determining the scale of the loss function and a determining its shape. For specified values of a and Δ, values of e^{aΔ} − aΔ − 1 are graphed in Figure 1. For a = 1, or more generally a > 0, it can be observed that the function is asymmetric, with overestimation costing more than underestimation. When a < 0, on the contrary, (5) rises almost exponentially when Δ = θ̂ − θ < 0 and approximately linearly when Δ = θ̂ − θ > 0. For small values of |a| the function is nearly symmetric and not far from a squared error loss function.

Computational Intelligence and Neuroscience
Expanding e^{aΔ} ≈ 1 + aΔ + a²Δ²/2 gives L(Δ) ≈ b a²Δ²/2, which is a squared error loss function. As a result, for small values of |a|, the optimal predictions and estimates are similar to those produced under a squared error loss function. When |a| takes sizeable values, however, optimal point predictions and estimates will differ significantly from those produced with a symmetric squared error loss function; for examples, see Varian [7]. There is a need to extend the losses (4) and (5) to multiparameter estimation and multivariate prediction problems. Let Δ_i = θ̂_i − θ_i represent the error in estimating θ_i with the estimate θ̂_i, i = 1, 2, . . . , k. The separable extended LINEX loss function is then

L(Δ_1, . . . , Δ_k) = ∑_{i=1}^k b_i (e^{a_iΔ_i} − a_iΔ_i − 1), (6)

and Δ_1 = ⋯ = Δ_k = 0 corresponds to the minimum of this convex loss function. The function in (6) can be used in multiparameter estimation and multivariate prediction problems.
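The separable extended loss, a coordinate-wise sum of LINEX terms, can be sketched directly (our own illustration; the parameter vectors are arbitrary):

```python
import numpy as np

def extended_linex(theta_hat, theta, a, b=1.0):
    """Separable extended LINEX loss: the per-coordinate terms
    b*(exp(a*Delta_i) - a*Delta_i - 1), Delta_i = theta_hat_i - theta_i,
    summed over i (common a and b across coordinates for simplicity)."""
    delta = np.asarray(theta_hat, float) - np.asarray(theta, float)
    return float(np.sum(b * (np.exp(a * delta) - a * delta - 1.0)))

theta = np.array([1.0, 2.0, 3.0])
theta_hat = np.array([1.1, 1.9, 3.05])

# The loss is zero exactly when every coordinate error is zero (the
# unique minimum of this convex function).
print(extended_linex(theta, theta, a=1.0))   # -> 0.0

# For small |a| it is close to the scaled squared error loss
# (a^2/2) * sum(Delta_i^2), matching the Taylor expansion.
a = 0.01
sel = 0.5 * a**2 * np.sum((theta_hat - theta) ** 2)
print(extended_linex(theta_hat, theta, a), sel)
```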

The Estimator
The estimator is discussed extensively by Parsian and Farsipour (2000), who made an extensive study and derived the quantities discussed below [26]. Let X_{i1}, X_{i2}, . . . , X_{in}, i = 1, 2, be independent random samples from normal populations, each with an unknown mean θ_i and a common known variance τ². Let X̄_i represent the sample mean of the ith population, i = 1, 2. The natural rule, according to which the population corresponding to the larger sample mean is picked, is used to select the population with the larger mean. We wish to estimate the selected population's mean M, which can be represented as

M = θ_1 I_1 + θ_2 I_2.

Here, θ = θ_1 − θ_2, I_1 is the indicator that the first population is selected, I_2 = 1 − I_1, and M is, of course, a random variable with a discrete probability function. An estimator δ of M is said to be risk unbiased with respect to the LINEX loss function (1), in the sense of [27] (hereafter L-unbiased), if E(e^{aδ}) = E(e^{aM}); otherwise, it is biased, the bias being defined through the difference of these expectations. M has a natural estimator: δ_1 = X_max = max(X̄_1, X̄_2). The second estimator is generated by subtracting from δ_1 a λ-multiple of δ_1's estimated bias. This yields a slightly different class of estimators known as bias-reduced (BR) estimators. They depend on a constant λ ≥ 0 that determines the degree of bias removal and also affects the risk; here Φ(·) denotes the standard normal cumulative distribution function.
We suggest a third estimator, δ_3(σ), motivated as follows: as an estimator of M, use the maximum likelihood estimator of (1/a) ln E(e^{aX_max}) in (14). This type of estimator has a broader version, δ_1λ(σ), where λ is a constant. The rationale for δ_1λ(σ) is similar to that for δ_2λ(σ): it is obtained by subtracting a λ-multiple of δ_3(σ)'s estimated bias from δ_3(σ) itself. Another estimator we investigate is δ_H(c), where c is an adjustable constant. It is worth noting that for c = 0 we obtain X_max, which coincides with δ_1λ(σ) for λ = 0. The estimator δ_H(c) is frequently referred to as a hybrid estimator and has been considered before; it is obtained from X_max and a preliminary test, and its adjustment is explained below. We now develop the next estimator using a referenced concept. It is worth noting that, in most circumstances, the characteristics of the selected group remain of interest after the decision has been made, yet in most cases the effects of the preliminary decision are omitted in the estimation [28]. The challenge of estimation arises after early significance testing has been completed. The post-selection estimation problem is particularly important in the development of processes and equipment with a large number of elements, as [29] pointed out in relation to his approach. If the risk rates or another characteristic of the entire system or equipment are evaluated, the accumulated bias can lead to quite a misleading conclusion when both the selections and the number of components are considerable [30].
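The selection bias that motivates the BR estimators can be seen in a short Monte Carlo sketch (our own illustration, not the paper's derivation; all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def linex(delta, a):
    # LINEX loss with b = 1
    return np.exp(a * delta) - a * delta - 1.0

# Two normal populations with known common variance tau^2; the natural
# rule selects the population with the larger sample mean, and
# delta_1 = X_max estimates the selected population's mean M.
theta1, theta2, tau, n, a = 0.0, 0.5, 1.0, 25, 1.0
reps = 200_000
xbar1 = rng.normal(theta1, tau / np.sqrt(n), reps)
xbar2 = rng.normal(theta2, tau / np.sqrt(n), reps)
x_max = np.maximum(xbar1, xbar2)
m_sel = np.where(xbar1 > xbar2, theta1, theta2)  # mean of selected population

bias = (x_max - m_sel).mean()      # positive: X_max overshoots M on average
risk = linex(x_max - m_sel, a).mean()
print(f"mean bias {bias:+.4f}, LINEX risk {risk:.4f}")
```

The positive mean bias is exactly the accumulated selection bias discussed above; the BR estimators subtract a λ-multiple of an estimate of it.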
More information is generally accessible at the time of estimation than at the time of selection. The statistics available at the time of selection are denoted by Z_1 and Z_2, and the extra data available at the time of estimation are denoted by W_1 and W_2; Z_1, Z_2, W_1, and W_2 are designed to be independent. The quantities Z_i and W_i are determined by the accompanying formulas, where c is a positive constant and V_1 and V_2 are independent random variables, normally distributed with mean 0 and variance σ² and independent of X̄_1 and X̄_2. They can be specified as functions of the actual sample observations or generated using a table of random numbers. Now define M_1 and U accordingly; it is then simple to prove that E(e^{aM_1}) = E(e^{aU}).
Specifically, U is an L-unbiased estimator of M_1 but is biased for M. Instead of U, a modified estimator is generated as specified, and from it an estimator δ_4(c) of M is obtained. The estimator δ_4(c) is L-unbiased for a quantity which, for large c, approaches θ_2 + (1/a) ln[(e^{aθ} − 1)Φ(θ/(√2 σ)) + 1]. As a result, the bias of δ_4(c) as an estimator of M can be reduced by increasing c. The Pitman-type estimator of M, which is the generalised Bayes estimator of M with respect to the uniform prior on the two-dimensional (2D) space (θ_1, θ_2), is the final estimator under consideration. It also proves to be minimax.
To conclude this section, it is worth noting that whenever symmetries are present in a problem, it is natural, according to the decision-theoretic approach, to demand a corresponding symmetry of the estimator.
There are intrinsic symmetries in many statistical estimation problems, and this is likewise the case in our estimation problem. The proposed estimators are invariant under location shifts in the sense that if M̂ stands for an estimator of M, then M̂(y_1 + c, y_2 + c) = M̂(y_1, y_2) + c. Since M is itself location equivariant, this is a desirable property.

The Feasible GL Estimator's Risk Performance
Akdeniz studied the risk performance of the feasible GL estimator extensively [31]. Following his work, a sufficient condition was established in the preceding section for the GL estimator with d_i = λ_i(β_i² − σ²)/(λ_i β_i² + σ²) to dominate the OLS estimator when the LINEX loss function is applied. In practice, however, this biasing factor involves the unknown parameters β_i and σ², which can be replaced by their sample estimates. The resulting estimator is the feasible GL estimator, which takes the following form. In this section, we examine how the feasible GL estimator performs when the LINEX loss function is applied.
We define z_i = λ_i^{1/2} β̂_i/σ and V = (n − l)σ̂²/σ². Then z_i is distributed as N(θ_i, 1) and V as chi-square with ν = n − l degrees of freedom. The feasible GL estimator of β_i can be expressed in terms of z_i and V. Since z_i = λ_i^{1/2} β̂_i/σ and θ_i = λ_i^{1/2} β_i/σ, we can write β̂_i = z_i β_i/θ_i, from which the risk function of β*_i follows; the risk function of the feasible GL estimator β*_i matches the form given for the feasible GRR estimator. Replacing β_i² and σ² by their unbiased estimators β̂_i² − σ̂²/λ_i and σ̂² yields the following estimates of d_i, i = 1, . . . , l (see, for instance, Liu (1993)) [23]. The feasible GL estimator b_i of β_i in this case is expressed accordingly, and its risk function follows. The rth moment of b_i/β_i is provided in the Appendix.

Risk Functions
Kazuhiro Ohtani is one of the researchers who studied in detail the risk functions arising under the LINEX loss; the following discussion is based on [16]. In his study, the following variables are defined first: u is distributed as χ'²_1(λ) and v as χ²_{n−1}, where χ'²_1(λ) denotes the noncentral chi-square distribution with 1 degree of freedom and noncentrality parameter λ = n(μ − μ_0)²/σ², and χ²_{n−1} denotes the central chi-square distribution with n − 1 degrees of freedom [31]. Here, a_1 = n + 1 and a_2 = n + 2. The risk function of σ*² under the LINEX loss is then given, and R(σ*²) is reduced by using the binomial expansion. The general formula for the moments of the PTSV estimator, E[(σ*²/σ²)^m], is provided in the Appendix, where P(α, y) denotes the incomplete gamma function ratio. Substituting (48) into (46) yields the risk function equation.
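A Monte Carlo sketch (our own illustration, not Ohtani's exact PTSV estimator) shows how the LINEX risk of an error-variance estimator depends on the divisor applied to the residual sum of squares; n, σ², and a are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

def linex(delta, a):
    return np.exp(a * delta) - a * delta - 1.0

# LINEX risk in terms of the relative error
# Delta = (sigma2_hat - sigma2)/sigma2, for two divisor choices.
n, sigma2, a, reps = 10, 2.0, 1.0, 200_000
sse = sigma2 * rng.chisquare(n - 1, reps)   # SSE ~ sigma^2 * chi^2_{n-1}
risks = {}
for divisor in (n - 1, n + 1):
    delta = (sse / divisor - sigma2) / sigma2
    risks[divisor] = linex(delta, a).mean()
print(risks)

# With a > 0 (overestimation penalised more), the larger divisor,
# which shrinks the estimator downward, attains the smaller risk.
```

This reproduces numerically the qualitative point of the section: under asymmetric loss with a > 0, downward-shrunk variance estimators are favoured over the unbiased one.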

The Liu Estimator's Stochastic Shrinkage Factor Distributions
The distributions of the Liu estimator's stochastic shrinkage factors were discussed by Akdeniz and Öztürk as shown below [32]. Under the classical linear regression model (CLRM) assumptions, we need the density functions of d̂_i (i = 1, 2, . . . , p), as described in (51) [32, 33]. The following theorem expresses the conclusion: the distributions of the Liu estimator's stochastic shrinkage factors are provided by (52) and (53). Proof. From (53), u = ν σ̂²/σ² ~ χ²_ν is a central chi-square variate with ν degrees of freedom and δ_i = (β̂_i/(σ/√λ_i))² ~ χ'²_1(θ_i) is a noncentral chi-square variate with one degree of freedom and noncentrality parameter θ_i. Because σ̂² and β̂_i are independent, the joint density follows, with Γ(α) = ∫_0^∞ t^{α−1} e^{−t} dt and B(α, β) = Γ(α)Γ(β)/Γ(α + β). The stochastic factor is then obtained, and applying the inverse transformation, d̂_i possesses the stated density on −λ_i < d̂_i < 1.
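The support −λ_i < d̂_i < 1 can be checked by simulation (our own sketch; λ, θ, and ν are illustrative). In canonical coordinates, with z = λ^{1/2} β̂/σ ~ N(θ, 1) and u = ν σ̂²/σ² ~ χ²_ν, the feasible factor d = λ(β̂² − σ̂²)/(λβ̂² + σ̂²) reduces algebraically to (z² − λu/ν)/(z² + u/ν):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate the stochastic shrinkage factor in canonical coordinates.
lam, theta, v, reps = 0.5, 2.0, 20, 100_000
z = rng.normal(theta, 1.0, reps)   # z = lam^{1/2} * beta_hat / sigma
u = rng.chisquare(v, reps)         # u = v * sigma2_hat / sigma^2
d = (z**2 - lam * u / v) / (z**2 + u / v)

# Every simulated value falls inside the support (-lam, 1) derived in
# the text: d > -lam because z^2*(1 + lam) > 0, and d < 1 because
# lam*u/v > -u/v.
print(d.min(), d.max())
```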

Conclusion
Under conditions of multicollinearity, certain biased estimators, such as the ridge and Liu types, can remedy the drawbacks of the OLS estimator. The efficiency of the Liu estimators addressed in this research was evaluated using the asymmetric LINEX loss function. We conclude that these classes of Liu estimators are asymptotically equivalent. Although the LINEX loss function is more complex to evaluate and implement, we also obtained the density function of the stochastic shrinkage biasing factors of the generalised Liu-type estimator. The properties of the resulting Liu estimator are likely to be affected by the random behaviour of the estimated shrinkage biasing factors. The Liu-type estimator depends on the values of ν = n − p, θ_i = λ_i β_i²/σ², and λ_i, and can be regarded as a variant of the Liu estimator that is simpler to compute and implement.

Data Availability
The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.