A Generalized Bridge Regression in Fuzzy Environment and Its Numerical Solution by a Capable Recurrent Neural Network

Bridge regression is a family of penalized regressions that uses the penalty function Σ_j |A_j|^c with c ≥ 1, which yields the lasso and ridge regressions for c = 1 and c = 2, respectively. For the case where the output variable of the regression model is imprecise, we develop a bridge regression model in a fuzzy environment and exhibit penalized fuzzy estimates for this model when the input variables are crisp. The resulting optimization problem leads to a multiobjective program, from which we also determine the shrinkage parameter and the tuning parameter. In order to estimate the fuzzy coefficients of the proposed model, we introduce a hybrid scheme based on recurrent neural networks. The suggested neural network model is constructed using concepts from convex optimization and stability theory, which guarantees finding approximate parameters of the proposed model. We use a simulation study to depict the performance of the proposed bridge technique in the presence of multicollinear data, and real data analysis to show the performance of the proposed method. A comparison between the fuzzy bridge regression model and several other shrinkage models is made with three different well-known fuzzy criteria. We visualize the performance of the models by Taylor's diagram and bubble plot and examine the predictive ability of the models by cross validation. The numerical results clearly show higher accuracy of the proposed fuzzy bridge method compared to the other existing fuzzy regression models.

Keywords: fuzzy bridge regression model; multiobjective optimization; recurrent neural network; stability and convergence; goodness-of-fit measure.


Introduction
In regression analysis, researchers often face the problem of multicollinearity, defined as the existence of a nearly linear dependency between the columns of the design matrix, which leads to wide confidence intervals for individual parameters or linear combinations of the parameters [1][2][3]. To overcome the multicollinearity problem, the ridge regression methodology has been widely studied in the literature [1,4]. The ridge estimator performs better than estimators that rely on ordinary least squares. Moreover, when multicollinearity occurs, least-squares methods may lead to poor prediction; in such cases, the ridge regression method often gives better accuracy than the least-squares technique. As an extension, Frank and Friedman [3] introduced bridge regression, which includes ridge regression (c = 2) and subset selection (c = 0) as special cases. The authors in [3] did not solve for the bridge regression estimator for any given c > 0; however, they indicated that it is more appropriate to optimize the parameter c. As a special case of bridge regression (with c = 1), the authors in [5] introduced the lasso regression model. The authors in [6] extended the lasso method, calling it the fused lasso, to cases where the number of features is much greater than the sample size. Furthermore, the lasso method was modified by Zou [7], who proposed the adaptive lasso with adaptive weights to penalize different coefficients in the l1 penalty. Park and Casella [8] proposed the Bayesian lasso, which provides interval estimates that can guide variable selection.
Bridge regression has been successfully applied to problems of wide-ranging scope and complexity, starting from the pioneering works in [3,9]. Zou and Yuan [10] imposed the F∞ norm on support vector machines in the classification context. Huang et al. [11] studied the asymptotic properties of bridge regression. The authors in [12] suggested an adaptive elastic net combining the strengths of quadratic regularization and adaptively weighted lasso shrinkage. The authors in [13] showed that bridge regression possesses the oracle, sparsity, and unbiasedness properties. In [14], the authors provided a bridge regression that adaptively selects the penalty order from data and produces flexible solutions in various settings. Polson et al. [15] proposed a Bayesian bridge estimator for regularized regression. Mallick and Yi [16] considered Bayesian bridge regression and provided sufficient conditions for strong posterior consistency under a sparsity assumption on a high-dimensional parameter. Finally, in [17], the authors considered a bridge-randomized penalty on the regression coefficients, incorporating an uncertainty penalty into Bayesian bridge quantile regression.
In traditional regression models, the observed values of the dependent and independent variables are supposed to be exact numbers. However, in real-world applications, because of parameter uncertainty and human error, data are usually not exact but carry uncertainty. As a result, classical regression models face some restrictions and need to be adjusted accordingly to deal with fuzzy data (see [18][19][20][21][22][23][24][25][26][27][28][29][30][31][32]). As in traditional statistical regression models, multicollinearity is also an important issue in fuzzy regression models. Accordingly, this paper considers, for the first time, the fuzzy bridge regression model, which combines bridge regression with the fuzzy regression model in order to reduce the effect of multicollinearity. To solve such a problem, the fuzzy bridge regression model is converted into a nonlinear multiobjective problem. The proposed multiobjective problem is solved by the scalarization method, in which the multiobjective function is collapsed to a single objective and the weights are determined before the optimization process (see [33]). A large weight assigned to an objective function indicates that this function has a higher priority than those with smaller weights. There are several approaches to determining the scalarization weights; the one used in this paper is introduced in [34]. For a more detailed presentation of the methods treated here, as well as other related topics, refer to [35] and the references therein.
Fuzzy regression problems can be solved by applying artificial neural networks and evolutionary algorithms (see [36][37][38][39][40][41][42][43][44][45]). For example, Ishibuchi et al. [39] proposed a learning algorithm for fuzzy neural networks with triangular fuzzy weights, and Hayashi et al. [40] fuzzified the delta rule. Neural network solutions to fuzzy problems were examined by Buckley and Eslami in [41]. The authors in [46] extended the risk-neutral model suggested in [38,47] by adopting a multilayered feed-forward neural network in which the weights, biases, input, and output variables were assumed to be LR-fuzzy. Furthermore, Coppi et al. [48] studied FLR using the least-squares approach with LR-fuzzy numbers. In [49], a class of fuzzy clusterwise regression models that integrates fuzzy cluster analysis into the fuzzy regression model was introduced. The authors in [50] also used triangular fuzzy numbers for fuzzy modeling within a robust framework through the least-squares approach. The numerical solution of fuzzy polynomials by a fuzzy neural network was investigated in [42]. Mosleh et al. [44,51,52] and Otadi [53] provided hybrid approaches based on feed-forward neural networks to approximate the fuzzy coefficients of fuzzy linear, nonlinear, and polynomial regression models. Roh et al. [54] considered an FLR model based on the design approach of polynomial neural networks. The authors in [55,56] used a random weight network to develop a fuzzy nonlinear regression model. A fuzzified radial basis function network for obtaining estimates of fuzzy regression models was utilized in [57]. For further papers on this topic, see [19,20,46,[58][59][60][61]. The aforementioned references mainly concern learning algorithms used to adjust the parameters of feed-forward fuzzy neural networks.
Although the authors in [62] presented a hybrid scheme based on recurrent neural networks, there is no record of solving the fuzzy bridge regression model by a neural network with stability and convergence properties. Furthermore, to obtain the weighting coefficients, those authors use a recurrent procedure in which the inverse of a matrix must be computed explicitly in each iteration; in high-dimensional cases, the computational complexity is therefore extremely high. Besides, the convergence of the proposed neural network methods depends on choosing a suitable initial point. Consequently, proposing a recurrent neural network for fuzzy bridge regression problems that needs no penalty parameter and comes with rigorous stability and convergence analysis is both necessary and relevant. The main advantage of this work is a capable neural network, suitable for analog hardware implementation, that yields a good solution of the fuzzy bridge regression problem. The optimality conditions of convex programming for the related quadratic formulation of the fuzzy bridge regression problem are studied through the Karush-Kuhn-Tucker (KKT) conditions. It is proved that the equilibrium point of the proposed neural network is equivalent to the KKT point of the related quadratic programming formulation. The existence and uniqueness of an equilibrium point of the network are also analyzed. A sufficient condition ensuring the existence and global asymptotic stability of the unique equilibrium point of the neural network is obtained by constructing an appropriate Lyapunov function. Additionally, the global convergence of the model is analyzed. The recurrent neural networks proposed in this paper are substantially different from traditional ones for two important reasons. Firstly, the neural network method has an advantage in dealing with real-time optimization problems, which are imperative in many applications.
However, traditional optimization methods may not be competent because of such problems' stringent requirements on computational time. As a result, many continuous-time neural networks for solving real-time optimization problems have been developed (see [63][64][65][66][67]). Secondly, neural networks for solving optimization problems are hardware implementable; in other words, they can be realized using integrated circuits. The remainder of this paper is organized as follows. In Section 2, some preliminaries of fuzzy sets and fuzzy calculus are briefly described. In Section 3, the fuzzy bridge regression problem is introduced. In Section 4, an optimization model for the fuzzy bridge linear regression model with an LR-fuzzy dependent variable and nonfuzzy explanatory variables is proposed and transformed into an equivalent quadratic form. In Section 5, based on the KKT optimality conditions, a neural network model for solving the related quadratic formulation is studied, together with its stability and convergence properties. Goodness-of-fit indices are studied in Section 6. A numerical experiment and a simulation study illustrating the effectiveness of the model in the presence of multicollinearity are provided in Section 7. Some concluding remarks are given in Section 8. Proofs of technical statements are deferred to the Appendix, to keep the focus on the computational part in the body of the paper.

Preliminaries
In this section, we first summarize some basic concepts of fuzzy sets and fuzzy arithmetic that are used throughout the paper (see [68]); then, some definitions and a basic theorem of multiobjective optimization are briefly discussed.

Fuzzy Set and Fuzzy Arithmetic.
A fuzzy number A is a normal and convex fuzzy subset of the real line R with bounded support. A special class of fuzzy numbers, the so-called LR-fuzzy numbers, is given in the parametric form

μ_A(x) = L((a_m − x)/a_l) for x ≤ a_m,  μ_A(x) = R((x − a_m)/a_u) for x > a_m,   (1)

where a_m is the mode of A and a_l, a_u > 0 are its left and right spreads, respectively. Here L, R: R+ → [0, 1] are decreasing shape functions S with S(0) = 1; S(z) < 1 for all z > 0; S(z) > 0 for all z < 1; and S(1) = 0 (or S(z) > 0 for all z and S(+∞) = 0). As an abbreviated notation, we denote an LR-fuzzy number A with membership function μ_A(x) in (1) by A = (a_m, a_l, a_u)_LR. If L(x) and R(x) are both linear functions on the domain {x | 0 < L(x), R(x) < 1}, the corresponding LR-fuzzy number is a triangular fuzzy number, written symbolically as A = (a_m, a_l, a_u)_T. Moreover, if a_l = a_u, then A becomes a symmetric triangular fuzzy number, denoted by A = (a_m, a_l)_T.
Suppose two fuzzy numbers A and B are represented by LR-fuzzy numbers of the form A = (a_m, a_l, a_u)_LR and B = (b_m, b_l, b_u)_LR. The sum of A and B is again an LR-fuzzy number of the form

A + B = (a_m + b_m, a_l + b_l, a_u + b_u)_LR.   (2)

The scalar multiplication kA is defined as

kA = (k a_m, k a_l, k a_u)_LR,  k > 0.   (3)
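As a concrete illustration of the LR arithmetic in (2) and (3), the following sketch implements triangular fuzzy numbers with linear shape functions. The class name, the membership evaluation, and the rule for negative scalars (under which the left and right spreads swap, a standard convention not stated in the text above) are our additions for illustration.

```python
from dataclasses import dataclass

@dataclass
class TriFuzzy:
    """Triangular fuzzy number A = (m, l, u)_T: mode m, left spread l, right spread u."""
    m: float
    l: float
    u: float

    def __add__(self, other):
        # Sum of LR-fuzzy numbers, equation (2): modes and spreads add componentwise.
        return TriFuzzy(self.m + other.m, self.l + other.l, self.u + other.u)

    def scale(self, k):
        # Scalar multiplication, equation (3) for k > 0; for k < 0 the spreads
        # swap roles (a standard convention, assumed here).
        if k >= 0:
            return TriFuzzy(k * self.m, k * self.l, k * self.u)
        return TriFuzzy(k * self.m, -k * self.u, -k * self.l)

    def membership(self, x):
        # Linear (triangular) shape functions L(z) = R(z) = max(0, 1 - z).
        if x <= self.m:
            return max(0.0, 1.0 - (self.m - x) / self.l) if self.l > 0 else float(x == self.m)
        return max(0.0, 1.0 - (x - self.m) / self.u) if self.u > 0 else 0.0
```

For example, (2, 1, 1)_T + (3, 2, 1)_T = (5, 3, 2)_T, and the membership at the mode is 1.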

Multiobjective Optimization.
A multiobjective optimization problem (MOP) can be defined as follows:

min F(X) = [f_1(X), . . . , f_k(X)]^T,   (4)

subject to

g_i(X) ≤ 0,  i = 1, . . . , m,   (5)
X ∈ R^l,   (6)

where F(X) = [f_1(X), . . . , f_k(X)]^T : X → R^k is the vector of objective-function values to be minimized, X is the vector containing the decision variables, defined in the design space R^l, and g_i(X) represents the i-th inequality-constraint function. Equations (5) and (6) define the region of feasible solutions, S, in the design space R^l.
Definition 1 (see [69]). A decision vector X* ∈ S is Pareto optimal if there does not exist another X ∈ S such that f_i(X) ≤ f_i(X*) for all i = 1, . . . , k and f_j(X) < f_j(X*) for at least one index j.

Definition 1 introduces global optimality. However, it may be difficult to generate globally optimal solutions if the problem is not convex (that is, if some of the objective functions or the feasible region are not convex). In that case, the solutions obtained may be only locally optimal unless a global solver is available. Note that, under the assumptions stated in the problem formulation, Pareto optimal solutions exist (see [70]).
Classically, multiobjective optimization problems are often solved using scalarization techniques. The scalarization method collapses the multiobjective function to a single objective yielding one solution, with the weights determined before the optimization process. A large weight given to an objective function indicates that this function has a higher priority than those with smaller weights.
One of the most intuitive scalarization methods used to obtain a single unique solution of the MOP is the weighted-sum method. In this approach, the MOP (4)-(6) is converted into a scalar preference function using a linear weighted-sum function of the form

min U(X) = Σ_{s=1}^{k} w_s f_s(X),  w_s ≥ 0,  Σ_{s=1}^{k} w_s = 1,   (7)

subject to

g_i(X) ≤ 0,  i = 1, . . . , m,   (8)
X ∈ R^l.   (9)
According to Theorem 1, for positive weights and a convex problem, the optimal solutions of the substitute problem (7)-(9) are Pareto optimal.
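As a quick numerical illustration of the weighted-sum method, consider a toy bi-objective problem; the objectives f1(x) = (x − 1)^2 and f2(x) = (x + 1)^2 and the weights below are hypothetical, chosen so that the scalarized minimizer has the closed form x* = w1 − w2 (for w1 + w2 = 1).

```python
# Weighted-sum scalarization on a hypothetical bi-objective problem:
#   f1(x) = (x - 1)^2,  f2(x) = (x + 1)^2,  w1 + w2 = 1.
# Minimizing w1*f1 + w2*f2 by gradient descent; the closed-form minimizer is
# x* = w1 - w2, and for convex objectives with positive weights (Theorem 1)
# this x* is Pareto optimal.

def scalarized_min(w1, w2, lr=0.1, steps=2000):
    x = 0.0
    for _ in range(steps):
        grad = 2 * w1 * (x - 1) + 2 * w2 * (x + 1)  # gradient of the weighted sum
        x -= lr * grad
    return x

print(scalarized_min(0.75, 0.25))  # close to the closed-form value 0.5
```

Sweeping the weights traces out different Pareto optimal points, which is exactly how the scalarization is used in practice.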

Fuzzy Bridge Regression Model
Consider a set of data {(x_i0, . . . , x_ip, Y_i) | i = 1, . . . , n}, in which x_ij (i = 1, . . . , n and j = 0, 1, . . . , p) is the value of the j-th independent variable and Y_i (i = 1, . . . , n) is the corresponding value of the dependent variable Y in the i-th case. The purpose of the linear regression model is to fit a linear model to the given data. This model can be written as

Y_i = A_0 x_i0 + A_1 x_i1 + · · · + A_p x_ip,  i = 1, . . . , n,   (10)

where A_0, A_1, . . . , A_p denote the regression coefficients (parameters). In some cases, the relationship expressed in (10) may be fuzzy, and then the following cases arise:

(a) The case where the predictor variables are fuzzy, but the parameters are crisp
(b) The case of crisp predictors and fuzzy parameters
(c) The case of fuzzy predictors and fuzzy parameters

It should be noted that the observed responses in all cases are naturally fuzzy. As in LR models in traditional statistical theory, in most empirical work we often encounter the multicollinearity phenomenon in FLR models as a result of highly intercorrelated explanatory variables. In the last two decades, different shrinkage methods have been proposed for estimating regression coefficients; among them, the bridge shrinkage method has received considerable attention. The authors in [3] proposed the following optimization problem to solve the bridge regression for any given c ≥ 0:

min Σ_{i=1}^{n} (y_i − Σ_{j=0}^{p} a_j x_ij)^2,   (11)

subject to

Σ_{j=0}^{p} |a_j|^c ≤ t,   (12)

where the positive parameter t represents the tuning constant. The optimal selection of c also increases the efficiency of the model. It is shown that for c ≥ 1 the bridge regression model is a convex problem (see [9]). Moreover, setting c = 0, c = 1, and c = 2 in optimization problem (11) and (12) yields the subset-selection, lasso, and ridge regression models, respectively.
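The constrained problem (11) and (12) is often handled in its equivalent penalized (Lagrangian) form. The sketch below assumes that penalized form with a multiplier lam (the paper itself works with the constrained form); for c = 2 it reduces to ridge regression, whose minimizer has a closed form.

```python
import numpy as np

def bridge_objective(a, X, y, lam, c):
    """Penalized form of bridge regression: ||y - X a||^2 + lam * sum(|a_j|^c).

    The text states the constrained form sum(|a_j|^c) <= t; for convex c >= 1
    the two forms match for a suitable pairing of lam and t."""
    resid = y - X @ a
    return float(resid @ resid + lam * np.sum(np.abs(a) ** c))

def ridge_fit(X, y, lam):
    # c = 2 special case: the minimizer is a = (X^T X + lam I)^{-1} X^T y.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

A quick check is that the closed-form ridge solution attains a lower value of `bridge_objective` (with c = 2) than nearby perturbed coefficient vectors.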
In the following section, in order to develop the bridge regression model for fuzzy numbers, case (b) above is considered. Then, a possible approach for solving the problem of multicollinearity in an FLR model with LR-fuzzy numbers is discussed.

Proposed Model.
Under the assumptions of model (10), suppose A_j = (a_jm, a_jl, a_ju)_LR and Y_i = (y_im, y_il, y_iu)_LR (i = 1, . . . , n and j = 0, 1, . . . , p). Let I_ij be the identification function that equals 1 only when x_ij ≥ 0 (and 0 otherwise). Using (2) and (3), the estimated LR-fuzzy response Ŷ_i = (ŷ_im, ŷ_il, ŷ_iu)_LR is defined by

ŷ_im = Σ_{j=0}^{p} a_jm x_ij,
ŷ_il = Σ_{j=0}^{p} [I_ij a_jl − (1 − I_ij) a_ju] x_ij,
ŷ_iu = Σ_{j=0}^{p} [I_ij a_ju − (1 − I_ij) a_jl] x_ij.   (13)

Now, the proposed MOP for the fuzzy bridge regression model, consisting of three least-squares objective functions on the center and the spreads, is

min Σ_{i=1}^{n} (y_im − ŷ_im)^2,   (14)
min Σ_{i=1}^{n} (y_il − ŷ_il)^2,   (15)
min Σ_{i=1}^{n} (y_iu − ŷ_iu)^2,   (16)

subject to

Σ_{j=0}^{p} |a_jm|^c ≤ t_1,   (17)
Σ_{j=0}^{p} a_jl^c ≤ t_2,  a_jl ≥ 0,   (18)
Σ_{j=0}^{p} a_ju^c ≤ t_3,  a_ju ≥ 0,   (19)

where t_1, t_2, and t_3 represent three tuning constants. It should be pointed out that using different t_s (s = 1, 2, 3) leads to better results than constraining them to be equal throughout the computations [71].
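A sketch of how the estimated response can be evaluated for crisp inputs and LR-fuzzy coefficients. The sign handling via the identification function I_ij follows the reconstruction of equation (13) above and should be checked against the paper's original formula.

```python
import numpy as np

def fuzzy_predict(x, a_m, a_l, a_u):
    """Estimated LR-fuzzy response (y_m, y_l, y_u) for a crisp input vector x
    and fuzzy coefficients A_j = (a_m[j], a_l[j], a_u[j])_LR.

    Negative inputs swap the roles of the left and right coefficient spreads
    (the identification function I_ij in the text); a sketch of (13)."""
    x = np.asarray(x, dtype=float)
    pos = x >= 0  # I_ij = 1 where x_ij >= 0
    y_m = float(x @ a_m)
    y_l = float(x[pos] @ a_l[pos] - x[~pos] @ a_u[~pos])
    y_u = float(x[pos] @ a_u[pos] - x[~pos] @ a_l[~pos])
    return y_m, y_l, y_u
```

Note that because the spread coefficients are nonnegative, the sign flip on negative inputs keeps the predicted spreads nonnegative, as a fuzzy number requires.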

An Optimization Model
In order to estimate the coefficients A_0, A_1, . . . , A_p of the fuzzy bridge regression model, it is necessary to minimize objective functions (14)-(16) simultaneously under constraints (17)-(19). A scalarization problem related to (14)-(19) is stated as

min C(A) = w_1 Σ_{i=1}^{n} (y_im − ŷ_im)^2 + w_2 Σ_{i=1}^{n} (y_il − ŷ_il)^2 + w_3 Σ_{i=1}^{n} (y_iu − ŷ_iu)^2,   (20)

subject to constraints (21)-(23), namely (17)-(19) above, where A = [a_0m, . . . , a_pm, a_0l, . . . , a_pl, a_0u, . . . , a_pu]^T is a (3p + 3)-dimensional column vector, w_s ≥ 0 (s = 1, 2, 3), and Σ_{s=1}^{3} w_s = 1. To solve problem (20)-(23), we first use the following proposition [72]: let u, v ∈ R^n be auxiliary variables such that a = u − v and |a| = u + v with u, v ≥ 0. Therefore, in order to model the absolute values in optimization problem (20)-(23), a_jm is split into positive and negative parts, a_jm = a'_jm − a''_jm with a'_jm, a''_jm ≥ 0. Thus, the following quadratic optimization problems for c ≥ 1 are proposed:

min C_bridge(Ā) = (1/2) Ā^T Q Ā + B^T Ā,   (26)

subject to constraints (27) and (28) obtained from (21)-(23) after the splitting, where Ā = [a'_0m, . . . , a'_pm, a''_0m, . . . , a''_pm, a_0l, . . . , a_pl, a_0u, . . . , a_pu]^T is a (4p + 4)-dimensional column vector, w_s ≥ 0 (s = 1, 2, 3), Σ_{s=1}^{3} w_s = 1, and the matrix Q and the column vector B are built from the data and the weights. It can easily be verified that the Hessian matrix of C_bridge(Ā) is positive definite; hence, problems (26)-(28) are convex quadratic problems with inequality constraints.
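The variable-splitting device used above can be checked numerically: writing a = a_pos − a_neg with both parts nonnegative recovers a, and at an optimum (where at most one part is nonzero) |a| becomes the linear expression a_pos + a_neg. A minimal sketch:

```python
# Variable splitting to remove absolute values from the objective:
# a = a_pos - a_neg with a_pos, a_neg >= 0, so that at an optimum
# |a| = a_pos + a_neg is linear in the new variables.

def split(a):
    a_pos = max(a, 0.0)
    a_neg = max(-a, 0.0)
    return a_pos, a_neg

for a in (3.5, -2.0, 0.0):
    p, n = split(a)
    assert p >= 0 and n >= 0
    assert abs((p - n) - a) < 1e-12       # a is recovered
    assert abs((p + n) - abs(a)) < 1e-12  # |a| becomes a linear expression
```

This is exactly why the nonsmooth term |a_jm|^c (for c ≥ 1) can be handled inside a smooth quadratic program.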

Fuzzy Lasso Regression and Fuzzy Ridge Regression.
In this section, to obtain the two well-known special cases of the fuzzy bridge regression model, let c = 1 and c = 2. The corresponding quadratic optimization problems, (39)-(41) for c = 1 and (42)-(44) for c = 2, are then obtained, where Q is a diagonal block matrix composed of a block G and (p + 1) × (p + 1) zero blocks O, and Ā, A, Q, P_1, P_2, P_3, B, and M are the same as defined in (24), (29)-(34), and (37), respectively. It can easily be verified that the Hessian matrices of C_lasso(A) and C_ridge(A) are positive definite; hence, problems (39)-(41) and (42)-(44) are convex quadratic problems with inequality constraints.
There are several optimization methods for solving convex quadratic problems with inequality constraints, each with its own weaknesses and strengths. In the following section (Section 5), we propose a neural network that can approximate the coefficients A_0, A_1, . . . , A_p of the fuzzy bridge optimization model.

A Neural Network Scheme
In this section, using standard optimization techniques, we transform quadratic programming problem (26)-(28) into an equivalent nonlinear dynamical system model. From [73], A* ∈ R^(4p+4) is an optimal solution of (26)-(28) if and only if there exists U* such that (A*, U*) satisfies the following KKT system, where A* is called a KKT point of (26)-(28) and U* is the Lagrangian multiplier vector corresponding to A*.

Now, let A(·) and U(·) be time-dependent variables. The aim is to construct a continuous-time dynamical system that settles down to the KKT point of problem (26)-(28). We propose a neural network of the form given in (53) and (54), where τ > 0 is a tuning parameter; for simplicity of the analysis, let τ = 1. If (A*, U*) is an equilibrium point of (53) and (54), then A* is a KKT point of problem (26)-(28); conversely, if A* ∈ R^(4p+4) is an optimal solution of problem (26)-(28), then there exists U* such that (A*, U*) is an equilibrium point of (53) and (54).
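The general idea of such a dynamical system can be illustrated on a toy problem. The sketch below uses a generic projection-type recurrent network for a nonnegatively constrained quadratic program, integrated by an Euler scheme; it is not the paper's exact system (53) and (54), whose precise form is given in the text, but its equilibria are likewise the KKT points of the QP.

```python
import numpy as np

def projection_network(Q, b, step=0.05, iters=4000):
    """Euler simulation of a projection-type recurrent network for
        min (1/2) x^T Q x + b^T x  subject to  x >= 0.
    Dynamics: dx/dt = P_+(x - (Qx + b)) - x, where P_+ is projection onto
    the nonnegative orthant; equilibria satisfy the KKT conditions."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = x + step * (np.maximum(x - (Q @ x + b), 0.0) - x)
    return x

Q = np.eye(2)
b = np.array([-1.0, 1.0])
print(projection_network(Q, b))  # approaches the KKT point [1, 0]
```

At x = [1, 0] the gradient Qx + b = [0, 1] satisfies the KKT conditions (zero where x is active, nonnegative where x is at its bound), so the trajectory settles there from any initial point, mirroring the global convergence claimed for (53) and (54).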
In the following, it is proved that the proposed neural network (53) and (54) is stable in the sense of Lyapunov.
To show the global convergence of the model, the following lemma and theorem are considered.

Theorem 6.
The convergence rate of the neural network in (53) and (54) increases as τ increases.

The Goodness of Fit
In this section, before focusing on the measurement of goodness of fit for the fuzzy bridge regression model, some popularly used measures of the distance between fuzzy numbers are introduced. Then, some goodness-of-fit indices from [74][75][76][77] are employed.
Definition 2. Xu [78] presented a distance D_1 between two fuzzy numbers A and B, defined as an integral over the h-level cuts weighted by a function f(h), where f(h) is an increasing function on [0, 1] with f(0) = 0 and ∫_0^1 f(h) dh = 1/2. The advantage of this distance is that it lets different h-level cuts play different roles and can be applied to the general fuzzy number space.

Definition 3. The authors in [48] defined a distance D_2 between two LR-fuzzy numbers A and B in which the parameters c = ∫_0^1 L^(-1)(ω) dω and ρ = ∫_0^1 R^(-1)(ω) dω play the twofold role of taking into account the variability of the membership function and decreasing the emphasis on the spreads (see [50]).
Hassanpour et al. [79] defined the distance between two triangular fuzzy numbers as follows.

Definition 4. For two triangular fuzzy numbers A and B, D_3 is called the Hassanpour distance, defined as in [79].

The most important issue in fuzzy regression models is the ability of a model to describe the data. Accordingly, the following goodness-of-fit indices are used in this paper to evaluate the goodness of fit of the proposed model.

Mean Square of Prediction Error.
One of the indicators for goodness-of-fit assessment in statistical regression models is the mean square of prediction error (MSE) between the predicted and the observed values. This criterion is presented as the following definition.
Definition 5. For regression model (10), the MSE between the estimated and the observed values is defined by

MSE = (1/n) Σ_{i=1}^{n} D_j(Y_i, Ŷ_i)^2,  j ∈ {1, 2, 3},

where D_j(Y_i, Ŷ_i) is the distance between the two fuzzy numbers Y_i and Ŷ_i (see Definitions 2-4). Smaller values of MSE indicate that the model fits the data better (see [74]).
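A sketch of Definition 5 in Python. The squared distance below, which compares the modes and the two support endpoints of triangular fuzzy numbers, is a simple stand-in for the distances D_1-D_3 of Definitions 2-4, not their exact formulas.

```python
import numpy as np

def tri_distance_sq(A, B):
    """A simple squared distance between triangular fuzzy numbers A = (m, l, u),
    averaging squared differences of the modes and of the two support endpoints.
    A stand-in for D_1-D_3, whose exact formulas are given in Section 6."""
    (am, al, au), (bm, bl, bu) = A, B
    return ((am - bm) ** 2
            + ((am - al) - (bm - bl)) ** 2
            + ((am + au) - (bm + bu)) ** 2) / 3.0

def fuzzy_mse(observed, predicted):
    # Mean squared prediction error over all observations (Definition 5).
    return float(np.mean([tri_distance_sq(A, B) for A, B in zip(observed, predicted)]))
```

Any of the distances of Definitions 2-4 can be substituted for `tri_distance_sq` without changing the structure of the index.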

Similarity Measure. Several similarity measures have been introduced in the literature to measure the similarity between fuzzy sets. In this work, we use an index that has certain advantages over other similarity measures and has been used for evaluating the performance of fuzzy regression models in [75,76].

Definition 6. The mean similarity measure (MSM) is defined by

MSM = (1/n) Σ_{i=1}^{n} S(Y_i, Ŷ_i),  S(A, B) = Card(A ∩ B)/Card(A ∪ B),

where Card(A) = ∫ μ_A(x) dx for a fuzzy number A. The similarity measure ranges from 0 to 1, and the model with a higher value of MSM provides a better fit to the data.
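A Jaccard-type similarity of this kind can be evaluated by numerical integration of the membership functions on a grid. The sketch below assumes triangular memberships with positive spreads; it illustrates the kind of measure used in [75, 76] rather than reproducing their exact formula.

```python
import numpy as np

def tri_membership(x, m, l, u):
    # Triangular membership with mode m and positive left/right spreads l, u.
    left = np.clip(1.0 - (m - x) / l, 0.0, 1.0)
    right = np.clip(1.0 - (x - m) / u, 0.0, 1.0)
    return np.where(x <= m, left, right)

def similarity(A, B, n_grid=20001):
    """Jaccard-type similarity Card(A intersect B) / Card(A union B), with
    Card(A) = integral of mu_A approximated by a Riemann sum on a uniform grid."""
    lo = min(A[0] - A[1], B[0] - B[1]) - 1.0
    hi = max(A[0] + A[2], B[0] + B[2]) + 1.0
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    mu_a = tri_membership(grid, *A)
    mu_b = tri_membership(grid, *B)
    inter = np.minimum(mu_a, mu_b).sum() * dx
    union = np.maximum(mu_a, mu_b).sum() * dx
    return float(inter / union)
```

Identical fuzzy numbers give a similarity of 1, disjoint supports give 0, and partial overlap gives an intermediate value, matching the stated range of the index.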

Cross Validation.
To further investigate the performance of the model obtained in Section 4, we apply another index, based on the cross-validation method, to examine the predictive ability of the model (see [80]). To this end, each time the i-th observation is left out from the dataset, while the remaining observations are used to develop a fuzzy regression model; the obtained model is then used to predict the response value of the i-th observation, Ŷ^(−i)(x_i). Finally, the i-th observed value Y_i is compared with this prediction.

Definition 7. For regression model (10), the index MDC is defined as the mean of the distances D_j(Y_i, Ŷ^(−i)(x_i)), i = 1, . . . , n, where Ŷ^(−i)(x_i) is the fuzzy response prediction obtained by omitting the i-th pair (x_i, Y_i).

To compare the performance of different models, the following well-known diagrams are used to summarize graphically the power of the proposed model to represent the actual values.
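The leave-one-out procedure behind the MDC can be sketched as follows. A ridge fit on the response centers is used as a hypothetical stand-in for the full fuzzy bridge model, and the squared error as a stand-in distance.

```python
import numpy as np

def loo_predictions(X, y, lam=1.0):
    """Leave-one-out predictions: refit the model n times, each time omitting
    observation i and predicting it. A ridge fit on the response centers is
    used here in place of the full fuzzy bridge model."""
    n, p = X.shape
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        Xi, yi = X[mask], y[mask]
        a = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
        preds[i] = X[i] @ a
    return preds

def mdc(y, preds):
    # Mean distance criterion: average distance between each observation and
    # its leave-one-out prediction (squared error used for illustration).
    return float(np.mean((y - preds) ** 2))
```

On noiseless linear data this index is essentially zero, while model misspecification or overfitting inflates it, which is what makes it useful as a predictive-ability check.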

Taylor's Diagrams.
Further to the common performance-evaluation criteria, Taylor's diagrams are depicted to convey information about the pattern similarity between estimates and data. This similarity is quantified in terms of the correlation between estimates and data, their centered root mean square (RMS) difference, and their standard deviations. The diagram is suitable for evaluating multiple aspects of complex models and makes it possible to compare the relative performance of different models. The observed dataset is represented by a point on the x-axis labelled "actual data." The estimates are positioned according to three statistics: the radial distance of each point from the origin of the plot is proportional to the standard deviation of the corresponding dataset; the centered RMS difference between data and estimates is proportional to the distance between their points; and the correlation is given by the azimuthal position of the estimates, i.e., the angle from the x-axis. The centered RMS difference, the correlation R, and the standard deviations σ_Y and σ_Ŷ of the observed and estimated data are computed as

RMS^2 = (1/n) Σ_{i=1}^{n} [(Ŷ_i − mean(Ŷ)) − (Y_i − mean(Y))]^2,
R = [Σ_{i=1}^{n} (Y_i − mean(Y))(Ŷ_i − mean(Ŷ))] / (n σ_Y σ_Ŷ),
σ_Y = sqrt((1/n) Σ_{i=1}^{n} (Y_i − mean(Y))^2),  σ_Ŷ = sqrt((1/n) Σ_{i=1}^{n} (Ŷ_i − mean(Ŷ))^2),

where mean(Y) and mean(Ŷ) are the means of the Y_i and Ŷ_i, respectively. In Taylor's diagrams, when the distance to the point representing the actual data is relatively short, there is good agreement between the estimated and observed data.
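The statistics that place a model on Taylor's diagram can be computed directly from the centered formulas above:

```python
import numpy as np

def taylor_stats(y, yhat):
    """Centered RMS difference, correlation, and standard deviations used to
    position a model on Taylor's diagram."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    dy, dyh = y - y.mean(), yhat - yhat.mean()
    sig_y = float(np.sqrt(np.mean(dy ** 2)))
    sig_yh = float(np.sqrt(np.mean(dyh ** 2)))
    r = float(np.mean(dy * dyh) / (sig_y * sig_yh))
    rms = float(np.sqrt(np.mean((dyh - dy) ** 2)))
    return rms, r, sig_y, sig_yh
```

These four quantities satisfy the law-of-cosines identity RMS^2 = σ_Y^2 + σ_Ŷ^2 − 2 σ_Y σ_Ŷ R, which is precisely why a single polar plot can display all of them consistently.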

Fuzzy Bubble Plots.
Coppi et al. [48] made use of bubble plots to evaluate the fit of a fuzzy regression model with fuzzy output and crisp inputs. The x-axis and the y-axis denote, respectively, the observed and estimated centers; the circles are centered at (y_im, ŷ_im), and their diameters are given by (|y_il − ŷ_il| + |y_iu − ŷ_iu|)^(1/2). In bubble plots, the closeness of the circles to the bisector of the plane shows that the model correctly estimates the centers of the fuzzy data, and smaller circle diameters indicate that the model correctly estimates the spreads of the fuzzy data.

Numerical Examples
In order to demonstrate the effectiveness of the proposed scheme, in this section, we test some applicable examples with neural network (53) and (54). For the test problems, we also compare the numerical performance of neural network (53) and (54) with the existing methods in [81][82][83] and of Kula and Apaydin [84], with the different distances explained in Section 6 applied to calculate the goodness-of-fit criteria. It must be noted that each of these methods has been claimed to exhibit better performance than other fuzzy multiple regression models. In addition, we present Taylor's diagrams [77], which provide information on multiple statistical indices, to evaluate the accuracy of neural network (53) and (54). The simulation is conducted in Matlab R2017b, and the ordinary differential equation solver used is ode45.

Example 1 (A Simulation Study).
To illustrate the performance of the proposed procedure, we generated 30 simulated datasets of size n = 20 with fuzzy output and crisp inputs. Consider the following linear regression model for i = 1, 2, . . . , 20. Using the proposed methods on each of the 30 simulated datasets, the means of all the employed goodness-of-fit measures (MSM, MSE, and MDC) with respect to the distances D_1, D_2, and D_3 are shown in Table 1.
To compare the performance of the proposed models with the existing models in [71,76,81,83], the estimated errors of the observed responses are calculated on all 30 simulated datasets of size n = 20. The numerical results are summarized in Table 1. It can be seen that the proposed fuzzy bridge regression model with different values of c provides more accurate results.
We apply the proposed neural network method to the 15th simulated dataset. According to Section 3, the optimal selection of c increases the efficiency of the model. Thus, to estimate the best value of Ŷ using (26)-(28), we need to find the optimal bridge parameter c. The total errors of the proposed fuzzy bridge regression model with c = 3.7 are clearly better than the total errors obtained with c = 1, 2, and 3 (see Figure 1).
In addition, Table 3 shows the estimated fuzzy coefficients of the fuzzy bridge regression model for c = 3.7, t_1 = 1669, t_2 = 41, and t_3 = 38. Figures 2-4 show that the trajectories of neural network model (53) and (54) for solving the fuzzy bridge regression model (c = 3.7) converge to the optimal solution of the problem. Simulation results show that the trajectories of (53) and (54) converge to the optimal solution from any initial point. Taking A_24 as an example, Figure 5 displays the transient behavior of the fuzzy bridge regression model based on (53) and (54) with 20 random initial points. The graphical performance of the model is illustrated in Figures 6-8 through Taylor's diagrams. Figure 6 shows proposed neural network (53) and (54) for the bridge regression model of the centers with c = 1, c = 2, c = 3, and c = 3.7, depicted by brown, green, black, and magenta circles, respectively. It is clear that the proposed model with c = 3.7 lies very close to the actual dataset. This result also holds for the left and right bounds.
In addition, to visualize how the models fit the observed fuzzy data, fuzzy bubble plots are used (see Figure 9). The fuzzy bubble plots show that the observations fit the proposed models well, because of the closeness of the circles to the bisector of the plane and the short radius of each circle. Therefore, it is concluded that the proposed model has certain merits in practice.

Example 2. In this example, the dataset shown in Table 4 is taken from [85], with crisp inputs and a symmetric triangular fuzzy output. Using these data, we develop an estimated fuzzy regression equation. In order to estimate the value of Ŷ using (26)-(28), we need to find the optimal bridge parameter c. Figure 10, considering two different goodness-of-fit measures and the distances D_1, D_2, and D_3, illustrates a dramatic rise in the MSM index at c = 2. The MSE index with the three distances D_1, D_2, and D_3, on the other hand, shows the opposite trend: the values fall significantly at c = 2. Therefore, the optimal fuzzy bridge regression model is obtained using (26)-(28) with c = 2, r_1 = 0.1139 × 10^8, and r_2 = 0.3893 × 10^6. Figures 11 and 12 show that the trajectories of model (53) and (54) for solving the fuzzy bridge regression model with c = 2 converge to the optimal solution of the problem. Simulation results show that the trajectories of (53) and (54) converge to the optimal solution from any initial point. Figure 13 displays the transient behavior of fuzzy bridge regression coefficient A_2 based on (53) and (54) with 20 random initial points. Furthermore, the results of several common fuzzy regression models are presented in Table 5.
Comparing the different methods in terms of the goodness-of-fit criteria (MSM, MSE, and MDC, with respect to the three distances D_1, D_2, and D_3), it can be seen that the proposed method exhibits better performance than the other methods.
This is further clarified in Figures 13-16. In the Taylor's diagrams, a comparison of the proposed fuzzy bridge regression model with the existing models in [71,76,[81][82][83][84] is shown. The position of each circle on the graph represents a different model result and is determined by the values of the correlation coefficient and the standard deviation. It is clear that proposed model (53) and (54) has better capability and is the most appropriate compared to the other existing models. The fuzzy bubble plots show that the observations fit the selected model well, because of the closeness of the circles to the bisector of the plane and the short radius of each circle. Therefore, it is concluded that the proposed model has a good fitting effect.
In general terms, all the figures and goodness-of-fit criteria in this example indicate that the fuzzy bridge regression model solved with neural network (53) and (54) fits well and describes the original data more accurately than the other models.
To end this section, we summarize some advantages of the proposed method as follows:
(i) A neural network suitable for analog hardware implementation is suggested, which gives a good solution of the fuzzy bridge regression problem.
(ii) The proposed recurrent neural network does not require any penalty parameter for solving fuzzy regression problems.
(iii) The convergence of the proposed neural network does not depend on choosing a suitable initial point, since it is globally convergent.
(iv) The computational burden is greatly reduced compared with some existing methods (see, e.g., [31,50]), because no matrix inverse needs to be computed at each iteration to obtain the weighting coefficients.
(v) Three different distances and goodness-of-fit measures are used to evaluate the proposed model, whereas some papers use only one distance (see, e.g., [44,51,53]).
(vi) The well-known statistical method of cross validation is used, and a statistical comparison with the observations is made through Taylor's diagram and bubble plots, to show the capability of the model.
(vii) The methodology is demonstrated on a simulation and an applied example (Examples 1 and 2), showing that the proposed method is fully capable of solving real-world fuzzy bridge regression problems.
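The cross-validation procedure mentioned above can be sketched generically. The helper below (our own minimal version, with a crisp ridge fit standing in for the fuzzy model) averages the held-out MSE over k folds; the function names, the demo data, and the ridge stand-in are all assumptions for illustration.

```python
import numpy as np

def kfold_mse(X, y, fit, predict, k=5, seed=0):
    """Plain k-fold cross validation: average held-out MSE over k splits."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errs.append(np.mean((y[test] - predict(model, X[test])) ** 2))
    return float(np.mean(errs))

# demo: closed-form ridge fit as the model being cross-validated
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=80)
ridge = lambda X, y: np.linalg.solve(X.T @ X + 0.1 * np.eye(X.shape[1]), X.T @ y)
cv_err = kfold_mse(X, y, ridge, lambda b, X: X @ b)
print("5-fold CV MSE:", cv_err)
```

In the fuzzy setting the squared error inside the loop would be replaced by one of the fuzzy distances D1, D2, or D3.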

Conclusions and Future Work
In this paper, a neural network model is presented for solving the fuzzy bridge regression model with fuzzy coefficients and crisp inputs. To appraise the efficiency of the presented model, a dataset is generated through a simulation study. The suggested neural network's performance is then evaluated using the MSM, MSE, and MDC with three different fuzzy distance measures and compared with some existing techniques; the proposed model attains smaller values for all three measures. To investigate further, the performance of the given model is assessed on another applied dataset. These numerical results also show that the model is superior to some other existing techniques. In addition, we present bubble plots to evaluate the accuracy of the introduced model, as well as Taylor's diagrams, which provide information on multiple statistical indices. As future work, one can apply the proposed scheme to fully fuzzy regression, interval-valued regression, and type-2 fuzzy regression problems.

[Figure 11: Transient behaviors of a0m, a1m, a2m, a3m, a4m, and a5m for neural network (53) and (54) in Example 9.]
[Figure 12: Transient behaviors of a0l, a1l, a2l, a3l, a4l, and a5l for neural network (53) and (54) in Example 9.]
[Figure 16: Fuzzy bubble plots in Example 9. (a) Proposed fuzzy bridge regression model with c = 2, (b) the model in [71], (c) the model in [81], (d) the model in [76], (e) the model in [82], (f) the model in [83], and (g) the model in [84].]

... (53) and (54). Then dA*/dt = 0 and dU*/dt = 0. It follows easily that [equation not recoverable from the source]. With a simple calculation, it is clearly shown that [equation not recoverable from the source], and O indicates a zero matrix. From [86], we see that (G*)⊤G* is a positive semidefinite matrix. Matrix Q is also assumed to be positive semidefinite. Moreover, it is clear that the matrix W of size (4p+4)×(4p+4) is negative semidefinite. As a result, the Jacobian matrix ∇ϕ(y) is a negative semidefinite matrix.
In this case also, it is easy to verify that ∇ϕ(y) is a negative semidefinite matrix. This completes the proof.

Then, according to [87], the result is obtained from the differentiable convexity of (GA)k (k = 1, ..., 4p + 4). We also consider E1(y) = ‖ϕ(y)‖² and E2(y) = (1/2)‖y − y*‖². From the optimization literature [88] and Lemma 3, we know that E1(y) is a differentiable function. From (55), it is seen that

dϕ/dt = (∂ϕ/∂y)(dy/dt) = ∇ϕ(y)ϕ(y).

Therefore, the convergence rate of the trajectory y(t) increases as τ increases.
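The qualitative behavior established here, global convergence from any initial point with a rate controlled by τ, can be observed numerically. The following is a minimal sketch on a crisp ridge objective, not the paper's fuzzy bridge system (53)-(54): forward-Euler integration of the gradient-flow dynamics dy/dt = −τ∇f(y), with all names and data our own assumptions. Trajectories started at several random points all reach the unique minimizer.

```python
import numpy as np

def integrate(A, b, r, y0, tau=1.0, h=1e-3, n_steps=5000):
    """Forward-Euler integration of dy/dt = -tau * grad f(y) for the
    strongly convex ridge objective f(y) = ||A y - b||^2 + r ||y||^2."""
    y = y0.copy()
    for _ in range(n_steps):
        grad = 2.0 * A.T @ (A @ y - b) + 2.0 * r * y
        y -= h * tau * grad  # Euler step; larger tau contracts faster
    return y

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 4))
b = rng.normal(size=30)
r = 0.5
y_star = np.linalg.solve(A.T @ A + r * np.eye(4), A.T @ b)  # exact minimizer
ends = [integrate(A, b, r, rng.normal(size=4)) for _ in range(5)]
print("max distance to minimizer:", max(np.linalg.norm(y - y_star) for y in ends))
```

Because the objective is strongly convex, the error along each trajectory decays roughly like e^(−τλt) for the smallest Hessian eigenvalue λ, which is the discrete analogue of the statement that the convergence rate increases with τ.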

Data Availability
The data for Example 1 are simulated, and the data for Example 2 are taken from [85].

Conflicts of Interest
The authors declare that they have no conflicts of interest.