A New Approach to Reducing Search Space and Increasing Efficiency in Simulation Optimization Problems via the Fuzzy-DEA-BCC

The development of discrete-event simulation software was one of the most successful interfaces between operational research and computation. As a result, research has focused on the development of new methods and algorithms with the purpose of increasing simulation optimization efficiency and reliability. This study aims to define optimum variation intervals for each decision variable through a proposed approach which combines data envelopment analysis with Fuzzy logic (Fuzzy-DEA-BCC), seeking to improve the distinction among decision-making units in the face of uncertainty. In this study, Taguchi's orthogonal arrays were used to generate the necessary quantity of DMUs, and the output variables were generated by simulation. Two study objects were utilized as examples of mono- and multiobjective problems. Results confirmed the reliability and applicability of the proposed method, as it enabled a significant reduction in search space and computational demand when compared to conventional simulation optimization techniques.


Introduction
The development of discrete-event simulation (DES) software was one of the greatest successes in bringing the realms of operational research (OR) and computation together, according to Fu [1]. Hillier and Lieberman [2] highlight that simulation is an extremely versatile technique which enables the investigation of practically any type of stochastic system. This versatility has turned simulation into the most commonly used OR technique for stochastic systems.
Nevertheless, the integration of simulation and optimization grew stronger from the 1990s onward due to the development of commercial software packages which marketed integrated optimization routines, thus making it considerably easier to carry out decision-making analyses [2][3][4].
Thus, Azadeh et al. [5] affirm that simulation optimization is one of the most important OR tools to have come about in recent years. Previous methodologies demanded complex alterations which were frequently economically and temporally unviable, especially for problems with a large number of decision variables.
According to Medaglia et al. [6], simulation optimization aims to find the best values for simulation model input parameters in search of one or more desired outputs. This is generally a slow process that requires a large amount of time and the execution of innumerable experiments.
In spite of the advances in simulation model optimization software, a common criticism is that, when dealing with more than one input variable, the optimization process becomes very slow [7][8][9]. Furthermore, according to Hillier and Lieberman [2], computational simulation packages may be considered relatively slow and costly when applied in studies of stochastic and dynamic systems. In such systems, where random behavior is prevalent, there is a tendency towards elevated expenses and the allocation of qualified labor and expertise, along with extra time for analysis.

Data Envelopment Analysis
2.1. Deterministic Data Envelopment Analysis. Classic DEA models were introduced by Charnes et al. [15]; they use constant returns to scale and were denominated CCR models in homage to their creators [12]. These models were later extended by Banker et al. [12] to variable returns to scale, this time dubbed BCC, again in reference to their originators.
According to Cook and Seiford [17], DEA is a nonparametric methodology which comparatively measures the efficiency of each decision-making unit (DMU). Another fact that deserves highlighting is that, through its use, it is possible to avoid the problems created by incommensurability (different units of measurement) among the elements of the input and output matrices.
In the original model from Charnes et al. [15], the input and output variable weights may be obtained from the solution of the fractional programming model

max h0 = (Σ_{r=1..s} u_r y_{r0}) / (Σ_{i=1..m} v_i x_{i0})

subject to

(Σ_{r=1..s} u_r y_{rj}) / (Σ_{i=1..m} v_i x_{ij}) ≤ 1, j = 1, 2, . . ., n,
u_r ≥ 0, r = 1, 2, . . ., s,
v_i ≥ 0, i = 1, 2, . . ., m,

with DMU_1 to DMU_n being under evaluation. Here h0 is the relative efficiency of DMU0; x_{i0} and y_{r0} are the input and output data for DMU0; j is the DMU index, j = 1, 2, . . ., n; r is the output index, r = 1, 2, . . ., s; i is the input index, i = 1, 2, . . ., m; y_{rj} is the value of the rth output for the jth DMU; x_{ij} is the value of the ith input for the jth DMU; u_r is the weight associated with the rth output; v_i is the weight associated with the ith input.
It is observed that, if h0 = 1, DMU0 is efficient when compared to the other units considered in the model and that, if 0 < h0 < 1, this DMU is deemed inefficient.
A main difference between the DEA-BCC and DEA-CCR models is the addition of the free variable u0, which indicates variable returns to scale. Banker et al. [12] affirm that a DMU considered efficient in a CCR model will also be considered efficient in the BCC model; the inverse, however, is not necessarily true.
According to Cooper et al. [16], in order to provide suitable discrimination of the DMUs in traditional DEA models, the following must be verified: number of DMUs ≥ max{(number of inputs) × (number of outputs), 3 × (number of inputs + number of outputs)}.
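As a quick sanity check, Cooper et al.'s rule can be expressed in a few lines of code (a minimal sketch; the function name is ours, and the example figures correspond to this paper's mono-objective case):

```python
def min_dmus(n_inputs: int, n_outputs: int) -> int:
    """Minimum number of DMUs suggested by Cooper et al.'s rule:
    number of DMUs >= max{inputs * outputs, 3 * (inputs + outputs)}."""
    return max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))

# Mono-objective case in this paper: 6 inputs (decision variables), 1 output.
print(min_dmus(6, 1))  # 21, which the 25-run L25 array satisfies
```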
Classic DEA models consider DMUs with h0 = 1 as efficient and DMUs with 0 ≤ h0 < 1 as inefficient. It is quite possible for multiple DMUs to be efficient, in which case the discrimination among DMUs is poor. To deal with this limitation, Andersen and Petersen [19] proposed the concept of superefficiency to help differentiate DMUs which present h0 = 1.
For the superefficiency evaluation to be employed in DEA-BCC models, constraint (6) must be removed so that the DMU under analysis can attain scores greater than 1. DMUs considered inefficient in the traditional evaluation remain inefficient, but those with scores equal to one may now obtain scores above 1, thus enabling the elaboration of a ranking.
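For illustration, the sketch below solves the envelopment (dual) form of an input-oriented BCC super-efficiency model with `scipy.optimize.linprog`; the data are hypothetical and not taken from this study. Note that, under variable returns to scale, the super-efficiency LP can be infeasible for some efficient DMUs.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_super_efficiency(X, Y, k):
    """Input-oriented DEA-BCC super-efficiency of DMU k.

    Solves min theta s.t. sum_j l_j x_ij <= theta * x_ik,
    sum_j l_j y_rj >= y_rk, sum_j l_j = 1, l_j >= 0, with DMU k
    excluded from the reference set (Andersen-Petersen).
    Returns None if the LP is infeasible.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)      # shape (n_dmus, n_in/out)
    others = [j for j in range(X.shape[0]) if j != k]
    m, s, n = X.shape[1], Y.shape[1], len(others)
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    # inputs:  sum_j l_j x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X[others].T]
    # outputs: -sum_j l_j y_rj <= -y_rk
    A_out = np.c_[np.zeros((s, 1)), -Y[others].T]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)    # convexity constraint (BCC)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0] if res.success else None

# Hypothetical single-input, single-output data (not from the paper).
X = [[2], [4], [5], [8]]   # inputs of DMUs A, B, C, D
Y = [[2], [5], [4], [6]]   # outputs
print(bcc_super_efficiency(X, Y, 1))  # B is efficient: score > 1
```

For an inefficient DMU, excluding it from its own reference set changes nothing, so the super-efficiency score coincides with the ordinary score below 1.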

2.2. Fuzzy-DEA.
Hatami-Marbini et al. [21] undertook an expansive literature review of Fuzzy Theory combined with DEA models. The authors' motivation was the fact that, in general, the estimation of input and output DMU values in real problems is difficult, which may generate efficiency values with a low level of reliability; one approach to dealing with this uncertainty is the adoption of Fuzzy Theory concepts. Kao and Liu [13] assert that measuring a DMU's efficiency is a difficult task involving complex economic variables such as interest rates, tax rates, employment levels, and demand. According to these authors, efficiency measurement becomes even more difficult when analyzing multiple inputs and outputs.
In this context, Wen et al. [22] comment that DMUs can be classified into two categories: efficient and inefficient. However, incorporating uncertainty as measurement error in the inputs and outputs can make the calculation of efficiency more reliable and robust.
The value of the objective function (10) may be greater than 1 due to constraints (11)-(12), which involve Fuzzy parameters [23]. With the incorporation of Fuzzy coefficients, DEA-BCC models cannot be solved using traditional linear programming (LP) techniques. Hatami-Marbini et al. [21] list and describe the following main approaches which deal with Fuzzy-DEA: (i) the α-level based approach; (ii) the tolerance approach; (iii) the Fuzzy ranking approach; (iv) the possibility approach.
In this study, the approach based on the α-level was adopted and is described below. The α-level approach is the most common for Fuzzy-DEA models according to Hatami-Marbini et al. [21]. To apply this method, the Fuzzy-DEA model is converted into a pair of parametric programming problems used to find upper and lower bounds for the membership functions of the DMU efficiency scores [23].
For the purposes of this research, triangular membership functions were utilized. According to Liang and Wang [24], such functions represent well the human expertise needed to adequately judge the behavior of common variables in a range of practical situations. Along these same lines, Aouni et al. [25] show multiple applications of triangular Fuzzy numbers which validate and justify the adoption of such a method in conjunction with goal programming (GP) models. A further justification arises from the fact that the triangular function is linear, thus easing the optimization process through traditional LP means [21].
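A triangular membership function built from a minimum, a mode, and a maximum can be cut at a given α level as follows (a minimal sketch; the function name and sample values are illustrative, not from the paper):

```python
def triangular_alpha_cut(lower, mode, upper, alpha):
    """alpha-cut [a_L, a_U] of a triangular fuzzy number (lower, mode, upper):
    the interval shrinks linearly from [lower, upper] at alpha = 0
    to the single point [mode, mode] at alpha = 1."""
    return (lower + alpha * (mode - lower), upper - alpha * (upper - mode))

# e.g. a simulated output whose replications gave min 5, mean 7, max 9
print(triangular_alpha_cut(5, 7, 9, 0.0))  # (5.0, 9.0) -> widest (pessimistic) cut
print(triangular_alpha_cut(5, 7, 9, 1.0))  # (7.0, 7.0) -> crisp mean
```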
To illustrate the process of Fuzzy-DEA modeling, the example in Kao and Liu's [13] paper was used, in which they dealt with a model with variable returns to scale. Figure 1 shows the DEA-CCR and DEA-BCC models with four DMUs, denominated A, B, C, and D, each with a single input and a single output, and with the output of DMU B being Fuzzy.
Based on Figure 1 and according to the DEA-BCC model [13], when the output of DMU B is less than or equal to 7.5, the production boundary is defined by the line segments connecting point (10; 0) to A = (10; 5) and from A to D = (50; 15).The efficiency scores of DMUs A, C, and D are 1, 0.9, and 1, respectively; the efficiency of DMU B is between 5/7.5 = 0.67 and 7.5/7.5 = 1, depending on its output.
If the output associated with DMU B increased from 7.5 to 9, the production boundary would be represented by the line segments linking point (10; 0) to A, A to B = (20; 9), and B to D = (50; 15). In this case, DMUs A, B, and D lie on the efficient boundary, while the efficiency of C will be, according to output orientation, between the values of 9/10 = 0.9 and 9/11 = 0.82.
In this example, regardless of the output value of DMU B, DMUs A and D are always efficient.In other words, with the combination (generation of scenarios) of each Fuzzy output for DMU B, the effects of uncertainty on DMUs A and D are always the same and present 100% efficiency.
Conducting an analysis under the assumption of constant returns to scale (DEA-CCR model), as seen in Figure 1, the production boundary will be the solid line which links the origin (0; 0) with point A = (10; 6). In this case, only DMU A will always be efficient (with an efficiency score of 100%), regardless of the output value of DMU B. However, the efficiency of DMU B will vary from 1/2 = 0.5 to 9/10 = 0.9, and DMUs C and D will have an efficiency of 3/5 = 0.6; that is, there are no effects of the Fuzzy output of DMU B on the efficiencies of DMUs C and D.
As a further example, the membership function linked to the output of DMU B may be illustrated, given that it is a trapezoidal function. Its α-level base is defined as the interval [5 + α, 9 − α], whose lower and upper bounds correspond to the variation (uncertainty) associated with scenario generation, ranging from the pessimistic (α = 0) to the optimistic (α = 1) case of the efficiency analysis.
As previously stated, the α-level based approach was adopted which, according to Hatami-Marbini et al. [21], is the most popular Fuzzy-DEA approach, with many referenced publications [13]. The value of α ∈ [0, 1] allows the generation of scenarios, that is, different efficiency values respecting the variation range determined by the membership function. In such models, X̃_ij and Ỹ_rj are, respectively, the Fuzzy parameters for the ith input and the rth output of the jth DMU. These values are approximately known and can be represented by Fuzzy sets by means of membership functions μ_X̃ij and μ_Ỹrj. Thus, Fuzzy-DEA models can be formed using (X̃_ij)_α and (Ỹ_rj)_α, which are the values of these parameters for a given α scenario. With the α-level generating a set of scenarios for Ỹ_rj and X̃_ij, Fuzzy-DEA can be transformed into a family of DEA models with different levels of uncertainty: {(X̃_ij)_α | 0 ≤ α ≤ 1} and {(Ỹ_rj)_α | 0 ≤ α ≤ 1}, as defined by Kao and Liu [13]. The results for each scenario identify the uncertainty variation range in the model's input and output data [13, 26-28]. Kao and Liu [13], based on Yager [29], Zadeh [28], and Zimmermann [30], established that a membership function defining the efficiency of DMU j can be expressed with E_j(α) obtained by (4)-(9). The approach for constructing the membership function μ_Ẽj proposed in this study adopted, for each α-level, μ_Ẽj as the value of efficiency in each scenario Ẽj obtained by (10)-(14). For further details, Kao and Liu [13], Hatami-Marbini et al. [21], and Kao and Lin [31] are recommended reading materials.
Based on the models developed by Banker et al. [12], in accordance with (4)-(9), and Kao and Liu [13], in accordance with (10)-(14), the Fuzzy-DEA-BCC model was developed. The indices, parameters, auxiliary variables, decision variables, objective functions, and model constraints are proposed below, considering DMU0 as the unit under analysis. One Fuzzy-DEA-BCC model performs an efficiency analysis of the DMUs in a pessimistic scenario, and a second one does so in an optimistic scenario. Figure 2 geometrically depicts the position of the DEA model parameters in the triangular membership function. x̃_i0 (or x̃_ij) and ỹ_r0 (or ỹ_rj) correspond to the lower variation bounds associated with the inputs and outputs of the DMUs; X̃_i0 (or X̃_ij) and Ỹ_r0 (or Ỹ_rj) correspond to the upper variation bounds of the DMUs' inputs and outputs. The mean value of the triangular membership function is associated with the values of these parameters in a scenario without uncertainty.
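To make the pessimistic/optimistic construction concrete, the sketch below applies the Kao-Liu substitution to the simplest possible case: a single crisp input and a single triangular Fuzzy output per DMU, where a CCR-style efficiency reduces to a normalized output/input ratio. It illustrates only the α-cut scenario logic with hypothetical data, not the full Fuzzy-DEA-BCC linear program used in the paper.

```python
def cut(tri, alpha):
    """alpha-cut endpoints of a triangular number (low, mode, high)."""
    lo, md, hi = tri
    return lo + alpha * (md - lo), hi - alpha * (hi - md)

def efficiency_bounds(x, y_tri, k, alpha):
    """Lower/upper efficiency of DMU k with one crisp input x[j] and one
    triangular fuzzy output y_tri[j]: efficiency = (y_k/x_k) / max_j (y_j/x_j).
    Pessimistic bound: DMU k at its lowest output, all others at their highest;
    optimistic bound: the reverse (Kao and Liu's substitution scheme)."""
    n = len(x)
    def score(y_k, others_high):
        ratios = []
        for j in range(n):
            lo, hi = cut(y_tri[j], alpha)
            yj = y_k if j == k else (hi if others_high else lo)
            ratios.append(yj / x[j])
        return ratios[k] / max(ratios)
    lo_k, hi_k = cut(y_tri[k], alpha)
    return score(lo_k, True), score(hi_k, False)

# Hypothetical data: three DMUs, crisp inputs, triangular outputs.
x = [10, 20, 30]
y = [(4, 5, 6), (6, 8, 10), (9, 9, 9)]
print(efficiency_bounds(x, y, 1, alpha=0.0))  # (0.5, 1.0)
```

At α = 1 both bounds collapse to the crisp efficiency computed from the modes, mirroring how the α-level family interpolates between the widest uncertainty band and the deterministic model.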

Proposal of Integrating Fuzzy-DEA with Simulation Optimization Problems. This paper's proposal of integrating Fuzzy-DEA-BCC models into simulation optimization problems is based on the following four techniques: (i) discrete-event simulation to represent the real system to be optimized and to conduct scenario simulation and data collection; (ii) Taguchi orthogonal arrays to generate experimental matrices and define the simulation runs to be executed in order to represent the search space; (iii) Fuzzy-DEA-BCC to analyze the efficiency of each generated scenario, taking into account the uncertainty present in each of them and ranking them in terms of the most efficient DMUs; (iv) an optimization procedure via simulation which searches for optimal solutions.
The use of optimization assumes that the simulation model is constructed, verified, and validated, thus assuring that the model adequately simulates the reality of the phenomenon under study. It is also suggested that the response variables be either discrete or integer. The steps of the procedure are presented in Figure 3.
The application phases for the proposed procedure are described below.

Step 1. Define the simulation model decision variables (x1, x2, x3, . . ., xn) and the variation range for each variable (lower level ≤ xi ≤ upper level, with 1 ≤ i ≤ n).

Step 2. Define the output variables to be optimized (y1, y2, y3, . . ., ym).
Step 3. Select the Taguchi orthogonal array as a function of the number of decision variables and their variation limits. This selection must obey the fundamental rule established for the minimum number of DMUs to be analyzed through Fuzzy-DEA-BCC [16]. After selecting the array, generate an experimental matrix which represents as diversified a solution region as possible and, if possible, explores all levels of each decision variable.
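For reference, the L25(5^6) array used later in this paper can be built from modular arithmetic over GF(5), a standard construction sketched below (levels are numbered 1-5; this is our own generation routine, not the paper's software):

```python
def taguchi_l25():
    """Construct the 25-run, 6-factor, 5-level orthogonal array L25(5^6)
    from arithmetic over GF(5): columns a, b, and a + k*b (mod 5) for
    k = 1..4. Every pair of columns then contains each of the 25 level
    combinations exactly once (strength-2 orthogonality)."""
    rows = []
    for a in range(5):
        for b in range(5):
            rows.append([a + 1, b + 1] + [(a + k * b) % 5 + 1 for k in range(1, 5)])
    return rows

array = taguchi_l25()
print(len(array), len(array[0]))  # 25 runs x 6 factors
```

Selecting only the first columns of such an array covers cases with fewer than six five-level factors, as in the multiobjective case below.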
Step 4. Execute the experiments in a discrete-event simulator and store the maximum, minimum, and mean values for each output variable to be optimized for analysis.
Step 5. Fuzzy analysis is as follows. Based on the experiments carried out in the previous step, insert the minimum, maximum, and mean values of each experiment into the triangular membership function. The choice of this membership function is based on comments by Liang and Wang [24], who justify the use of triangular Fuzzy functions, given that they suitably mirror human judgments. Thus, according to these authors, the advantage of using triangular Fuzzy numbers lies in the fact that not only can human expertise be suitably represented, but the model additionally accounts for uncertainty in the involved data and parameters. In optimization models dealing with uncertainty, present and future information cannot be perfectly known and should be considered uncertain [32]. This study utilized the α-level based approach given that, according to Wang and Liang [32], it is the most popular Fuzzy-DEA approach due to the great number of publications using Fuzzy-DEA in the current scientific literature [21]. For more information on these approaches, please see Hatami-Marbini et al. [21].
Step 6. Determination of efficiency for each scenario by means of Fuzzy-DEA-BCC for the simulated results is as follows. Upon the definition of the maximum, minimum, and mean values linked to each membership function associated with each analyzed condition, the Fuzzy-DEA-BCC model was applied using the α-level based approach, varying the α value (0, 0.1, 0.2, . . ., 1) and thus generating 11 scenarios with different superefficiency values for each of the pessimistic and optimistic cases.
Step 7. Ranking the most efficient DMUs based on the concept of superefficiency is as follows. Given that there are 11 pessimistic scenarios and 11 optimistic scenarios, it was necessary to aggregate them into a global scenario. The adopted method was to extract the geometric average over the scenarios of each DMU, thus arriving at a global score. These geometric averages were then ranked from greatest to lowest. The first and second positions in the rank were chosen to reduce the range of each decision variable and, in turn, to carry out the simulation optimization.
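The aggregation and ranking of Step 7 can be sketched as follows (the scores are hypothetical; a real run would feed in the 22 scenario superefficiencies per DMU):

```python
def rank_by_geometric_mean(scores):
    """Aggregate each DMU's scenario superefficiencies (11 pessimistic +
    11 optimistic alpha-levels) into one global score via the geometric
    mean, then rank DMUs from highest to lowest."""
    def gmean(vals):
        prod = 1.0
        for v in vals:
            prod *= v
        return prod ** (1.0 / len(vals))
    global_scores = {dmu: gmean(vals) for dmu, vals in scores.items()}
    return sorted(global_scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scenario scores for three DMUs (real runs would have 22 each).
scores = {"DMU1": [1.2, 1.1, 1.3], "DMU2": [0.9, 1.0, 0.8], "DMU3": [1.0, 1.1, 1.0]}
ranking = rank_by_geometric_mean(scores)
print([dmu for dmu, _ in ranking])  # ['DMU1', 'DMU3', 'DMU2']
```

The geometric mean is a natural choice here because it penalizes a DMU that scores well in some scenarios but poorly in others more strongly than the arithmetic mean would.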
Step 8. Reducing the variation range of each decision variable is as follows. The first and second positions of the rank were chosen to carry out the range reduction of each variable. Variables with equal values in both DMUs were removed from the optimization, with that common value being adopted for the variable.
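Step 8 amounts to taking, for each decision variable, the interval spanned by the two top-ranked DMUs, as sketched below with hypothetical DMU settings:

```python
def reduce_ranges(best, second):
    """Given the decision-variable settings of the two top-ranked DMUs,
    build the new search range for each variable: variables that agree in
    both DMUs are fixed at that value; the others get [min, max] of the
    two settings as their reduced bounds."""
    fixed, ranges = {}, {}
    for name in best:
        a, b = best[name], second[name]
        if a == b:
            fixed[name] = a            # removed from the optimization
        else:
            ranges[name] = (min(a, b), max(a, b))
    return fixed, ranges

# Hypothetical top-two DMUs from an L25 experimental matrix.
dmu_a = {"x1": 1, "x2": 1, "x3": 2, "x4": 4, "x5": 3, "x6": 5}
dmu_b = {"x1": 3, "x2": 1, "x3": 5, "x4": 2, "x5": 3, "x6": 1}
fixed, ranges = reduce_ranges(dmu_a, dmu_b)
print(fixed)   # {'x2': 1, 'x5': 3}
print(ranges)  # e.g. 'x1': (1, 3), 'x4': (2, 4)
```

The reduced search-space size is then the product of the remaining interval widths, that is, the product of (upper − lower + 1) over the non-fixed variables.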
Step 9. Optimize the simulation model with a new variation range for each decision variable.
Step 10. Analyze the results of the optimization and make decisions based on the results found.
In order to exemplify the application of the proposed procedure, real situations involving the optimization of two simulation models are presented in the following sections. The utilized models were previously verified and validated, thus proving to be apt for simulation optimization.

Application and Data Analysis
4.1. Monoobjective Case. The modeled situation corresponds to a quality control cell in a fiber-optic transponder company. The cell is responsible for a series of tests which will ensure the approval or failure of the equipment produced at the plant. All verification and validation phases were appropriately undertaken, thus providing a consistent model.
For this study object, the decision variables were defined as the number of operators responsible for carrying out the quality control tests, denoted by (x1, x2, x3), and the number of pieces of equipment for test types 1, 2, and 3, denoted by (x4, x5, x6). All variables were defined as integers, with a lower bound equal to one and an upper bound equal to five. Table 1 presents this information.
The optimization objective was to find the combination of decision variables which would maximize the total number of inspected products in the quality control center, denoted by (y1). For the problem in question, considering the number of decision variables (6) and their variation range (1-5), there are a total of 15,625 = 5^6 possible scenarios in the search space. Considering the quantity of decision variables, the variation of levels of each variable, and the rule for the minimum number of DMUs proposed by Cooper et al. [16], the orthogonal array L25 was chosen. Seeing that there are six input variables and one output variable, a minimum of 21 DMUs (runs) would be required by the classic rule, justifying the use of array L25. With the array defined, the experimental matrix was generated, as presented in Table 2.
Next, the 25 scenarios from array L25 were simulated with 30 replications of a month's time of operation in quality control. The simulations were carried out on a computer with an Intel Core 2 Duo processor at 1.58 GHz, 2 GB of RAM, and a 64-bit Microsoft operating system.
Data for each output variable were stored for the superefficiency calculations. Nearly 26 minutes were spent processing the 25 scenarios, considering all replications. Maximum, minimum, and average values over the 30 replications were stored for each scenario with the goal of using them in the triangular membership function. Results for output variable y1 are shown in Table 2.
For the calculation of the superefficiency related to each DMU with the Fuzzy-DEA-BCC model, the software General Algebraic Modeling System (GAMS) [33], version 22.8.1, was used with the solver CPLEX, version 11.0, adapted for this specific calculation.
With these results, the superefficiency value of each DMU can be related to each scenario, for both the pessimistic and the optimistic case, by means of the α variation, summing up to 11 scenarios each. These values are presented in Tables 3 and 4.
For the calculation of the superefficiency of each scenario, the geometric average was extracted for both pessimistic and optimistic scenarios. The results found are presented in Table 5. With these data, it was possible to calculate the average of each DMU and then rank them as a function of their superefficiency values (see Table 5).
Through a superefficiency analysis, it was possible to rank the DMUs in order of efficiency. For the problem in question, DMU 1 is the most efficient, followed by DMU 6. Both are highlighted in Table 5.
With the identification of the two most efficient DMUs and based on the experimental matrix in Table 2, a new interval can be identified for each decision variable in which a better set of solutions is expected. The new intervals for each decision variable are presented in Table 6. Variable x2 stands out, as its value was already defined, being equal to x2 = 1, reducing the number of decision variables to 5.
With the reduction of the variation interval of each decision variable, the search space for the best solution was reduced from 15,625 to 240, a reduction of 98.4%.
To confirm the efficiency of the search space reduction, the optimizer SimRunner [34] was utilized for the simulation model optimization. SimRunner is a popular optimization software package [35], sold in conjunction with the ProModel simulator. According to Kim et al. [36] and Banks et al. [4], this optimization software searches for an optimal solution using a metaheuristic, the Genetic Algorithm, which, as pointed out by Ólafsson [37], mimics the process of natural selection.
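SimRunner's internals are proprietary, so the sketch below is only a generic integer-coded genetic algorithm of the kind Ólafsson describes, applied to a stand-in objective function; a real application would call the simulation model in place of `objective`.

```python
import random

def genetic_search(objective, bounds, pop_size=30, generations=60, seed=0):
    """Minimal integer-coded GA: tournament selection, uniform crossover,
    and reset mutation over variables constrained to [lo, hi] bounds."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=objective)
            p2 = max(rng.sample(pop, 3), key=objective)
            # uniform crossover
            child = [g1 if rng.random() < 0.5 else g2 for g1, g2 in zip(p1, p2)]
            # reset mutation within the variable bounds
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.1:
                    child[i] = rng.randint(lo, hi)
            nxt.append(child)
        pop = nxt
    return max(pop, key=objective)

# Stand-in objective with a known optimum at (3, 3, 3, 3, 3).
f = lambda x: -sum((v - 3) ** 2 for v in x)
best = genetic_search(f, bounds=[(1, 5)] * 5)
```

Narrowing `bounds` before the search, as the proposed procedure does, directly shrinks the population's feasible region, which is why convergence times drop in the experiments reported below.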
The optimizer was set to the same conditions and objectives; however, two optimizations were conducted for the same problem.One used a reduction in the variation interval (Table 6), and the other one used the original problem (Table 1) conditions.The results found can be seen in Table 7.
The responses presented by the optimizer for the decision variables were only equal for variables x4 and x5. For the other variables, the optimizer arrived at different values. As for the solution of y1, with search space reduction the optimizer reached a value statistically equal in both optimized cases.
For the problem with the reduced range, the optimizer carried out 88 experiments before converging, equal to 36.67% of the experimental area (240 scenarios), taking a little more than 2.25 hours. For the problem with the original range settings, the optimizer carried out 183 experiments, slightly more than 1.17% of the total experimental area (15,625 scenarios), taking 4.8 hours.

4.2. Multiobjective Case.
The second study object represents a production cell in a Brazilian telecommunications company which produces fiber-optic equipment. This model, similar to the previous case, had already passed through the phases of verification and validation prior to being used in the simulation optimization technique presented in this paper. For this study object, the decision variables were defined as the number of pieces of inspection equipment of types 1 and 2, denoted by (x1, x2), and the number of employees of types 1, 2, and 3, denoted by (x3, x4, x5), who carry out the activities represented in the model. The variables were defined as integers, with a lower bound of 1 and an upper bound of 5. Table 8 presents this information. The optimization objective was to find the combination of decision variables which would maximize throughput (y1) and production cell profit (y2). For the problem in question, considering the number of variables (5) and their variation range (1-5), there are a total of 3,125 = 5^5 possible scenarios in the search space for the best configuration.
In order to meet the fundamental rule for the number of DMUs, an L25 orthogonal array was again used, and the experimental matrix generated is presented in Table 9. The scenarios of the experimental matrix were simulated in ProModel.
Thirty replications were simulated for each scenario, corresponding to a month of operation in the production cell, and the data for each output variable were stored for the superefficiency calculation. Simulation of the 25 scenarios and their 30 replications took slightly more than 16.25 minutes. The results (minimum, maximum, and mean values) for outputs y1 and y2 of each DMU are shown in Table 9.
For the calculation of the superefficiency related to each DMU with the Fuzzy-DEA-BCC model, the same procedure was employed. With the output variable results, each scenario can be related to each DMU's superefficiency value for the pessimistic and optimistic cases. The scenarios are then aggregated by means of a geometric average, and the final result is presented in Table 10. Thus, the average can be calculated for each DMU, making it possible to rank them in terms of superefficiency.
Based on the superefficiency value rankings (Table 10) in this study, DMU 2 proved to be the most efficient, followed by DMU 17. Both are highlighted in Table 10.
Once these two DMUs are identified as the most efficient, a new interval for each decision variable can be redefined, reducing the optimization search space. Table 9 shows that variable x2 presented the same value for the two most efficient DMUs, thus causing its value to be fixed at x2 = 2 and reducing the number of variables from five to four. The new intervals for the other decision variables are presented in Table 11.
By reducing the variation interval of each decision variable, the search space for the optimal solution was reduced from 3,125 to 64, a reduction of nearly 98%.
To test the efficiency and robustness of the new search space, SimRunner was set to carry out the simulation model optimization, aiming to maximize total cell production (y1) and total profit (y2), for both the original variation range (Table 8) and the reduced variation range (Table 11). Results are seen in Table 12.
The responses presented by the optimizer for the decision variables were only equal for variable x1 = 1. For the other decision variables, the optimizer arrived at different values.
In the case of the original variation range, the responses indicated the need to hire more employees and purchase more equipment, except for variable x4. As for the solutions of y1 and y2, the results were statistically equal at a confidence level of 95%. Continuing the analysis, the optimizer took 38.25 minutes to execute 51 experiments before converging for the reduced-range problem, approximately 80% of the reduced experimental area. For the original variation range, the optimizer took 2.1 hours to carry out 168 experiments before converging, about 5.3% of the total experimental area.
As it can be seen in Figure 4 for both models tested with the proposed optimization procedure, the output variable responses were statistically identical, while the time needed to reach these results dropped considerably for reduced ranges.

Conclusions and Recommendations for Future Research
This paper has proposed a method for reducing the search space of simulation optimization problems which provided search space reductions of roughly 98% and significant reductions in convergence time without compromising response quality. These results were reached by using Fuzzy Theory along with a DEA-BCC model, which enabled scenario analysis in the face of uncertainty, a common reality for discrete-event simulation models, given that they most commonly deal with stochastic, dynamic, and interrelated environments. With an output variable analysis taking uncertainty into account, the use of Fuzzy-DEA-BCC allowed an efficiency analysis of pessimistic and optimistic scenarios. This, in turn, enabled the ranking of the most efficient DMUs.
Upon determining the two most efficient DMUs, a new range for each decision variable was established, permitting the optimization software to concentrate on the region of the greatest efficiency, according to the analysis conducted with Fuzzy-DEA-BCC.
As a means of validating this paper's proposal, a widely used commercial optimizer was used to check whether or not the method was indeed able to limit the search region to the area containing the best solutions. To do so, the optimizer ran the simulation model under both variation ranges: the original one and the one reduced with Fuzzy-DEA-BCC. For both study objects, the method provided a response of equal quality from the reduced search space when both variation ranges were compared, with a significant reduction in convergence time for the reduced search space.
Finally, it is worth mentioning that the Taguchi arrays proved to be practical and reliable, as they represented the search space by exploring the maximum possible diversity of levels of each decision variable, which would have been difficult using classical experimental techniques based on two or three levels.
The possibilities for future research include (i) utilizing GPDEA-BCC [38], which improves DEA discrimination even when the number of DMUs does not meet Cooper, Seiford, and Tone's rule [16]; (ii) conducting tests with continuous decision variables; (iii) applying other arrays or experimental strategies which substitute the orthogonal arrays; (iv) testing the method proposed in this paper with other optimizers, such as multiple comparison procedures, optimal computing budget allocation, and nested partitions, which seek to increase the efficiency of the optimization process.

Figure 1 :
Figure 1: Production frontier when the output of DMU B is Fuzzy. Source: adapted from Kao and Liu [13].
(i) Indices: (a) j is the DMU index; (b) r is the output index; (c) i is the input index. (ii) Parameters: (a) ỹ_r0 and x̃_i0 are, respectively, the lower bounds of the definition intervals of the triangular membership functions for the rth Fuzzy output and the ith Fuzzy input of DMU0, considering the average as the most probable value without uncertainty. (b) Ỹ_r0 and X̃_i0 are, respectively, the upper bounds of the definition intervals of the triangular membership functions for the rth Fuzzy output and the ith Fuzzy input of DMU0, considering the average as the most probable value without uncertainty. (c) ỹ_rj is the lower bound of the definition interval of the triangular membership function of the rth Fuzzy output for the jth DMU. (d) Ỹ_rj is the upper bound of the definition interval of the triangular membership function of the rth Fuzzy output for the jth DMU. (e) x̃_ij is the lower bound of the definition interval of the triangular membership function of the ith Fuzzy input for the jth DMU. (f) X̃_ij is the upper bound of the definition interval of the triangular membership function of the ith Fuzzy input for the jth DMU. (g) α is the value chosen for the α-level based approach, with α ∈ [0, 1]. (h) Ψ_i0 is the α coefficient in the constraints linked to the ith Fuzzy input for DMU0. (i) Φ_r0 is the α coefficient in the constraints linked to the rth Fuzzy output for DMU0. (j) Φ_rj is the α coefficient in the constraints linked to the rth Fuzzy output of the jth DMU. (k) Ψ_ij is the α coefficient in the constraints linked to the ith Fuzzy input of the jth DMU. (iii) Decision variables: (a) u_r is the weight associated with the rth output; (b) v_i is the weight associated with the ith input.

Figure 2 :
Figure 2: Variation limits for the incorporation of uncertainty.

Figure 3 :
Figure 3: Procedure proposed for an optimization via simulation utilizing Fuzzy-DEA-BCC and superefficiency.

Figure 4 :
Figure 4: Results comparison with optimization of both simulation models.

Table 1 :
Decision variables, types, and limits for the study object.

Table 2 :
Experimental matrix and results.

Table 3 :
Superefficiency under uncertainty for a pessimistic scenario.

Table 4 :
Superefficiency under uncertainty for an optimistic scenario.

Table 5 :
Global matrix for geometric averages for pessimistic and optimistic scenarios.

Table 6 :
Decision variables, types, and new limits for the study object.

Table 8 :
Decision variables, types, and limits for the second study object.

Table 9 :
Experimental matrix and results.

Table 10 :
Global matrix for geometric average for pessimistic and optimistic scenarios.

Table 11 :
Decision variables, types, and new limits for the second study object.