Multivariable Controller Design for the Cooling System of a PEM Fuel Cell by considering Nearly Optimal Solutions in a Multiobjective Optimization Approach

This paper presents a design for the multivariable control of the cooling system of a PEM (proton exchange membrane) fuel cell stack. This system is complex and challenging: there are strong interactions between variables and highly nonlinear dynamic behavior. The design is carried out using a multiobjective optimization methodology; few previous works address this problem with multiobjective techniques. As a novelty, this work considers, in addition to the optimal controllers, the nearly optimal controllers that are nondominated in their neighborhood (potentially useful alternatives). When formulating the multiobjective optimization problem, the designer must make decisions that include the design objectives, the parameters of the controllers to be estimated, and the conditions and characteristics of the system simulation. To keep the optimization and decision stages tractable, the designer does not include every scenario of interest in the multiobjective problem definition. These remaining aspects can instead be analyzed in the decision stage, only for the controllers already obtained, at a much lower computational cost. At this stage, the potentially useful alternatives can play an important role: they have significantly different parameters and therefore allow the designer to make the final decision with additional valuable information. Nearly optimal controllers can even improve on the optimal ones in aspects not included in the multiobjective optimization problem. In this paper, several such aspects are analyzed for the potentially useful solutions: (1) the influence of certain parameters of the simulator; (2) the sample time of the controller; (3) the effect of stack degradation; and (4) robustness. This paper therefore highlights the relevance of this in-depth analysis, using the proposed methodology, in the design of the multivariable control of the cooling system of a PEM fuel cell. This analysis can modify the final choice of the designer.


Introduction
In control engineering, optimization tools are widely used, for example, in the design of control systems [1][2][3]. Normally, in these problems, there are several conflicting objectives to optimize (such as output errors, control effort, and robustness). Therefore, it is reasonable to solve these problems as multiobjective optimization problems (MOPs [4][5][6][7]). Thus, the designer can analyze the controller trade-off for each design objective and can better choose the final controller.
In an MOP, nearly optimal solutions (also called approximate or ε-efficient solutions) have been studied by many authors [8][9][10]. These alternatives have slightly worse performance than the optimal solutions and may be useful to the designer. However, these alternatives are ignored in a classic MOP, and considering all of them can slow down the algorithm and overcomplicate the decision stage. Among them, the solutions with performance similar to the optimal ones in the objective space but which differ significantly in the parameter space (different neighborhoods) are the potentially useful alternatives for the designer [11][12][13]. We define the potentially useful solutions as the optimal and nearly optimal solutions nondominated in their neighborhood. These alternatives provide the designer with greater diversity without excessively increasing the number of possible alternatives [11]. The designer can then analyze these controllers (significantly different from the alternatives obtained in a classical MOP) and make the final decision with higher quality information. Thus, it is possible to find nearly optimal controllers with better performance than the optimal ones in some features not included in the design objectives [14]. In this situation, the designer could choose a nearly optimal controller instead of an optimal one.
To formulate an MOP, the designer must define certain aspects. In the design of control systems, the MOP must contemplate several aspects. Firstly, the designer must choose the design objectives. These objectives are defined to measure certain characteristics of the control, such as output errors, control efforts, or robustness. However, each of these characteristics can be analyzed using different indicators. For example, output errors can be measured by the ITAE (integral of time-weighted absolute error), IAE (integral of absolute error), ISE (integral of squared error), and ITSE (integral of time-weighted squared error) [15]. Including many of these indicators in the MOP would increase the number of objectives, and this has two drawbacks: a greatly increased computational cost and a more complicated decision stage. Increasing the number of design objectives increases the number of optimal and nearly optimal solutions if a similar discretization is maintained. Each new solution generated must be compared with all the optimal and nearly optimal solutions to decide its inclusion in the Pareto set (or the nearly optimal set). Therefore, increasing the number of design objectives significantly increases the computational cost of the optimization process. Furthermore, this increase in solutions, together with the new design objectives added, makes the decision process and the final decision more difficult for the designer. Therefore, the designer usually chooses only some of the indicators to define the MOP. The remaining indicators of interest can be analyzed more cheaply in the decision stage [14].
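As an illustration, the four error indicators mentioned above can be approximated from a sampled error signal. This is a minimal sketch; the function name and the rectangular integration rule are our own choices, not taken from the paper:

```python
def error_indicators(t, e):
    """Approximate ITAE, IAE, ISE, and ITSE for a sampled error
    signal e(t) using a left rectangular Riemann sum."""
    itae = iae = ise = itse = 0.0
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        itae += t[k] * abs(e[k]) * dt   # time-weighted absolute error
        iae += abs(e[k]) * dt           # absolute error
        ise += e[k] ** 2 * dt           # squared error
        itse += t[k] * e[k] ** 2 * dt   # time-weighted squared error
    return {"ITAE": itae, "IAE": iae, "ISE": ise, "ITSE": itse}
```

Because the four indicators weight the same error signal differently (late errors are penalized more by the time-weighted variants), a controller that is best under one indicator need not be best under another, which is exactly why including all of them as objectives would inflate the MOP.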
In the design of controllers, another important element is the process model employed. This model may have uncertainties in its structure and/or in the values of its parameters. There are different methods to evaluate the impact of these uncertainties, such as Monte Carlo [16] and minimax [17]. Nevertheless, these methods increase the computational cost of the MOP [18]. The simulation environment setup is also fixed in the definition of the MOP. However, there may be several setups that are valid for the designer: different numerical integration methods, different sample times, different types of noise, etc. Considering all these design alternatives in the MOP is intractable. The designer must therefore establish certain fundamental aspects to define the MOP, while other interesting aspects are left out of the MOP due to various limitations (usually related to the computational cost). These aspects, ignored in the optimization stage, can be analyzed in the decision stage on the controllers obtained during optimization. In this scenario, the nearly optimal alternatives nondominated in their neighborhood play an especially relevant role. These controllers have performances similar to the optimal ones, but they have significantly different parameters (they are located in different neighborhoods).
These controllers could produce an improvement (even a significant one) over the optimal ones in some aspect not included in the optimization process. For this reason, it can be very valuable to obtain these nearly optimal controllers. Depending on their behavior in the aspects not considered in the optimization, the final choice may fall on a nearly optimal controller.
Let us look at a specific example. We define an MOP for the tuning of a proportional-integral (PI) controller (parameters to be adjusted: gain Kc and integral time Ti) for a nonlinear nominal model of the process. The MOP is defined with two design objectives for changes in the setpoint: f1 measures the output errors through the ITAE, and f2 measures the control effort through the IAU (integral of absolute control effort). In the simulation environment, steps are introduced in the setpoint, a noise similar to that present in the actual process is added to the outputs, and no additional disturbances are considered. Under this scenario, the set of optimal and nearly optimal controllers is shown in Figure 1.
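The spirit of this example can be reproduced with a minimal simulation. The first-order process, its time constant, and the Euler discretization below are illustrative assumptions (the paper's process model is nonlinear and is not reproduced here); the sketch only shows how a (Kc, Ti) pair maps to the two objectives ITAE and IAU:

```python
def simulate_pi(Kc, Ti, t_end=20.0, dt=0.01):
    """Simulate a PI controller u = Kc*(e + (1/Ti)*integral(e)) on a
    hypothetical first-order process dy/dt = (-y + u)/tau (tau = 2 s)
    for a unit setpoint step, returning the ITAE and IAU indicators."""
    tau = 2.0
    y, integ, t = 0.0, 0.0, 0.0
    itae, iau = 0.0, 0.0
    while t < t_end:
        e = 1.0 - y                   # setpoint r = 1
        integ += e * dt               # integral of the error
        u = Kc * (e + integ / Ti)     # PI control law
        y += (-y + u) / tau * dt      # Euler step of the process
        itae += t * abs(e) * dt       # f1: ITAE
        iau += abs(u) * dt            # f2: IAU
        t += dt
    return itae, iau
```

Evaluating `simulate_pi` over a grid of (Kc, Ti) values would produce the kind of objective-space cloud from which the optimal and nearly optimal sets of Figure 1 are extracted.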
There is a set of optimal controllers (SET 1) and sets of nearly optimal controllers nondominated in their neighborhood (SET 2 and SET 3) in various neighborhoods of the parameter space. SET 2 and SET 3 provide the designer with new alternatives that are significantly different from the optimal ones (but with similar performances). Thus, these controllers are potentially useful for the designer. From these alternatives, we select the optimal controllers x1 and x2 and the nearly optimal controllers x3, x4, and x5 (see Table 1).
The designer now wants to analyze the output errors of the selected alternatives with a new indicator, the IAE. This indicator was not contemplated in the optimization stage of the MOP. Its values for the selected controllers can be seen in Table 1. The nearly optimal controllers x5 and x4 obtain a better IAE than the optimal controllers x1 and x2, respectively. In fact, the nearly optimal controller x5 obtains a significant improvement over the optimal controller x1. Additionally, the designer analyzes the robustness of the alternatives x2 and x4. For this, we consider uncertainty in the parameters of the nonlinear model. The controllers are evaluated on 50 random variations of the model parameters, each within ±10%. Figure 2 shows the value of the design objectives for each of the 50 variations of the model. The blue points are the objective values obtained by each of the model variations with the optimal controller x2.
The green squares are the objective values obtained by each of the model variations with the nearly optimal controller x4. The nearly optimal controller x4 shows less degradation than the optimal controller x2. Finally, the designer analyzes the influence of the noise on the selected controllers.
The controllers are evaluated on the design objectives with a different type of noise (f1′ and f2′ in Table 2). The optimal alternative x1 is dominated by the nearly optimal controller x3 in this new scenario. So, x3 is nearly as good as x1, and the dominance depends on the noise chosen for the simulations. Therefore, the nearly optimal alternatives provide new controllers that are potentially useful for the designer. This diversity allows the designer to make a final decision with additional valuable information. In addition, the nearly optimal controllers present improvements over the selected optimal controllers in the three scenarios defined, while obtaining similar performance in the design objectives. This situation may lead the final choice of the designer towards a nearly optimal controller to the detriment of an optimal one.
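The ±10% robustness analysis above can be sketched as follows. The objective function here is a deliberately simple stand-in (the paper evaluates the full closed-loop design objectives on each perturbed model); the function names and the toy model are our own illustrations:

```python
import random

def monte_carlo_eval(objective, nominal_params, n_runs=50, spread=0.10, seed=1):
    """Evaluate an objective on random perturbations of the model
    parameters (uniform, within ±spread of nominal), as in the ±10%
    robustness analysis; returns the objective values over n_runs models."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_runs):
        perturbed = {k: v * (1 + rng.uniform(-spread, spread))
                     for k, v in nominal_params.items()}
        values.append(objective(perturbed))
    return values

# Hypothetical stand-in objective: a simple function of a model gain K
# and time constant tau (NOT the paper's design objectives).
def toy_objective(p):
    return p["tau"] / p["K"]
```

Plotting the returned values for two competing controllers gives a cloud like Figure 2, from which the designer can judge which controller degrades less under parameter uncertainty.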
In this work, we propose the design of a multivariable control system for the cooling circuit of a proton exchange membrane fuel cell (PEMFC [19,20]) stack. The correct design of the stack cooling system is vital to the durability, cost, reliability, and energy efficiency of the stack [21][22][23][24]. The PEMFC stack is part of a micro-combined heat and power (μ-CHP) system [25][26][27][28]. μ-CHP systems are cogeneration systems whose main advantage is the use of the thermal energy produced during the generation of electrical energy; in this way, the efficiency of the system is increased. μ-CHP systems employ various technologies [29]. Among them, some authors agree that the most promising, due to their efficiency and low emissions, are those based on fuel cells [30,31]. The most common μ-CHP systems of this type are based on PEMFC stacks. These systems are sometimes used for the electrical and thermal supply of homes [32]. Nevertheless, several technical aspects must be advanced to improve the performance of these systems and reduce their costs. One of the most important areas for improvement is the temperature control of the PEMFC [33,34].
In this paper, a nonlinear model of a PEM fuel cell (Nedstack, model 2.0HP) is used, which is able to produce up to 2 kW of electrical energy and 3.3 kW of thermal energy. This stack is cooled by a liquid cooling system. The model is described in [35]. Using the methodology presented, nearly optimal controllers will be obtained. These controllers provide the designer with alternatives that are significantly different in their parameters. Thanks to them, the designer can make a final decision using additional valuable information.

[Figure 1: A multiobjective example. The optimal solutions are those in SET 1, and the nearly optimal solutions nondominated in their neighborhood are those in SET 2 and SET 3. The gray area represents the zone of the nearly optimal solutions.]
Therefore, this work shows as a novelty the usefulness of considering the nearly optimal alternatives nondominated in their neighborhood in the design of a multivariable control system. The methodology proposed in the paper enables the maximum exploitation of a specific control technology by using valuable information in the tuning procedure.
This work is structured as follows. In Section 2, some basic definitions previously presented in the literature are described. In Section 3, the nevMOGA algorithm used in this work is described briefly. In Section 4, the design of a multivariable control system for the cooling system of a PEMFC stack is presented. Finally, the conclusions are presented in Section 5.

Background
The resolution of an MOP produces a set of optimal solutions (P_Q). There is also a set of nearly optimal solutions that could be interesting for the decision maker (P_Q,ε) and which are ignored in a classic MOP. Nevertheless, finding all of the nearly optimal solutions can considerably increase the number of alternatives. Among them, the solutions nondominated in their neighborhood (the potentially useful ones, P_Q,n) provide the designer with alternatives that are close to the optimal ones in the objective space but differ significantly in the parameter space. These alternatives maintain diversity in the parameter space without excessively increasing the number of possible alternatives. With them, the designer can make the final decision with the benefit of valuable additional information. In this section, these sets are defined.
A multiobjective optimization problem (a maximization problem can be converted into a minimization problem: each objective to be maximized is transformed as max f_i(x) = −min(−f_i(x))) can be defined as

min_{x ∈ Q} f(x) = [f_1(x), ..., f_m(x)],

where x = [x_1, ..., x_k] is the decision vector and Q the feasible decision space.

Definition 1. Dominance: a decision vector x1 dominates another decision vector x2 if f_i(x1) ≤ f_i(x2) for all i and f_j(x1) < f_j(x2) for at least one j.

Definition 2. Pareto set: the Pareto set (denoted by P_Q) is the set of solutions in Q that are not dominated by another solution in Q:

P_Q = {x ∈ Q | there is no x′ ∈ Q that dominates x}.

Definition 3. Pareto front: given the set of Pareto optimal solutions P_Q, the Pareto front f(P_Q) is defined as

f(P_Q) = {f(x) | x ∈ P_Q}.

Definition 4. (−ε)-dominance: given ε = [ε_1, ..., ε_m] (the maximum acceptable degradation in each objective), a decision vector x1 (−ε)-dominates another decision vector x2 if f_i(x1) + ε_i ≤ f_i(x2) for all i and f_j(x1) + ε_j < f_j(x2) for at least one j.

Definition 5. ε-efficiency [13]: the set of ε-efficient solutions (denoted by P_Q,ε) is the set of solutions in Q which are not (−ε)-dominated by another solution in Q.

Definition 6. Neighborhood: define n = [n_1, ..., n_k] as the maximum distance between neighboring solutions. Two decision vectors x1 and x2 are neighboring solutions if |x1_i − x2_i| ≤ n_i for all i = 1, ..., k.

Definition 7. n-dominance: a decision vector x1 is n-dominated by another decision vector x2 if they are neighboring solutions (Definition 6) and x2 dominates x1.

Definition 8. n-efficiency [11]: the set of n-efficient solutions (denoted by P_Q,n) is the set of solutions of P_Q,ε which are not n-dominated by another solution in P_Q,ε.

The sets P_Q, P_Q,ε, and P_Q,n may contain infinitely many solutions, so obtaining them exactly is often computationally intractable. Normally, the designer obtains the discrete sets P*_Q ⊂ P_Q, P*_Q,ε ⊂ P_Q,ε, and P*_Q,n ⊂ P_Q,n, in such a way that they appropriately characterize P_Q, P_Q,ε, and P_Q,n, respectively. Figure 3 shows an example. There is a set of optimal solutions, SET 1, located in neighborhood 1. In addition, there is a set of nearly optimal solutions (gray area). Both sets form P_Q,ε. Therefore, if the designer considers the nearly optimal alternatives, he or she will obtain new alternatives significantly different from the optimal ones (the solutions in neighborhood 2 and neighborhood 3). A knowledge of these neighborhoods enables the designer to make a more informed final decision.
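These definitions translate directly into code. The sketch below implements the dominance, (−ε)-dominance, neighborhood, and n-efficiency checks for small discrete candidate sets; the function names are ours, and nevMOGA itself uses a more elaborate archiving scheme:

```python
def dominates(fa, fb):
    """Pareto dominance (Definition 1, minimization): fa dominates fb."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def eps_dominates(fa, fb, eps):
    """(-eps)-dominance (Definition 4): fa, worsened by eps in every
    objective, still dominates fb."""
    fa_shift = [a + e for a, e in zip(fa, eps)]
    return dominates(fa_shift, fb)

def neighbors(xa, xb, n):
    """Neighborhood test (Definition 6): the vectors differ by at most
    n_i in every decision-space coordinate."""
    return all(abs(a - b) <= ni for a, b, ni in zip(xa, xb, n))

def n_efficient(pop, eps, n):
    """Sketch of P_Q,n (Definitions 5-8): keep the eps-efficient
    solutions not dominated by a neighboring eps-efficient solution.
    pop is a list of (decision vector, objective vector) pairs."""
    p_eps = [(x, f) for x, f in pop
             if not any(eps_dominates(f2, f, eps)
                        for _, f2 in pop if f2 is not f)]
    return [(x, f) for x, f in p_eps
            if not any(neighbors(x, x2, n) and dominates(f2, f)
                       for x2, f2 in p_eps if x2 is not x)]
```

Note how a solution that is slightly worse in the objectives but far away in the parameter space survives both filters, which is exactly the behavior that makes P_Q,n useful.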
But adding all the nearly optimal solutions has two drawbacks: it slows down the algorithm and complicates the decision stage. Nevertheless, the nearly optimal solutions nondominated in their neighborhood provide the designer with diversity in the obtained set without excessively increasing the number of possible alternatives, thereby avoiding both drawbacks. In this example, these alternatives are SET 1, SET 2, and SET 3. We consider these alternatives potentially useful solutions: the best solutions in each neighborhood. There are various algorithms designed to provide nearly optimal alternatives [12,13]. Nevertheless, many do not take the parameter space into account when they discard solutions, and therefore cannot guarantee that potentially useful solutions have not been discarded. The algorithm nevMOGA [11], however, takes the parameter space into account in its discretization, guaranteeing that the potentially useful alternatives are not ruled out.
This algorithm has been evaluated on various examples, obtaining a good approximation to the set P_Q,n in every case [14,38].

Materials and Methods
In this work, we use the algorithm nevMOGA (a multiobjective evolutionary algorithm available in Matlab Central: https://www.mathworks.com/matlabcentral/fileexchange/71448-nevmoga-multiobjective-evolutionary-algorithm) [11]. This algorithm is an evolutionary algorithm based on the algorithm ev-MOGA [39]. nevMOGA provides the designer with a discrete set of optimal and nearly optimal solutions nondominated in their neighborhood (potentially useful solutions), P*_Q,n (Definition 8). nevMOGA has four populations: (1) P(t) is the main population. This population must converge towards P_Q,n, and not only towards P_Q, to achieve diversity in the set found. The number of individuals in this population is Nind_P. (2) Front(t) is the archive where a discrete approximation of the Pareto front (P*_Q) is stored. Its size is variable but bounded, depending on the number of boxes (divisions for each dimension, parameter n_box) previously defined by the designer.
(3) SubFront(t) is the archive where a discrete approximation of the nearly optimal solutions nondominated in their neighborhood (P*_Q,n ∖ P*_Q) is stored. Its size is also variable but bounded, depending on the number of boxes (divisions for each dimension, parameter n_box).

(4) G(t) is an auxiliary population that stores the new individuals generated in each iteration. The size of this population is Nind_GA, which must be a multiple of 4. The parameter ε defines the size of the area of nearly optimal solutions (the maximum degradation acceptable to the designer, Definition 4), and its definition is necessary for the use of nevMOGA. In addition, the parameter n (neighborhood, Definition 6) defines the distance beyond which we consider two solutions significantly different (in the decision space), and its definition is recommended. If the knowledge necessary for its definition is unavailable, there is a simple procedure for calculating this parameter from the ε parameter and a reference solution [38]. A very large ε or a very small n can lead to an excessive number of solutions, slowing down the optimization process and complicating the decision stage. Conversely, a very small ε or a very large n can lead to a very small number of solutions, discarding potentially useful nearly optimal alternatives.
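The trade-off governed by ε can be illustrated with a toy single-objective filter (our own illustration, not part of nevMOGA): a larger ε admits more nearly optimal alternatives, a smaller ε admits fewer.

```python
def count_near_optimal(front_vals, eps):
    """Count the scalar objective values within eps of the best one:
    a toy illustration of how the eps parameter controls the size of
    the nearly optimal set (larger eps keeps more alternatives)."""
    best = min(front_vals)
    return sum(1 for v in front_vals if v <= best + eps)
```

For example, with objective values [1.0, 1.02, 1.04, 1.2], an ε of 0.05 retains three alternatives, while an ε of 0.001 retains only the optimum. The same monotone effect, per objective, drives the choice of ε (and, inversely, of n) in nevMOGA.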

Results and Discussion
In this section, the new approach to the design of a multivariable control system for the cooling system of a PEMFC stack is used. The PEMFC stack can be part of a cogeneration system, for example, a μ-CHP system. The main advantage of these systems is that the use of the thermal energy produced in the generation of electrical energy increases overall efficiency. Accurate temperature control of the stack is necessary to improve the behavior of this type of system. Therefore, in this section, the control is designed using the new methodology.

[Figure 3: An MOP example. On the left, the objective space is shown, and on the right, the parameter space. The optimal solutions are those in SET 1, and the nearly optimal solutions nondominated in their neighborhood are those in SET 2 and SET 3. SET 4 does not contain any nearly optimal solution and is therefore discarded.]

The μ-CHP system used in this work is shown in Figure 5. The electric load demands electrical power from the PEMFC, requiring an electric current i. This current simulates the electrical demand of a house. To generate this current, the stack must be supplied with hydrogen (H2) and oxygen (Air). In addition to the mentioned electrical energy, the stack generates thermal energy. Two water cooling circuits extract the heat: the system consists of primary and secondary circuits, coupled by a heat exchanger. The heat generated by the stack is extracted by water from the primary circuit (with flow Fw1) at a temperature Tw_out (water outlet temperature of the stack) and transferred to the secondary circuit (with flow Fw2) through the heat exchanger. The heat finally arrives in the hot water tank (Tank) for use (heating and hot water). The primary circuit includes Pump 1, which propels the water in the primary circuit, regulating the flow of water that passes through the stack (Fw1). If Fw1 increases, more heat is extracted and the stack cools down. The secondary circuit consists of a hot water tank and Pump 2.
The water flow rate of the secondary circuit (Fw2) is varied using Pump 2. If this flow rate decreases, the amount of heat transferred through the heat exchanger also decreases, and as a result, less heat passes from the primary circuit to the secondary circuit. The water temperature at the stack inlet (Tw_in) then increases. In this paper, a nonlinear model of a PEM fuel cell is used. This model (available at https://riunet.upv.es/handle/10251/118336) is described in [35].
This model is built from first principles and has more than 30 equations. It simplifies some aspects of the cooling system that are of little relevance; even so, it is a complex model with strong nonlinearities.

MOP.
The methodology proposed in the paper enables the maximum exploitation of a specific control technology by using valuable information in the tuning procedure. For the design of the control system, a multiloop PI control was chosen because of its easier implementation and maintenance. Other control structures can be tuned with the same methodology. The RGA technique is used to establish the loop pairing [40]. Since the system is nonlinear, the static gains matrix is determined at three operating points (with Tw_out = 65°C and Tw_in = 60°C). By observing and comparing the gains of the matrices K_140A and K_200A, the static nonlinearity of the process is evident, with variations in the gains of up to 300%. The RGA matrices clearly show that the most suitable pairing is that of the main diagonal. Therefore, the control structure is defined as a multiloop PI control with the following loop pairing scheme: the output Tw_out is controlled by Fw1, and Tw_in by Fw2 (see Figure 6). That is to say,

Fw1(t) = Kc2 (e2(t) + (1/Ti2) ∫ e2(τ) dτ),
Fw2(t) = Kc1 (e1(t) + (1/Ti1) ∫ e1(τ) dτ),

where Kc1 and Kc2 are the proportional gains, Ti1 and Ti2 are the integral time constants in seconds, and e1 = r_Tw_in − Tw_in and e2 = r_Tw_out − Tw_out are the output errors, where r_Tw_in and r_Tw_out are the setpoints for Tw_in and Tw_out, respectively. The actuators Fw1 and Fw2 must meet the constraints 1.85 ≤ Fw1 ≤ 6.9 and 2.1 ≤ Fw2 ≤ 9.5, and for this the PIs incorporate an antiwindup mechanism.
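For a 2×2 system such as this one, the RGA can be computed in closed form from the static gain matrix. The gain matrix used below is illustrative only (the paper's K_140A and K_200A matrices are not reproduced here):

```python
def rga_2x2(K):
    """Relative gain array of a 2x2 static gain matrix
    K = [[k11, k12], [k21, k22]]. For the 2x2 case,
    lambda11 = 1/(1 - k12*k21/(k11*k22)), and each row and
    column of the RGA sums to 1."""
    (k11, k12), (k21, k22) = K
    lam11 = 1.0 / (1.0 - (k12 * k21) / (k11 * k22))
    return [[lam11, 1.0 - lam11],
            [1.0 - lam11, lam11]]
```

A diagonal element close to 1 (as with the illustrative, diagonally dominant matrix `[[2.0, 0.5], [0.4, 3.0]]`) indicates that the main-diagonal pairing is the suitable one, which is the conclusion reached for this system.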
For tuning the controller of this system, the water temperature of the tank is kept constant at 55°C. Thus, the secondary circuit of the μ-CHP system is simplified.
The output temperatures of the real system have an associated noise. This noise is filtered to prevent it from spreading to the system control actions. Similarly, for the simulation of the system, a noise similar to that of the real process is introduced. This noise is also filtered so that the control actions of the system are less influenced by it.
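One simple way to perform such filtering is a discrete first-order low-pass filter. The paper does not specify the filter actually used, so the structure and time constant below are assumptions for illustration:

```python
def lowpass(samples, dt, tau):
    """Discrete first-order low-pass filter
    y_k = a*y_{k-1} + (1 - a)*u_k, with a = tau/(tau + dt).
    Attenuates fast measurement noise while passing slow
    temperature dynamics through largely unchanged."""
    a = tau / (tau + dt)
    y = samples[0]
    out = [y]
    for u in samples[1:]:
        y = a * y + (1 - a) * u
        out.append(y)
    return out
```

Feeding the filtered (rather than raw) temperature measurements to the PI controllers keeps the high-frequency noise from being amplified into the pump flow commands.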
Design objectives are evaluated over a defined test. This test has two changes in the demanded current, at 500 and 1500 seconds (see Figure 7). In addition, as recommended by the manufacturer, the water outlet temperature of the stack (Tw_out) should be 65°C for optimal stack operation.
There must be a 5°C gradient between the temperature of the water input and output of the stack. Therefore, the system references are constant throughout the entire test: r_Tw_out = 65°C and r_Tw_in = 60°C. In this way, it is possible to operate at the optimal operating point suggested by the manufacturer. The performance of the controllers is therefore evaluated on the rejection of disturbances (current demands). The heat demand is not evaluated because its influence on temperature changes has much slower dynamics than those produced by current changes. Additionally, its effect is filtered by the capacity of the secondary tank and the heat exchanger to the primary thermal circuit. Therefore, the most critical disturbance is the change in the current demand, and presumably the disturbance produced by the demanded heat will be easily rejected by a controller that is valid for rejecting the current effect. The controllers obtained are evaluated by means of objectives that measure output errors and control efforts. The output error and control effort objectives will presumably be in conflict; it is therefore valuable to study them as independent objectives to analyze the trade-off between them. However, the system consists of two outputs and two control actions. It seems reasonable to aggregate the output error objectives (for both outputs) and the control effort objectives (for both control actions), as they have the same relative importance and equal magnitude.
Thus, we simplify the optimization process and the decision stage. f11 is the average absolute error in the stack temperature Tw_out, in °C. The objective f12 is the average absolute error in the stack inlet water temperature Tw_in, in °C. The objective f21 is the average absolute value of the rate of change of the control action Fw1, in (l/min)/s. The objective f22 is the average absolute value of the rate of change of the control action Fw2, in (l/min)/s. The design objectives have been defined as integrals divided by the test time to obtain average measurements. In this way, the objectives have some physical sense, and their interpretation is easier for the designer. Furthermore, the greater physical sense enables the designer to define the epsilon parameter (maximum degradation over the design objectives) in a simpler way. The first objective f1 is defined as the aggregation of the average errors in both outputs (Tw_out and Tw_in). The second objective f2 is defined as the aggregation of the average absolute derivatives of both control actions (Fw1 and Fw2). Therefore, the objective f1 measures the rejection of disturbances, while f2 measures the control effort. The MOP is defined as

min_x f(x) = [f1(x), f2(x)],   x = [Kc1, Ti1, Kc2, Ti2],

where

f1(x) = f11(x) + f12(x),
f2(x) = f21(x) + f22(x),

subject to

x_min ≤ x ≤ x_max,
fj(x) < 0.375, for j = 11, 12.

The constraints on f11 and f12 have been chosen to obtain the set of solutions in the designer's region of interest.
Thus, we improve the pertinency of the set, discarding undesirable solutions. Furthermore, the bounds of the decision space have been defined to find practical/realizable controllers.
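The aggregated objectives f1 and f2 can be evaluated from sampled closed-loop trajectories as follows. This is a sketch using finite differences; the exact integration scheme used in the paper is not specified:

```python
def mop_objectives(t, e_out, e_in, Fw1, Fw2):
    """Evaluate the two aggregated design objectives:
    f1 = f11 + f12, the time-averaged absolute errors of Tw_out and
    Tw_in (in degC), and f2 = f21 + f22, the time-averaged absolute
    rate of change of Fw1 and Fw2 (in (l/min)/s), from sampled
    trajectories (finite differences for the derivatives)."""
    T = t[-1] - t[0]
    f11 = f12 = f21 = f22 = 0.0
    for k in range(len(t) - 1):
        dt = t[k + 1] - t[k]
        f11 += abs(e_out[k]) * dt / T          # avg |error| in Tw_out
        f12 += abs(e_in[k]) * dt / T           # avg |error| in Tw_in
        f21 += abs(Fw1[k + 1] - Fw1[k]) / T    # avg |dFw1/dt|
        f22 += abs(Fw2[k + 1] - Fw2[k]) / T    # avg |dFw2/dt|
    return f11 + f12, f21 + f22
```

Note that averaging the integral of |dF/dt| over the test time T reduces the sum of the absolute flow increments divided by T, which is what the finite-difference form computes.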
Once the optimization problem is defined, the two main parameters of nevMOGA (ε and n) must be defined. In this MOP, the design objectives make physical sense (f1 measures the average error in °C, and f2 measures the average variation in the control actions). Therefore, it is easier to define the ε parameter (maximum acceptable degradation). ε1 and ε2 maintain the units of the design objectives f1 and f2, respectively. In this problem, ε1 = 0.05°C and ε2 = 0.0125 (l/min)/s have been defined for f1 and f2, respectively (ε = [0.05, 0.0125]). We then chose n = [0.5, 10, 0.5, 10] (neighborhood) based on the previously defined search space (approximately 4-5% of the search space). To optimize the defined MOP, nevMOGA is used with the following configuration: (i) Nind_GA = 16; (ii) Nind_P = 250; (iii) Generations = 2500; (iv) n_box = [20, 20]. These parameters have been defined to obtain an adequate distribution in the objective space (divisions for each dimension, parameter n_box), a sufficient number of new candidate solutions (Nind_GA and Generations), and an adequate number of individuals Nind_P in the population P(t) (the population that explores the search space). For the definition of the remaining parameters, the values suggested in [41] for the original algorithm (ev-MOGA) are used. Figure 8 shows the discrete set P*_Q,n obtained by nevMOGA. In the figure, to show the decision variables, we use the level diagram (LD, available in Matlab Central: https://es.mathworks.com/matlabcentral/fileexchange/62224-interactive-tool-for-decision-making-in-multiobjective-optimization-with-level-diagrams [42,43]) visualization tool, using the 2-norm (‖·‖2). The objective space is shown in an x-y graph because this is an MOP with only two objectives, so the trade-off between the design objectives can be analyzed more simply than with LD on the objective space.

[Figure 6: PI controller pairing scheme used to control the cooling system of the μ-CHP system.]
As seen in the figure, nevMOGA has been able to find a large number of nearly optimal controllers nondominated in their neighborhood with performances similar to the optimal ones. These controllers provide the designer with valuable information with which to make a better informed final decision, as will be shown below.
In every MOP, the final decision is a subjective decision based on the designer's preferences, knowledge, and previous experience. All the optimal solutions are equally valid but have different trade-offs between the design objectives. In this paper, we choose optimal alternatives with different trade-offs (from different areas of the Pareto front) that could be chosen by the designer (depending on his/her preferences). In this way, we validate the methodology independently of the specific preferences of the designer in question. There is no unique procedure for the designer's final decision. However, our procedure consists of the following: (1) we choose an optimal solution in a certain area of interest for the designer, and (2) we select significantly different solutions with similar performance in the design objectives. For these alternatives, we analyze new indicators not included in the design objectives.
In this way, solutions that are significantly different but have similar performance in the design objectives can obtain a significant improvement in the new indicators with respect to the initially chosen optimal solution. This is valuable information for the designer before the final decision.
To carry out the analysis before the final decision, we chose three optimal controllers (xA1, xB1, and xC1) in three different zones of the objective space. xA1 is a fast controller, that is, one with little output error in exchange for aggressive control action. Controller xC1 is slow, that is, it produces more output error in exchange for smooth control action. Finally, xB1 is a balanced controller. In addition, we choose the nearly optimal controllers xA2, xB2, and xC2. These controllers obtain performances similar to xA1, xB1, and xC1, respectively, while being significantly different in their parameters.
Let us now look at the fast controllers (x A1 and x A2 , see Table 3). Figure 9 shows the behavior of the system with both controllers.
The objective values of both controllers are shown in Table 4. The error in T w in (f 11 ) is greater for the nearly optimal controller x A2 . However, for the output T w out (f 12 ), the opposite occurs, and the optimal controller x A1 produces a greater error. With respect to the control effort, the x A1 controller is smoother for F w1 (f 22 ) and more aggressive for F w2 (f 21 ) than the x A2 controller (see Table 4). Therefore, although x A1 slightly dominates x A2 , there is no significant improvement (the behavioral differences are small). So, we look for new indicators to make a better-informed final decision.
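Dominance comparisons such as this one can be checked with a small helper. The sketch below assumes all objectives are minimized; the function name and the numeric vectors in the usage example are illustrative, not values taken from Table 4.

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b under
    minimization: no worse in every objective and strictly better
    in at least one."""
    f_a, f_b = np.asarray(f_a, float), np.asarray(f_b, float)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Illustrative vectors: the first is no worse everywhere and strictly
# better in the second objective, so it dominates.
print(dominates([1.0, 2.0], [1.0, 2.5]))   # True
# A trade-off (better in one objective, worse in the other) means the
# two vectors are incomparable: neither dominates.
print(dominates([1.0, 2.5], [1.2, 2.0]))   # False
```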
To study the selected controllers, we will analyze them in four different scenarios: an increased controller sample time, stack degradation, a change in the random component of the measurement noise, and model uncertainty (robustness). First, consider the sample time. We analyze how increasing this parameter affects the performance of the controllers obtained (through the design objectives). Suppose we increase the sample time to Ts = 0.4 seconds. Considering the dynamics of the process, this sample time is still perfectly valid. In this scenario, the nearly optimal controller x A2 dominates the optimal controller x A1 (the dominance is reversed).
The objective values in this new scenario are shown in Table 5.

Second, consider stack degradation. The electrical power provided by the stack also depends on its degradation. The manufacturer provides a voltage-current characteristic curve for certain stack operating temperatures. These curves are valid at the beginning of the stack's life. However, after hours of operation, the electrical power generated by the stack decreases: the characteristic curve degrades, providing less voltage at the same demanded current, which translates into less electrical energy [44, 45]. For example, [45] states that stack degradation is approximately 0.3-3% of the provided voltage for every 1000 hours of operation. Therefore, in this analysis, we study how this degradation affects the x A1 and x A2 controllers. The study is carried out by reducing the voltage provided by the stack by a given percentage (the degradation). With a degradation greater than or equal to 6%, the optimal controller x A1 no longer dominates the nearly optimal controller x A2 (see Table 5). In this scenario, the x A2 controller provides smoother control actions. Therefore, after hours of stack operation, the nearly optimal controller x A2 is no worse (understanding as worse the controller with the higher objective values) than the optimal controller x A1 .

Third, consider the measurement noise. The output temperatures of the real system are noisy. This noise is filtered to prevent it from propagating to the system control actions.
This noise has been included in the simulated system as white noise. The noise introduced has a random component that depends on its seed: the seed modifies the sequence of noise values at each instant while maintaining its nature and amplitude. Therefore, in this analysis, we study how the random component of the noise affects the performance of the controllers (by changing the seed). Suppose that the seed of the noise is randomly modified. In this new scenario, the nearly optimal controller x A2 dominates the optimal controller x A1 (the dominance is reversed, see Table 5). Therefore, in this new scenario (just as valid for the designer as the initial one), the x A2 controller could be optimal (and x A1 nearly optimal), and therefore, both controllers are equally good. The objective values obtained depend on the noise seed, so a statistical study of this aspect is appropriate.
This analysis can be made by performing more simulations with different noise seeds, and it can be carried out at the decision stage for the optimal and nearly optimal solutions only (much less expensive than performing it in the optimization stage). The analysis has been carried out on controllers x A1 and x A2 , evaluating them on 250 randomly obtained noise seeds. For 10% of the seeds the dominance is reversed, that is, the x A2 controller dominates the x A1 controller. In addition, the dominance between both controllers disappears for 17% of the seeds. In this analysis, x A1 is the more robust controller to seed changes. Despite this, it is shown that changes in the noise seed can change the Pareto set obtained.
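The seed study can be sketched as a small Monte Carlo loop. Here evaluate is a hypothetical stand-in for the closed-loop simulation that returns a controller's objective vector for a given noise seed; all names are illustrative, not part of the original tooling.

```python
import numpy as np

def dominates(f_a, f_b):
    """Pareto dominance under minimization."""
    f_a, f_b = np.asarray(f_a, float), np.asarray(f_b, float)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def seed_study(evaluate, x1, x2, seeds):
    """Re-evaluate two controllers for every noise seed and return the
    fraction of seeds for which the nominal dominance of x1 over x2 is
    reversed, and the fraction for which it simply disappears."""
    n_rev = n_lost = 0
    for seed in seeds:
        f1, f2 = evaluate(x1, seed), evaluate(x2, seed)
        if dominates(f2, f1):
            n_rev += 1            # dominance reversed: x2 now dominates x1
        elif not dominates(f1, f2):
            n_lost += 1           # neither dominates: dominance disappears
    return n_rev / len(seeds), n_lost / len(seeds)
```

In the study above, 250 randomly drawn seeds were used: 10% reversed the dominance of x A1 over x A2 and a further 17% removed it.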
Finally, we analyze how model uncertainty affects the controllers studied. To do this, we generate 50 variations of the model parameter values, varying all the parameters of the model (30 parameters, described in [35]) by ±15%. The controllers are evaluated through the design objectives on each of the 50 model variations. In this way, we measure the robustness of the controllers.

Figure 8: Set of optimal and nearly optimal controllers nondominated in their neighborhood (P Q,n ) obtained for the control design of the cooling system. Decision variables are shown using the LD visualization tool with the 2-norm (‖ · ‖ 2 ).

Figure 10 shows the degradation limits of the x A1 and x A2 controllers over the 50 variations of the plant. The degradation of x A2 is practically contained within the degradation limits of x A1 . Therefore, the nearly optimal controller x A2 is more robust and shows less variability under uncertainty in the model parameters.
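The robustness analysis can be sketched in the same spirit. The evaluate function below is again a hypothetical stand-in for simulating the closed loop under a perturbed parameter vector; the helper returns the per-objective envelope (the "degradation limits") over the sampled models. Names and defaults are illustrative.

```python
import numpy as np

def robustness_envelope(evaluate, controller, nominal_params,
                        n_samples=50, rel=0.15, seed=0):
    """Evaluate a controller on models whose parameters are each
    perturbed uniformly within ±rel of their nominal values, and
    return the per-objective (min, max) envelope over the samples."""
    rng = np.random.default_rng(seed)
    p = np.asarray(nominal_params, dtype=float)
    objs = []
    for _ in range(n_samples):
        perturbed = p * rng.uniform(1.0 - rel, 1.0 + rel, size=p.shape)
        objs.append(evaluate(controller, perturbed))
    objs = np.asarray(objs)
    return objs.min(axis=0), objs.max(axis=0)
```

A narrower envelope for one controller than for another would indicate, as in Figure 10, lower sensitivity to uncertainty in the model parameters.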
Therefore, after studying the fast controllers (x A1 and x A2 ), x A2 may be preferred by the designer: in the different scenarios (sample time, stack degradation, random noise change, and robustness analysis), this controller performs better than, or equal to, x A1 . Thus, in this case, considering the nearly optimal controller x A2 has been very useful for the designer. Not considering this controller (obtaining only the optimal controllers) would mean ignoring relevant information that enables a more informed decision.
Let us now analyze the compromise controllers (x B1 and x B2 in Figure 8). Figure 11 shows the system responses for both controllers. The controllers and their objective values are shown in Tables 3 and 4, respectively. The optimal controller x B1 shows a smaller error on T w in (f 11 ) with more aggressive control actions on F w2 (f 21 ). However, the nearly optimal controller x B2 shows smaller errors on T w out (f 12 ) with more aggressive control actions on F w1 (f 22 ). Again, although x B1 slightly dominates x B2 , no clear improvement is seen in their responses, and therefore, it may be useful to evaluate their performance on the alternative indicators in order to make a more informed final decision.
Table 5: Objective values of the x A1 and x A2 controllers in different scenarios.

Figure 9: Response of the x A1 (optimal) and x A2 (nearly optimal) controllers. Although f(x A1 ) ≺ f(x A2 ), there is no clear advantage of the x A1 controller over x A2 in their response.

Firstly, as above, the effect of increasing the sample time is studied. If the sample time is increased to Ts = 0.4 seconds, the optimal controller x B1 still dominates the nearly optimal controller x B2 (see Table 6). Secondly, we analyze how the degradation of the stack affects the controllers studied. In this case, with a degradation greater than or equal to 8%, x B2 dominates x B1 (the dominance is reversed, see Table 6). Therefore, after hours of stack operation, the nearly optimal controller x B2 is better than the optimal controller x B1 . Thirdly, we analyze the influence of the noise introduced at the system outputs on the controllers studied. Suppose we slightly decrease the amplitude of the noise introduced on T w out . In this scenario, x B1 does not dominate x B2 (see Table 6). Similarly, if we slightly increase the amplitude of the noise introduced on T w in , the x B2 controller is again not dominated by x B1 . Finally, we analyze the robustness of the compromise controllers (performed in the same way as in the previous comparison). In this situation, the optimal controller x B1 seems more robust than the nearly optimal controller x B2 (see Figure 12). Therefore, after this detailed study of the compromise controllers (x B1 and x B2 ), we can conclude that x B2 is, in certain scenarios, just as good as the optimal controller x B1 . However, in this case, the preference for the nearly optimal controller x B2 over x B1 is unclear. Again, this analysis is very valuable for the designer before making the final decision.
Let us now study the slow controllers (x C1 and x C2 , see Figure 8).
The objective values of both controllers are shown in Table 4. Figure 13 shows the outputs and control actions of both controllers. The two controllers behave significantly differently despite having similar performance (objective values). Therefore, it does not seem reasonable to select the final solution only from the results of the optimization (obtained for a specific scenario); it seems more reasonable to let the designer choose between these solutions by consulting additional information not taken into account in the optimization.
The error in T w in is greater for the nearly optimal controller x C2 (f 11 ). However, for the output T w out , the opposite occurs, and the optimal controller x C1 has the greater error (f 12 ). With respect to the control effort, the x C1 controller is smoother for F w1 (f 22 ) and more aggressive for F w2 (f 21 ) than the x C2 controller (see Table 4). Therefore, x C1 slightly dominates x C2 , but the significant differences in their behavior invite a deeper study of both controllers with the goal of making a more informed decision.
Firstly, we analyze how increasing the sample time affects the performance of the controllers obtained, analogously to the study of the previous controllers. Again, suppose we increase the sample time to Ts = 0.4 seconds. In this scenario, the optimal controller x C1 does not dominate the nearly optimal controller x C2 (see Table 7). Secondly, we analyze how the degradation of the stack affects the slow controllers. In this study, x C2 is dominated by x C1 independently of the degradation of the stack (see Table 7); therefore, the degradation of the stack does not affect the dominance of these controllers. Thirdly, suppose that the seed of the noise is randomly modified. In this scenario, x C1 does not dominate x C2 (see Table 7). Therefore, in another scenario (just as valid for the designer as the initial one), the x C2 controller could be optimal, and therefore, both controllers can be equally good. The same study made on controllers x A1 and x A2 is carried out with controllers x C1 and x C2 . In this case, 55% of the noise seeds (out of 250 seeds randomly obtained) make the dominance disappear, and no seed reverses the dominance.
Thus, it is shown that varying the noise seed can vary the Pareto set obtained. Finally, we analyze the robustness of the slow controllers (in the same way as in the previous comparisons). In this context, x C1 seems to be a more robust controller than x C2 (see Figure 14). Therefore, after this deep study of the slow controllers, we conclude that the nearly optimal controller x C2 could be optimal in another scenario; in fact, in certain scenarios, both controllers can be considered equally good. Again, this information is very useful for the designer before making a final decision.

Table 6: Objective values of the x B1 and x B2 controllers in different scenarios.

Figure 12: Degradation limits of the x B1 (optimal) and x B2 (nearly optimal) controllers due to uncertainty in the model. The model parameters are varied by ±15%.

Thus, in this section, a deep study has been carried out for the design of the control of the inlet and outlet water temperatures of the PEMFC stack. Using the proposed methodology, the utility of considering nearly optimal controllers nondominated in their neighborhood has been demonstrated. These controllers can be equal to (or better than) the optimal ones in different scenarios (sample time, stack degradation, noise change, and robustness analysis). Analyzing all these scenarios within the design objectives is unapproachable. However, the designer can analyze them at the decision stage. In this context, the nearly optimal solutions nondominated in their neighborhood are very relevant alternatives for the designer. Thanks to these alternatives, controllers significantly different from the optimal ones have been obtained. Due to this difference (in parameters), some of them show improvements in scenarios not studied in the optimization stage. Thanks to the analysis made at the decision stage, the designer can make the final decision with valuable additional information.
Figure 13: Response of the x C1 (optimal) and x C2 (nearly optimal) controllers. Both responses are significantly different, although they have similar performance.

Figure 14: Degradation limits of the x C1 (optimal) and x C2 (nearly optimal) controllers due to uncertainty in the model. The model parameters are varied by ±15%.

Conclusions

In this paper, the design of the multivariable control system for the cooling circuit of a PEMFC stack has been presented. This system is complex and challenging: interactions between variables, highly nonlinear dynamic behavior, etc. In the design, in addition to the set of optimal controllers under a multiobjective approach, the set of potentially useful nearly optimal controllers has been taken into account. Various aspects were analyzed on the potentially useful solutions, for example, the influence of certain parameters of the simulator on the Pareto front. This is an aspect little considered in the literature, and in this work, it has been shown how it can vary the Pareto set obtained (for instance, when the seed of the noise is changed). In addition, the effects of fuel cell degradation and robustness have also been analyzed. As observed in this work, this can be valuable information for the designer before the final decision. Therefore, this paper highlights the usefulness of this methodology for the in-depth analysis of several aspects that can influence the tuning of a control structure, especially in the design of the multivariable control of complex systems such as the cooling system of a PEM fuel cell. Including all the aspects of interest to the designer in the optimization stage is generally computationally unapproachable and complicates the analysis of the results. However, in this work, we have analyzed these aspects (not contemplated in the optimization stage) in the decision-making phase. Thus, the designer can consider them without excessively increasing the computational cost of solving the MOP.
In this context, the diversity of the set of controllers obtained takes on an even more relevant role. Different controllers can provide an improvement (even a significant one) in several aspects of interest not included in the MOP. Therefore, in this situation, nearly optimal controllers nondominated in their neighborhood play an essential role.
These controllers provide the designer with solutions with performances similar to the optimal ones but with significantly different characteristics. Consequently, they provide greater diversity in the set without excessively increasing the number of solutions obtained.
In this paper, we have analyzed various scenarios of interest to the designer using the obtained set of controllers P * Q,n . Firstly, with an increased sample time, nearly optimal controllers can improve on the optimal controllers; this means that the way the simulation itself is carried out can change the result obtained when solving the MOP. Secondly, we have analyzed how the degradation of the stack affects the performance of the controllers. In this scenario, nearly optimal controllers can again improve on the optimal ones. With this analysis, the designer can opt for nearly optimal controllers instead of controllers that are optimal only at the beginning of the stack's life. In addition, we have analyzed the effect of the seed used to generate the introduced noise. A change in this seed can cause the nearly optimal alternatives to outperform the optimal ones. Generally, the noise seed is a parameter chosen not by the designer but by the simulation tool. This implies that both scenarios (each with a different noise seed) are equally valid for the designer, and therefore, both types of controllers (optimal and nearly optimal) are also equally valid. Finally, a robustness analysis of the controllers obtained has been carried out, assessing the impact of uncertainties in the parameters of the nonlinear model. In some cases, the nearly optimal solutions are more robust.
In summary, it is worth considering these additional solutions as they can provide new and perfectly valid design alternatives.
This analysis has a computationally acceptable cost because it is applied only to a limited number of solutions: the set of optimal and nearly optimal controllers obtained. Incorporating it into the MOP statement would entail a high computational cost that may not be assumable. All of these analyses can move the designer's final choice towards a nearly optimal rather than an optimal controller. Thanks to the methodology presented, the designer can make a final decision with additional valuable information (ignored in the classic MOP) by obtaining potentially useful new controllers. Therefore, this work has revealed the relevance of nearly optimal alternatives nondominated in their neighborhood in the design of multivariable control systems. Given the usefulness of the proposed approach for controller design, as future work, we plan to apply this methodology to system identification, where nearly optimal models can be potentially useful for the designer.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.