PSO Based Optimization of Testing and Maintenance Cost in NPPs

Testing and maintenance activities of safety equipment have drawn much attention in Nuclear Power Plants (NPPs) for risk and cost control. These activities are usually implemented in compliance with the technical specifications and maintenance requirements. Technical specification and maintenance-related parameters, such as the allowed outage time (AOT) and the maintenance period and duration, are associated with the risk level and the operating cost of an NPP, both of which need to be minimized. The problem can be formulated as a constrained multiobjective optimization model, a formulation widely used in many other engineering problems. Particle swarm optimization (PSO) has proved its capability to solve this kind of problem. In this paper, we adopt PSO as the optimizer for the multiobjective optimization problem; it iteratively tries to improve candidate solutions with regard to a given measure of quality. Numerical results demonstrate the efficiency of the proposed algorithm.


Introduction
Improving the availability of safety-related systems has drawn much attention in Nuclear Power Plants (NPPs). One way to increase the availability of these systems is to improve the availability of the equipment that constitutes them, so NPPs pursue more efficient testing and maintenance activities to control both risk and cost. Safety-critical equipment is normally on standby until an accident situation requires the safety-related systems to prevent or mitigate the accident. To keep safety-related systems at a high level of availability or safety, regular testing and maintenance activities are implemented. An efficient regular testing and maintenance strategy can improve the availability of the systems, but it also leads to considerable expenditure. Therefore, both risk control and cost effectiveness have drawn much attention in NPPs [1, 2].
Technical specifications define the limits and conditions for operating an NPP; they can be seen as a set of safety rules and criteria required as part of the safety analysis report of each NPP. Both technical specifications and maintenance activities are associated with controlling risk and hence with the availability of safety-related systems. The risk-controlling rules and criteria formally enter the optimization problem. Using a limited expenditure resource to keep safety-critical equipment at a high level of availability or safety is actually a constrained multiobjective optimization problem in which the cost or burden (number of tests conducted, duration, incurred cost, and so forth) is to be minimized while the unavailability, that is, the performance of the safety-critical equipment, is constrained to a given level.
By now, researchers have made substantial progress in this area. References [3, 4] presented a constrained multiobjective optimization model solved with a genetic algorithm (GA); reference [5] first presented PSA-based techniques to solve a risk-cost maintenance and testing model of an NPP using a GA; reference [6] put forward a multiobjective approach to regulating an NPP; reference [7] presented a fuzzy-genetic approach to optimizing the test intervals of safety systems at an NPP considering parameter uncertainty. In this paper, we put forward using PSO to solve the constrained multiobjective optimization problem that models testing and maintenance activities; to our knowledge, this is the first time the PSO method has been used for this problem in NPPs. PSO is a heuristic algorithm that obtains a solution by iteratively trying to improve candidate solutions with regard to a given measure of quality. Numerical results demonstrate the soundness of the PSO method.
The plan of this paper is as follows: Section 2 presents the unavailability and cost models of critical systems/components of an NPP; Section 3 gives the multiobjective problem model; Section 4 reviews the PSO method; Section 5 presents a case study; finally, Section 6 draws a short conclusion.

System Risk and Cost Function
2.1. System Unavailability Model. For nuclear facilities, system unavailability is classified into three types: component unavailability, common-cause failures, and human errors. In this paper, we consider only the component unavailability caused by random failures and by test and maintenance activities, which are functions of the optimization variables such as the test interval, test duration, maintenance period, and allowed outage time. The system unavailability is often modeled by a fault tree using the rare-event approximation as follows [4]:

U_S(x) = Σ_i Π_{j ∈ MCS_i} u_ij(x),    (1)

where x is the decision variable vector, the sum in (1) runs over the number of minimal cut sets generated from the considered system structure function, the product in (1) runs over the number of basic events belonging to the corresponding minimal cut set (MCS), and u_ij(x) represents the unavailability of the basic event j contained in MCS i.
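The rare-event approximation in (1) can be sketched in code as follows; the cut sets and basic-event unavailability values are hypothetical illustrations, not plant data:

```python
# Sketch of (1): system unavailability as the sum over minimal cut sets (MCS)
# of the product of the basic-event unavailabilities in each cut set.

def system_unavailability(mcs_list, u):
    """mcs_list: list of minimal cut sets, each a list of basic-event names.
    u: dict mapping basic-event name -> unavailability."""
    total = 0.0
    for mcs in mcs_list:
        prod = 1.0
        for event in mcs:
            prod *= u[event]
        total += prod
    return total

# Hypothetical two-train example: the system fails only if both trains fail.
u = {"pump_A": 1e-3, "valve_A": 5e-4, "pump_B": 1e-3, "valve_B": 5e-4}
mcs = [["pump_A", "pump_B"], ["pump_A", "valve_B"],
       ["valve_A", "pump_B"], ["valve_A", "valve_B"]]
print(system_unavailability(mcs, u))  # ≈ (1e-3 + 5e-4)**2 = 2.25e-6
```

The approximation is accurate when the basic-event unavailabilities are small, which is the usual regime for safety equipment.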
The unavailability expressions for basic events caused by random failure are written as [4]:

u(t) = ρ + λt,    (2)
ū = ρ + λT/2.    (3)

Equation (2) is the time-dependent unavailability, evaluated at t = T, where ρ denotes the per-demand failure probability and λ represents the standby failure rate. Equation (3) is the average of the time-dependent unavailability over a given time span T.
To reflect the effects of age, preventive maintenance, and working conditions, an averaged standby failure rate λ̄ is developed in (4)-(5) [8, 9].

Table 1: Meanings of the parameters used in (4)-(5).

λ0: the residual standby failure rate
α: the linear aging factor
Ψ(x): working condition factor
ε: the maintenance effectiveness factor
M: the period of a preventive maintenance
L: overhaul period to replace a component

Note that (4) is applicable for the proportional age setback (PAS) model, and (5) is used for proportional age reduction (PAR). The meanings of the parameters involved in (4) and (5) are listed in Table 1.
Therefore, one can substitute (4) or (5) into (2)-(3) to account for the random-failure contribution while considering the effects of age, preventive maintenance, and working conditions.
Next, consider the models that account for the effect of testing and maintenance activities. References [9, 10] developed three expressions, given in (6), to characterize these effects; the notations used in (6) are listed in Table 2. Given a test interval T and a preventive maintenance period M, the downtime contributions of testing, preventive maintenance, and corrective maintenance can be calculated as in (7)-(9). Note that, in (9), u(T) can be calculated from (2) by replacing t with T.
Additionally, the corrective maintenance downtime is often restricted by the allowed outage time (AOT) in a standard PRA and is calculated through the relationship (10), where D represents the allowed downtime and μ is the mean time to repair.
The meanings of the notations used in (11) are listed in Table 3.
Then the system cost model is easily formulated by summing the cost contributions of the relevant components:

C_S(x) = Σ_{i=1}^{N_c} c_i(x),    (12)

where N_c represents the total number of components in the considered system. Obviously, the whole model can be cast as a multiobjective optimization problem.
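Since the per-component cost expressions of (11) are not reproduced here, their general structure (per-activity cost rates multiplied by expected downtime, summed over components as in (12)) can be sketched as follows; the function names, the cost coefficients, and the assumption that the preventive maintenance duration equals the mean repair time are all illustrative, not the paper's model:

```python
def component_cost(T, M, tau, mu, lam, cht, chm, chc, horizon=8760.0):
    """Illustrative yearly cost of one component.
    T: test interval [h]; M: PM period [h]; tau: test duration [h];
    mu: mean repair/maintenance time [h]; lam: standby failure rate [1/h];
    cht/chm/chc: hourly cost of testing / preventive / corrective work."""
    test_cost = (horizon / T) * tau * cht        # number of tests x duration x rate
    pm_cost = (horizon / M) * mu * chm           # PM duration taken equal to mu (assumption)
    corrective_cost = lam * horizon * mu * chc   # expected failures x repair time x rate
    return test_cost + pm_cost + corrective_cost

def system_cost(components):
    # the system-level cost sums the per-component contributions, as in (12)
    return sum(component_cost(**c) for c in components)

comps = [dict(T=730.0, M=8760.0, tau=2.0, mu=24.0, lam=1e-5,
              cht=10.0, chm=20.0, chc=50.0)]
print(system_cost(comps))
```

Longer test intervals reduce the testing cost term but raise the average unavailability of (3), which is exactly the tradeoff the multiobjective formulation captures.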

2.3. Constraints.
The presence of constraints significantly affects the performance of an optimization algorithm, including evolutionary search methods. Satisfying constraints is often a difficult problem in itself. A number of approaches have been proposed to impose constraints, including rejection of infeasible solutions, penalty functions and their variants, repair methods, use of decoders, and so on. A comprehensive review of constraint-handling methods is provided by Michalewicz [11]. All the methods have had limited success, as they are problem dependent and require a number of additional inputs. When the constraints cannot all be satisfied simultaneously, the problem is often deemed to admit no solution; the number of constraints violated and the extent to which each constraint is violated then need to be considered in order to relax the preference constraints. Generally speaking, we can impose constraints over (1) the objective functions and (2) the values the decision variables in vector x can take. In our first case, we apply a constraint over one of the two possible objective functions, risk or cost, which alternately acts as an implicit restriction function: for example, if the objective function to be minimized is the risk, then the constraint is a restriction over the maximum allowed value of the corresponding cost. In our second case, we handle constraints directly over the values the decision variables in vector x, that is, the technical specification and maintenance requirements (TS&M) parameters, can take; these are referred to as explicit constraints. Examples of this type of constraint in optimizing TS&M include limiting the STIs, AOTs, and the preventive maintenance (PM) period to typified values because of practical planning considerations, each representing, for example, an hour, a day, a month, or any other realistic period in the plant, instead of purely mathematical solutions that are often impractical.
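Of the constraint-handling approaches listed above, the penalty-function idea can be sketched as follows; the toy objective, constraint, and penalty weight are illustrative, not the paper's models:

```python
# Penalty-function constraint handling: an infeasible solution keeps its
# objective value plus a penalty proportional to the total constraint violation.

def penalized_objective(f, x, constraints, weight=1e6):
    """f: objective function; constraints: list of callables g with g(x) <= 0
    meaning feasible. Returns f(x) plus a weighted sum of violations."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return f(x) + weight * violation

# Hypothetical example: minimize cost subject to unavailability <= 1e-4,
# where x[0] is a test interval in hours.
cost = lambda x: 100.0 / x[0]        # cost falls as the test interval grows
unavail = lambda x: 1e-6 * x[0]      # unavailability grows with the interval
g = [lambda x: unavail(x) - 1e-4]    # g(x) <= 0 means feasible

print(penalized_objective(cost, [50.0], g))   # feasible: penalty term is zero
print(penalized_objective(cost, [200.0], g))  # infeasible: penalty dominates
```

A large weight steers the search back into the feasible region while still ranking infeasible candidates by how badly they violate the constraint.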

Multiobjective Optimization Model
A general multiobjective optimization problem has n-dimensional decision variables and m-dimensional optimization objectives and can be expressed as follows:

min y = F(x) = (f_1(x), f_2(x), ..., f_m(x))
s.t. g_i(x) ≤ 0, i = 1, ..., p,
     h_j(x) = 0, j = 1, ..., q,
     x_min ≤ x ≤ x_max,    (13)

where x belongs to the n-dimensional decision space and y to the m-dimensional objective space, with F defining m mapping functions from the decision space to the objective space. The g_i(x) represent p inequality constraints, the h_j(x) represent q equality constraints, and x_min and x_max denote the lower and upper bounds of the decision variables, respectively. The specific multiobjective optimization model for optimizing testing and maintenance activities at an NPP can then be expressed as

min {U_S(x), C_S(x)}
s.t. U_S(x) ≤ U_0,
     x_min ≤ x ≤ x_max,    (14)

where U_0 is a given unavailability limit. In this paper we use PSO-based techniques to solve the minimization problem (14).
Multiobjective optimization has been applied in many fields of science, including engineering, economics, and logistics, where optimal decisions need to be taken in the presence of tradeoffs between two or more conflicting objectives. For a nontrivial multiobjective optimization problem, no single solution simultaneously optimizes every objective; instead of a single solution there is a set of alternative solutions [12]. None of these solutions can be said to be better than the others with respect to all objectives; since none of them is dominated by the others, they are called nondominated solutions. The curve formed by connecting these solutions is known as the Pareto optimal front, and the solutions lying on this curve are called Pareto optimal solutions. Classical optimization methods suggest converting a multiobjective problem into a single-objective one, emphasizing one particular Pareto-optimal solution at a time. This is their disadvantage, because the whole set of Pareto-optimal solutions is often required. Recently, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed [13-15], motivated by their ability to obtain multiple Pareto-optimal solutions in a single simulation run. Among these algorithms, the genetic algorithm (GA) is well established and popular for multiobjective optimization problems; GAs are adaptive methods that can be used in search and optimization problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EAs), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. Particle swarm optimization (PSO) is another type of evolutionary algorithm and shares many similarities with evolutionary computation techniques such as GAs: the system is initialized with a population of random solutions and searches for optima by updating generations. Unlike a GA, however, PSO has no evolution operators such as crossover and mutation; instead, its particles remember their historical optimal positions. PSO is therefore simple, flexible, and easy to operate compared with a GA. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Like many other EAs, such as GAs and Ant Colony Optimization (ACO), PSO can find Pareto optimal solutions and has a chance of finding near-global solutions, only with a different probability and reliability of finding them. More details of PSO are introduced later.
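The dominance relation that defines the nondominated set can be sketched as follows; both objectives are minimized, and the (risk, cost) pairs are illustrative:

```python
# Pareto dominance for minimization: a dominates b if a is no worse in every
# objective and strictly better in at least one.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the points not dominated by any other point in the list."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Illustrative (unavailability, yearly cost) pairs for candidate solutions.
pts = [(1e-4, 3000.0), (2e-4, 2000.0), (2e-4, 3500.0), (5e-5, 5000.0)]
print(nondominated(pts))  # (2e-4, 3500.0) is dominated and filtered out
```

The surviving points are exactly the Pareto optimal solutions described above: lowering one objective among them always raises the other.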

The Multiobjective Optimization Algorithm Based on PSO

4.1. Particle Swarm Optimization. Particle swarm optimization (PSO) has been successfully used to optimize nonlinear functions, combinatorial optimization problems, and multiobjective problems [16, 17] because of its simplicity, flexibility, ease of operation, and fast convergence. PSO is originally attributed to Eberhart et al. [18, 19] and was first intended for simulating social behaviour [20], as a stylized representation of the movement of organisms in a bird flock or fish school; the algorithm was then simplified and observed to perform optimization. The book by Kennedy and Eberhart [21] describes many philosophical aspects of PSO and swarm intelligence, and an extensive survey of PSO applications is given by Poli [22]. The basic idea of PSO is to have a population of candidate solutions, here dubbed particles, and to move these particles around the search space according to simple mathematical formulas over each particle's position and velocity. Each particle's movement is influenced by its local best known position and is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. However, like many other metaheuristic methods, PSO does not guarantee that an optimal solution is ever found. On the other hand, PSO does not use the gradient of the problem being optimized, so it can also be applied to optimization problems that are partially irregular, noisy, time varying, and so forth.
Let n be the number of particles in the swarm, each having a position X_i = (x_i1, x_i2, ..., x_iD)ᵀ in the search space and a velocity V_i = (v_i1, v_i2, ..., v_iD)ᵀ. Let P_i = (p_i1, p_i2, ..., p_iD)ᵀ be the best known position of particle i. The best particle in the swarm is labeled g, and P_g = (p_g1, p_g2, ..., p_gD)ᵀ is the best known position of the entire swarm, that is, the optimal position found so far in the search history. Particles update their velocities and positions according to

v_id^(k+1) = w v_id^k + c_1 r_1 (p_id - x_id^k) + c_2 r_2 (p_gd - x_id^k),    (15)
x_id^(k+1) = x_id^k + v_id^(k+1),    (16)

where d = 1, 2, ..., D denotes the dth dimension of the particles, k is the iteration number, and w v_id^k is the momentum part. A basic PSO algorithm is shown in Algorithm 1, and its summary flow chart in Figure 1.
The basic PSO described above has a small number of parameters that need to be fixed. One is the population size, which is often set empirically on the basis of the dimensionality and perceived difficulty of the problem; values in the range 20-50 are quite common.
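A minimal single-objective implementation of the update rules (15)-(16) can be sketched as follows; the parameter values (w = 0.7, c_1 = c_2 = 1.5, 30 particles) are common textbook choices rather than the paper's settings, and the sphere function stands in for the risk or cost objective:

```python
import random

def pso(f, lo, hi, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # positions and velocities, randomly initialized in the box [lo, hi]^dim
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal best positions (p_i)
    pbest = [f(x) for x in X]              # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]           # swarm best position (p_g) and fitness
    vmax = 0.2 * (hi - lo)                 # simple velocity clamp
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # (15): momentum + cognitive pull + social pull
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                # (16): position update, clipped to the feasible box
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# sphere function as a stand-in objective; its minimum is at the origin
best_x, best_f = pso(lambda x: sum(v * v for v in x), -5.0, 5.0, dim=2)
print(best_x, best_f)  # converges near the origin
```

The velocity clamp plays the role of v_max discussed below; without some damping the particle dynamics can diverge.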
There are also many variants of the basic PSO, such as the PSO with inertia weight [19] and the PSO with constriction coefficients [23]. The PSO with inertia weight is motivated by the desire to better control the scope of the search, reduce the importance of v_max, and perhaps eliminate it altogether. The PSO with constriction coefficients is based on the observation that some form of damping of the particle dynamics (e.g., via v_max) is necessary. These variants are applicable in different situations. In this paper, we consider only the basic PSO in order to verify the validity of PSO for minimizing testing and maintenance cost in an NPP.

4.2. The Proposed Multiobjective Optimization Algorithm.
The basic idea of the multiobjective optimization algorithm based on PSO in this paper is that, by repeatedly splitting and merging the dominated and nondominated sets, we can reach a better balance between efficiency and accuracy. This is based on the idea of fitness dominance, similar to the idea in [23-25]. Let the initial population size be N. Let P_Set be a nondominated subset of the population with size N_1, and let NP_Set be a dominated subset of size N_2, where N_1 + N_2 = N (1 ≤ N_1, N_2 ≤ N). For any element q_j ∈ NP_Set there exists at least one element p_i ∈ P_Set such that p_i dominates q_j (1 ≤ i ≤ N_1, 1 ≤ j ≤ N_2). In each iteration, only the elements in NP_Set are updated and then compared with the elements in P_Set according to the fitness-dominance rules. The dynamic switching strategy can then be described as follows: for any p_i ∈ P_Set, if there exists q_j ∈ NP_Set such that q_j dominates p_i, then switch the positions of these two elements. With this preparation, we are ready to describe the multiobjective optimization algorithm based on the basic PSO (abbreviated MOBPSO; see Algorithm 2). Let the constraint for the dth dimension of X_i = (x_i1, x_i2, ..., x_iD)ᵀ be x_d,lower ≤ x_id ≤ x_d,upper (1 ≤ d ≤ D).
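The dynamic switching step described above can be sketched as follows; the two-element sets are a toy illustration, and both objectives are minimized:

```python
# Sketch of the MOBPSO dynamic switching strategy: whenever a particle in the
# dominated set NP_Set comes to dominate one in the nondominated set P_Set,
# the two swap sets.

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def switch(p_set, np_set):
    """p_set: current nondominated objective vectors; np_set: dominated ones.
    Swap any np_set element that dominates a p_set element."""
    for j, q in enumerate(np_set):
        for i, p in enumerate(p_set):
            if dominates(q, p):
                p_set[i], np_set[j] = q, p   # exchange positions across the sets
                break
    return p_set, np_set

p_set = [(2.0, 2.0)]
np_set = [(1.0, 1.0)]          # dominates (2.0, 2.0)
print(switch(p_set, np_set))   # ([(1.0, 1.0)], [(2.0, 2.0)])
```

In the full algorithm this step runs after each velocity/position update of NP_Set, so improving particles migrate into P_Set without re-evaluating the whole population.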

Case Study
In this section, the high-pressure injection system (HPIS) is considered as a case study; its simplified structure diagram is shown in Figure 2.
The system consists of 7 valves and 3 pumps. Its function is to draw water from the refueling water storage tank (RWST) and discharge it into the cold legs of the reactor cooling system through either of the two inlets, A or B. The component reliability parameters are listed in Table 4.
The components cost information is shown in Table 5.
The constraints on the test intervals are listed in Table 6.
The constraints on components preventive maintenance intervals are listed in Table 7.
Algorithm 1 (basic PSO):
(1) Initialize a population array of particles with random positions and velocities on D dimensions in the search space.
(2) For loop.
(3) For each particle, evaluate the desired optimization fitness function in the D variables.
(4) Compare the particle's fitness evaluation with its personal best. If the current value is better, set the personal best fitness to the current value and the personal best position P_i to the current location X_i in D-dimensional space.
(5) Identify the particle in the neighborhood with the best success so far, and assign its index to the variable g.
(6) Change the velocity and position of the particle according to (15)-(16).
(7) If a criterion is met (usually a sufficiently good fitness or a maximum number of iterations), exit the loop.
(8) End loop.

The decision variable vector is chosen as the allowed outage times for the valves and the pumps, respectively. For the valves, the permitted range is 4 h ≤ d_1 ≤ 24 h; for the pumps, the permitted range is 24 h ≤ d_2 ≤ 168 h. Our first case of study encompasses two different single-objective optimization problems, considering the optimization of the risk function while adopting the cost function as an implicit constraint, or vice versa.
Firstly, we choose the system yearly cost as the objective function and treat the system unavailability as a constraint (U_S ≤ 1.0E-4). The customized basic PSO described in the previous section is used as the optimizer. The optimization results are shown in Figure 3: the mean fitness (cost) of the particles decreases gradually as the iteration number increases.
Then, we choose the system yearly unavailability as the objective and treat the system yearly cost as a constraint (C_S < 2460 RMB). The optimization results are shown in Figure 4: the mean unavailability also decreases as the iteration number increases.
The optimized results are shown in Table 8.
As observed in Figures 3-4, the objective functions converge as the iteration number increases, and the PSO optimizer finally presents a valid optimized variable vector. In the first subcase, the optimized results keep the considered system at nearly the same risk level while greatly reducing the cost; in the second subcase, the optimized solutions notably decrease the system risk level while remaining at almost the same expenditure cost.
In the second case, we treat the model as a multiobjective optimization problem and obtain the nondominated solutions with the MOBPSO algorithm; Figure 5 shows the obtained nondominated solutions.

Conclusions
One important aspect of the nuclear industry is to improve the availability of safety-related equipment at NPPs so as to achieve high safety at low cost. In this paper, a multiobjective optimization algorithm based on PSO has been applied to the constrained multiobjective optimization of the testing and maintenance-related parameters of safety-related equipment. The numerical results indicate that PSO is a powerful optimization method for finding NPP configurations with minimal cost and unavailability. Based on MOBPSO, we also obtain the nondominated solution set. The results verify that PSO is capable of finding nondominated solutions of a constrained multiobjective problem for resource effectiveness and risk control in NPPs. The multiobjective optimization algorithm based on PSO deserves more attention in the optimization of testing and maintenance activities of safety equipment at NPPs. Exploring its capacity (including the topology structure, the optimal selection of parameters, and the integration with

Table 2: Meanings of the notations in (6).

Mean downtime of corrective maintenance
Fraction of the total time t with the component down
Fraction of the total time T with the component down

2.2. Cost Function. As to equipment, the cost expressions due to implementing test and maintenance activities are often expressed as in (11) [8-10].
In (15), c_1 and c_2 are the acceleration constants, whose values are often in [0, 2], and v_id^k is the particle velocity of the last iteration. The first part of formula (15), w v_id^k, is the momentum part; the second part, c_1 r_1 (p_id - x_id^k), is the cognitive part; and the third part, c_2 r_2 (p_gd - x_id^k), is the social part, where r_1 ~ U(0, 1) and r_2 ~ U(0, 1) are two independent identically distributed random numbers. Generally, v_id is kept within the range [v_min, v_max].

Table 4: Components reliability parameters.

Algorithm 2 (MOBPSO):
(1) Initialize a population array of particles with random positions and velocities on D dimensions in the search space; the population size is N = Pop_Max.
(2) For i = 1 to N: evaluate the fitness function in the D-dimensional variables, namely F(X_i) = {f_1(X_i), f_2(X_i), ..., f_m(X_i)}. End for.
(3) Divide the initial population into two subsets, P_Set and NP_Set, whose population sizes are N_1 and N_2, respectively.
(4) Update the velocity and position of each particle according to (15)-(16), where P_g is selected randomly from the subset P_Set. For the constraint-handling approach, accept the update to x_id^(k+1) = x_id^k + v_id^(k+1) only if x_id^k + v_id^(k+1) is in the constraint interval, namely if it is feasible.
(5) Dynamic switching strategy: compare each particle in NP_Set with those in P_Set. Let the particles in NP_Set be q_1, ..., q_j, ..., q_{N_2} and the elements in P_Set be p_1, ..., p_i, ..., p_{N_1}. For j = 1 to N_2: for i = 1 to N_1: if F(q_j) < F(p_i), switch q_j and p_i and then update their indices and positions in the sets. If there exist k identical particles in P_Set, delete them and re-initialize k particles in NP_Set, and vice versa. Update N_1 and N_2. If N_1 ≠ N or the given maximum iterative number has not been reached, go to Step 3.