Optimization for the Redundancy Allocation Problem of Reliability Using an Improved Particle Swarm Optimization Algorithm

This paper presents an improved particle swarm optimization (PSO) approach to solving reliability-redundancy allocation problems in series, series-parallel, and complex systems. The problems mentioned above are solved by maximizing the overall system reliability while limiting the system cost, weight, and volume. To achieve this under these nonlinear constraints, an approach is developed based on PSO. In particular, the inertia and acceleration coefficients of the classical particle swarm algorithm are improved by considering a normal distribution for the coefficients. The new expressions enhance the global search ability in the initial stage, restrain premature convergence, and enable the algorithm to focus on the local fine search in the later stage, thereby improving the quality of the optimization process. Illustrative examples are provided as proof of the efficiency and effectiveness of the proposed approach. Results show that the overall system reliability compares favorably with that of approaches developed in previous studies for all three tested cases.


Introduction
With the continuous advance of technology and the increasing complexity of industrial systems, it has become imperative for all production processes to perform adequately during their designed life cycle. However, errors do occur and can be linked to human factors in processing or utilization, improper storage facilities, poor maintenance, or several other environment-related factors [1]. Therefore, over the last three decades, system reliability has become an important issue in improving the performance of any industrial system. Reliability is defined as the probability that a product successfully achieves a set functionality goal within a given timeframe and in a controlled environment. Two approaches have been commonly used by designers to achieve the desired system reliability. The first is increasing the component reliability, and the second is dividing the system into multiple subsystems and using redundant components with the same or lower reliability for the different subsystems. The first approach is quite expensive (i.e., higher reliability increases the component cost), and the required reliability improvement may fail to be realized even when the most reliable components are used. The second approach involves using a combination of optimal redundant components. Although the reliability of the entire system is improved accordingly, the associated cost, weight, and volume are also affected. The problem of maximizing system reliability through the selection of redundancy levels and component reliabilities is called a reliability-redundancy allocation problem (RRAP). The RRAP is a nondeterministic polynomial-time hard problem whose solution cannot be obtained by direct, indirect, or mixed search approaches because of the mixed discrete-continuous search space; that is, the optimum numbers of components are integers, whereas the component reliabilities are continuous.
Recently, many methods have been tried, such as metaheuristics [2][3][4], evolutionary algorithms [5], bee/ant colony optimization [6,7], Tabu search [8], particle swarm optimization (PSO) [9,10], artificial neural networks [11], artificial immune systems [12], fuzzy systems [13,14], the cuckoo search algorithm [15][16][17], the reduced gradient method [18], the branch and bound method [19], integer programming [20], dynamic programming [21], and even some biogeography-based optimizations (BBOs) [22]. The very variety of these approaches shows that efficiency and computing time can still be improved, and each technique has its advantages and limitations. For example, dynamic programming is only applicable where the objective functions and constraints are decomposable. To accommodate several constraints within a dynamic programming formulation, a Lagrangian multiplier for the weight/volume/cost constraints, or a surrogate constraint combining cost, volume, and weight into one, is utilized. Bee and ant colony optimization are more adequate for difficult combinatorial problems. The required memory grows rapidly with the RRAP size when using the branch and bound approach. Genetic algorithms (GAs) perform poorly where an exact solution is required but are excellent at reaching a globally promising region. The cuckoo search algorithm is a relatively recent development, introduced in 2009 [17]; it appears to require fewer parameters than the GA and PSO. BBO is also a new entrant to RRAP optimization [22]. BBO shares certain features with the GA and PSO, which means that BBO can share information between solutions. However, BBO has no crossover step; its set of solutions is maintained and improved from one generation to the next by migration, and BBO solutions are changed directly via migration from other solutions, i.e., BBO solutions have characteristics in common with other solutions.
To account for the uncertainty of some RRAP parameters in real-life cases, mathematical models of such problems can be developed in a fuzzy environment. In these cases, it is almost impossible to obtain the exact optimal solution, and the optimization problem becomes one of finding a solution that is close to a specified single objective. Finding an acceptable solution set requires solving the multiobjective fuzzy optimization problem repeatedly.
PSO is a stochastic global optimization technique inspired by the social behavior of bird flocking or fish schooling. It is commonly applied to optimization problems such as the RRAP [23][24][25][26]. The algorithm starts by scattering a group randomly over a specified search area, where every bird/fish (i.e., particle) flies from a certain position at a certain velocity with the objective of finding the global best position after repeated iterations. To reach this global best position, at each iteration, each particle adjusts its velocity based on its momentum, the influence of its previous best position, and the best position found by the other participants. Literature studies [23][24][25][26][27][28][29] concluded that using the PSO alone is, in some cases, not sufficient to obtain the ideal solution. Indeed, the classical/traditional PSO algorithm has flaws: it is prone to premature convergence, cannot always perform a good global search, can fall into local convergence, and therefore adapts poorly to complex nonlinear optimization problems. For this reason, some authors have used hybrid approaches [30][31][32] in which the PSO is combined with a genetic algorithm to enhance effectiveness. The purpose of this work is to develop an efficient PSO-based approach that fixes these deficiencies. In the proposed variant, the inertia coefficient, velocity, and position updating are described by considering a normal distribution, which enables multistep hopping of particles in the search space and thereby improves search efficiency. The improved PSO is tested on the RRAP for series, series-parallel, and bridge systems. The system reliability is maximized under nonlinear system constraints on volume, weight, and cost. These three configurations are optimized to assess the effectiveness and performance of the proposed procedure.
All computational results are compared with those from the literature obtained using the simulated annealing algorithm, biogeography-based optimization, the surrogate constraint algorithm, the cuckoo search algorithm, classical PSO, and the hybrid GA-PSO algorithm. In summary, the suggested IPSO outperforms the simulated annealing technique, the traditional PSO algorithm, and the improved GA-PSO algorithm in all three system configurations (i.e., series, series-parallel, and bridge systems). Recent strategies such as biogeography-based optimization and the cuckoo search algorithm, by contrast, appear to be more efficient, albeit by a small margin. The remainder of the paper is organized as follows. The RRAP is described in Section 2; Section 3 introduces the classical and the improved PSO approaches; numerical examples and computational results are presented in Section 4; and conclusions are summarized in Section 5.

Redundancy Optimization of the Reliability Problem
Industrial systems can generally be structured in four types: series, parallel, series-parallel, and complex (bridge) systems. Figure 1 shows the systems analyzed in this study. Each subsystem is composed of x parallel (redundant) components with similar features with respect to weight, cost, volume, and reliability. For the series, series-parallel, and bridge systems, the system reliabilities are defined, respectively, as

R_s = ∏_{i=1}^{n} R_i,

R_s = 1 − (1 − R_1 R_2) [1 − (R_3 + R_4 − R_3 R_4) R_5],

R_s = R_1 R_2 + R_3 R_4 + R_1 R_4 R_5 + R_2 R_3 R_5 − R_1 R_2 R_3 R_4 − R_1 R_2 R_3 R_5 − R_1 R_2 R_4 R_5 − R_1 R_3 R_4 R_5 − R_2 R_3 R_4 R_5 + 2 R_1 R_2 R_3 R_4 R_5,

where R_i is the reliability of the i-th parallel subsystem, defined by

R_i = 1 − (1 − r_i)^(x_i),

where x_i is the redundancy allocation of the i-th subsystem and r_i is the individual component reliability of this subsystem. In all the following configurations, we consider x_i as a discrete integer ranging from 1 to 10 and r_i as a real number ranging from 0.5 to 1 − 10^(−6). The aim of the RRAP is to maximize the reliability of the entire system under constraints on the cost, weight, and volume of the system. The system cost function f_system_cost, which depends on the number of components per subsystem, the number of subsystems, and the reliability of the components, can be defined as

f_system_cost = Σ_{i=1}^{n} α_i (−1000 / ln r_i)^(β_i) [x_i + exp(x_i / 4)],

where n is the number of subsystems and α_i and β_i are the parameters of the cost function of the subsystems. The value 1000 in the equation is the mean time between failures (i.e., the operating time during which the component must not fail). Figure 2 shows the evolution of the cost as a function of reliability in the cases of 1 and 10 redundant components for one subsystem. The associated cost increases exponentially with the reliability and the number of redundancy allocations. Figure 3 shows the evolution of the cost as a function of redundancy allocation for two levels of reliability (i.e., 0.9 and 0.99), where the cost ratio is 33.94 (for β = 1.5).
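As an illustrative sketch of the relations above (function names are hypothetical; the benchmark parameter values appear in Tables 1-3), the subsystem reliability and the cost model with the 1000-hour mean time between failures can be coded as:

```python
import math

def subsystem_reliability(r_i, x_i):
    """R_i = 1 - (1 - r_i)^x_i for x_i parallel components of reliability r_i."""
    return 1.0 - (1.0 - r_i) ** x_i

def series_reliability(r, x):
    """Series system reliability: product of the subsystem reliabilities."""
    R = 1.0
    for r_i, x_i in zip(r, x):
        R *= subsystem_reliability(r_i, x_i)
    return R

def system_cost(r, x, alpha, beta):
    """Cost model: sum of alpha_i * (-1000 / ln r_i)^beta_i * (x_i + exp(x_i / 4))."""
    return sum(a_i * (-1000.0 / math.log(r_i)) ** b_i * (x_i + math.exp(x_i / 4.0))
               for r_i, x_i, a_i, b_i in zip(r, x, alpha, beta))
```

For example, adding a second redundant component raises a 0.9-reliable subsystem to 0.99, at an exponentially higher cost.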
The weight function of the system depends on the number of components per subsystem and the number of subsystems and can be defined as

f_system_weight = Σ_{i=1}^{n} w_i x_i exp(x_i / 4),

where w_i is the weight function parameter.
Early studies used a range of 3 to 9 for the weight function parameter. The weight is found to increase exponentially with the reliability and the amount of redundancy allocation (Figure 4). The system volume function also depends on the number of components per subsystem and the number of subsystems, and it can be defined as

f_system_volume = Σ_{i=1}^{n} v_i x_i^2,

where v_i is the volume function parameter. In summary, the general RRAP can be expressed in the following formulation, in which the system reliability is the objective function:

maximize R_s(r, x)
subject to f_system_cost ≤ C, f_system_weight ≤ W, f_system_volume ≤ V,
where C, W, and V are the upper bounds of the cost, weight, and volume of the system, respectively.
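A feasibility check for a candidate solution (r, x) can be sketched as follows. The weight form w_i·x_i·exp(x_i/4) and volume form v_i·x_i² used here are the standard benchmark models assumed above; C, W, and V are the upper bounds:

```python
import math

def within_constraints(r, x, alpha, beta, w, v, C, W, V):
    """Return True when the candidate (r, x) respects the cost, weight, and volume bounds."""
    cost = sum(a_i * (-1000.0 / math.log(r_i)) ** b_i * (x_i + math.exp(x_i / 4.0))
               for r_i, x_i, a_i, b_i in zip(r, x, alpha, beta))
    weight = sum(w_i * x_i * math.exp(x_i / 4.0) for w_i, x_i in zip(w, x))
    volume = sum(v_i * x_i ** 2 for v_i, x_i in zip(v, x))
    return cost <= C and weight <= W and volume <= V
```

Only candidates passing this check are feasible; the penalty mechanism of Section 3 handles the infeasible ones.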

Improved Particle Swarm Optimization Algorithm
To solve the three proposed systems, an improved PSO (IPSO) approach was developed. The classical PSO algorithm can easily become premature, fall into local extrema, and is difficult to apply to complex nonlinear problems. This study introduces a new formulation of the inertia coefficient, acceleration coefficients, velocity, and position updating to improve the accuracy and fitness of the technique [33]. The PSO implementation involves defining the "workspace" by setting the maximum and minimum values of the variables, the population size n, the maximum number of iterations i_max, and other constants. The initial position and velocity of each particle are generated randomly. Then, at each iteration, the inertia weight, position, and velocity are updated.
The inertia weight is a key factor in PSO. A large inertia coefficient can improve the global search ability of the algorithm and avoid premature convergence caused by falling into a local extremum. Conversely, a small inertia coefficient enables an accurate search of the space and improves convergence accuracy. To avoid falling into local extrema and to improve the diversity of particles, we introduce a normal model of the inertia weight ψ, expressed in terms of the following quantities: ψ_max, the maximum inertia coefficient of the particle swarm; ψ_min, the minimum inertia coefficient of the particle swarm; f, the fitness value of the particle; f_max and f_min, respectively the maximum and minimum fitness values of the particle swarm; f_aver, the average fitness value of the particle swarm; and η_1, η_2, and σ, constants in the range of 0 to 1. As the inertia coefficient gradually decreases, the algorithm switches from the initial generalized search to the local fine search in the later stage. For the classical approach, the inertia coefficient is simply defined as

ψ = ψ_max − (ψ_max − ψ_min) · i / i_max,

where i_max and i are the maximum number of iterations and the current iteration, respectively. The acceleration coefficients (i.e., the self-acceleration c_1 and global acceleration c_2) determine the influence of the particle's self-cognition on its trajectory and reflect the degree of information exchange between particles in the swarm. More specifically, c_1 and c_2 represent the acceleration weights of particles advancing toward their own extremum and the global extremum, respectively. In classical PSO, both are treated as constants. In our improved approach, however, they vary with the iteration between c_1^start, c_2^start and c_1^end, c_2^end, the initial and termination values of the acceleration coefficients, respectively.
Setting a larger global acceleration coefficient and a smaller self-acceleration coefficient at the beginning of the PSO algorithm gives the particle a stronger social learning ability and a weaker self-learning ability, which strengthens the global search. Conversely, a smaller global acceleration coefficient and a larger self-acceleration coefficient in the later stage of the algorithm give the particle a stronger self-learning ability and a weaker social learning ability, which benefits the local fine search and convergence to the global optimum with higher precision.
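The scheduling idea can be illustrated with simple linear ramps. Note this is an assumption for illustration only: the paper's own expressions involve a normal distribution and are not reproduced here; the linear inertia formula matches the classical approach described above.

```python
def inertia_linear(i, i_max, psi_max, psi_min):
    """Classical linearly decreasing inertia weight."""
    return psi_max - (psi_max - psi_min) * i / i_max

def acceleration(i, i_max, c_start, c_end):
    """Ramp an acceleration coefficient from its start value to its end value
    over the run (an assumed linear schedule, not the paper's normal model)."""
    return c_start + (c_end - c_start) * i / i_max
```

With c_1^start < c_1^end and c_2^start > c_2^end, the self term grows while the social term shrinks over the run, as described above.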
In PSO, the particle velocity and position are updated at each iteration as follows:

v_{i,j}^{t+1} = ψ · v_{i,j}^t + c_1 r_1^t (BP_{i,j}^t − x_{i,j}^t) + c_2 r_2^t (BG_j^t − x_{i,j}^t),

x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1},

where t is the current iteration, i the particle number, and j the j-th dimension of the particle; r_1 and r_2 are random real numbers in the range of 0 to 1. The parameter BP_{i,j}^t is the best previous position of the particle, called "the best personal position," whereas BG_j^t is the best position obtained over the population, called "the best global position." The initial best personal position is the initial position of the particle, and the initial best global position is the initial position of the particle with the best fitness in the swarm. The term ψ · v_{i,j}^t is the momentum part of the particle, reflecting the inertia of the particle's motion and its ability to maintain its previous velocity; c_1 r_1^t (BP_{i,j}^t − x_{i,j}^t) is the self-recognition part, an indicator of the memory of the particle's own historical experience, indicating that the particle tends to approach its best historical position; and c_2 r_2^t (BG_j^t − x_{i,j}^t) is the social-consciousness part, reflecting the collective historical experience of cooperation and knowledge sharing among particles, indicating that the particle tends to approach the best historical position of the group or neighborhood. If a particle's position or velocity exceeds the defined boundary range during the iterative process, it is set to the boundary value.
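The update rules above can be sketched for one particle as follows (variable names are illustrative; boundary handling is by clamping to the range, as described):

```python
import random

def pso_step(x, v, bp, bg, psi, c1, c2, lo, hi):
    """One velocity-and-position update for a single particle.
    x: current position, v: velocity, bp: personal best, bg: global best (all lists)."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        v_j = (psi * v[j]                      # momentum part
               + c1 * r1 * (bp[j] - x[j])      # self-recognition part
               + c2 * r2 * (bg[j] - x[j]))     # social-consciousness part
        x_j = min(max(x[j] + v_j, lo), hi)     # clamp to the boundary value
        new_x.append(x_j)
        new_v.append(v_j)
    return new_x, new_v
```

After each step, BP and BG are refreshed from the fitness of the new positions, which the sketch leaves to the surrounding loop.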
In the classical PSO approach, equation (10) keeps the same form, but c_1 and c_2 are constants and the inertia weight ψ is a constant or a linear function of the iteration number. The multiple objectives of the redundancy allocation problem are related to the cost, volume, and weight. A penalty function P is implemented to take the constraint functions into account in the system reliability. Based on recent research [24,34], we use a penalty-based fitness function f in which constraint violations are penalized through a sufficiently large positive number Γ. The diagrammatic representation of the improved PSO algorithm is depicted in Figure 6.
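The paper's exact penalty expression is not reproduced here; a generic additive-penalty sketch of the idea, with a large constant gamma standing in for Γ, might look like:

```python
def penalized_fitness(reliability, cost, weight, volume, C, W, V, gamma=1.0e6):
    """Fitness = system reliability minus a large penalty for any constraint violation.
    Generic sketch only; the paper's actual penalty function P follows [24,34]."""
    violation = (max(0.0, cost - C)
                 + max(0.0, weight - W)
                 + max(0.0, volume - V))
    return reliability - gamma * violation
```

Feasible candidates keep their raw reliability as fitness, while infeasible ones are pushed far below any feasible value, steering the swarm back into the feasible region.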

Numerical Results
The parameter values for the series, series-parallel, and bridge problems are given in Tables 1-3, respectively (i.e., the number of subsystems; the component cost, volume, and weight values; and the maximum allowed cost, volume, and weight), as stipulated in the literature [30,32]. The IPSO method is coded in MATLAB R2019b (MathWorks), and the program was run on an Intel® Core™ i7-4700MQ processor @ 2.46 GHz with 8 GB of random access memory. To measure the stochastic discrepancy, 20 runs were made for each system, involving 20 different initial positions and velocities in the search domain for the particles. The best solutions are reported herein.
The IPSO algorithm starts by defining the boundaries of the workspace, setting the maximum and minimum limits of the variables, and initializing parameters such as the population size n (50), the maximum and minimum inertia weights (ψ_max = 10 and ψ_min = 1), and the maximum number of iterations i_max (200), as described in the previous section and in Figure 6.
To evaluate the IPSO performance, we report the best, worst, and median reliabilities and the standard deviation (SD) in Table 4. The SD over the N = 20 runs is expressed as

SD = sqrt( (1 / (N − 1)) Σ_{i=1}^{N} (R_i − R̄)^2 ),

where R̄ is the average system reliability and R_i is the reliability obtained in run i. The results obtained using the algorithms proposed in the literature are compared with those from the proposed method, as summarized in Tables 5-7. These tables contain the system reliability, the number of components and reliability of each subsystem, and the cost, weight, and volume of the system. These values correspond to the best solution out of 20 tests. The IPSO solution sometimes performs better than those reported in previous works. For the series system, the reliability obtained by the IPSO is 0.99995469, compared with 0.99995467 in a previous study [32]. Note that Ha and Kuo [4] found an acceptable overall system reliability, but the cost constraint was altered in their research. For the series-parallel system, the reliability obtained by the IPSO is 0.99312499, compared with 0.94426072-0.99997664 in previous studies [22,32,35,36]. The IPSO outperforms the simulated annealing algorithm, the surrogate constraint algorithm, and the improved GA-PSO algorithm. However, biogeography-based optimization appears more efficient (approximately 0.7% improvement over the IPSO result). Finally, for the complex system, the reliability obtained by the IPSO is 0.98935571, compared with 0.9821499-0.99988963 in previous studies [15,20,22,32,36]. The IPSO is superior to the simulated annealing algorithm, the surrogate constraint algorithm, and the improved GA-PSO algorithm.
However, biogeography-based optimization and the cuckoo search algorithm appear more efficient (approximately 1% improvement over the IPSO result).
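The dispersion statistic reported in Table 4 can be computed over the repeated runs as follows (a sample standard deviation with N − 1 in the denominator is assumed here):

```python
import math

def run_std_dev(reliabilities):
    """Sample standard deviation of the best system reliability over repeated runs."""
    n = len(reliabilities)
    mean = sum(reliabilities) / n
    return math.sqrt(sum((r - mean) ** 2 for r in reliabilities) / (n - 1))
```

A small SD over the 20 runs indicates that the algorithm's result is insensitive to the random initial positions and velocities.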

Journal of Optimization
In conclusion, the proposed IPSO provides better results than the simulated annealing algorithm, the classical PSO algorithm, and the improved GA-PSO algorithm in all three system configurations. However, recent techniques such as biogeography-based optimization and the cuckoo search algorithm appear to be more efficient, although the difference is quite small.

Conclusion
This paper reports an enhanced PSO-based approach to the multiobjective optimization of the RRAP. Series, series-parallel, and complex (bridge) problems are considered, and the results are compared with those of previous research. The purpose of the proposed approach is to maximize the system reliability subject to cost, volume, and weight constraints. To improve computational efficiency, an improved PSO approach is applied to search the solution space more effectively. The inertia and acceleration coefficients of the algorithm are improved by employing a normal distribution for the coefficients. This was found to enhance the global search ability in the initial stage, restrain premature convergence, and enable the algorithm to focus on the local fine search in the later stage, which improves the optimization precision relative to classical PSO. Case studies show that the solutions found by the IPSO are better than or close to those reported in the literature (i.e., the simulated annealing algorithm, the PSO algorithm, the improved GA-PSO algorithm, biogeography-based optimization, and the cuckoo search algorithm). Even incremental advances in optimization for the reliability-redundancy allocation problem are nowadays hard to achieve, and the IPSO algorithm proves to be a promising basis for the next generation of PSO and hybrid PSO approaches.

Data Availability
The data used to support the findings of this study are available upon request to the author.

Conflicts of Interest
The author declares that there are no conflicts of interest.