Hierarchical Clustering Cuckoo Search Optimization Implemented in Optimal Setting of Directional Overcurrent Relays

This study proposes a hierarchical clustering cuckoo search optimization (HCSO) to solve the optimal setting problem of directional overcurrent relays (DOCRs) in the power system. Different from the randomly generated stepsize θ in traditional CSO, the stepsize θ in HCSO is updated by a hierarchical clustering mechanism, which balances the searching priority between exploration and exploitation for each solution. The superiority of HCSO is evaluated by solving the DOCRs coordination problem on 3-bus, 8-bus, and 9-bus systems, formulated as linear programming (LP), nonlinear programming (NLP), and mixed integer nonlinear programming (MINLP) problems. Results show that under the same parameter settings, the minimum value of the objective function obtained by HCSO is reduced by 10.93%, 2.93%, and 0.46%, respectively, compared with that of CSO in the experimental cases 2, 3, and 4. This verifies that HCSO is more competitive than CSO in solving the DOCRs optimal setting problem irrespective of the number of DOCRs and the formulation type.


Introduction
Relay coordination is important in power system protection. The purpose of relay coordination is to isolate the faulted parts as quickly as possible so that service is maintained by the remaining healthy parts. Due to great progress in directional overcurrent relay (DOCR) technology, DOCRs have been widely employed for relay coordination in the power system.
Modern optimization algorithms are used to solve the optimal setting problems of DOCRs. GA and three improved versions of GA were used in Refs. [1][2][3], while two modified PSO variants were used in Refs. [4, 5]. In Ref. [6], the DOCRs problem is cast in NLP and MINLP formulations and solved by a Seeker algorithm. In Ref. [7], IGSO is proposed to enhance GSO's searching ability for solving DOCRs with the NLP formulation. In Ref. [8], BBO-LP is proposed to cope with the optimal coordination of DOCRs with the MINLP formulation. In Ref. [9], MEFO is proposed by modifying the expression of the force for each EMP, and it is proved to be effective in minimizing the operating time for the DOCRs problem. Recently, an adaptive coordination scheme for numerical DOCRs was proposed in Ref. [10] utilizing a mathematical-programming-language-based interior point optimization solver. In Ref. [11], optimal settings of DOCRs with different characteristic curves for AC microgrids are presented. Enhanced versions of GWO are proposed to solve the coordination of DOCRs in Refs. [12, 13]. WCA has recently been applied to this field, and the results are promising [14, 15]. Most recently, the DOCRs problem has been solved by newly proposed algorithms such as ISOA and GSA-SQP [16, 17].
Cuckoo search optimization (CSO) was developed by imitating the obligate brood parasitic behavior of some cuckoo species together with the Lévy flights behavior of some birds and fruit flies [18]. Though CSO has only one parameter to be tuned, it balances exploration and exploitation very well and reaches prominent results in continuous and discrete optimization problems. For example, in Ref. [19], CSO with fuzzy logic and Gauss-Cauchy mutation is used for minimizing the localization errors of WSNs. In Ref. [20], the traveling salesman problem and path planning for an autonomous trolley inside a warehouse are solved by a discrete variant of CSO. In Ref. [21], the combined heat and power economic dispatch problem is solved by an adaptive CSO. In Ref. [22], a multiobjective CSO is used to optimize the pathways toward a 75% renewable electricity mix by 2050 in Algeria. These implementations have verified CSO's performance in a variety of scientific and engineering fields.
Nevertheless, further works have been devoted to developing new parameter control methods to make CSO better suited to various problems. For example, in Ref. [23], Sekhar and Mohanty suggested that the probability and stepsize in CSO should be treated as dynamic variables to improve the solution quality. In Ref. [24], Nguyen and Vo suggested merging the new solutions generated from both Lévy flights and the replacement of a fraction of eggs, so that the solutions are evaluated and ranked only once, which is quite different from traditional CSO. Recently, in Ref. [25], a four-solution factor, a fitness ratio FR_d, and a predetermined threshold ε_d were introduced to update the stepsize in CSO and extend the feasible searching zones. In fact, to make CSO more effective and efficient, two issues should always be considered: (1) how to construct a proper host nest structure for the cuckoos, and (2) how to guide each cuckoo to lay its eggs on more promising host nests. To answer the first question, a hierarchical clustering mechanism is introduced in this article to construct the host nests. At the beginning, each host nest is regarded as a separate hierarchy; then the hierarchies are merged successively into larger hierarchies according to their similarity, and the hierarchy of host nests (subswarms) is dynamically built. This dynamic procedure effectively improves the host nest structure by sharing information among different hierarchies via merging the nearest two hierarchies together. For the second question, since the host nests lie in different hierarchies, they own different abilities; we take the average objective function value as an example.
For each hierarchy, if its average objective function value (ψ) is in an inferior state, we speed up its global search to escape from its current area by guiding the cuckoos to lay their eggs at farther nests; if its average objective function value (ψ) is in a superior state, we perform local search to exploit better nests nearby by guiding the cuckoos to lay their eggs in the neighborhood. Based on these two answers, a hierarchical clustering cuckoo search optimization (HCSO) is proposed in this article. Specifically, the main contributions of this article are as follows: (1) the objective function of the DOCRs optimal setting problem is tested in three forms, LP, NLP, and MINLP; (2) HCSO is proposed by replacing the randomly generated stepsize with a hierarchical clustering stepsize; (3) the sensitivity of the stepsize in HCSO is analyzed on benchmark functions; and (4) the superiority of HCSO is verified by solving the DOCRs optimal setting problem. The rest of this article is arranged as follows. In Section 2, the objective function and constraints of the DOCRs optimal setting problem are illustrated. Related work on HCSO is analyzed in detail in Section 3, including a brief introduction to CSO, the hierarchical clustering mechanism, the hierarchical clustering stepsize θ, the new solutions generated by the hierarchical clustering stepsize θ, and the alien eggs discovery action. The sensitivity analysis of stepsize θ, comparing a series of constant values with the hierarchical value, is carried out on benchmark functions in Section 4. Experimental cases on the DOCRs optimal setting problem with 3-bus, 8-bus, and 9-bus systems using LP, NLP, and MINLP formulations are conducted in Section 5, and the results and comparisons are provided. Discussions on the effects of the egg abandon fraction p_a and the hierarchy number |O| are presented in Section 6. Finally, conclusions are given in Section 7.

Objective Function.
From a mathematical point of view, the DOCRs optimal setting problem is an engineering optimization problem, and the goal of the objective function (OF) is to minimize the sum of the operating times of all the primary DOCRs, as expressed below:

χ = Σ_{i=1}^{N} ν_i · Q_i, (1)

where χ denotes the OF value, N denotes the number of primary DOCRs, and ν_i denotes the relay coefficient, set to 1 for all DOCRs throughout this article. Q_i denotes the operating time of relay i, which is calculated by

Q_i = TDS_i · ϕ / [(I_F / (I_P · CTR))^ρ − η] + ζ, (2)

where ϕ, ρ, η, and ζ are coefficients with fixed values of 0.14, 0.02, 1.0, and 0, respectively, according to the IEC curves; I_F denotes the fault current, I_P denotes the pickup current, and CTR denotes the CT ratio. The DOCRs characteristic constraints are the physical limitations, given as

TDS_i^min ≤ TDS_i ≤ TDS_i^max, (4)
I_P,i^min ≤ I_P,i ≤ I_P,i^max, (5)
Q_i^min ≤ Q_i ≤ Q_i^max, (6)

where equations (4)-(6) are the minimum and maximum bounds for the abovementioned parameters in equations (1)-(3).
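As a minimal sketch, equations (1) and (2) can be coded as follows; the per-relay data passed in are hypothetical, while the defaults ϕ = 0.14, ρ = 0.02, η = 1.0, ζ = 0 follow the IEC curve coefficients stated above.

```python
def relay_operating_time(tds, i_fault, i_pickup, ctr,
                         phi=0.14, rho=0.02, eta=1.0, zeta=0.0):
    """Operating time Q_i of one relay per the IEC curve (equation (2))."""
    return tds * (phi / ((i_fault / (i_pickup * ctr)) ** rho - eta)) + zeta

def total_primary_time(tds_list, relay_data):
    """Objective chi = sum of nu_i * Q_i with nu_i = 1 (equation (1)).
    relay_data is a list of (fault current, pickup current, CT ratio)."""
    return sum(relay_operating_time(tds, i_f, i_p, ctr)
               for tds, (i_f, i_p, ctr) in zip(tds_list, relay_data))
```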

DOCRs Coordination Constraint.
When a failure happens in the power system, the primary DOCR and the backup DOCR detect it at the same time. The DOCRs coordination constraint ensures that the primary and backup DOCRs operate in the correct order and prevents unnecessary or uncoordinated tripping. To ensure proper coordination, the operating time of the backup DOCR must exceed the operating time of the primary DOCR by at least the predefined coordination time interval (CTI):

Q_bc − Q_pri ≥ CTI, (7)

where Q_bc and Q_pri denote the operating times of the backup and primary DOCRs, respectively.
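The coordination constraint above can be sketched as a simple per-pair violation check (the CTI default of 0.2 s is taken from the case studies later in the article):

```python
def cti_violation(q_backup, q_primary, cti=0.2):
    """Return the non-negative amount by which the coordination
    constraint Q_bc - Q_pri >= CTI is violated for one P/B pair;
    0.0 means the pair is properly coordinated."""
    return max(0.0, cti - (q_backup - q_primary))
```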

Original Cuckoo Search Optimization (CSO).
There are three basic rules in the original CSO, which can be found in Ref. [18]. On the one hand, the original CSO has proved highly effective for many kinds of real-world optimization problems since it keeps an excellent balance between global search and local search by switching with the parameter p_a ∈ [0, 1]. On the other hand, the original CSO has difficulty achieving high-quality solutions and fast convergence because the stepsize θ is randomly generated within [0, 1] and cannot adapt to environmental changes as the iterations go on.
In this article, cluster analysis is combined with the original CSO to replace the randomly generated stepsize θ with an adaptive hierarchical clustering stepsize θ; hence, the disadvantage of CSO mentioned above is resolved.

Hierarchical Clustering Mechanism.
Hierarchical clustering is a member of the cluster-analysis family, and fundamental information is provided in Ref. [26]. It should be noted that in this article, hierarchical clustering refers to agglomerative clustering, not divisive clustering. Supposing the solution set H contains M solutions, the mechanism proceeds as follows:

B1: Regard each solution h in H as a separate hierarchy
B2: Calculate the distance dis(h, k) between solutions h and k, where dis(h, k) is calculated by equation (8)
B3: Find the two nearest hierarchies O_c and O_δ
B4: Merge O_c and O_δ into one hierarchy
B5: Go to B2 until the cutoff condition is satisfied

In addition, three points need to be clarified in particular. First, in B4, there are three ways to measure the distance between O_c and O_δ: complete linkage (CL), single linkage (SL), and average linkage (AL). In this article, we adopt SL. The other two ways, CL and AL, are discussed and compared in Section 6.3 to comprehensively analyze the performance of the hierarchical clustering mechanism.
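Steps B1-B5 can be sketched as follows, under the single-linkage choice; the use of Euclidean distance for dis(h, k) is an assumption in the spirit of equation (8).

```python
import numpy as np

def agglomerative_cluster(solutions, n_hierarchies, linkage="single"):
    """Sketch of steps B1-B5: merge M solutions into |O| hierarchies.
    `solutions` is an (M, dim) array; returns a list of index lists."""
    clusters = [[m] for m in range(len(solutions))]          # B1
    while len(clusters) > n_hierarchies:                     # B5: cutoff |O|
        best = None
        for a in range(len(clusters)):                       # B2/B3
            for b in range(a + 1, len(clusters)):
                pair_d = [np.linalg.norm(solutions[i] - solutions[j])
                          for i in clusters[a] for j in clusters[b]]
                d = min(pair_d) if linkage == "single" else max(pair_d)
                if best is None or d < best[0]:
                    best = (d, a, b)
            # nearest pair found over all cluster combinations
        _, a, b = best
        clusters[a].extend(clusters.pop(b))                  # B4: merge
    return clusters
```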
Second, in B5, there are different choices for the "cutoff condition." In Ref. [27], the maximum number of hierarchies was set as the cutoff condition, while in Ref. [28], the authors used the parameter subSize, a prefixed maximum subswarm size, to control it. In this article, following Ref. [27], the maximum number of hierarchies (denoted as |O|) is used as the cutoff condition, and the effect of |O| is discussed in Section 6.2.
Third, the hierarchical clustering mechanism in this article is extended from Ref. [29]. In Ref. [29], the clustering task is fulfilled by a 4-step K-means algorithm, while in this article, it is performed by a 5-step agglomerative clustering method. Moreover, Ref. [29] focused on the economic load dispatch (ELD) problem with NLP formulations only, whereas this article extends its application to various formulations, namely LP, NLP, and MINLP, in the DOCRs coordination problem (see Section 5).

Hierarchical Clustering Stepsize.

Based on Section 3.2, all solutions in H are classified into different hierarchies. Because each solution h_m has a different objective function (OF) value, each hierarchy has a different average OF value as well. Thus, we can classify the hierarchies based on this value and find a targeted way to improve their searching ability. To describe the hierarchical clustering stepsize clearly, the following definitions are given first.
The average OF value of hierarchy O_σ is defined as ψ_σ = (1/|O_σ|) Σ_{h_m ∈ O_σ} OF(h_m), and the average OF value of the entire solution set H is defined as Ψ = (1/M) Σ_{m=1}^{M} OF(h_m), where M is the solution size of H. Next, for hierarchy O_σ, let us compare ψ_σ with Ψ and see what the comparison means for evaluating the state of O_σ. Assuming that the problem to be solved is of minimization type, two different situations arise:

Situation 1. ψ_σ ≥ Ψ: in this situation, hierarchy O_σ is in a relatively poorer position because its average OF value (ψ_σ) is greater than that of the entire solution set (Ψ). Therefore, the searching range of hierarchy O_σ should be increased appropriately to find more competitive solutions.
Situation 2. ψ σ < Ψ, in this situation, hierarchy O σ is in a relatively better position. Hence, the searching scope of hierarchy O σ should be reduced to enhance the local search and to avoid missing the optimal solutions in neighborhood.
According to the analysis above, the hierarchical clustering stepsize (denoted as θ_σ) for hierarchy O_σ is proposed and calculated by equation (12), where U_up and U_low are the upper and lower limits of the stepsize in Situation 1, chosen as 1.0 and 0.5, respectively, in this article; L_up and L_low are the upper and lower limits of the stepsize in Situation 2, chosen as 0.5 and 0, respectively. Note that the chosen values of U_up, U_low, L_up, and L_low are just examples to illustrate the hierarchical clustering mechanism, and these values are maintained in Sections 4 and 5. However, they may not be the best values for other problems, and different problems may prefer different choices.
In the original CSO, stepsize θ is a randomly generated value within [0, 1] [18]. Differently, in this article, the hierarchical clustering stepsize θ_σ modifies its value according to the hierarchy conditions, as expressed in equation (12).

Condition 1. If solution h_m ∈ O_σ and O_σ belongs to Situation 1, then θ_σ is updated between U_low and U_up to enlarge its searching area. In equation (12), the larger the value of ψ_σ, the smaller the distance between θ_σ and U_up, so a bigger stepsize θ_σ is produced for h_m according to the current state of hierarchy O_σ. Therefore, the new solutions are enabled to explore wider searching ranges in a targeted manner.

Condition 2. If solution h_m ∈ O_σ and O_σ belongs to Situation 2, then θ_σ is updated between L_low and L_up to narrow its searching area. Similar to Condition 1, the smaller the value of ψ_σ, the smaller the distance between θ_σ and L_low, so a smaller stepsize θ_σ is produced for h_m according to the current state of O_σ. Hence, the updated solutions can adaptively perform a more targeted neighborhood search to approach the global optimal solution.
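The two conditions can be sketched as follows. The linear interpolation and the normalizers ψ_min and ψ_max (the smallest and largest hierarchy averages) are assumptions for illustration, since the exact form of equation (12) is not reproduced here; only the endpoints and the monotonic behavior described above are taken from the text.

```python
def hierarchy_stepsize(psi_sigma, psi_all, psi_min, psi_max,
                       u_lo=0.5, u_hi=1.0, l_lo=0.0, l_hi=0.5):
    """Hedged sketch of the hierarchical clustering stepsize theta_sigma.
    psi_sigma: average OF value of hierarchy O_sigma
    psi_all:   average OF value Psi of the whole solution set
    psi_min/psi_max: smallest/largest hierarchy averages (assumed
    normalizers; the article's equation (12) may differ in form)."""
    if psi_sigma >= psi_all:                  # Situation 1: inferior hierarchy
        span = max(psi_max - psi_all, 1e-12)  # larger psi -> closer to u_hi
        return u_lo + (u_hi - u_lo) * (psi_sigma - psi_all) / span
    span = max(psi_all - psi_min, 1e-12)      # Situation 2: superior hierarchy
    return l_lo + (l_hi - l_lo) * (psi_sigma - psi_min) / span
```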

New Solutions via Hierarchical Clustering Stepsize.

In the original CSO, there are two phases of generating new solutions: Lévy flights and the replacement of a fraction of eggs [18]. In HCSO, the generation of a new solution h_m^new via the hierarchical clustering stepsize θ_σ based on Lévy flights is described by equation (13), where step ∈ [0, 1] is the standard step in the original CSO, θ_σ is the stepsize given by equation (12), h_{m,σ} is the m-th solution belonging to hierarchy σ, and h_best is the best solution at present. The definition and calculation of Lévy(β) are drawn from Ref. [24] and are not repeated here for reasons of space.
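A common realization of such a Lévy-flight update is sketched below. Mantegna's algorithm for Lévy(β) and the difference term toward the current best are usual choices in CSO variants, taken here as assumptions rather than the article's exact equation (13).

```python
import numpy as np
from math import gamma, sin, pi

def levy(beta, size, rng):
    """Levy(beta)-distributed steps via Mantegna's algorithm."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def new_solution(h, h_best, theta_sigma, rng, step=0.01, beta=1.5):
    """Sketch of a Levy-flight move scaled by the hierarchical
    clustering stepsize theta_sigma (in the spirit of equation (13))."""
    return h + step * theta_sigma * levy(beta, h.shape, rng) * (h - h_best)
```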

Mathematical Problems in Engineering
Here, we should mention that in equation (13), besides θ_σ, the original step is also important for the new solutions. If the step is too large, the new solutions may vary over such a large range that the optimal solution is either found quickly or missed entirely; if it is too small, the new solutions move around such a small area that they cannot contribute to solution diversity. This is the reason why we combine θ_σ with the step.

Alien Eggs Discovery Action.
In cuckoo behavior, there is a possibility that alien eggs will be discovered by the host bird. This discovery action with probability p_a is expressed by equation (14), where the explanation of the probability p_a and the calculation of the increment Δh_m^dis are the same as in Ref. [24], so they are not repeated here.
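A sketch of this discovery action follows. The permutation-difference form of Δh_m^dis is the usual formulation in CSO implementations and is assumed here, since the article defers its exact definition to Ref. [24].

```python
import numpy as np

def discover_alien_eggs(nests, p_a, rng):
    """Sketch of the discovery action: with probability p_a each egg
    (component) is replaced by a move along the difference of two
    randomly permuted nests, a common Delta-h formulation."""
    n, dim = nests.shape
    mask = rng.random((n, dim)) < p_a          # which eggs are discovered
    perm1 = nests[rng.permutation(n)]
    perm2 = nests[rng.permutation(n)]
    delta = rng.random((n, dim)) * (perm1 - perm2)
    return nests + mask * delta                # undiscovered eggs unchanged
```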

3.6. Steps of Implementation.

Based on the hierarchical clustering stepsize, a hierarchical clustering cuckoo search optimization (HCSO) is proposed. The steps and pseudocode for implementing HCSO to solve the optimal setting of the DOCRs problem are demonstrated in the following. Note that a penalty function, denoted as Δ(s) in equation (15), is applied to handle the DOCRs coordination constraint [30]. Thus, the constrained objective function is transformed into an unconstrained one by adding the penalty term ξ · Δ(s) to the OF, where ξ is the penalty factor, which should be a relatively large value so that optimal solutions carry zero penalty.
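A minimal sketch of this penalty transformation is given below. The squared-hinge form of Δ(s) and the value of ξ are assumptions for illustration; the article only states that Δ(s) penalizes CTI violations and that ξ should be large.

```python
def penalized_objective(chi, pb_pairs, cti=0.2, xi=100.0):
    """Unconstrained objective = chi + xi * Delta(s).
    chi: raw objective (sum of primary operating times)
    pb_pairs: list of (Q_backup, Q_primary) operating times
    xi: assumed penalty factor; feasible solutions get zero penalty."""
    delta = sum(max(0.0, cti - (q_bc - q_pri)) ** 2
                for q_bc, q_pri in pb_pairs)
    return chi + xi * delta
```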

Benchmark Functions for Stepsize Analysis
To observe the sensitivity of stepsize θ in HCSO, six benchmark functions, Sphere, Rosenbrock, Ackley, Rastrigin, Griewank, and Weierstrass, with dim = 100 are tested in this section [31, 32]. The value of θ is taken from {0.01, 0.1, 0.25, 0.5, 0.75, 1.0} plus the hierarchical clustering value. All the benchmark functions are given the same conditions: α_sol = 30, α_iter = 5000, p_a = 0.1, and hierarchy number |O| = 10. Figure 1 shows the convergence curves with the same randomly generated initial solution set H. In general, when θ = 0.01 (blue) or θ = 1.0 (red), all the convergence curves in Figures 1(a)-1(f) show the worst performance. This is because when θ = 0.01, the stepsize is so small that the exploitation range is limited to a very small space; hence, a comprehensive search cannot be performed. On the contrary, when θ = 1.0, the stepsize is so large that the convergence performance is adversely affected. However, for the hierarchical clustering value (in brown), the fastest convergence can be observed, with the lowest y-axis value for all six functions. It is not hard to understand: a constant θ cannot adjust to environmental changes as the iterations carry on, but the hierarchical clustering stepsize θ updates its value according to the hierarchy changes at every iteration, thereby improving the balance between exploration and exploitation for each individual solution. This explains why the hierarchical clustering stepsize θ is more competitive than a fixed stepsize θ.
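For reference, three of the six benchmarks can be sketched as follows (standard textbook definitions; the article's exact search bounds may differ):

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: unimodal, global minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin benchmark: highly multimodal, minimum 0 at x = 0."""
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def griewank(x):
    """Griewank benchmark: multimodal, minimum 0 at x = 0."""
    i = np.arange(1, len(x) + 1)
    return float(1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))))
```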
Let us take Figure 1(e), the convergence characteristic of the Griewank function, as an example to further analyse the influence of stepsize θ. From Figure 1(e), θ = 0.1, θ = 0.25, and the hierarchical clustering stepsize are the top performers, since they converge to the global optimum at a y-axis value of 0 (circled by dotted lines). Moreover, the hierarchical clustering stepsize converges fastest, followed by θ = 0.25 and θ = 0.1. The second tier is achieved by θ = 0.5 and θ = 0.75, which reach y-axis values of 10^-10 and 10^-8, respectively: relatively good results but not as good as the top performers. Finally, the third tier belongs to θ = 0.01 and θ = 1.0, the smallest and largest θ, respectively, which show the worst results. The reason has been explained above, so it is not repeated here.

Case 1: 3-Bus System with LP Formulation.
This 3-bus system is shown in Figure 2. All the related data are provided in Ref. [6]. This case is formulated as the LP problem, and the design variable is TDS.
The parameter conditions are set as α_sol = 20, α_iter = 50, α_var = 6 (6 TDS), p_a = 0.1, and |O| = 3. To observe the convergence characteristics of CSO and HCSO visually, Figure 3 shows one randomly chosen convergence curve among 30 runs. It shows that even though CSO and HCSO achieve the same result in the end, CSO takes around 26 iterations, while HCSO takes only 19, which shows that HCSO has an advantage over CSO in convergence efficiency. Figure 4 outlines the OF values, and it is notable that all the independent runs achieve the same optimal result (OF = 1.7804 s), whether by HCSO or by CSO. The reason is that the LP formulation with six variables is a simple optimization problem, so it is quite easy to reach its best result. The optimal settings of TDS by CSO and HCSO are given in Table 1, along with the simplex method [33], LP using MATLAB [4], PSO [4], the Seeker algorithm (SA) [6], and OJaya [34]. Both CSO and HCSO achieve a better value of 1.7804 s compared with 1.9258 s. Meanwhile, the standard deviation (Std) reaches 0 for both HCSO and CSO, which means every independent run reaches the optimum, as can also be observed in Figure 4. Table 2 shows the operating times of the primary and backup relays, as well as the CTI. We can see that the CTI constraints are satisfied for every P/B pair.

Case 2: 8-Bus System with NLP Formulation.
This 8-bus system is shown in Figure 5. All the related data are provided in Ref. [35]. This case is formulated as the NLP problem, and the design variables are TDS and PS; the pseudocode of the proposed method is summarized in Algorithm 1 (HCSO). The optimal settings are given in Table 3. It is observed that the OF value by HCSO is 6.2083 s, while it is 6.9703 s by CSO. The corresponding CTI for each pair of P/B relays is shown in Table 4. The convergence curves are shown in Figure 6, from which we can see that, generally speaking, CSO and HCSO converge with similar trends. However, the zoomed-in view in Figure 6(a) shows that after 50 iterations, HCSO begins to outperform CSO, and Figure 6(b) shows that from 10000 iterations to the end, HCSO surpasses CSO consistently. The amplitudes of the 30 runs are shown in Figure 7: the OF value fluctuates over quite a large range for CSO but much less so for HCSO; meanwhile, the curve for HCSO lies below that of CSO, meaning smaller OF values for most runs. The above analysis illustrates that in convergence rate, solution quality, and robustness, HCSO behaves much better than CSO. The optimal results are compared with published data in Table 5, where HCSO achieves the best OF_min of 6.2083 s. Apart from OF_min, HCSO also achieves the best OF_max, OF_ave, and Std among the compared methods.
Here, we should mention that there are unfeasible solutions by DE [9], PSO [9], and EM [9], with 9/30 (9 unfeasible runs among 30 in total), 25/30, and 16/30, respectively. Moreover, HHO [36] cannot converge to optimal solutions because the CTI constraints are not satisfied. In fact, this case has a large number of constraints and a large decision space; hence, it is hard to achieve optimal solutions while satisfying all the constraints. However, HCSO has completed this task very well.

Case 3: 8-Bus System with MINLP Formulation.
The system structure of this case is the same as that of Case 2, but the system data are different, given in Refs. [2, 6]. The parameters are set as α_sol = 50, α_iter = 20000, and p_a = 0.1. As shown in Figure 8, within around the first 4500 iterations, CSO and HCSO perform equally overall. However, as the iterations go on, HCSO overtakes CSO in convergence rate and OF value. The reason is that the hierarchical clustering stepsize θ is able to adapt its value according to the solution changes; hence, HCSO provides a more targeted neighborhood search, while CSO with its fixed stepsize θ cannot. The distribution of OF values is shown in Figure 9, from which we can see that among the 30 runs, HCSO shows a stronger ability to approach the minimum OF value and keep its results stable (with an Std as small as 0.2518), while CSO suffers from premature convergence several times, with a relatively larger OF value (Std of 0.6024).
Optimum settings of TDS and PS are presented in Table 6. It should be noted that this case is a highly constrained network with a limited number of discrete PS values, so feasible and optimal solutions cannot be obtained easily. Results obtained by Seeker [6], OJaya [34], GA [2], GA-LP [2], BBO [8], BBO-LP [8], Jaya [36], and HHO [36] are displayed in Table 7. It is noticed that GA [2] and GA-LP [2] are not capable of achieving feasible solutions, which has also been mentioned in Ref. [8]. HHO [36] suffers from the same problem of unfeasible solutions because some P/B pairs cannot satisfy the CTI constraint. Even though the Seeker algorithm [6] obtains feasible solutions, its results are inferior to those of HCSO, whose operating times and CTI values are verified in Table 8. Hence, HCSO is still the best performer in this case among all the compared methods.

Case 4: 9-Bus System with NLP Formulation.
This case has single-end feed and equal impedance for all lines, and a 3φ fault at the midpoint of each line is considered. All the DOCRs have the same CTR of 500:1, and the CTI is selected to be 0.2 s. The TDS is considered continuous in [0.025, 1.2]; PS is also considered continuous, and its maximum and minimum values are given in Ref. [38]. The P/B pairs, the fault current, and the maximum and minimum fault currents are given in Ref. [3]. The algorithm parameters are set as α_sol = 30, α_iter = 10000, p_a = 0.1, α_var = 48 (24 TDS + 24 PS), and |O| = 15. The convergence characteristic is shown in Figure 10, and we can observe that during the whole iteration period, HCSO always converges faster than CSO and obtains a lower OF value as well. This is because HCSO has a stronger searching ability than CSO by adjusting its hierarchical clustering stepsize θ according to the current location of each solution. Moreover, in Figure 11, the superiority of HCSO over CSO becomes even more obvious because, over 30 independent runs, HCSO keeps its OF value within a much smaller range on the y-axis than CSO.
Optimum settings of TDS and PS are presented in Table 9.
The comparisons with published data are presented in Table 10. It is noticed that the best OF_min is obtained by HCSO as 4.8000 s. In addition to OF_min, HCSO also achieves the best OF_max and Std, and the Std is as low as 0.0180, a very obvious advantage over the other compared methods. Table 11 shows the operating times and CTI, and we can see that no selectivity constraint is violated by either CSO or HCSO.

Effects of Egg Abandon Fraction p_a.
Here, to observe the effects of the fraction p_a comprehensively, a series of p_a values from 0.1 to 1.0 in increments of 0.1 is tested over 30 runs on the 9-bus system (Case 4). The other parameters, α_sol, α_iter, and |O|, are held at 30, 300, and 10 during the whole process.
As observed from Figure 12, when p_a = 0.9 and 1.0, the performance is apparently different from the other p_a values because the y-axis range varies as widely as [6, 70] and [30, 110], respectively, while the y-axis for the other p_a values varies within a much smaller range of [5.5, 8.0] overall, which can be observed clearly from the zoomed-in view. This shows that when p_a < 0.9, its role in the final result is very limited. The authors believe this is because in the original CSO, p_a plays an important role in the exploitation stage, whereas in HCSO, the exploitation stage is performed by gradually adjusting the stepsize θ through the hierarchical clustering mechanism, so p_a is no longer as important as in CSO.

Effects of Hierarchical Cluster Number |O|.
Let us again take Case 4 as an example, where |O| is set within [3, 30] in steps of 3, while α_sol = 30, α_iter = 300, and p_a = 0.1 are kept the same for the different |O|. For each |O|, we record the average OF value (OF_ave), as well as the average CPU time, to observe the effects of different |O|.
In Figure 13, let us observe the situation at the two extreme values, 3 and 30 (in red). When |O| = 3, the y-axis shows the worst value, as large as 7.24 s; when |O| = 30, the y-axis shows the second-worst value. This behavior proves that |O| can be neither too small nor too large, but the problem is how to decide the most suitable value. The answer is that there is no "perfect" value for |O|; it depends on the characteristics of the problem to be solved. According to Figure 13, we choose |O| = 15 in Case 4, an integer within [1, α_sol].
However, for CPU time (in blue), the situation changes: the y-axis remains almost unchanged within [0.935, 0.94] regardless of the |O| value. This is because the time complexity of the hierarchical clustering mechanism is O(α_sol³), which is independent of |O|.

Different Hierarchy Distance Measurement.
As mentioned in Section 3.2, there are three ways of measuring the distance between any two hierarchies O_c and O_δ: single linkage (SL), complete linkage (CL), and average linkage (AL). In SL, at each step we merge the O_c and O_δ whose two closest solutions have the smallest distance, as shown in equation (9). In CL, at each step we merge the O_c and O_δ with the smallest maximum pairwise distance, as shown in equation (17). In AL, the distance between O_c and O_δ is defined as the average distance between all pairs of solutions, as shown in equation (18).
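The three linkage measures can be sketched in one function; Euclidean distance between solutions is assumed, consistent with the distance measure of Section 3.2.

```python
import numpy as np

def linkage_distance(cluster_a, cluster_b, mode="SL"):
    """Distance between two hierarchies under single (SL), complete (CL),
    or average (AL) linkage; clusters are (n, dim) arrays of solutions."""
    # pairwise distance matrix between all solutions of the two clusters
    d = np.linalg.norm(cluster_a[:, None, :] - cluster_b[None, :, :], axis=2)
    if mode == "SL":
        return float(d.min())      # closest pair (equation (9))
    if mode == "CL":
        return float(d.max())      # farthest pair (equation (17))
    return float(d.mean())         # average over all pairs (equation (18))
```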
Here, we again take Case 4 as an example, where α_sol = 30, α_iter = 300, p_a = 0.1, and |O| = 15 are kept the same for SL, CL, and AL. For each measurement, OF_ave is recorded over 30 independent runs. From the two randomly generated simulation results in Figures 14(a) and 14(b), we can observe that, from a statistical point of view, there is no significant difference between (a) and (b). In each figure, whether we choose SL, CL, or AL has little impact on the statistical results.

Conclusion
In this article, a hierarchical clustering mechanism is incorporated into CSO to solve the optimal setting problem of DOCRs. To observe the sensitivity of HCSO, its parameters, namely the hierarchical clustering stepsize θ, the egg abandon fraction p_a, and the hierarchy number |O|, are analyzed comprehensively on benchmark functions. The conclusion is that in the DOCRs optimal setting cases, HCSO is more successful than CSO in reducing the objective function value and in maintaining robustness. Taking Case 2 as an example, compared with CSO, the minimum objective function value is reduced by 10.93% and the Std value by 54.93% with HCSO. In future works, more improved CSO variants will be studied and compared with the proposed HCSO in terms of adaptive parameter change, convergence rate, and solution stability. Furthermore, larger test systems for the DOCRs optimal setting problem, such as 15-bus, 30-bus, and 42-bus systems, will be tested in subsequent research to further expand HCSO's applications in this field.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this article.