The Comparison of the Recently Proposed Harris Hawks Optimization and Jaya Optimization in Solving the Directional Overcurrent Relays Coordination Problem

In this paper, the recently proposed Harris Hawks Optimizer (HHO) is used to solve the directional overcurrent relays (DOCRs) coordination problem. To the best of the authors' knowledge, this is the first time HHO has been applied to the DOCRs coordination problem. The main inspiration of HHO is the cooperative behavior and chasing style of Harris' hawks, which attack from different directions depending on the dynamic nature of the scenario and the escaping patterns of the prey. To test its performance in solving the DOCRs coordination problem, HHO is applied to 3-bus, 4-bus, 8-bus, and 9-bus systems, which are formulated as three kinds of optimization models, linear programming (LP), nonlinear programming (NLP), and mixed-integer nonlinear programming (MINLP), according to the nature of the design variables. Meanwhile, another recently proposed optimization algorithm named Jaya is also adopted to solve the same problem, and the results are compared with HHO in terms of objective function value, convergence rate, robustness, and computational efficiency. The comparisons show that the robustness and consistency of HHO are relatively better than those of Jaya, while Jaya provides a faster convergence rate with less CPU time and an occasionally more competitive objective function value than HHO.


Introduction
Relay coordination is considered of great importance for the operation of power systems. The optimal coordination of relays is supposed to guarantee that faults in the protected zones are cleared first by the corresponding primary relays and, if they fail, that the corresponding backup relays act after a coordination time interval (CTI). With the development of relays, directional overcurrent relays (DOCRs) have been applied as economical alternatives for the primary and backup protection of power systems. The operating times of DOCRs are decided by two design variables, the time dial setting (TDS) and the pickup current setting (IP) or plug setting (PS). Optimal coordination between the operating times maintains the reliability of the overall protection system. Mathematically speaking, this coordination problem is a highly constrained optimization problem, which can be modeled in three ways according to the nature of the design variables. Firstly, when the DOCRs coordination problem is formulated as an LP problem, the value of IP or PS is assumed to be fixed; hence, the operating time of each relay (T_i) is calculated as a linear function of TDS. Even though LP is a simple formulation, it requires experts for setting the initial values of IP or PS, and it easily gets stuck in local minima [1].
Secondly, when formulated as an NLP problem, both TDS and IP are considered as design variables and calculated to optimize the relay operating time (T_i), where TDS and IP take continuous values. With NLP, the total operating time of the primary relays can be reduced and the coordination can be maintained well. Thirdly, when formulated as an MINLP problem, both TDS and PS are calculated and optimized. The difference between NLP and MINLP is that the parameter PS takes discrete values in MINLP, while IP takes continuous values in NLP.
Regardless of the variety of these algorithms, exploration (diversification) and exploitation (intensification) are two common phases that should always be considered. In the exploration phase, the optimizer should spread its randomized solutions as widely as possible to thoroughly explore the whole search space. In the exploitation phase, only the solutions with better fitness values are searched further in their neighborhoods to intensify the search quality. These two phases should be balanced reasonably; otherwise, the optimizer would be trapped in local optima or suffer from premature convergence.
Recently, a number of nature-inspired modern algorithms were proposed by Mirjalili to effectively balance the exploration and exploitation phases, such as the grey wolf optimizer (GWO) [19], whale optimization algorithm (WOA) [20], ant lion optimizer (ALO) [21], and moth-flame optimization (MFO) [22]. They have achieved good results. Then, in 2019, a new nature-inspired technique named the Harris Hawks Optimizer (HHO) was proposed by Heidari et al. [23], with the same purpose of striking a fine balance between exploration and exploitation. The main idea of HHO is inspired by the cooperative behaviors of one of the most intelligent birds, Harris's hawk, in hunting escaping prey (rabbits in most cases). Different mathematical models are constructed to mimic the different stages of a hunt used by Harris's hawks; then, a new stochastic metaheuristic algorithm is designed based on the constructed models to tackle various optimization problems.
Moreover, another algorithm named Jaya was proposed by Rao in 2016 [24]. The most significant benefit of Jaya is that it is totally free from algorithm-specific parameters; only two common parameters are required, the maximum number of iterations (Max_iter) and the population size (N_pop).
In this paper, the Jaya algorithm is compared with the HHO algorithm in solving the DOCRs coordination problem, with the purpose of testing the advantages and disadvantages of each, according to the objective function value, convergence rate, robustness, and computational efficiency. The main contributions of this study can be summarized as follows: (i) the DOCRs coordination problem is experimented with under 3 kinds of formulations, namely LP, NLP, and MINLP; (ii) the recently proposed HHO and Jaya are used to solve the DOCRs coordination problem; (iii) HHO and Jaya are implemented on 3-bus, 4-bus, 8-bus, and 9-bus test systems, and the problems are solved effectively in most cases; (iv) the result comparisons show that the robustness and consistency of HHO are relatively better than those of Jaya, while Jaya provides a faster convergence rate and an occasionally more competitive objective function value than HHO.
The rest of this paper is organized as follows. In Section 2, the formulation of the DOCRs coordination problem is illustrated. Related work on HHO and Jaya is described in Sections 3 and 4. Experimental results and comparisons are presented in Section 5. Finally, the discussion and conclusion are given in Section 6.

Objective Function.
The coordination problem of DOCRs in a ring-fed distribution system can be formulated as an optimization problem, where the objective function is the sum of the operating times (T_i) of the primary relays in the system, as expressed below:

min OF = Σ_{i=1}^{N} W_i × T_i (1)

where N is the total number of primary relays, W_i is the weight assigned to relay R_i, which is equal to 1 for all the relays in this study, and T_i is the operating time of relay R_i, calculated under the following 3 kinds of formulations: LP, NLP, and MINLP.

LP Formulation.
When the problem is formulated as an LP problem, T_i is calculated by equations (2) and (3):

T_i = (α × TDS_i) / ((IF_i / IP_i^constant)^β − c) + L (2)

IP_i^constant = PS_i^constant × CT_i (3)

where α, β, c, and L are constant parameters which, according to the IEC curves, are assumed to be 0.14, 0.02, 1.0, and 0, TDS_i is the time dial setting of relay R_i, IF_i is the fault current flowing through relay R_i for a particular fault located in a particular zone, and IP_i^constant is the pickup current. Since PS_i^constant stands for the plug setting, which is a known constant, and CT_i stands for the CT ratio, the pickup current IP_i^constant is a known constant by equation (3). Finally, the value of T_i is linearly related to the value of TDS_i; therefore, the problem is simplified to a linear programming (LP) problem.
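As a concrete illustration, the LP operating-time calculation of equations (2) and (3) can be sketched in Python; this is a minimal sketch, and the function names and sample current values below are ours, not from the paper:

```python
import numpy as np

def relay_time(tds, i_f, i_p, alpha=0.14, beta=0.02, c=1.0, L=0.0):
    """IEC inverse-time characteristic: T = alpha*TDS / ((IF/IP)**beta - c) + L."""
    return alpha * tds / ((i_f / i_p) ** beta - c) + L

def total_time_lp(tds_vec, i_f_vec, i_p_vec):
    """LP objective: sum of primary-relay operating times with all weights W_i = 1."""
    tds_vec, i_f_vec, i_p_vec = map(np.asarray, (tds_vec, i_f_vec, i_p_vec))
    return float(np.sum(relay_time(tds_vec, i_f_vec, i_p_vec)))
```

With the illustrative values TDS = 0.1, IF = 2000 A, and IP = 200 A, the IEC standard-inverse curve gives an operating time of roughly 0.3 s, and the objective is simply the sum of such times over the primary relays.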

NLP Formulation.
When formulated as an NLP problem, the design variables are TDS and IP, and the operating time is calculated by

T_i = (α × TDS_i) / ((IF_i / IP_i^continuous)^β − c) + L (4)

where the values of α, β, c, and L are the same as in the LP formulation. TDS_i and IF_i also have the same meaning as in the LP formulation. The difference is that IP_i^continuous in the NLP formulation is a continuous variable, while IP_i^constant in the LP formulation is a known constant.

MINLP Formulation.
For the MINLP formulation, the design variables are TDS and PS, where PS takes discrete values. Therefore, the operating time is calculated by

T_i = (α × TDS_i) / ((IF_i / (PS_i^discrete × CT_i))^β − c) + L (5)

where α, β, c, L, TDS_i, and IF_i have the same meaning as in the LP and NLP formulations. The difference is that the continuous variable IP_i^continuous in equation (4) is replaced by PS_i^discrete × CT_i in equation (5), where PS_i^discrete takes discrete values [37, 38].
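The practical difference in the MINLP formulation is that PS must land on a discrete grid. A minimal sketch follows (the helper names are ours; the 0.5-step grid from 1.5 to 5.0 matches the 3-bus case used later in Section 5):

```python
import numpy as np

def snap_ps(ps_cont, ps_min=1.5, ps_max=5.0, step=0.5):
    """Round a continuous PS value onto the discrete grid {1.5, 2.0, ..., 5.0}."""
    grid = np.arange(ps_min, ps_max + step / 2, step)
    return float(grid[np.argmin(np.abs(grid - ps_cont))])

def relay_time_minlp(tds, i_f, ps, ct, alpha=0.14, beta=0.02, c=1.0, L=0.0):
    """Operating time with IP = PS_discrete * CT, as in equation (5)."""
    i_p = snap_ps(ps) * ct
    return alpha * tds / ((i_f / i_p) ** beta - c) + L
```

Snapping a candidate PS onto the grid inside the fitness evaluation is one common way to let a continuous optimizer such as HHO or Jaya handle the discrete variables.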

Constrained Functions
2.2.1. Relay Coordination Constraints. In a power system, when a fault happens, it is sensed by the primary and backup relays simultaneously. To avoid maloperation, the backup relay should take over the tripping action only after the primary relay fails to operate. Therefore, the operating time of the backup relay (T_backup) must be no less than the operating time of the primary relay (T_primary) plus the coordination time interval (CTI). This is necessary for maintaining the selectivity of primary and backup relays.
This relay coordination constraint can be stated as

T_backup − T_primary ≥ CTI (6)

The value of CTI varies from 0.30 s to 0.40 s for electromechanical relays, while it varies from 0.10 s to 0.20 s for numerical relays.

Relay Characteristic Constraints.
The relay characteristic constraints are the physical and operational bounds of the relay parameters, as follows:

T_i^min ≤ T_i ≤ T_i^max (7)
TDS_i^min ≤ TDS_i ≤ TDS_i^max (8)
IP_i^min ≤ IP_i ≤ IP_i^max (9)
PS_i^min ≤ PS_i ≤ PS_i^max (10)

where T_i^min and T_i^max in equation (7) are the minimum and maximum operating times of relay R_i for a fault at any point, TDS_i^min and TDS_i^max in equation (8) are the minimum and maximum values of TDS_i for relay R_i, IP_i^min and IP_i^max in equation (9) are the minimum and maximum values of IP_i for relay R_i, and PS_i^min and PS_i^max in equation (10) are the minimum and maximum values of PS_i for relay R_i.

Constraint Handling.
In this study, the penalty method is used to handle the constrained functions. It consists of adding a penalty term to the objective function to penalize the unfeasible solutions that violate the constraints. A comprehensive survey of the most popular penalty functions is given in [39].
In the DOCRs coordination problem, the coordination constraints and the characteristic constraints are included in the objective function using the penalty method, as shown in equation (11). If any constraint is violated, a penalty value is added to the objective function. Since the objective function is of the minimization type, a large number is taken as the penalty factor:

min OF = Σ_{i=1}^{N} W_i × T_i + Σ_{k=1}^{M} Penalty(k) (11)

where N is the number of primary relays, M is the number of relay pairs, and the penalty term Penalty(k) is given by

Penalty(k) = ξ, if the k-th constraint is violated; 0, otherwise (12)

where ξ is the penalty factor, used to make the value of the objective function more significant during minimization. ξ is usually given a relatively high value, with the aim of achieving zero penalties in optimal solutions [40].
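The penalty scheme can be sketched as follows. This is a simplified illustration in which a flat penalty ξ is added per violated CTI constraint; the relay IDs and the data layout are our own, not from the paper:

```python
def penalized_objective(op_times, pb_pairs, cti=0.2, xi=100.0):
    """Penalty-method objective in the spirit of equations (11)-(12):
    sum of primary operating times plus xi for every violated CTI constraint.

    op_times: dict mapping relay id -> operating time (s)
    pb_pairs: list of (primary_id, backup_id) tuples
    """
    total = sum(op_times.values())
    for primary, backup in pb_pairs:
        if op_times[backup] - op_times[primary] < cti:  # selectivity violated
            total += xi
    return total
```

A feasible solution incurs zero penalty, so minimizing the penalized objective drives the search toward solutions that satisfy equation (6) for every primary/backup pair.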

Harris Hawks Optimization (HHO)
In this section, the exploratory and exploitative phases of Harris Hawks Optimization (HHO) are modeled. HHO is inspired by the prey-exploring, surprise-pounce, and different attacking strategies of Harris hawks. It is a population-based, gradient-free optimization technique, which can be applied to any optimization problem subject to proper formulation [23].

Exploration Phase.
In HHO, Harris' hawks represent the candidate solutions and the intended prey represents the best candidate solution in every iteration.
There are two strategies for Harris' hawks to perch and detect the prey: first, perching based on the positions of other family members and the prey; second, perching on random locations such as random tall trees. The strategies are modeled as follows:

X(t + 1) = X_rand(t) − r_1 |X_rand(t) − 2 r_2 X(t)|, if q ≥ 0.5;
X(t + 1) = (X_rabbit(t) − X_m(t)) − r_3 (LB + r_4 (UB − LB)), if q < 0.5 (13)

where X(t) is the current position of the hawks, X(t + 1) is the updated position of the hawks, X_rabbit(t) is the current position of the prey, r_1, r_2, r_3, r_4, and q are random numbers inside (0, 1), LB and UB are the lower and upper bounds of the variables, X_rand(t) is a randomly selected hawk from the population, and X_m(t) is the average position of the hawks, which is calculated by

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t) (14)

where N is the total number of hawks and X_i(t) is the location of each hawk in iteration t.
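Equations (13) and (14) can be sketched for a single hawk as follows (the variable names are ours; clipping the result back into [LB, UB] is left to the caller):

```python
import numpy as np

def hho_explore(x_i, X, x_rabbit, lb, ub, rng=np.random.default_rng(0)):
    """Exploration update of one hawk, equation (13): with probability q >= 0.5
    perch near a randomly selected hawk, otherwise perch relative to the prey
    and the mean hawk position of equation (14)."""
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:
        x_rand = X[rng.integers(len(X))]      # randomly selected hawk
        return x_rand - r1 * np.abs(x_rand - 2.0 * r2 * x_i)
    x_m = X.mean(axis=0)                      # mean position, equation (14)
    return (x_rabbit - x_m) - r3 * (lb + r4 * (ub - lb))
```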

Transition from Exploration to Exploitation.
The transition from exploration to exploitation is based on the escaping energy of the prey, which decreases during the escaping behavior. In HHO, the energy of the prey is modeled as

E = 2 E_0 (1 − t/T) (15)

where E is the escaping energy of the prey, E_0 is the initial energy, which randomly changes within (−1, 1) in each iteration, and T is the maximum number of iterations. When E_0 decreases from 0 to −1, the rabbit is physically flagging; when E_0 increases from 0 to 1, the rabbit is strengthening. The magnitude of E decreases as t increases. When |E| ≥ 1, the hawks search different regions to explore for the rabbit, which constitutes the exploration phase; when |E| < 1, the algorithm performs the exploitation phase in the neighborhood of the solutions.
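A minimal sketch of equation (15), with E_0 drawn uniformly from (−1, 1) each time, as described above (the function name is ours):

```python
import numpy as np

def escaping_energy(t, t_max, rng=np.random.default_rng(0)):
    """Equation (15): E = 2*E0*(1 - t/T), with E0 drawn uniformly from (-1, 1).
    |E| >= 1 triggers exploration; |E| < 1 triggers exploitation."""
    e0 = rng.uniform(-1.0, 1.0)
    return 2.0 * e0 * (1.0 - t / t_max)
```

Because the envelope 2(1 − t/T) shrinks linearly to zero, the algorithm necessarily drifts from exploration toward exploitation as the iterations proceed.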

Exploitation Phase.
The prey always tries to escape from threatening situations. Based on the escaping behaviors of the prey, the hawks perform four different chasing strategies to catch it.
Suppose that r is the chance of the prey successfully escaping (r < 0.5) or not (r ≥ 0.5). To catch the prey, the hawks encircle it softly or hard, according to the remaining energy of the prey. The parameter E models this process: when |E| ≥ 0.5, the soft besiege happens; when |E| < 0.5, the hard besiege occurs.

Soft Besiege.
When r ≥ 0.5 and |E| ≥ 0.5, the prey still has enough energy to escape, so the hawks encircle it softly to exhaust it before performing the surprise pounce. This behavior is modeled by the following rules:

X(t + 1) = ΔX(t) − E |J X_rabbit(t) − X(t)| (16)

ΔX(t) = X_rabbit(t) − X(t) (17)

where ΔX(t) is the position difference between the prey and the hawks in iteration t, r_5 is a random number within (0, 1), and J = 2(1 − r_5) is the random jump strength of the prey during the escaping procedure.
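Equations (16) and (17) translate directly into code; a sketch with our own names:

```python
import numpy as np

def soft_besiege(x, x_rabbit, E, rng=np.random.default_rng(0)):
    """Equations (16)-(17): X(t+1) = dX - E*|J*X_rabbit - X|, with
    dX = X_rabbit - X and jump strength J = 2*(1 - r5)."""
    r5 = rng.random()
    J = 2.0 * (1.0 - r5)          # random jump strength of the prey
    dx = x_rabbit - x             # equation (17)
    return dx - E * np.abs(J * x_rabbit - x)
```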

Hard Besiege.
When r ≥ 0.5 and |E| < 0.5, the prey is exhausted, with low escaping energy. So, the hawks encircle the prey hard to perform the surprise pounce. In this situation, the current positions are updated using

X(t + 1) = X_rabbit(t) − E |ΔX(t)| (18)

Soft Besiege with Progressive Rapid Dives.
When |E| ≥ 0.5 and r < 0.5, the prey has enough energy to escape successfully, so a soft besiege is constructed. To perform the soft besiege, we suppose the hawks can evaluate their next movement based on

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)| (19)

Then, they compare this movement with the previous dive to detect whether the previous dive was better. If not, they dive based on the levy flight (LF) pattern using

Z = Y + S × LF(D) (20)

where D is the problem dimension, S is a random vector of size 1 × D, and LF is the levy flight function calculated by

LF(x) = 0.01 × (u × σ) / |v|^(1/β) (21)

where u and v are random values within (0, 1), β equals 1.5, and σ is calculated by

σ = (Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2)))^(1/β) (22)

Finally, the updating strategy of the hawks in this phase is

X(t + 1) = Y, if F(Y) < F(X(t)); Z, if F(Z) < F(X(t)) (23)

It is to be noted that, among all search agents, only the better position, Y or Z, is selected for the next iteration.
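The levy flight of equations (21) and (22) can be sketched as follows. Following the text, u and v are drawn uniformly from (0, 1); note that some HHO implementations draw them from a normal distribution instead:

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=np.random.default_rng(0)):
    """Levy flight step, equations (21)-(22): LF = 0.01 * u * sigma / |v|**(1/beta)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)   # per the paper's description, uniform in (0, 1)
    v = rng.random(dim)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)
```

The heavy-tailed division by |v|^(1/β) occasionally produces long jumps, which is what lets the diving hawks escape local regions during this phase.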

Hard Besiege with Progressive Rapid Dives. When |E| < 0.5 and r < 0.5, the rabbit does not have enough energy to escape, and a hard besiege is constructed. On the prey side, this step is similar to the soft besiege, but the hawks try to decrease the distance between themselves and the escaping prey. So, the following rule is performed with different Y and Z:

X(t + 1) = Y, if F(Y) < F(X(t)); Z, if F(Z) < F(X(t)) (24)

where Y and Z are obtained using equations (25) and (26):

Y = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)| (25)

Z = Y + S × LF(D) (26)

where X_m(t) is calculated by equation (14).

Pseudocode of HHO.
The pseudocode of the HHO algorithm is reported in Algorithm 1.

Application of HHO Algorithm.
The main procedures of using HHO to solve the DOCRs coordination problem are illustrated in further detail below: Step 1. Set parameters. Three common parameters are initialized: the number of design variables (N_var), the population size (N_pop), and the maximum iteration number (Max_iter).
Step 2. Initialize the locations X of Harris's hawks in the form of

X_{i,j} = X_j^min + rand(0, 1) × (X_j^max − X_j^min) (27)

where X_{i,j} is the i-th solution of the j-th variable, with i ∈ [1, N_pop] and j ∈ [1, N_var]. X_j^min and X_j^max are the lower and upper limits of the j-th variable, given by the relay characteristic constraints of equations (8) and (9) or (10) in Section 2.2.
Step 3. The fitness value of Harris's hawks F(X_{i,j}) is calculated, which is problem dependent. In Section 5, we use equation (2) as the objective function for the LP formulation, equation (4) for the NLP formulation, and equation (5) for the MINLP formulation.
Step 4. Check the constraints according to Section 2.2.
Step 5. Update the location of the prey X rabbit and its energy F(X rabbit ).
Step 6. Calculate the escaping energy E of the prey by equation (15).
Initialize N_var, N_pop, and Max_iter;
Generate initial population X;
Evaluate the fitness value F(X);
Set t = 1;
while t < Max_iter do
  Set X_rabbit as the prey (best location);
  for each hawk (X_i) do
    Update initial energy E_0 and jump strength J;
    Update E by equation (15);
    if |E| ≥ 1 then
      Update the location vector using equation (13);
    end
    if |E| < 1 then
      if r ≥ 0.5 and |E| ≥ 0.5 then
        Update the location vector using equation (16);
      end
      if r ≥ 0.5 and |E| < 0.5 then
        Update the location vector using equation (18);
      end
      if r < 0.5 and |E| ≥ 0.5 then
        Update the location vector using equation (19);
      end
      if r < 0.5 and |E| < 0.5 then
        Update the location vector using equation (24);
      end
      Keep the old value if it is better;
    end
  end
  t = t + 1;
end
ALGORITHM 1: HHO.

Step 7. Update the location X_i of Harris's hawks according to the value of E, based on the following conditions: if |E| ≥ 1, perform the exploration phase by equation (13); if |E| < 1, perform the exploitation phase by one of 4 strategies, according to the behavior of the rabbit, as follows: if r ≥ 0.5 and |E| ≥ 0.5, perform soft besiege by equation (16); if r ≥ 0.5 and |E| < 0.5, perform hard besiege by equation (18); if r < 0.5 and |E| ≥ 0.5, perform soft besiege with progressive rapid dives by equation (19); if r < 0.5 and |E| < 0.5, perform hard besiege with progressive rapid dives by equation (24).
Step 8. The updated fitness value F[X_i(t + 1)] is calculated in the same way as in Step 3.
Step 9. Compare the present fitness value with the former fitness value and update as follows:

X_i(t + 1) = X_i(t + 1), if F[X_i(t + 1)] < F[X_i(t)]; X_i(t), otherwise (28)

Step 10. Check the stopping condition. If Max_iter is reached, stop the loop and report the best solution; otherwise, set t = t + 1 and go to Step 4 to continue the iteration.

Jaya Algorithm
The Jaya algorithm is a recently developed yet powerful heuristic algorithm for solving constrained and unconstrained optimization problems [24]. Compared with most other heuristic algorithms, which require algorithm-specific parameters, Jaya is totally free from algorithm-specific parameters; only two common parameters, the maximum number of iterations (Max_iter) and the population size (N_pop), are required, whose values can be initialized easily. In this section, the working principle of Jaya and its application to the DOCRs coordination problem are explained.

Jaya Algorithm.
Suppose the objective function F(X) is to be minimized or maximized. Let the population size be N_pop with index u ∈ [1, N_pop], let the total number of design variables be N_var with index v ∈ [1, N_var], and let the maximum iteration number be Max_iter with index w ∈ [1, Max_iter]. Then, let X_{u,v,w} be the value of the u-th candidate solution for the v-th variable during the w-th iteration; the new modified value is

X_{u,v,w}^new = X_{u,v,w} + r_1 × (X_{best,v,w} − |X_{u,v,w}|) − r_2 × (X_{worst,v,w} − |X_{u,v,w}|) (29)

where X_{u,v,w}^new is the updated value of X_{u,v,w}, r_1 and r_2 are two uniformly generated random numbers in [0, 1], X_{best,v,w} belongs to the candidate with the best fitness value, and X_{worst,v,w} belongs to the candidate with the worst fitness value.
It should be explained that, in equation (29), the first term, X_{u,v,w}, represents the original position, which provides the necessary starting point for each candidate (as a moving particle) to roam in the fitness space. The second term, r_1 × (X_{best,v,w} − |X_{u,v,w}|), encourages the candidate to fly toward the best position found so far. The third term, −r_2 × (X_{worst,v,w} − |X_{u,v,w}|), represents the tendency of the candidate to move away from the worst position found so far. The pseudocode of Jaya is shown in Algorithm 2.
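Equation (29), applied to a whole population at once, can be sketched as follows (a vectorized illustration with our own names; clipping to the variable bounds is added, as is commonly done):

```python
import numpy as np

def jaya_update(X, fitness, lb, ub, rng=np.random.default_rng(0)):
    """Equation (29): X_new = X + r1*(X_best - |X|) - r2*(X_worst - |X|),
    applied element-wise, then clipped to the variable bounds."""
    best = X[np.argmin(fitness)]    # candidate with the best fitness
    worst = X[np.argmax(fitness)]   # candidate with the worst fitness
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    return np.clip(X_new, lb, ub)
```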

Application of Jaya Algorithm.
According to the description above, the Jaya algorithm is implemented on the DOCRs coordination problem. As shown in Algorithm 2, it starts by setting values for the common parameters. Then, the initial population is created. After this, each candidate is updated by the Jaya function. Then, the current fitness value is compared with the previous fitness value to keep the better one. At last, if the maximum iteration number is reached, the iteration stops and the best solution is recorded; otherwise, the algorithm goes to the next iteration. The main procedures of using Jaya to solve the DOCRs coordination problem are illustrated in further detail below: Step 1. Set parameters. Three common parameters are initialized: the number of design variables (N_var), the population size (N_pop), and the maximum iteration number (Max_iter).
Step 2. Initialization: the initial population X is generated in the form of

X_{u,v} = X_v^min + rand(0, 1) × (X_v^max − X_v^min) (30)

where X_{u,v} is the u-th candidate solution of the v-th variable; the iteration index w can be ignored in the initialization step. X_v^min and X_v^max are the lower and upper limits of the v-th variable, given by the relay characteristic constraints of equations (8) and (9) or (10) in Section 2.2.
Step 3. Evaluation: the fitness value F(X_{u,v,w}) is calculated by the objective function, in the same way as Step 3 in Section 3.5.
Step 4. Check the constraints according to Section 2.2.
Step 5. Identify X best,v,w and X worst,v,w according to the best and worst value within F(X).
Step 6. Update the population. The updated population X_{u,v,w}^new is calculated by equation (29).
Step 7. Evaluation: the updated fitness value F(X_{u,v,w}^new) is calculated in the same way as in Step 3.
Step 8. Comparison: compare the present fitness value with its former fitness value and keep the better one.
Step 9. Check the stopping condition. If Max_iter is reached, stop the loop and report the best solution; otherwise, set w = w + 1 and go to Step 4 to continue the iteration.
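Putting Steps 1-9 together, a compact Jaya loop might look like this (an illustrative sketch run on a toy sphere function rather than the DOCRs objective; all names are ours):

```python
import numpy as np

def jaya(objective, lb, ub, n_pop=20, max_iter=200, seed=0):
    """Minimal Jaya loop following Steps 1-9: generate the population, update it
    via equation (29), keep the better of old/new (greedy selection), repeat."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((n_pop, lb.size)) * (ub - lb)       # Step 2
    F = np.apply_along_axis(objective, 1, X)                # Step 3
    for _ in range(max_iter):
        best, worst = X[np.argmin(F)], X[np.argmax(F)]      # Step 5
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        X_new = np.clip(X + r1 * (best - np.abs(X))
                        - r2 * (worst - np.abs(X)), lb, ub)  # Step 6, eq. (29)
        F_new = np.apply_along_axis(objective, 1, X_new)    # Step 7
        improved = F_new < F                                # Step 8: keep better
        X[improved], F[improved] = X_new[improved], F_new[improved]
    return X[np.argmin(F)], float(F.min())

# toy usage: minimize the sphere function on [-5, 5]^3
x_star, f_star = jaya(lambda x: float(np.sum(x**2)), [-5] * 3, [5] * 3)
```

Greedy selection in Step 8 guarantees the best fitness never worsens between iterations, which is one reason Jaya converges so quickly in the cases reported below.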

Numerical Experiments
To evaluate the effectiveness of HHO and Jaya in solving the DOCRs coordination problem, four test systems, 3-bus, 4-bus, 8-bus, and 9-bus (one LP case, four NLP cases, and two MINLP cases), are investigated in this section, where each system has its own set of design variables. All the cases are developed in MATLAB (version R2018b) and executed on a computer running Windows 7 with an Intel(R) Core(TM) i5-6500 CPU at 3.20 GHz and 8 GB of RAM. In the following tables, "Pop" denotes the population size, "Iter" the maximum iteration number, "Time" the average CPU time spent on one independent run, "Std" the standard deviation of the objective function value over 20 independent runs, and "OF" the best objective function value; "Feasible" indicates whether the solution satisfies all the constraints, where "✓" means "yes" and "✕" means "no." The Jaya algorithm has the same control parameters: N_pop, Max_iter, and N_var. To make a fair comparison with the HHO algorithm in solving the DOCRs coordination problem under the same conditions, these parameters are set to the same values as in the HHO algorithm in each case in the following sections.

3-Bus System.
This 3-bus system consists of 3 buses, 3 generators, 3 lines, and 6 relays, as shown in Figure 1. A 3ϕ fault at the midpoint of each line is considered. The CT ratios, the primary/backup (P/B) relay pairs, and the 3ϕ fault current of each line are given in Table 1. The time dial setting (TDS) is continuous in [0.1, 1.1], the coordination time interval (CTI) is equal to 0.2 s, and the plug setting (PS) lies in [1.5, 5.0]. All the relays have the IDMT characteristic. This system is experimented with separately under the LP, NLP, and MINLP formulations.

Case 1: 3-Bus System with LP Formulation.
In this case, the DOCRs coordination problem is formulated as an LP problem. The common parameters for HHO and Jaya are both set as follows: N_pop = 5, Max_iter = 20, and N_var = 6. IF, PS, and CT are predefined fixed constants. The only variable is TDS, which is a continuous value to be optimized. Data for this case can be found in Table 1. The optimum settings of TDS obtained by Jaya and HHO are given in Table 2. The simplex method [1], LP using MATLAB [7], PSO [7], and the seeker algorithm (SA) [16] are also presented for comparison with Jaya and HHO.

Initialize N_var, N_pop, and Max_iter;
Generate initial population X;
Evaluate the fitness value F(X);
Set w = 1;
while w < Max_iter do
  Identify X_best,v,w and X_worst,v,w according to the best and worst values within F(X);
  for each candidate (X_u) do
    Update the candidate using equation (29);
    Evaluate the updated fitness value;
    if the new value is better then
      Accept the new value;
    else
      Keep the old value;
    end
  end
  w = w + 1;
end
ALGORITHM 2: Jaya.

It is obvious from Table 2 that all the compared algorithms give the same objective function value of 1.9258 s, but Jaya and HHO are able to give a better value of 1.7804 s. It is to be noted that the standard deviation of HHO reaches 0, which means every one of its independent runs reaches the global optimum. Meanwhile, the average CPU time of Jaya is 0.0218 s, which is an extraordinarily short time compared with LP, PSO, Seeker, and HHO.
To observe the convergence characteristics of Jaya and HHO more visually, Figure 2 depicts one convergence curve from the 20 independent runs. We can see that Jaya shows a faster convergence rate than HHO at the beginning and reaches its best value within fewer than 3 iterations. Figure 3 provides the value distributions of OF. We can observe that most of the runs are able to reach optimum results with the LP formulation, but there exist some outliers with extreme values by Jaya. Those extreme values illustrate that, in this case, Jaya suffers from falling into local optima far away from the global optimum, while HHO achieves the global optimum in all independent runs with an Std equal to 0. Table 3 shows the operating times of the primary and backup relays; we can see that the CTI constraints are satisfied in every P/B pair by both Jaya and HHO.

Case 2: 3-Bus System with NLP Formulation. The optimum settings obtained by Jaya and HHO are given in Table 4. GSO [18], IGSO [18], and analytic [41] algorithms are provided for comparison. It is observed from Table 4 that IGSO achieves the best OF value of 1.2918 s, and both Jaya and HHO perform worse than IGSO. From Figure 4, we can observe that, similar to Case 1, Jaya shows a faster convergence rate than HHO. Figure 5 shows the value distribution of OF over 20 runs, from which we can see that the stability of Jaya (with an Std of 2.2628) is not as good as that of HHO (with an Std of 0.0995). The authors think this is the price Jaya has to pay for its very fast convergence rate. Table 5 illustrates that the CTI constraints are satisfied in all P/B pairs by both Jaya and HHO.

Case 3: 3-Bus System with MINLP Formulation.
In this case, the parameters for HHO and Jaya are set as follows: N_pop = 20, Max_iter = 50, and N_var = 12. TDS is continuous, but PS is discrete in steps of 0.5 from 1.5 to 5.0, which is different from Case 2. System data are obtained from Table 1.
We can observe from Table 6 that the minimum value of OF is achieved by HHO at 1.4984 s, followed by Jaya at 1.5477 s. The average time spent on one independent run by Jaya and HHO is 0.0286 s and 0.1228 s, respectively, which are very short times compared with the other algorithms. In Figure 6, Jaya reaches its optimum in fewer than 15 iterations, while HHO needs about 25 iterations, which means Jaya converges faster than HHO. From Figure 7, we can observe that the OF value varies over a large range for Jaya (with an Std of 1.7432), while HHO maintains the OF value very well (with an Std of 0.1030). Table 7 illustrates that the CTI constraints are satisfied in all P/B pairs by both Jaya and HHO.

8-Bus System.
This 8-bus system is composed of 8 buses, 2 generators, 2 transformers, 7 lines, and 14 relays, as shown in Figure 8. The near-end 3ϕ fault is considered. The minimum and maximum values of TDS are set to 0.1 and 1.1, while those of PS are set to 0.5 and 2.5. The CTI is selected as 0.3 s. This system is experimented with under the NLP and MINLP formulations.

Case 4: 8-Bus System with NLP Formulation.
The coordination problem in this case is formulated as an NLP problem. The parameters for HHO and Jaya are set as follows: N_pop = 50, Max_iter = 1000, and N_var = 28. TDS and IP are continuous values. The CT ratio and 3ϕ short-circuit current for each P/B pair are given in Table 8. The optimum settings of TDS and IP obtained by Jaya and HHO are displayed in Table 9. The results are compared with EFO [38], MEFO BH [38], EM [38], HS [38], and PSO [38].
From Table 9, it is found that Jaya can converge to its global optimum in this case, but HHO cannot. The convergence curve of Jaya (HHO is omitted because its solution is unfeasible) is shown in Figure 9. It can be seen that Jaya reaches the optimal setting within no more than 200 iterations. The distribution of OF values is shown in Figure 10; the values fluctuate over quite large ranges, which means that the robustness of Jaya still needs to be improved further. The operating times and CTI are given in Table 10. It is seen that the constraints are all respected by Jaya. But for HHO, there are several CTI values less than 0.3 s (highlighted in bold), which violate the CTI constraint. This illustrates that the solution obtained by Jaya is feasible but that obtained by HHO is unfeasible.

Case 5: 8-Bus System with MINLP Formulation. The optimized results are given in Table 11, and they are compared with Seeker [16], GA [3], GA-LP [3], BBO [14], and BBO-LP [14]. Here, we should note that the system data in this case are different from those of Case 4, as shown in Table 12, even though both are 8-bus systems.
Because this case is a highly constrained network with a limited number of discrete PS values, it cannot obtain feasible and optimal solutions easily. As shown in Table 11, GA and GA-LP are not capable of achieving feasible solutions, which is also mentioned in [14]. HHO suffers the same problem, but Jaya is able to obtain its optimal result of 10.2325 s. This illustrates that, in the 8-bus system, no matter whether PS is continuous or discrete, HHO lacks the ability to find feasible solutions, but Jaya is always able to reach its optimum. The reason was explained in Case 4, so it is not repeated here. The convergence behaviour of Jaya is represented in Figure 11, and the distribution of the OF value is shown in Figure 12. Table 13 shows the operating times and CTI.

Case 6: 9-Bus System.
This case is modeled as an NLP problem. It is a single-end-fed system with equal impedances for all of the lines, as shown in Figure 13. The parameters for HHO and Jaya are set for this case, and the pickup current of each relay is bounded by

OLF × I_{n,i} ≤ IP_i ≤ I_{f,i}^min

where I_{n,i} is the nominal current rating of the circuit protected by relay R_i, OLF is the overload factor, equal to 1.25, and I_{f,i}^min is the minimum fault current detected by R_i. The optimum settings of TDS and PS are presented in Table 15. The best result is obtained by GA-NLP at 6.1786 s, followed by HHO and Jaya with values of 7.0297 s and 7.1378 s, respectively. No feasible solution can be found by NLP. The convergence characteristics can be seen in Figure 14, from which we can observe that HHO converges a little faster than Jaya and obtains a slightly lower OF value as well; generally speaking, however, there is not much difference between them. From Figure 15, the differences become obvious. Over 20 independent runs, HHO shows a much stronger ability to maintain the minimum value (with an Std of 1.9196), while Jaya suffers several instances of premature convergence (with an Std of 2.7335). Table 16 shows the operating times and CTI; we can see that no selectivity constraint is violated by either Jaya or HHO.

Case 7: 4-Bus System.
Different from the 3-bus, 8-bus, and 9-bus systems, in the 4-bus system both near-end and far-end 3ϕ fault locations are considered, as shown in Figure 16. The network consists of 4 buses, 4 branches, and 8 DOCRs, and it is formulated as an NLP problem. Parameters for HHO and Jaya are set as follows: N_pop = 50, Max_iter = 2000, and N_var = 16. The objective function and the selectivity constraint of the system are mathematically expressed as OF = Σ_{p=1}^{m} T^{near}_{pr,p} + Σ_{q=1}^{n} T^{far}_{pr,q} and T_{jk} − T_{ik} ≥ CTI, where T^{near}_{pr,p} and T^{far}_{pr,q} are the operating times of the primary relay at the near-end 3ϕ fault (at the p-th location) and the far-end 3ϕ fault (at the q-th location), respectively, and T_{jk} and T_{ik} are the operating times of the j-th backup and i-th primary relay for a 3ϕ fault at the k-th location, respectively. They are computed from the inverse-time characteristic of each relay. Table 19 shows the optimized results of Jaya and HHO, as well as TLBO [9], TLBO(MOF) [9], DE [5], MDE [5], BBO [14], and BBO-LP [14]. Although MDE ranks first with 3.6694 (s), it failed to achieve a feasible solution because some constraints are violated, as mentioned in [9]. However, Jaya and HHO are able to achieve feasible solutions with no violations in every independent run. The convergence characteristic is represented in Figure 17. Jaya reaches its optimum within 200 iterations, while HHO needs about 800 iterations, which shows that the convergence rate of Jaya is much faster than that of HHO. However, the robustness and consistency of Jaya are not as good as those of HHO, as shown in Figure 18. Table 20 shows the operating times and CTIs; the CTI constraints are satisfied in all P/B pairs [5, 9].
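The objective function and the selectivity check above can be sketched in code, assuming the IEC standard-inverse characteristic T = 0.14·TDS/((I_f/PS)^0.02 − 1) that is commonly used in these DOCR test systems (the paper's exact characteristic equations are not reproduced here, and all numerical values below are illustrative, not the 4-bus data):

```python
def op_time(tds, i_fault, ps):
    """Relay operating time under the IEC standard-inverse characteristic
    (an assumption here, not necessarily the paper's exact equation)."""
    return tds * 0.14 / ((i_fault / ps) ** 0.02 - 1.0)

def objective_and_violations(relays, pb_pairs, cti=0.3):
    """relays: list of (TDS, PS, fault current at the primary location).
    pb_pairs: list of (backup index, primary index, fault current seen by backup).
    Returns the OF (sum of primary operating times) and the P/B pairs that
    violate the selectivity constraint T_backup - T_primary >= CTI."""
    times = [op_time(tds, i_f, ps) for tds, ps, i_f in relays]
    of = sum(times)
    violations = [(b, p) for b, p, i_fb in pb_pairs
                  if op_time(relays[b][0], i_fb, relays[b][1]) - times[p] < cti]
    return of, violations

# Illustrative 2-relay example: relay 1 backs up relay 0.
of, bad = objective_and_violations(
    relays=[(0.1, 1.0, 10.0), (0.3, 1.0, 8.0)],
    pb_pairs=[(1, 0, 5.0)])
```

In the metaheuristic, `violations` is typically folded into the fitness as a penalty term, so that infeasible candidates are driven out of the population.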

Conclusion
This paper compares the performances of the lately proposed Harris Hawks Optimization (HHO) and Jaya optimization in solving the directional overcurrent relays (DOCRs) coordination problem. To the best of the authors' knowledge, this is the first time HHO has been applied to the DOCRs coordination problem. Four test systems, including the 3-bus, 4-bus, 8-bus, and 9-bus systems (one LP case, four NLP cases, and two MINLP cases), are tested. The conclusion is that the robustness and consistency of HHO are relatively better than those of Jaya, while Jaya provides a faster convergence rate and occasionally a more competitive objective function value than HHO.
It should be noted that, in both Jaya and HHO, different population sizes (N_pop) result in different solution qualities. If N_pop is too small, the population diversity will be limited, which causes premature convergence. On the other hand, if N_pop is too big, there will be unnecessary calculations that reduce computational efficiency. In fact, when solving real-world problems whose scale is unknown in advance, there is no way to decide the ideal value of N_pop. In this paper, we set the value of N_pop to 5, 20, 30, or 50, simply for the convenience of comparison. However, the authors are considering a new way to determine N_pop, which may adopt a self-adaptive mechanism driven by the change strength of the population, without setting the N_pop value in advance.
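As an illustration only (this is one hypothetical rule, not the mechanism the authors propose), a self-adaptive population could shrink while the best solution keeps improving and re-grow with random restarts when the search stagnates:

```python
import random

def adapt_pop_size(pop, fitness, improved, n_min=5, n_max=50):
    """Hypothetical self-adaptive rule: drop the worst individual while the
    best fitness keeps improving; inject a fresh random individual (with
    sentinel fitness, re-evaluated next iteration) when it stagnates."""
    ranked = sorted(zip(fitness, pop))
    if improved and len(pop) > n_min:
        ranked = ranked[:-1]  # exploitation phase: trim the worst
    elif not improved and len(pop) < n_max:
        dim = len(pop[0])
        ranked.append((float("inf"),
                       [random.uniform(0.0, 1.0) for _ in range(dim)]))
    fit, pop = zip(*ranked)
    return list(pop), list(fit)
```

The idea is simply that N_pop becomes a state of the search rather than a tuning parameter fixed in advance; whether such a rule actually helps HHO or Jaya would have to be verified experimentally.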
It is also worth mentioning that, according to the No-Free-Lunch theorem, there is no single algorithm that is perfect for all optimization problems: an algorithm may perform better than another on one set of problems yet fail to do so on a different set. Since this paper compares the performances of HHO and Jaya only on the given test cases, we cannot theoretically conclude that HHO is superior or inferior to Jaya; each has its own advantages and disadvantages, and neither can be the universally best optimizer.
In future works, two research directions will be studied: firstly, how to improve the robustness and consistency of Jaya so that it avoids being trapped in local optima; secondly, how to accelerate the convergence rate of HHO while achieving better objective function values. Moreover, larger test systems for the DOCRs coordination problem, such as 15-bus, 30-bus, and 42-bus systems, should be studied in future research to further extend the applications of HHO and Jaya in this field.

Data Availability
All data used to support the findings of this study are included within the article.