A Bio-Inspired Method for Engineering Design Optimization Inspired by Dingoes Hunting Strategies

A novel bio-inspired algorithm, namely, the Dingo Optimization Algorithm (DOA), is proposed for solving optimization problems. The DOA mimics the social behavior of the Australian dingo dog. The algorithm is inspired by the hunting strategies of dingoes, which are attacking by persecution, grouping tactics, and scavenging behavior. In order to increase the overall efficiency and performance of this method, three search strategies associated with four rules were formulated in the DOA. These strategies and rules provide a fine balance between intensification (exploitation) and diversification (exploration) over the search space. The proposed method is verified using several benchmark problems commonly used in the optimization field; classical engineering design problems and the optimal tuning of a Proportional-Integral-Derivative (PID) controller are also presented. Furthermore, the DOA's performance is tested against five popular evolutionary algorithms. The results show that the DOA is highly competitive with other metaheuristics, beating them on the majority of the test functions.


Introduction
Typically, a constrained optimization problem can be described as a nonlinear programming problem (NLP) [1], as shown below.
Minimize f(x),
subject to:
g_i(x) ≤ 0, i = 1, ..., p,
h_j(x) = 0, j = 1, ..., q,
x_k^(l) ≤ x_k ≤ x_k^(u), k = 1, ..., D. (1)

In the above NLP problem, f is the objective function, f(x): R^D → R; there are D variables, x = (x_1, ..., x_D) is a vector of size D, x ∈ R^D, with R^D representing the whole search space; g_i are the inequality constraints; h_j are the equality constraints; and x_k^(l), x_k^(u) are the lower and upper bound constraints, respectively, where p and q are the numbers of inequality and equality constraints. Thus, the optimization goal is to find a feasible vector x that minimizes the objective function. When the vector x contains subsets μ and ν of continuous real and integer variables, respectively, with |μ| + |ν| = D, the NLP problem becomes a mixed-integer nonlinear programming problem (MINLP). Nonconvex NLPs and MINLPs are commonly found in real-world situations. Therefore, the scientific community continues developing new approaches to obtain optimal solutions with acceptable computation time in various engineering, industrial, and science fields. For example, in design optimization, the objective could simply be to minimize cost or maximize production efficiency. However, the objective could be more complex, e.g., controlling the highly nonlinear behavior of the pH neutralization process in a chemical plant. The need to solve practical NLP/MINLP problems has led to the development of a large number of heuristics and metaheuristics over the last two decades [2, 3]. Metaheuristics, which are emerging as effective alternatives for solving nondeterministic polynomial-time hard (NP-hard) optimization problems, are strategies for designing or improving very general heuristic procedures with high performance in order to find (near-)optimal solutions.
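To make the formulation in (1) concrete, the following Python sketch encodes a small, hypothetical instance of a constrained NLP; the specific objective, constraint, and bounds are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical instance of problem (1): minimize a quadratic objective
# subject to one inequality constraint and box bounds (D = 2).
def f(x):
    return float(np.sum(x**2))          # objective f: R^D -> R

def g(x):
    return np.array([1.0 - np.sum(x)])  # inequality g_1(x) <= 0

lower, upper = np.array([-5.0, -5.0]), np.array([5.0, 5.0])

def is_feasible(x):
    # A point is feasible if all inequality constraints and bounds hold.
    return bool(np.all(g(x) <= 0) and np.all((lower <= x) & (x <= upper)))
```

Any candidate solution produced by a metaheuristic can then be screened with `is_feasible` before its objective value is compared.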
The goal of metaheuristics is efficient exploration (diversification) and exploitation (intensification) of the search space, where an effective algorithm establishes a good balance between these two mechanisms. For example, the search experience can be used to guide the search by applying learning strategies or incorporating probabilistic decisions. Metaheuristics can be classified into two main groups: local search and population based. In the first group, the search process starts with one candidate solution, which is then improved at each iteration over the runtime. Algorithms such as Variable Neighborhood Search (VNS) [4], Tabu Search (TS) [5], Simulated Annealing (SA) [6], and Iterated Local Search [7] are part of the local search group. On the other hand, among the population-based metaheuristics, evolutionary algorithms are inspired by the process of natural selection. Genetic Algorithms (GA) [8], Genetic Programming [9], Differential Evolution (DE) [10], and Evolution Strategies (ES) [11] are considered state-of-the-art population-based evolutionary algorithms. Moreover, Ant Colony Optimization (ACO) [12], the Cuckoo Search Algorithm (CSA) [13], and Particle Swarm Optimization (PSO) [14] are representative population-based metaheuristics categorized as swarm based [15]. Despite the large body of research on metaheuristics, they continue to be applied in many fields, e.g., cluster analysis, scheduling, artificial intelligence, and process engineering, with promising results. However, no single heuristic algorithm is suitable for all optimization problems [16]. Therefore, designing new optimization techniques remains an active research field within the scientific community [17].
A survey of some of the most relevant animal- or nature-based bio-inspired algorithms includes, but is not limited to, Virus Colony Search (VCS) [18], Plant Propagation Algorithms [19], the Lightning Search Algorithm (LSA) [20], the Ant Lion Optimizer (ALO) [21], the Lion Optimization Algorithm (LOA) [22], the Spotted Hyena Optimizer (SHO) [23], Harris Hawks Optimization (HHO) [24], the Dragonfly Algorithm (DFA) [25], the Grey Wolf Optimizer (GWO) [26], the Dolphin Echolocation Algorithm [27], the Water Strider Algorithm (WSA) [27], the Slime Mould Algorithm (SMA) [28], the Moth Search Algorithm (MSA) [29], the Colony Predation Algorithm (CPA) [30], the Black Widow Optimization Algorithm (BWOA) [31], the Grasshopper Optimization Algorithm (GOA) [32], and the Hunger Games Search (HGS) [33]. Additionally, some outstanding physical-phenomena-based algorithms for optimization are [27]: Magnetic Charged System Search (MCSS), Colliding Bodies Optimization (CBO), Water Evaporation Optimization (WEO), Vibrating Particles System (VPS), Thermal Exchange Optimization (TEO), and the Cyclical Parthenogenesis Algorithm (CPA), among others.
Here, a novel bio-inspired algorithm, namely, the Dingo Optimization Algorithm (DOA), is proposed for solving optimization tasks. It is based on the simulation of the hunting strategies of dingoes, which are attacking by chasing, grouping tactics, and scavenging behavior. The remainder of this paper is organized as follows. Section 2 illustrates the DOA details, including the inspiration and mathematical model. In order to illustrate the proficiency and robustness of the proposed approach, several numerical examples and their comparison with state-of-the-art metaheuristics are presented in Section 3. Finally, Section 4 summarizes our findings and concludes the paper with a brief discussion on the scope for future work.

Dingo Optimization Algorithm (DOA)
In this section, the inspiration of the proposed method is first discussed. Then, the mathematical model is provided.

Biological Fundamentals.
The dingo is Australia's largest native mammalian carnivore, and its scientific name is Canis lupus dingo. Several studies have been conducted on the dingoes' feeding behavior and diet, showing that these canines prey on several species such as mammals, birds, vegetation (seeds), reptiles, insects, fish, crabs, and frogs, to mention just a few [34]. They are opportunistic hunters but will also scavenge food when they are exploring new territories and suddenly find dead prey. Their hunting behavior can be variable. Usually, they pursue and attack their prey from behind. Group attack is their most used hunting strategy, in which they surround the prey inside a perimeter and chase it until it is fatigued. Further details about the dingoes' behavior can be found in [35, 36].

Mathematical Model and Optimization Algorithm.
In this section, the mathematical model of the dingoes' different hunting strategies is first provided. The DOA algorithm is then proposed. The hunting strategies considered are attacking by persecution, grouping tactics, and scavenging behavior. In addition, the dingoes' survival probability is also considered.

Strategy 1: Group Attack.
Predators often use highly intelligent hunting techniques. Dingoes usually hunt small prey, such as rabbits, individually, but when hunting large prey such as kangaroos, they gather in groups. Dingoes can find the location of the prey and surround it, as wolves do; see Figure 1. This behavior is represented by the following equation:

x_i(t + 1) = β_1 Σ_{k=1}^{na} [φ_k(t) − x_i(t)]/na − x_*(t), (2)

where x_i(t + 1) is the new position of a search agent (indicating the dingoes' movement), na is a random integer generated in the interval [2, SizePop/2], where SizePop is the total size of the dingo population, φ_k(t) is a subset of search agents (dingoes that will attack), where φ ⊂ X and X is the randomly generated dingo population, x_*(t) is the best search agent found in the previous iteration, and β_1 is a random number uniformly generated in the interval [−2, 2]; it is a scale factor that changes the magnitude and sense of the dingoes' trajectories. The group attack pseudocode is shown in Algorithm 1.
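The group attack update can be sketched in Python as follows. This is a non-authoritative sketch (the authors' implementation is in MATLAB); the population, fitness, and index choices shown here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_attack(pop, best, beta1, attack_idx):
    # Strategy 1, equation (2): for each dingo i,
    # x_i(t+1) = beta1 * sum_k [phi_k - x_i] / na - x_best
    new_pop = np.empty_like(pop)
    for i, x in enumerate(pop):
        phi = pop[attack_idx]               # attacking subset, |phi| = na
        new_pop[i] = beta1 * np.mean(phi - x, axis=0) - best
    return new_pop

size_pop, dim = 10, 2
pop = rng.uniform(-10, 10, size=(size_pop, dim))
best = pop[np.argmin(np.sum(pop**2, axis=1))]    # best under a toy fitness
na = int(rng.integers(2, size_pop // 2 + 1))     # na in [2, SizePop/2]
attack_idx = rng.choice(size_pop, size=na, replace=False)
beta1 = float(rng.uniform(-2, 2))                # scale factor in [-2, 2]
new_pop = group_attack(pop, best, beta1, attack_idx)
```

Drawing na, the attacking subset, and β_1 fresh at every iteration reproduces the stochastic group behavior described above.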

Strategy 2: Persecution.
Dingoes usually hunt small prey, which is chased individually until it is caught. The following equation models this behavior:

x_i(t + 1) = x_*(t) + β_1 e^{β_2} (x_{r1}(t) − x_i(t)), (3)

where x_*(t) is the best search agent found in the previous iteration, β_1 has the same value as in equation (2), β_2 is a random number uniformly generated in the interval [−1, 1], r_1 is a random index generated in the interval from 1 to the maximum number of search agents (dingoes), and x_{r1}(t) is the r_1-th search agent selected, where i ≠ r_1.
Equation (3) represents the dingoes' trajectories while hunting their prey, and Figure 2 graphically illustrates its parameters.
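A minimal Python sketch of the persecution update, assuming the reconstructed form of equation (3) given above (the sample vectors are illustrative):

```python
import numpy as np

def persecution(x_i, x_best, x_r1, beta1, beta2):
    # Strategy 2, equation (3): chase relative to the best agent, scaled by
    # beta1 in [-2, 2] and exp(beta2) with beta2 in [-1, 1].
    return x_best + beta1 * np.exp(beta2) * (x_r1 - x_i)

rng = np.random.default_rng(2)
x_i, x_r1 = rng.uniform(-10, 10, 3), rng.uniform(-10, 10, 3)
x_best = np.zeros(3)
step = persecution(x_i, x_best, x_r1,
                   beta1=float(rng.uniform(-2, 2)),
                   beta2=float(rng.uniform(-1, 1)))
```

Because β_1 can be negative, the move may point toward or away from the reference dingo x_{r1}, enlarging the explored region around the best agent.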

Strategy 3: Scavenger.
Scavenging is defined as the behavior in which dingoes find carrion to eat while randomly walking through their habitat. Equation (4) models this behavior, which is graphically displayed in Figure 3:

x_i(t + 1) = (1/2)[e^{β_2} x_{r1}(t) − (−1)^σ x_i(t)], (4)

where x_i(t + 1) indicates the dingoes' movement, β_2 has the same value as in equation (3), r_1 is a random index generated in the interval from 1 to the maximum number of search agents (dingoes), x_{r1}(t) is the r_1-th search agent selected, x_i(t) is the current search agent, where i ≠ r_1, and σ ∈ {0, 1} is a binary number randomly generated by Algorithm 2.
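The scavenger update can be sketched in Python as follows, assuming the reconstructed form of equation (4) given above:

```python
import numpy as np

def scavenger(x_i, x_r1, beta2, sigma):
    # Strategy 3, equation (4): x_new = 0.5*(exp(beta2)*x_r1 - (-1)**sigma * x_i),
    # where sigma is a random binary value in {0, 1} that flips the sign
    # of the current agent's contribution.
    return 0.5 * (np.exp(beta2) * x_r1 - (-1) ** sigma * x_i)
```

With β_2 = 0 and σ = 1 the new position is simply the midpoint of the two agents, which illustrates how the random sign and exponential factor perturb an otherwise averaging move.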

Strategy 4: Dingoes' Survival Rates.
The Australian dingo dog is at risk of extinction, mainly due to illegal hunting. In the DOA, the dingoes' survival rate value is given by the following equation:

survival(i) = (fitness_max − fitness(i)) / (fitness_max − fitness_min), (5)

where fitness_max and fitness_min are the worst and the best fitness values in the current generation, respectively, whereas fitness(i) is the current fitness value of the i-th search agent. The survival vector in equation (5) contains the normalized fitness in the interval [0, 1]. Equation (6) is applied by Algorithm 3 to agents with low survival rates, e.g., survival values equal to or less than 0.3.
x_i(t) = x_*(t) + (1/2)[x_{r1}(t) − (−1)^σ x_{r2}(t)], (6)

where x_i(t) is the search agent with a low survival rate that will be updated, r_1 and r_2 are random indices generated in the interval from 1 to the maximum number of search agents (dingoes), with r_1 ≠ r_2, x_{r1}(t) and x_{r2}(t) are the r_1-th and r_2-th search agents selected, x_*(t) is the best search agent found in the previous iteration, and σ ∈ {0, 1} is a binary number randomly generated by the second algorithm. Note that equation (6) is an addition or subtraction of vectors, determined by the random value of σ.
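The survival mechanism can be sketched in Python as follows, assuming minimization and the reconstructed forms of equations (5) and (6):

```python
import numpy as np

def survival_rates(fitness):
    # Equation (5): survival(i) = (fit_max - fit_i) / (fit_max - fit_min),
    # normalized to [0, 1]. Assumes minimization, so fit_max is the worst
    # value and the best agent gets survival = 1.
    f = np.asarray(fitness, dtype=float)
    return (f.max() - f) / (f.max() - f.min())

def low_survival_update(x_best, x_r1, x_r2, sigma):
    # Equation (6): relocate a weak agent near the best one, with a random
    # sign sigma in {0, 1} deciding addition or subtraction.
    return x_best + 0.5 * (x_r1 - (-1) ** sigma * x_r2)
```

Agents whose survival value falls at or below the 0.3 threshold would be passed through `low_survival_update`, which is what keeps the algorithm converging regardless of the other parameter choices.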

Pseudocode for DOA.
The pseudocode of the DOA is given in Algorithm 4, and the overall flow is shown in Figure 4.

DOA Algorithm Analysis
In this section, an in-depth analysis of the DOA algorithm is carried out.
This analysis includes the DOA's time complexity, parameter-setting studies, the hunting strategies analysis, and the effects of the population size on the algorithm's performance.

Hunting Strategies Analysis.
To better understand the performance of each hunting strategy separately, experiments on a unimodal and a multimodal problem were conducted. The algorithm was modified to use only one strategy on each run and executed with the number of iterations set to 500 and a population size (search agents) of 100. The output of this analysis is displayed in Figure 5. Note that the group attack strategy obtained the best performance for the unimodal function F2, while the persecution strategy performed worst; for the multimodal function F14, the persecution strategy overcomes the scavenger strategy and shows competitive results compared to the attack strategy.

Population Size Analysis.
The effects of the population size on the performance of the DOA algorithm are studied by fixing the number of iterations to 100 and varying the population size over 15, 30, 40, 50, 100, and 200 for the F2 and F14 functions. The output of these tests is summarized in Table 1 and Figure 6, where the best optimal value is found at a population size of 100.
Notice from the population size analysis results described in Table 1 that row two, corresponding to size 30, shows that the DOA algorithm outperformed, on F2 and F14, the algorithms reported in Table 2. Nevertheless, the Table 1 data were obtained with the number of iterations fixed at 100, whereas the Table 2 data were calculated with 500 iterations. Moreover, even with a smaller population size of 15, the DOA still outperforms the other algorithms.

P and Q Parameters Analysis.
The DOA algorithm uses two parameters, P and Q. P is a fixed value that indicates the probability of the algorithm choosing between the hunting and scavenging strategies. If the hunting strategy is selected, then a fixed Q value indicates the probability of choosing between the group attack and persecution strategies.
In order to determine the effects of the P and Q parameters on the DOA's performance, an analysis of these variables is carried out by means of the benchmark problems F1 to F23; see Tables 3-5. The methodology consists of fixing P at 0.5 while Q starts at 0.25 and is incremented in 0.25 steps over four runs, one for each Q value, until 1 is reached. Afterward, a similar approach is followed, fixing Q at 0.5 while P starts at 0.25 and varies in 0.25 increments until P equals 1. The convergence analysis results of this parameter test are shown in Figures 7 and 8. Notice that regardless of the P and Q values, the algorithm converges to the solution reported in Table 2. This is due to the incorporation of the survival strategy, which improves the quality of search agents by updating those with low survival values.
Based on this, for the rest of the paper, the DOA tests are conducted with P and Q fixed at 0.5 and 0.70, respectively.
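The P/Q decision described above can be sketched as a small Python routine; the strategy names are labels chosen here for illustration.

```python
import random

P, Q = 0.5, 0.70  # values adopted for the experiments in this paper

def choose_strategy(rng=random):
    # P decides hunting vs. scavenging; within the hunting branch,
    # Q decides group attack vs. persecution.
    if rng.random() < P:
        return "group_attack" if rng.random() < Q else "persecution"
    return "scavenger"
```

With these settings, roughly 50% of moves are scavenging, 35% group attacks, and 15% persecutions, which matches the balance the parameter study converged on.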

Experimental Setup
In this section, 23 classical benchmark functions reported in the literature [37] are optimized to investigate the effectiveness, efficiency, and stability of the DOA algorithm. The functions are categorized as unimodal, multimodal, and fixed-dimension multimodal. Table 3 shows the unimodal functions, labeled F1 to F7, whereas functions F8 to F13 are multimodal, as shown in Table 4. Additionally, functions F14 to F23 are fixed-dimension multimodal, defined in Table 5. Unimodal functions allow testing the exploitation ability since they have only one global optimum, whereas multimodal and fixed-dimension multimodal functions test the exploration ability since they include many local optima. Tables 3-5 summarize these benchmark functions, where Dim indicates the dimension of the function, Interval is the boundary of the function's search space, and f_min is the optimum value. Figure 9 shows typical 2D plots of the cost function for some of the test cases considered in this study. For each benchmark function, the DOA algorithm was run 30 times with a population size (search agents) of 30 and 500 iterations. Figure 10 shows the convergence graphs of all functions. The DOA algorithm was compared with the following algorithms: the Whale Optimization Algorithm (WOA) [38], Particle Swarm Optimization (PSO) [39], the Gravitational Search Algorithm (GSA) [40], Differential Evolution (DE) [41], and Fast Evolutionary Programming (FEP) [42]. Our approach was implemented in MATLAB R2018a, and all computations were carried out on a standard PC (Linux Kubuntu 18.04 LTS, Intel Core i7, 2.50 GHz, 16 GB). The six algorithms were ranked by computing their Mean Absolute Error (MAE). The MAE is a valid statistical criterion and an unambiguous measurement of the average error magnitude: it shows how far the results are from the actual values.
The MAE formula is as follows:

MAE = (1/N) Σ_{i=1}^{N} |m_i − k_i|, (7)

where m_i indicates the mean of the optimal values obtained, k_i is the corresponding global optimum, and N represents the number of test functions. Table 6 shows the average error rates obtained on the 23 benchmark functions, and the ranking of all the algorithms based on their MAE values is given in Table 7.
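The MAE ranking can be computed with a few lines of Python; the per-algorithm means and optima below are invented placeholders for illustration.

```python
def mean_absolute_error(means, optima):
    # MAE = (1/N) * sum_i |m_i - k_i| over the N test functions.
    assert len(means) == len(optima)
    return sum(abs(m - k) for m, k in zip(means, optima)) / len(means)

# Hypothetical mean results of two algorithms on three functions
# whose true optima are [0, 0, 1]:
results = {"DOA": [0.0, 0.1, 1.0], "ALG-B": [0.5, 0.5, 1.5]}
ranking = sorted(results.items(),
                 key=lambda kv: mean_absolute_error(kv[1], [0.0, 0.0, 1.0]))
```

Sorting by MAE directly yields the kind of ranking reported in Table 7.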

Results and Discussion
According to the statistical results given in Table 2, the Dingo Optimization Algorithm (DOA) provides very competitive results. In the exploitation analysis (unimodal functions), the DOA outperforms all the other algorithms, namely, the Whale Optimization Algorithm (WOA), Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and Fast Evolutionary Programming (FEP), on the F1, F2, F3, and F7 functions and, similarly to Differential Evolution (DE), it found the optimal result on F4. Table 8 summarizes the exploitation capability results. These results represent an accumulated rate of 71.43% of cases in which the DOA outperforms or ties the other algorithms, showing its superior performance in terms of exploitation of the optimum. On the other hand, the exploration analysis shows that the DOA was the most efficient on F10, F15, F16, F17, F19, F20, F21, and F23, multimodal and fixed-dimension multimodal functions (see Table 2). In addition, the DOA showed behavior similar to the other metaheuristics in finding the optimal result on the F9, F11, F14, F18, and F22 functions. This represents an accumulated rate of 81.25% of cases that outperform or tie the other algorithms. Table 8 confirms that the DOA also has a very good exploration capability. We can see that the DOA algorithm is at least the second best and frequently the most efficient on the majority of the test functions, as reflected by its average error (Table 6), whereby the DOA appears ranked in the first position (see Table 7). Additionally, a convergence analysis was carried out. The purpose of the convergence analysis is to understand and visualize the search over promising regions enabled by the algorithm's exploration and exploitation capabilities. The DOA and WOA algorithms are compared during the convergence analysis because of the WOA's better performance over the metaheuristics reported in [38].
Figure 11 illustrates the DOA convergence analysis results for selected test functions versus the highest-ranking algorithms taken from the MAE test.

Real-World Applications
In this section, constrained optimization problems, typically represented by (1), are considered. The DOA algorithm was tested on four constrained engineering design problems: a cantilever beam, a three-bar truss, a pressure vessel, and a gear train design problem. The pressure vessel and gear train design problems contain discrete variables. The constraint handling method used is based on [43], where infeasible solutions (i.e., those violating at least one constraint) are compared based only on their constraint violation. The constraint handling method is formulated as follows:

F(x) = f(x), if x is feasible,
F(x) = f_max + Σ_{i=1}^{p} max(0, g_i(x)), otherwise, (8)

where f_max is the objective function value of the worst feasible solution in the population. Note that the fitness of a feasible solution is equal to its objective function value. On the other hand, the fitness of an infeasible solution is penalized: it is defined as the value of the worst feasible solution in the current population plus the sum of the values obtained when evaluating each violated constraint.
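The feasibility rule described above can be sketched in Python as follows; the toy objective and constraint are assumptions for illustration, not one of the paper's design problems.

```python
import numpy as np

def penalized_fitness(x, f, constraints, feasible_objectives):
    # Rule from [43] as described in the text: feasible solutions keep f(x);
    # infeasible ones take the worst feasible objective in the population
    # plus the summed constraint violations.
    violations = np.array([max(0.0, g(x)) for g in constraints])
    if np.all(violations == 0.0):
        return f(x)
    f_max = max(feasible_objectives)   # worst feasible value in the population
    return f_max + float(violations.sum())

f = lambda x: float(np.sum(x**2))       # toy objective (assumption)
g1 = lambda x: 1.0 - float(np.sum(x))   # toy constraint g1(x) <= 0
```

Because every infeasible solution scores worse than the worst feasible one, the selection pressure always favors feasibility first, which is the essence of the handling scheme.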
The DOA algorithm was also tested to find the optimal tuning parameters of a PID controller.

Cantilever Beam Design Problem.
The objective is to minimize the weight of the cantilever beam. In this problem, there are five optimization variables, one for each element of the cantilever, representing the length of its side, and there is one optimization constraint [44]. The cantilever weight optimization is formulated in the following equation:

minimize f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5),
subject to g(x) = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 − 1 ≤ 0, (9)
variable range 0.01 ≤ x_1, x_2, x_3, x_4, x_5 ≤ 100.

The comparison results are shown in Table 9, taken from [44] and updated with the DOA's results. Note that the DOA outperforms the other techniques by obtaining the lowest weight and shows very competitive results compared to the SSA.
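A Python sketch of the cantilever evaluation, assuming the standard formulation f = 0.0624·Σx_i with g = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 − 1 ≤ 0:

```python
def cantilever_weight(x):
    # Cantilever objective: weight proportional to the sum of side lengths.
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    # g(x) <= 0 must hold for a feasible design.
    coeffs = [61.0, 37.0, 19.0, 7.0, 1.0]
    return sum(c / xi**3 for c, xi in zip(coeffs, x)) - 1.0
```

Any metaheuristic candidate x can be scored by combining these two functions with the constraint handling rule of the previous section.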

Three-Bar Truss Design Problem.
Here, the problem is to design a truss with three bars to minimize its weight. In this test, there are two optimization variables and three optimization constraints: stress, deflection, and buckling. It is formulated as shown in (10); this example is reported in [44] as having a highly constrained search space. The overall structure of the three-bar truss is shown in Figure 13.

minimize f(x) = (2√2 x_1 + x_2) · l,
subject to:
g_1(x) = [(√2 x_1 + x_2)/(√2 x_1^2 + 2 x_1 x_2)] P − σ ≤ 0,
g_2(x) = [x_2/(√2 x_1^2 + 2 x_1 x_2)] P − σ ≤ 0,
g_3(x) = [1/(√2 x_2 + x_1)] P − σ ≤ 0, (10)
variable range 0 ≤ x_1, x_2 ≤ 1, with l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm².
Table 10 was taken from [44] and updated with the DOA's results. Some of the algorithms chosen for comparison are the Salp Swarm Algorithm (SSA), Differential Evolution with Dynamic Stochastic Selection (DEDS), Hybridized Particle Swarm Optimization with Differential Evolution (PSO-DE), the Mine Blast Algorithm (MBA), a swarm with intelligent information sharing (Ray and Sain), the Tsa method (Tsa), and the Cuckoo Search Algorithm (CSA). The comparison with the aforementioned algorithms shows that the DOA provides very competitive results, very close to those of the SSA and DEDS (the discrepancy is equal to 1E−7), and outperforms the rest of the algorithms.

Pressure Vessel Design Problem.
The goal of this problem is to minimize the total cost, including the material, forming, and welding costs of a cylindrical pressure vessel [38]. There are four optimization variables and four optimization constraints. The variables are the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section without considering the head (L), by which the pressure vessel is to be fabricated, as shown in Figure 14. Ts and Th are discrete variables in multiples of 0.0625 in., while R and L are real variables. The mathematical formulation of the optimization problem is as follows:

minimize f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3,
subject to:
g_1(x) = −x_1 + 0.0193 x_3 ≤ 0,
g_2(x) = −x_2 + 0.00954 x_3 ≤ 0,
g_3(x) = −π x_3^2 x_4 − (4/3)π x_3^3 + 1,296,000 ≤ 0,
g_4(x) = x_4 − 240 ≤ 0, (11)

where x_1 = Ts, x_2 = Th, x_3 = R, and x_4 = L. Some of the algorithms chosen for comparison are Differential Evolution (DE), the Genetic Algorithm (GA), the Whale Optimization Algorithm (WOA), and Particle Swarm Optimization (PSO), among others [38]. Table 11 was taken from [38] and updated with the DOA's results. In this table, the comparison shows that the DOA algorithm ranks first among the solutions obtained.

Discrete Engineering Problem: Gear Train Design Problem.
Here, the objective is to find the optimal number of teeth for a four-gear train while minimizing the gear ratio cost, as shown in Figure 15, where its four parameters are discrete [21]. In order to handle discrete values, each search agent is rounded to the nearest integer before the fitness evaluation. The design constraint is that the number of teeth on any gear must lie in the range [12, 60]. Accordingly, the optimization problem can be formulated as follows:

minimize f(x) = (1/6.931 − (x_2 x_3)/(x_1 x_4))^2, (12)
subject to 12 ≤ x_1, x_2, x_3, x_4 ≤ 60, with integer x_i.

Some of the algorithms chosen for comparison are the Ant Lion Optimizer (ALO), the Cuckoo Search Algorithm (CSA), the Mine Blast Algorithm (MBA), the Interior Search Algorithm (ISA), the Genetic Algorithm (GA), the Artificial Bee Colony (ABC), and the Augmented Lagrange Multiplier method (ALM) [21]. Table 12 was taken from [21] and updated with the DOA's results. It shows that the DOA gives competitive results in terms of the number of function evaluations and is suitable for solving discrete constrained problems.
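The discrete handling described above (round each agent to the nearest integer before evaluation) can be sketched in Python, assuming the standard gear train cost with target ratio 1/6.931:

```python
def gear_ratio_cost(x):
    # Squared error between the target ratio 1/6.931 and (x2*x3)/(x1*x4);
    # teeth counts are rounded and clamped to the integer range [12, 60]
    # before the fitness evaluation, as the text describes.
    t1, t2, t3, t4 = (min(60, max(12, round(v))) for v in x)
    return (1.0 / 6.931 - (t2 * t3) / (t1 * t4)) ** 2
```

For instance, the well-known teeth combination (49, 19, 16, 43) reported in the gear train literature yields a cost below 1e-11, and any real-valued agent near it maps to the same integer design.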

Tuning of a Proportional-Integral-Derivative (PID) Controller: Sloshing Dynamics Problem.
Sloshing dynamics is a well-studied phenomenon in fluid dynamics. It refers to the movement of a liquid inside another object, altering the system dynamics [45]. Sloshing is an important effect in ships, spacecraft, aircraft, and trucks carrying liquids, as it causes instability and accidents. Sloshing dynamics can be depicted as a Ball and Hoop System (BHS). This setup illustrates the dynamics of a steel ball that is free to roll on the inner surface of a rotating circular hoop. The ball exhibits an oscillatory motion caused by the hoop being continuously rotated by a motor. The ball tends to move in the direction of the hoop rotation and, at some point, falls back when gravity overcomes the frictional forces. The BHS behavior can be described by seven variables: the hoop radius (R), the hoop angle (θ), the input torque to the hoop (T(t)), the ball position on the hoop (y), the ball radius (r), the ball mass (m), and the ball angle with the vertical (slosh angle) (ψ). A schematic representation is shown in Figure 16. The transfer function of the BHS system, taken from [46, 47], is formulated in equation (13), where θ is the input and y is the output of the BHS system.
Pareek et al. studied the optimal tuning of a Proportional-Integral-Derivative (PID) controller using metaheuristic algorithms [47], specifically Bacterial Foraging Optimization (BFO), Particle Swarm Optimization (PSO), and the Artificial Bee Colony Algorithm (ABC). In this study, we updated Tables 13 and 14, taken from [47], with the DOA's results. The transient response parameters of the PID controller are the rise time, settling time, peak time, and peak overshoot [48]. The PID controller is designed to minimize the overshoot and settling time so that the liquid remains as stable as possible under any disturbance and, if it moves, rapidly returns to the steady state. Notice that the DOA outperforms the aforementioned algorithms, obtaining the lowest rise time and settling time values, as well as the lowest peak overshoot; see Table 14. In addition, Figure 17 shows the PID controller step response for the four algorithms. Note that the DOA-PID (blue line) is more stable, with very fine control and without exceeding the setpoint (dotted line).

Conclusions
This study presented a novel population-based optimization algorithm based on three different hunting strategies of Canis lupus dingo. These strategies, attacking by persecution, grouping tactics, and scavenging behavior, were carefully designed to guarantee the exploration and exploitation of the search space and were evaluated on 23 mathematical benchmark functions. The DOA showed exploitation and exploration accumulated rates of 71.43% and 81.25%, respectively, outperforming or tying the other algorithms. The DOA uses two parameters, P and Q, to set the probability of choosing between the hunting and scavenging strategies. Notably, regardless of the P and Q values, the algorithm converges to the solution, owing to the incorporation of the survival strategy.
The DOA's performance was compared with five well-known state-of-the-art metaheuristic methods available in the literature: the Whale Optimization Algorithm (WOA), Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), Differential Evolution (DE), and Fast Evolutionary Programming (FEP). A statistical analysis based on the mean absolute error (MAE) was conducted to measure the performance of the DOA against the aforementioned algorithms, and the DOA was found to be highly competitive on the majority of the test functions. The capabilities of the DOA were also tested on classical engineering problems (the design of a cantilever beam, a three-bar truss, and a pressure vessel), where the results obtained by the DOA in most cases surpass those of several well-known metaheuristics. Additionally, the DOA demonstrated its capability to find the optimal tuning parameters of a PID controller, namely, the rise time, settling time, peak time, and peak overshoot, which were efficiently optimized for the sloshing dynamics problem. In order to expand the algorithm's scope, it was also tested on a discrete engineering problem (the design of a gear train), showing competitive results. Finally, this paper opens up several research directions for future studies, including the incorporation of self-adaptive parameters and methods to handle multiobjective optimization problems and large problem instances using parallelization strategies, e.g., GPU computing and multicore resources.

Data Availability
The source code used to support the findings of this study has been deposited in the MathWorks repository (https://www.mathworks.com/matlabcentral/fileexchange/98124-dingo-optimization-algorithm-doa).

Conflicts of Interest
The authors declare no potential conflicts of interest with respect to the research, authorship, funding, and/or publication of this article.