Solving Interval Quadratic Programming Problems by Using the Numerical Method and Swarm Algorithms

In this paper, we present a new approach based on numerical solutions and swarm algorithms (SAs) for solving the interval quadratic programming problem (IQPP). We use numerical solutions within the SAs to improve their performance. Our approach replaces all intervals in IQPP by additional variables; the new form is called the modified quadratic programming problem (MQPP). The Karush–Kuhn–Tucker (KKT) conditions for MQPP are obtained and solved by the numerical method. These solutions are functions of the additional variables, and they also provide the boundaries of the basic variables, which are used as starting points for the SAs. Chaotic particle swarm optimization (CPSO) and a chaotic firefly algorithm (CFA) are presented. In addition, we use the solution of the dual MQPP to improve the behavior of the SAs and as a stopping criterion for them. Finally, the comparison and relations between numerical solutions and SAs are shown on some well-known examples.

Interval nonlinear programming problems are used in modeling and solving many real applications such as the planning of waste management activities [13]. The mathematical model and the proofs of interval analysis can be found in [14]. Many researchers solve interval nonlinear programming problems by different methods [15][16][17][18], but all of these methods obtain the optimal solution only under specific conditions. For example, in [8, 9], Hladík divided the problem into subclasses which can be reduced to easy problems, under the condition that they are convex quadratic programs. Jiang et al. [11] suggested a method to solve the nonlinear interval number programming problem with uncertain coefficients in both the nonlinear objective function and the nonlinear constraints. Liu and Wang [17] presented a numerical method for interval quadratic programming. Li and Tian [18] generalized Liu and Wang's method [17] to solve interval quadratic programming; their proposed method requires less computation than Liu and Wang's method.
As mentioned above, there are many approaches for solving IQPP, but the most common one divides the interval problem into two problems. In the first problem, the optimal solution of the lower objective function over the largest feasible region is found, while in the second, the optimal solution of the upper objective function over the smallest feasible region is obtained. The optimal value of the interval problem therefore lies between the values of the lower and upper objective functions. As is known, this process is very difficult in many applications, which makes it hard to reach the lowest value of the objective function of the problem.
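The two-problem decomposition described above can be written compactly. The notation here is generic and an illustrative assumption (not the paper's own equations): f̲ and f̄ denote the lower and upper objective bounds, and X̄ and X̲ the largest and smallest feasible regions.

```latex
\underline{f}^{\,*} = \min_{x \in \overline{X}} \underline{f}(x), \qquad
\overline{f}^{\,*} = \min_{x \in \underline{X}} \overline{f}(x), \qquad
f^{*} \in \bigl[\underline{f}^{\,*},\, \overline{f}^{\,*}\bigr].
```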
KKT conditions are first-order necessary conditions for solving quadratic programming problems. KKT-type optimality conditions for optimization problems with interval-valued objective functions and real-valued constraint functions are investigated and discussed in [19][20][21][22]. Wolfe's duality theorems, strong duality theorems, and the duality gap in interval-valued optimization problems are treated from a sound mathematical viewpoint in [12]. Chalco-Cano et al. [19] introduced a new concept of a stationary point for an interval-valued function based on the gH derivative.
SAs are an important concept in computer science [23, 24]. An SA can be described as a population of agents or individuals interacting with each other and with their environment while following very few rules. The inspiration often comes from nature, especially biological systems. SAs have been successfully applied in real-life applications. Examples are ant colony optimization [24, 25], particle swarm optimization (PSO) [26, 27], the firefly algorithm (FA) [28], the glowworm algorithm [29], the krill herd algorithm [30], the monkey algorithm [31], and the grasshopper optimization algorithm [32].
On the other hand, many researchers have proposed hybrid algorithms to improve solution quality, to benefit from the advantages of the component methods, and to overcome their deficiencies. For example, a gaining-sharing knowledge-based algorithm for solving optimization problems over the continuous space was proposed in [33]. Cao et al. presented a comprehensive learning particle swarm optimizer (CLPSO) embedded with local search (LS), which combines the strong global search capability of CLPSO with the fast convergence of LS [34]. In [35], the authors presented an adaptive particle swarm optimization with supervised learning and control (APSO-SLC) for the parameter settings and diversity maintenance of PSO, adaptively choosing parameters while improving its exploration competence. A new hybrid PSO algorithm that introduces opposition-based learning (OBL) into PSO variants to improve their performance is proposed in [36]. In [37], a surrogate-assisted PSO with Pareto active learning was proposed to solve multiobjective optimization problems with high computational cost. Finally, the historical memory-based PSO (HMPSO) proposed in [38] uses an estimation-of-distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests.
In this paper, a new approach is suggested to solve the interval quadratic programming problem (IQPP). IQPP is converted into the modified quadratic programming problem (MQPP) by replacing all intervals by additional variables. The KKT conditions of MQPP are derived and solved by a numerical method. The numerical method provides the boundaries of the basic variables, which are used as starting points in the SAs. The solutions of the KKT conditions are obtained by using the Mathematica program. CPSO and CFA are used to solve these problems and to give the decision maker (DM) a fast view of the position of the optimal solution in the intervals. The dual of MQPP is discussed; its solutions are used to improve the behavior of the proposed approach and as a stopping criterion for it.

Interval Quadratic Programming Problem (IQPP)
The interval quadratic programming problem (IQPP) is an interval nonlinear programming problem [9][10][11]: the objective function is quadratic, and the constraints are linear. The optimal solution of an interval programming problem cannot be defined exactly because each value of the interval coefficients in the objective function and/or the constraints may yield a new optimal solution, so no single exact optimal solution can be defined. Many researchers describe the optimal solution of IQPP by the objective function values. In [4, 15], the authors defined the optimal solution for the interval linear programming problem. For example, Garajová and Hladík [4] defined the optimal set of the interval linear programming problem and examined sufficient conditions for its closedness, boundedness, connectedness, and convexity. Accordingly, we define the optimal solution as the union of all optimal solutions of IQPP and explore the whole feasible region to obtain all possible optimal solutions. In our approach, the numerical methods recover all optimal solutions in the feasible region, while the SAs find the best objective value over the whole interval.
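As an illustration, and with notation that is an assumption rather than the paper's own, an IQPP with interval coefficients in both the quadratic objective and the linear constraints can be written generically as:

```latex
\min_{x \ge 0} \; \tfrac{1}{2}\, x^{\top} Q\, x + c^{\top} x,
\qquad Q \in [\underline{Q}, \overline{Q}],\;
       c \in [\underline{c}, \overline{c}],
```
```latex
\text{subject to} \quad A x \le b,
\qquad A \in [\underline{A}, \overline{A}],\;
       b \in [\underline{b}, \overline{b}],
```

where the interval inclusions hold componentwise.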

The Solution of IQPP
The idea of our new approach to solving IQPP starts by replacing all intervals by additional variables, converting IQPP into the modified quadratic programming problem (MQPP). The KKT conditions of MQPP are obtained and solved by a numerical method. The solutions of the numerical method are functions of the additional variables. These solutions provide the boundaries of the basic variables, which are used as starting points for the SAs. The dual of MQPP is presented and solved. CPSO and CFA are used to solve MQPP and its dual form. Furthermore, the solution of the dual MQPP is used as a stopping criterion for our approach and to improve its performance. The proposed approach explores the whole feasible region to find the optimal solution anywhere in the intervals.

Numerical Methodology.
The KKT conditions form a system of equations with additional variables and interval coefficients, which we solve by two methods using the Mathematica program. In the first method, the equations are solved as algebraic equations, so the solutions can be expressed as functions of the additional variables. These solutions are very helpful for the DM if the optimal solution at certain values of the interval coefficients is required. In the second method, we use the Newton method when the boundaries of the variables are needed. The solution of this method is used as the initial stage of CPSO and CFA, improving their ability to find the solution in a shorter time than searching the whole variable space. The following theorems are used for solving the system of nonlinear equations with interval coefficients. Let MQPP have a continuous function G: A_0 ⊆ IR^n → IR^n which has a zero y* in a given subset A of A_0, i.e., a vector y* ∈ A ⊆ A_0 exists such that G(y*) = 0, where IR^n is the set of n-vectors of real intervals. Let R^n be the set of real n-vectors, R^(n×n) the set of n × n real matrices, y an element of an interval vector y, B^H a hull inverse of the n × n interval matrix B, int(A) the interior of the m × n interval matrix A, IA the set {y ∈ IR^n | y ⊆ A}, int(y) ≡ ]y̲, ȳ[ the interior of an interval y, and vol(y) the volume of y, which the Newton iteration reduces.
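As a sketch of the second method, here is a plain (non-interval) Newton iteration applied to the KKT system of a tiny equality-constrained QP. The problem instance, names, and tolerances are illustrative assumptions, not the paper's Mathematica code:

```python
import numpy as np

def newton(G, J, y0, tol=1e-10, max_iter=50):
    """Newton iteration y_{l+1} = y_l - J(y_l)^{-1} G(y_l) for G(y) = 0."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(y), G(y))
        y = y - step
        if np.linalg.norm(step) < tol:
            break
    return y

# Illustrative KKT system for: min x1^2 + x2^2  subject to  x1 + x2 = 1.
# Stationarity: 2*x_k - lam = 0 (k = 1, 2); feasibility: x1 + x2 - 1 = 0.
def G(y):
    x1, x2, lam = y
    return np.array([2 * x1 - lam, 2 * x2 - lam, x1 + x2 - 1.0])

def J(y):
    # Jacobian of G; constant because the KKT system here is linear.
    return np.array([[2.0, 0.0, -1.0],
                     [0.0, 2.0, -1.0],
                     [1.0, 1.0,  0.0]])

sol = newton(G, J, [0.0, 0.0, 0.0])   # converges to x1 = x2 = 0.5, lam = 1
```

For the interval version described in Theorems 1–3, the point Jacobian solve would be replaced by the hull-inverse operator N(y, y) = y − B^H G(y).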
Theorem 1 (see [38]). Let G: A_0 ⊆ R^n → R^n be Lipschitz continuous on A ⊆ A_0 and let B be a regular Lipschitz set on A; then a* ∈ G*(A), i.e., the equation G(y*) = a* has a unique solution y* ∈ A.
Theorem 2 (see [38]). Under the assumptions of Theorem 1 above, if y ∈ y ∈ IA, then every y′ ∈ IR^n satisfying N(y, y) := y − B^H G(y) ⊆ y′ has the following properties: if y* ∈ int(y) and y′ ⊆ y, then G contains a unique zero in y (and hence in y′). Since y, y* ∈ y implies y* ∈ N(y, y), it is natural to consider the general Newton iteration [38] with the general Newton operator.
Theorem 3 (see [38]). Let B be a strongly regular Lipschitz matrix on y ∈ IA_0 for G such that CB is regular, and let (CB)^I be an inverse of CB. If (CB)^I is regular, then the Newton iteration (6) is strongly convergent for every choice of y_l ∈ y_l. Moreover, for all l ≥ 0, we have either y_l ∉ y_(l+1) or y_(l+1) = y_l and G(y_l) = 0.
Corollary 1 (see [38]). If, for some C ∈ R^(n×n), CB is an M-matrix or ‖I − CB‖ < 1/2, then the optimal Newton iteration (6) is strongly convergent for every choice of y_l ∈ y_l, and the relations (8) on vol(y) hold.

Swarm Algorithm (SA).
Chaos theory (CT) is used to improve the performance of many SAs [39], where the high randomness of chaotic sequences improves the convergence and diversity of the solutions. CT concerns the irregular behavior that arises in nonlinear systems through chaotic maps. These maps act as particles that move in a small range of a nonlinear dynamic system without a known traveling path. Many researchers have proposed combinations of CT and metaheuristic algorithms to improve solution quality, such as hybrid chaos-PSO [40], the chaotic genetic algorithm [41], an evolutionary algorithm combined with chaos [42], the chaotic whale optimization algorithm [43], and chaotic artificial neural networks [44].

Chaotic Firefly Algorithm (CFA).
FA is an evolutionary computation technique [28]. The main advantages of FA are its exploitation and exploration. The improved FA with CT, called the chaotic firefly algorithm (CFA), is applied to solve IQPP. The main steps of CFA are as follows:
Step 1. Initialization. A population of N random fireflies (solutions) is initialized at t = 0, where T is the total number of iterations. The position of the i-th firefly in an n-dimensional space is denoted by x_i.
Step 2. Evaluation. Evaluate the fitness value (the light intensity) I(x_i^t) for all i = 1, 2, ..., N of each firefly in the population.
Step 3. Determination of the Best Solution. For minimization problems, the firefly with minimum light intensity is the best solution x_b.
Step 4. Updating Positions of Fireflies. For every firefly i = 1, 2, ..., N and every firefly j = 1, 2, ..., N, do the following: if I(x_j^t) < I(x_i^t), the i-th firefly is attracted to firefly j, and its position x_i^t is updated according to the following equation: where β_0 is the attractiveness at r_ij = 0, γ is the light absorption coefficient, r_ij is the Cartesian distance between fireflies i and j, α is a factor controlling the step size, and ε_G is a vector drawn from a Gaussian or other distribution.
Step 5. Stopping Condition. If a prespecified stopping criterion is satisfied, stop the run; otherwise, go to Step 4.
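Steps 4–5 can be sketched as a single position-update sweep. The update follows the standard FA rule x_i ← x_i + β_0 e^(−γ r²)(x_j − x_i) + α ε; the parameter defaults here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cfa_step(X, intensity, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One firefly sweep: each firefly moves toward every brighter one.

    For minimization, a lower objective value means higher light intensity,
    so firefly j is "brighter" than firefly i when intensity[j] < intensity[i].
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.array(X, dtype=float)
    N, n = X.shape
    for i in range(N):
        for j in range(N):
            if intensity[j] < intensity[i]:
                r2 = np.sum((X[i] - X[j]) ** 2)     # squared distance r_ij^2
                beta = beta0 * np.exp(-gamma * r2)  # attractiveness beta_0 e^{-gamma r^2}
                eps = rng.standard_normal(n)        # Gaussian perturbation
                X[i] = X[i] + beta * (X[j] - X[i]) + alpha * eps
    return X
```

In a full run, the intensities are re-evaluated after each sweep and the loop repeats until the stopping criterion of Step 5 is met.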

Chaotic Particle Swarm Optimization Algorithm (CPSO).
PSO can solve many difficult optimization problems and has faster convergence on some problems in comparison with other algorithms [45]. The idea of PSO is that several random particles are placed in the search domain of the optimization problem. At its current location, each particle evaluates the objective function. Each particle then determines its direction of movement in the search domain by combining aspects of the history of its own current and best locations with those of nearby particles in the swarm, together with some random perturbation. The next iteration takes place after all particles have moved. Eventually the swarm, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function. The i-th particle is described by an n-dimensional vector x_i = (x_i1, x_i2, ..., x_in), while its velocity is v_i = (v_i1, v_i2, ..., v_in). The best position the particle has visited is denoted by pbest_i = (p_i1, p_i2, ..., p_in), and the best position in the swarm by gbest = (g_1, g_2, ..., g_n). The steps of the CPSO algorithm are as follows: Step 1. Initialization.
where w is an inertia term, c_1 and c_2 are positive constants, and r_1 and r_2 are random numbers in (0, 1). (f) All positions x_i^t are updated according to the following equation: (g) Chaotic repairing of the new position x_i^(t+1); then return to Step 2(a). Step 3. Termination. If a prespecified stopping criterion is satisfied, stop the run; otherwise, go to Step 2.

Chaotic Repairing of an Infeasible Solution. If the new position x_i^(t+1) is infeasible, it is repaired according to the following equation: If x_i^(t+1) is still infeasible, it is repaired according to the following equation: where FS is any feasible solution in the search space and ϕ is a chaotic number generated by the following logistic map: where m is the age of the infeasible solution, c = 4, ϕ_0 ∈ (0, 1), and ϕ_0 ∉ {0, 0.25, 0.5, 0.75, 1}.
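The logistic map that drives the repair can be sketched as follows; only the map itself is shown (the repair equations use it together with the feasible solution FS), with c = 4 as stated above:

```python
def logistic_map(phi0, m, c=4.0):
    """Iterate the logistic map phi_{k+1} = c * phi_k * (1 - phi_k), m times.

    With c = 4 and phi0 in (0, 1) excluding {0.25, 0.5, 0.75}, the sequence
    is chaotic and stays in (0, 1).
    """
    phi = phi0
    for _ in range(m):
        phi = c * phi * (1.0 - phi)
    return phi
```

Here m would be the age of the infeasible solution, so older infeasible positions receive a more thoroughly mixed chaotic factor.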

The Proposed Approach.
In this section, we discuss the proposed approach. The following steps describe it clearly: Step 1. Replace all intervals in IQPP by additional variables; the resulting problem is called MQPP. Obtain the dual form of MQPP.
Step 2. Construct the KKT conditions for MQPP and solve the KKT equations by the numerical algorithm.
Step 3. Use the solutions of the numerical algorithm as starting points for CPSO and CFA.
Step 4. Solve MQPP and its dual form by CPSO and CFA.
Step 5. Compare the values of the objective function obtained by solving the problem with an SA and by solving its dual form. If the values are the same, the global optimal solution of the problem has been found. If there is a difference between the outputs of the problem and its dual, we solve the problem and its dual form again until the difference between them reaches ε, where ε can be computed as
ε = δ / (the optimal value of the problem), (13)
where δ is the difference between the optimal value of the problem and the optimal value of its dual problem. In this case, the solution is a local optimal solution. This comparison is used as a new stopping criterion. The suggested method is suitable for convex and nonconvex problems. The steps of the proposed approach are illustrated in Figure 1.
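Reading equation (13) as the relative primal–dual gap ε = δ / |optimal value of the problem| (an assumption about the intended formula, since the printed equation is ambiguous), the stopping test of Step 5 can be sketched as:

```python
def duality_gap_stop(f_primal, f_dual, eps=1e-4):
    """Stop when the relative primal-dual gap delta / |f_primal| <= eps.

    delta is the difference between the optimal value of the problem and
    the optimal value of its dual; eps is an illustrative tolerance.
    """
    delta = abs(f_primal - f_dual)
    return delta / max(abs(f_primal), 1e-12) <= eps
```

When the gap is exactly zero, the solution is accepted as globally optimal; a small nonzero gap indicates a local optimal solution in the sense of Step 5.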

Results and Discussion
The proposed algorithm is tested by solving three problems taken from the literature. Each problem was independently run 30 times. The proposed algorithm was programmed in MATLAB (R2016b) and implemented on a PC with a P4 3.00 GHz CPU, 1 GB RAM, an i5 processor, and the Windows 7 operating system. The proposed algorithm, like any nontraditional optimization algorithm, involves a number of parameters that affect its performance. The parameters adopted in the implementation of CFA and CPSO are listed in Table 1.

Problem 2.
This problem is formulated as follows [1]: By replacing all intervals by additional parameters, the problem becomes where The dual form of problem (22) is In [8, 9], problem (21) is divided into two problems. The first problem is Its solution is (x_1, x_2) = (1.5, 0.5) and f(x) = −3.5. The second problem is Its solution is (x_1, x_2) = (0.5, 0) and f(x) = −0.75. The solutions of the KKT conditions in (24) can be expressed as The results of Problem 2 obtained by using SAs are shown in Table 3.
In addition, the statistical results obtained by the original PSO, original FA, CPSO, and CFA over the 30 runs are summarized in terms of CPU time, mean value, standard deviation, and worst and best values in Table 5. The results show that the proposed SAs (CFA and CPSO) outperform the original algorithms in terms of optimality. In addition, these results show that the proposed SAs can solve IQPP effectively at low computational cost: their CPU time is less than that of the original algorithms, as shown in Table 5. In other words, the solutions of the test problems obtained by CPSO and CFA are the same as those of previous methods, but they are obtained much faster and with little computational effort. On the other hand, the numerical approach gives the solution as a general formula in the additional variables, from which we can obtain the solution at any values inside the intervals. In addition, the numerical solution provides the boundaries of the basic variables, which are used in the initialization step of the SAs. Finally, our approach, like any SA, is more general and more suitable for real applications than traditional methods.

Conclusion
This paper presented a new approach to solving IQPP. We aim to explore the feasible region to find the optimal solution anywhere in the intervals. All intervals were replaced by additional variables; the new form with additional variables is MQPP. The KKT conditions for MQPP were solved numerically to obtain the solutions as functions of the additional variables and to provide the boundaries of the basic variables. These solutions are used as starting points for the SAs. CPSO and CFA are used to solve MQPP and its dual form. The advantages of our procedure are that (1) the solution of the numerical method is more general than those of previous methods; (2) it gives the decision maker a very fast view of the optimal solution inside the intervals; (3) using the optimal solution of the dual problem as a stopping criterion for SAs is more suitable than other criteria; and (4) its effectiveness is verified by comparison with other studies. We also compare PSO, FA, CPSO, and CFA with each other. Real applications of interval nonlinear programming problems should be considered in the future. In addition, we plan to use this approach to solve multiobjective linear programming with interval coefficients and to investigate the GSK algorithm for solving IQPP.

Data Availability
All data used to support the findings of this study are included within the article.