Information Literacy Assessment with a Modified Hybrid Differential Evolution with Model-Based Reinitialization

Information literacy assessment is extremely important for the evaluation of the information literacy skills of college students. Intelligent optimization techniques are an effective strategy for optimizing the weight parameters of the information literacy assessment index system (ILAIS). In this paper, a new variant of the differential evolution (DE) algorithm, named hybrid differential evolution with model-based reinitialization (HDEMR), is proposed to accurately fit the weight parameters of ILAIS. The main contributions of this paper are as follows: firstly, an improved contraction criterion, based on the population entropy in objective space and the maximum distance in decision space, is employed to decide when the local search starts. Secondly, a modified model-based population reinitialization strategy is designed to enhance the global search ability of HDEMR on complex problems. Two types of experiments are designed to assess the performance of HDEMR. In the first type, HDEMR is tested and compared with seven well-known DE variants on the CEC2005 and CEC2014 benchmark functions. In the second type, HDEMR is compared with the well-known and widely used deterministic algorithm DIRECT on GKLS test classes. The experimental results demonstrate the effectiveness of HDEMR for global numerical optimization and show its competitive performance. Furthermore, HDEMR is applied to optimize the weight parameters of ILAIS at China University of Geosciences (CUG), and satisfactory results are obtained.


Introduction
With the arrival of the new economic era based on information and knowledge, information has become an important factor in all fields of society. Information literacy (IL) skills, i.e., the ability to locate, evaluate, and effectively use needed information, have become increasingly important, especially for college students. At present, there are several college education information literacy standards, such as the American Association of College and Research Libraries (ACRL) standard, including 5 first-level indexes, 22 second-level indexes, and 86 third-level indexes; the Australian and New Zealand Institute for Information Literacy (ANZIIL) standard, consisting of 6 first-level indexes, 19 second-level indexes, and 67 third-level indexes; and the Society of College, National and University Libraries (SCONUL) standard in the United Kingdom, composed of 7 first-level indexes and 17 second-level indexes. Nevertheless, in the face of the explosion of digital information, it is difficult for college students to obtain the required information accurately.
During the last few decades, information literacy has attracted much attention from researchers. Webber and Johnston [1] identified some key definitions of information literacy and related the student response to two models of information literacy. Cooney and Hiris [2] described the use of a collaborative framework for integrating information literacy into a graduate finance course and used a checklist for assessing the results. Valerie et al. [3] proposed a revised assessment method that took the form of a portfolio and reported the results of a case study evaluating the revision of the assessment methods of an information literacy module; the results showed that it was also economical and efficient. Pinto [4] designed the IL-HUMASS survey on information literacy, which contained 26 items grouped into four categories and three self-reporting dimensions. Naimpally et al. [5] introduced a broad overview of the different assessment tools in information literacy assessment and highlighted four tools for the assessment of IL in engineering. Kousar and Mahmood [6] assessed the IL skills of first-year undergraduate engineering students at a Pakistani university in order to plan instruction; they used independent-samples t-tests and ANOVA to analyze the data for the integration of instruction into the university curricula. Krakowska [7] gathered data from a questionnaire and analyzed it in a quantitative and qualitative manner; the evaluation of the results helped to understand the students' information behavior and increased awareness of information literacy implementation, development, and status within the academic information environment. Pavlovski and Dunđer [8] presented the results of a survey regarding information literacy carried out on undergraduate students of the University of Zagreb and gained insight into the students' information retrieval accuracy in course-related research. Kultawanich et al.
[9] reported on the development and validation of information literacy assessment tools for undergraduate students. These assessment tools consisted of three instruments: (1) an IL Test, (2) an IL Rubric, and (3) an Information Literacy Self-Efficacy (ILSE) Scale. Douglas et al. [10] presented the development and initial validation study of a self-directed information literacy assessment for engineering and technology students. The exploratory factor analysis results provided evidence of structural aspects of validity and support for the scoring structure. Kavšek et al. [11] used two groups of first-year psychology students to evaluate the effect of information literacy (IL) training and to follow changes in acquired IL over time. The results showed an important role of IL training in students' IL development over time. Lanning and Mallek [12] analyzed multiple factors from current university students' high school experiences to evaluate the students' information literacy skills. The results of regression analyses demonstrated that only current university GPA and standardized test scores had any influence on information literacy test scores.
An optimization problem is the problem of finding the best solution among all feasible solutions. Optimization problems can be divided into two categories depending on whether the variables are continuous or discrete. The standard form of a continuous optimization problem is

minimize f(x), subject to x ∈ D0,

where D0 = {x ∈ E^n | ℓ ≤ x ≤ u} is a simple box constraint set. Optimization approaches include deterministic search methods (such as direct search methods, simplex-based search, and branch-and-bound search methods) and nondeterministic (heuristic) search methods (such as simulated annealing, genetic algorithms, differential evolution, and particle swarm optimization).
Among the deterministic search methods, researchers have proposed global optimization methods that can tackle multimodal optimization problems satisfying the Lipschitz condition [13]. Sergeyev and Kvasov [14] proposed a new efficient algorithm for optimizing multidimensional "black-box" functions satisfying the Lipschitz condition. In this algorithm, a novel technique balancing the usage of local and global information during partitioning and a new procedure for finding lower bounds of the objective function over hyperintervals were considered. Jones et al. [15, 16] proposed the well-known and widely used algorithm DIRECT (DIviding RECTangles), in which the objective function is evaluated at several sample points during each iteration by using all possible weights on local versus global search expressed by the Lipschitz constant. Liuzzi et al. [17] focused on the selection strategy and proposed two strategies to exploit information on the objective function.
The first one was based on knowledge of the global optimum value of the objective function. The second one did not require any a priori knowledge of the objective function and tried to exploit information on the objective function gathered during the progress of the algorithm. Paulavičius et al. [18] proposed a globally biased simplicial partition DISIMPL algorithm for the global optimization of expensive Lipschitz continuous functions to improve search efficiency. Gimbutas and Žilinskas [19] introduced a bi-criteria choice of simplices for subdivision to enhance the efficiency of simplicial partition-based minimization of black-box objective functions; the first criterion was the minimum of an estimated Lipschitz bound, and the second was the size of the candidate simplex. Sergeyev et al. [13] proposed a new efficient visual technique for the systematic comparison of global optimization algorithms and tried to bridge the gap between deterministic and heuristic methods.
Among the heuristic search methods, DE and its variants have emerged as one of the most competitive and versatile families of evolutionary algorithms in computational intelligence. DE was first proposed by Storn and Price [20] to solve global numerical optimization problems over continuous search spaces. It is a simple yet powerful evolutionary algorithm and exhibits excellent capability in solving a variety of numerical and real-world optimization problems, such as space trajectory design [21-25], hydrothermal optimization [26], underwater glider path planning [27], the vehicle routing problem [28, 29], short-term optimal hydrothermal scheduling [30], satellite scheduling [31, 32], and satellite image enhancement [33]. During the last decade, many researchers have worked on the improvement of DE, with significant progress in its development. In 2011 and 2016, Das et al. [34, 35] presented comprehensive surveys on DE, including the basic concepts and major variants of DE, as well as its applications and theoretical studies. Next, we briefly introduce recent developments of hybrid DE in the last two years.
Hybrid DE combines DE with other global or local search techniques. Awad et al. [36] introduced a novel hybridization between differential evolution and the update processes of the stochastic fractal search algorithm. Ali et al. [37] and Awad et al. [38], respectively, focused on hybridizing the cultural algorithm (CA) with DE; the purpose was to combine the explorative and exploitative capabilities of the two evolutionary algorithms. Cai et al. [39] proposed an adaptive social learning (ASL) strategy for DE, named SL-DE, to extract the neighborhood relationship information of individuals in the current population. Jadon et al. [40] proposed a hybridization of the artificial bee colony (ABC) and DE algorithms, called HABCDE, to develop a more efficient meta-heuristic than ABC and DE. Peng et al. [41] hybridized DE with commensal learning and uniform local search (CUDE). In CUDE, commensal learning was proposed to adaptively select the optimal mutation strategy and parameter setting simultaneously under the same criteria, while uniform local search enhanced the exploitation ability. Zhao et al. [42] proposed a hybrid optimization algorithm based on chaotic differential evolution and estimation of distribution (cDE/EDA), which could discover the optimal solution in a fast and reliable manner; a chaotic policy was used to strengthen the search ability of DE. Peng et al. [43] proposed an improved memetic differential evolution algorithm, called MDE, which hybridized differential evolution with a local search (LS) operator and periodic reinitialization to balance exploration and exploitation for solving global optimization problems. Under the framework of LSHADE [44], Mohamed et al. [45] proposed a new semiparameter adaptation approach, named LSHADE-SPA, to effectively adapt the values of the scaling factor; a hybridization framework named LSHADE-SPACMA, a combination of LSHADE-SPA and a modified version of CMA-ES, was then introduced. Sabar et al. [46] proposed a heterogeneous framework that integrated a cooperative coevolution method with various types of memetic algorithms for solving big data optimization problems; in this framework, DE was utilized to optimize the subproblems generated by the cooperative coevolution method.
This work is motivated by the following observations: (1) The main difficulties in the college students' information literacy assessment index system are the logical structure of the index system and how to assign weight parameters to it. There are mainly two ways to determine weights: one is subjective assignment, and the other is objective assignment. Subjective assignment mainly refers to determining weights according to human experience and knowledge. Objective assignment is based on questionnaire data. Hence, it is necessary for ILAIS to use advanced optimization techniques to fit the weight parameters.
(2) However, DE has been demonstrated to converge to a fixed point, a level set [23], or a hyperplane not containing the global optimum [47]. Furthermore, in some cases it exhibits slow local convergence. Thus, integrating additional strategies to improve the optimization performance of DE is worth investigating.
Based on the above considerations, in this paper we propose a modified hybrid DE, called HDEMR, which uses the local search algorithm Broyden-Fletcher-Goldfarb-Shanno (BFGS) to improve local convergence. A modified contraction criterion based on the entropy of the population in objective space and the maximum distance in decision space is employed to trigger the local search. In addition, we define an archive to save the best individual in the population after the local search, and the population is then reinitialized based on a probability model of the archive. The paper is organized as follows: in Section 2, the college students' information literacy assessment index system and the weight parameter optimization model of ILAIS are introduced. DE is briefly introduced in Section 3. The proposed algorithm is described in detail in Section 4. The experimental study and results analysis are presented in Section 5. HDEMR for the weight parameter optimization of ILAIS is presented in Section 6. Finally, in Section 7, we conclude this paper and suggest ideas for future work.

The Weight Parameter Optimization Model of ILAIS
For the sake of completeness, in this section the college students' ILAIS at China University of Geosciences (CUG) is briefly described first. Then, the weight parameter optimization model of ILAIS based on an objective function is introduced. The ILAIS at CUG includes four first-level indexes and thirteen second-level indexes, as shown in Table 1.

The Weight Parameter Optimization Model

Before using optimization techniques to identify the weight parameters of the ILAIS at CUG, two issues must be addressed. First, the weight parameters to be optimized must be chosen. In the ILAIS at CUG established in Table 1, the number of weight parameters is related to the level: there are four weight parameters for the first-level indexes and thirteen for the second-level indexes.
In this work, all weight parameters are treated as unknown parameters to be identified by the optimization algorithm. Thus, the decision vector x is formulated as x = (x 1, x 2, . . ., x D), where D is the number of decision variables. An example for the determination of the first-level index weight parameters is given here: let the weight of Information Consciousness be x 1, the weight of Information Knowledge be x 2, the weight of Information Ability be x 3, and the weight of Information Morality be x 4. The other important issue in evolutionary algorithms is the objective function. When optimization techniques are used in parameter identification problems of ILAIS, the objective function should be defined first. In this work, the weight deviation sum is used as the objective function; in its definition, P and Q are, respectively, the numbers of teachers and students participating in the questionnaire, t and s are the authority coefficients of teachers and students, x p,j is the weight vector given by the p-th teacher, x q,j is the weight vector given by the q-th student, x i,j is the i-th weight individual, N p is the population size, and j = 1, 2, . . ., D in HDEMR.
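A weight-deviation-sum objective of this kind can be sketched in Python. The exact deviation form and the example authority coefficients t and s below are illustrative assumptions, not the paper's equation:

```python
import numpy as np

def weight_deviation_sum(x, teacher_w, student_w, t=0.6, s=0.4):
    """Weight-deviation-sum objective: total absolute deviation of the
    candidate weight vector x from the questionnaire weight vectors,
    with teachers and students weighted by authority coefficients t and s.
    (Assumed form; the paper's exact equation is not reproduced here.)"""
    dev_teachers = np.abs(teacher_w - x).sum()  # sum over P teachers and D weights
    dev_students = np.abs(student_w - x).sum()  # sum over Q students and D weights
    return t * dev_teachers + s * dev_students

# Hypothetical questionnaire data: 3 teachers, 5 students, 4 first-level weights.
rng = np.random.default_rng(1)
teacher_w = rng.dirichlet(np.ones(4), size=3)  # each row sums to 1
student_w = rng.dirichlet(np.ones(4), size=5)
x = np.full(4, 0.25)                           # candidate weight vector
print(weight_deviation_sum(x, teacher_w, student_w))
```

An optimizer such as HDEMR then searches for the x minimizing this deviation sum over the box of feasible weights.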

A Short Introduction to Differential Evolution
DE is a population-based stochastic parallel direct search optimization method. It begins with a randomly generated population in decision space. Then, DE iteratively applies a trial vector generation strategy (i.e., mutation and crossover operators) and a selection operator to evolve the population until a stopping criterion is met. A mutation scheme can be denoted as DE/a/b, where a stands for the mutation strategy and b specifies the number of difference vectors. The five most frequently used mutation strategies are:

"DE/rand/1": v i = x r1 + F(x r2 − x r3)
"DE/best/1": v i = x best + F(x r1 − x r2)
"DE/current-to-best/1": v i = x i + F(x best − x i) + F(x r1 − x r2)
"DE/best/2": v i = x best + F(x r1 − x r2) + F(x r3 − x r4)
"DE/rand/2": v i = x r1 + F(x r2 − x r3) + F(x r4 − x r5)

where r 1, r 2, r 3, r 4, r 5 ∈ {1, . . ., N p} are mutually distinct indices different from i, v i is the mutant vector, x best is the individual with the best fitness in the population at the current generation, and the scaling factor F is a positive value that controls the difference vectors.
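The classical mutation schemes can be sketched in Python; index sampling details vary across implementations, and this is only a minimal illustration:

```python
import numpy as np

def mutate(pop, i, F, strategy, best_idx):
    """Generate a mutant vector v_i with one of the five classical DE schemes.
    pop: (Np, D) array; r1..r5 are distinct indices different from i."""
    Np = len(pop)
    r1, r2, r3, r4, r5 = np.random.default_rng().choice(
        [k for k in range(Np) if k != i], size=5, replace=False)
    x, best = pop[i], pop[best_idx]
    if strategy == "DE/rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "DE/best/1":
        return best + F * (pop[r1] - pop[r2])
    if strategy == "DE/current-to-best/1":
        return x + F * (best - x) + F * (pop[r1] - pop[r2])
    if strategy == "DE/best/2":
        return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "DE/rand/2":
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError(strategy)
```

Note that "DE/rand/2" consumes five distinct random indices, which is why five indices are sampled above.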
To diversify the current population, following mutation, DE uses a crossover operator to produce the trial vector u i from x i and v i. The most commonly used operator is the binomial crossover performed on each component as follows:

u i,j = v i,j, if rand[0, 1) ≤ CR or j = j rand; otherwise u i,j = x i,j,

where i = 1, . . ., N p, j = 1, . . ., D, rand[0, 1) represents a number drawn uniformly between 0 and 1, CR ∈ [0, 1] is the crossover rate, and j rand is a randomly selected integer within [1, D]. After mutation and crossover, each generated trial vector undergoes a boundary constraint check. If some variables of the trial vector are out of the boundary, a repair method is applied, where x min j and x max j are the lower and upper bounds of the j-th variable and rand[0, 1] returns a uniformly distributed random number between 0 and 1 (0 ≤ rand[0, 1] ≤ 1).

Table 1: The college students' ILAIS at CUG.
L1 (Information Consciousness): L11: the recognition of the value of information and the objective evaluation of the role of information; L12: attitudes towards various social problems involved in the process of access to and use of information; L13: recognise the right useful information.
L2 (Information Knowledge): L21: effectively select information retrieval tools and know the advantages and disadvantages of different information retrieval tools; L22: have knowledge of information sources, and understand the value of various sources of information and the communication process; L23: identify reliable and significant information sources, and master basic information science knowledge.
L3 (Information Ability): L31: the ability of information retrieval; L32: the ability to use and process information; L33: the ability to share, deliver, and create information; L34: the ability to learn information knowledge independently and cooperate in information development.
L4 (Information Morality): L41: the ability to master information law; L42: information security and privacy; L43: information ethics cognition and behavior.
Finally, the selection operator is performed to select the better of x i and u i to enter the next generation:

x i = u i, if f(u i) ≤ f(x i); otherwise x i is retained,

where f(x) is the objective function to be minimized.
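Binomial crossover, boundary repair, and greedy selection can be sketched as follows. The repair rule here is a common random-reinitialization variant, an assumption where the paper's exact rule is not reproduced:

```python
import numpy as np

def crossover(x, v, CR, rng):
    """Binomial crossover: take v_j with probability CR, and force at least
    one mutant component via the index j_rand."""
    D = len(x)
    j_rand = rng.integers(D)
    mask = rng.random(D) < CR
    mask[j_rand] = True
    return np.where(mask, v, x)

def repair(u, lo, hi, rng):
    """Reset out-of-bounds components uniformly inside the box
    (an assumed repair rule for illustration)."""
    bad = (u < lo) | (u > hi)
    u = u.copy()
    u[bad] = lo[bad] + rng.random(bad.sum()) * (hi[bad] - lo[bad])
    return u

def select(x, u, f):
    """Greedy selection: keep the trial vector if it is no worse."""
    return u if f(u) <= f(x) else x
```

Together with a mutation operator, these three functions make up one DE generation step per individual.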

Proposed Approach
In this section, we propose a novel DE variant, named HDEMR. HDEMR contains three main components: the improved contraction criterion, the BFGS local search, and the model-based reinitialization strategy. The complete framework of HDEMR is shown in Algorithm 1. Next, the implementation of these three components is introduced in detail.

Improved Contraction Criterion.
In order to design an effective and efficient hybrid algorithm for global optimization, we need to take advantage of both the exploration capability of the EA and the exploitation capability of the LS and combine them in a well-balanced manner. For the successful incorporation of LS into DE, a triggering condition, called the contraction criterion, is needed to decide when the local search should start. There are several kinds of methods to define a contraction criterion. Qin and Suganthan [48] applied a local search method after a fixed number of generations (every 200 generations). Sun et al. [49] used the condition that if the promising solution was not updated in t consecutive generations, LS would start. Simon et al. [50] used the minimum fitness in the objective space as the contraction criterion. Vasile et al. [23] performed LS when the maximum distance in decision space was below a given threshold. Peng et al. [43] proposed a new contraction criterion which combined the improved maximum distance in objective space and the maximum distance in decision space.
To judge the individuals' distribution well, a good measure of the crowding degree around each solution is needed as a mixed triggering condition. In HDEMR, we propose an improved contraction criterion which combines two criteria: (a) E, the population entropy in objective space, and (b) S, the maximum distance in decision space.

Theorem 1. Let R be the range of the fitness function values of the population in the proposed algorithm. If R is divided into n equal subintervals and A 1, A 2, . . ., A n are the corresponding subsets of the population, the population entropy of HDEMR is defined as

E = − Σ_{i=1}^{n} (s i / N p) log(s i / N p),

where s i is the number of individuals in the subset A i and N p is the population size in HDEMR. E is the measure representing the diversity of the population in objective space.
The distance in decision space is defined as

S = max_{1 ≤ i < j ≤ N p} ‖x i − x j‖,

where ‖·‖ is the Euclidean distance and S is the measure representing the diversity of the population in decision space.
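The two triggering measures can be sketched in NumPy. The equal-width partition of the fitness range into n bins is an assumption consistent with the definition of E above:

```python
import numpy as np

def population_entropy(fitness, n_bins, Np):
    """Population entropy in objective space: partition the fitness range
    into n_bins equal subintervals A_i, count members s_i, and sum
    -(s_i/Np) * log(s_i/Np) over the nonempty subsets."""
    counts, _ = np.histogram(fitness, bins=n_bins)
    p = counts[counts > 0] / Np
    return float(-(p * np.log(p)).sum())

def max_pairwise_distance(pop):
    """Maximum Euclidean distance between any two individuals
    in decision space (the measure S)."""
    diffs = pop[:, None, :] - pop[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(-1)).max())
```

When E falls below E max or S falls below S max, the population is considered contracted and the local search is triggered.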

BFGS Local Search.
The local search utilizes the better solutions obtained by the global search to update the population and thus enhances the algorithm's exploitation ability to find the best solution. In HDEMR, we use the BFGS algorithm as the local search method. It is one of the quasi-Newton methods, which do not need the exact Hessian matrix but approximate it from successive gradients. BFGS is considered to be the most effective and popular quasi-Newton method and has been proven to perform well on unconstrained nonlinear optimization problems. The details can be found in [51, 52].
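A minimal BFGS sketch with a finite-difference gradient and Armijo backtracking line search is shown below; in practice one would use an off-the-shelf routine, and the step-size and update tolerances here are illustrative choices:

```python
import numpy as np

def bfgs(f, x0, max_iter=100, tol=1e-8):
    """Minimal BFGS local search: maintains an inverse-Hessian approximation H
    updated from successive gradient differences (a sketch, not production code)."""
    def grad(x, h=1e-6):
        g = np.zeros_like(x)
        for j in range(len(x)):
            e = np.zeros_like(x); e[j] = h
            g[j] = (f(x + e) - f(x - e)) / (2 * h)  # central difference
        return g

    x = np.asarray(x0, float)
    H = np.eye(len(x))                  # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                      # quasi-Newton search direction
        step = 1.0
        while f(x + step * d) > f(x) + 1e-4 * step * (g @ d):  # Armijo condition
            step *= 0.5
            if step < 1e-12:
                break
        s = step * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                  # curvature condition: BFGS update of H
            rho = 1.0 / sy
            I = np.eye(len(x))
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

In HDEMR, such a routine would be started from x best whenever the contraction criterion fires.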

Model-Based Reinitialization Strategy.
After the local search is performed, if the best solution has not been improved, a reinitialization of the whole population is used to give the algorithm more opportunities to find the global optimum. Sun et al. [49] chose the individuals that had the largest distances from the local optima to form the next population from a temporary population. Simon et al. [50] proposed a partial reinitialization of the population: every 20 generations, the algorithm selected the best M individuals from a temporary population of 2M + 2 individuals as the reinitialization pool. Zamuda et al. [53] proposed a population size reduction method as the reinitialization strategy. In this paper, we propose a model-based reinitialization strategy. If the local search does not improve the best solution in the current generation, the population will be restarted. A counter C keeps track of the number of restarts. For C < C max, where C max is user-defined, the new population, P′, is generated randomly in the search space. For C ≥ C max, P′ is initialised from a Gaussian distribution model based on the mean and standard deviation of the best solutions in the Archive. Algorithm 2 summarises the model-based reinitialization procedure.

Experimental Study
To assess the performance of HDEMR, three groups of experiments are conducted. We compare HDEMR with seven well-known algorithms: JADE [54], CoDE [55], jDE [56], MPEDE [57], SHADE [58], LSHADE [44], and LSHADE-ε [59]. In the first group of experiments, we select the 21 nonnoisy benchmark functions (excluding the noisy functions F4, F17, F24, and F25) of IEEE CEC2005 [60]. In the second group of experiments, we use the 30 test functions of IEEE CEC2014 [61] to further demonstrate the effectiveness of HDEMR. In the third group of experiments, we compare HDEMR with the well-known and widely used deterministic algorithm DIRECT [15, 16] on the GKLS test classes [62] to further verify the performance of HDEMR.

ALGORITHM 1: The pseudocode of the HDEMR algorithm.
Input: control parameters N p, D, Max NFEs, Flag, C, E max, and S max
Output: the best final solution
  Initialize the population P randomly; C = 0
  Calculate the objective function value of each solution in the population P
  while the stopping criterion is not met do
    for each individual x i do
      Produce the mutant vector v i by Equation (3)
      Produce the trial vector u i by Equation (8)
      Apply the boundary constraint check to the violated solution as shown in Equation (9)
      Evaluate the trial vector u i; NFEs = NFEs + 1
      if u i is better than x i then x i = u i
    Calculate the contraction criterion in Equations (12) and (13) as described in Section 4
    if E < E max or S < S max then
      /* x best is the best solution in the current generation, x g min is the global optimum */
      Pick x best as the initial point of the local search
      Apply BFGS(x best) to find the resultant new local optimum x local
      if x local is better than x best then replace x best with x local
      else run the model-based reinitialization to create a new population P′ as described in Algorithm 2

ALGORITHM 2: The pseudocode of model-based reinitialization.
Input: control parameter C max; M, the number of individuals in the Archive
Output: the new population P′
  if C < C max then
    reinitialize the population P′ randomly
  else
    reinitialize the population P′ based on the probability distribution model of the Archive

Performance Test of HDEMR on CEC2005 Benchmark Functions

For each algorithm, we conduct 25 independent runs and limit each run to 10000 × D function evaluations on the 21 benchmark problems as in [60]. Because the dimension of the weight parameter identification of ILAIS is approximately 10, HDEMR is tested on the benchmark problems at D = 10 to evaluate its performance on low-dimensional problems. The algorithms are evaluated in terms of the function error value [60], defined as f(x) − f(x*), where x* is the global optimum of the test function. The mean error and standard deviation of the function error values are recorded. The parameters of HDEMR are set as N p = 30, E max = 3.0, S max = 2.0, C max = 9, CR ∈ N(0.8, 0.1), and F ∈ N(0.5, 0.1). For the other seven well-known algorithms, we use the same parameter settings as in their original papers.
To effectively analyze the results, two nonparametric statistical tests are used, as similarly done in [63]. (i) The Wilcoxon rank-sum test at α = 0.05 is performed to test the statistical significance of the experimental results between two algorithms in the single-problem and multi-problem cases. (ii) The Friedman test is employed to obtain the average rankings of all the compared algorithms. The Wilcoxon rank-sum test in the single-problem case is calculated in Matlab, while the Wilcoxon test in the multi-problem case and the Friedman test are carried out with the software KEEL [64].
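The average Friedman ranking used in such comparisons can be illustrated in NumPy; this is a sketch of how per-problem ranks (with ties averaged) are aggregated, not a reimplementation of KEEL:

```python
import numpy as np

def tied_ranks(row):
    """1-based ranks of a 1-D array; tied values receive the mean of
    the ranks they would occupy."""
    order = np.argsort(row, kind="stable")
    vals = row[order]
    r = np.empty(len(row))
    i = 0
    while i < len(row):
        j = i
        while j + 1 < len(row) and vals[j + 1] == vals[i]:
            j += 1
        r[order[i:j + 1]] = (i + j) / 2 + 1  # mean rank of the tied group
        i = j + 1
    return r

def friedman_average_ranks(errors):
    """Average Friedman ranks over problems: errors is a
    (problems x algorithms) matrix of mean error values; lower error
    gets a lower (better) rank on each problem."""
    R = np.vstack([tied_ranks(row) for row in np.asarray(errors, float)])
    return R.mean(axis=0)
```

The algorithm with the smallest average rank is the best performer across the benchmark suite.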
According to the experimental results in Table 2, some interesting phenomena can be observed. For the unimodal functions (i.e., F1, F2, and F3), the performance of HDEMR is significantly better than that of the other algorithms. Moreover, HDEMR also outperforms the other algorithms on F6 and F8. For the hybrid composition functions (i.e., F18-F23), HDEMR exhibits the best performance among the eight algorithms.
The outstanding performance of HDEMR on the hybrid composition functions can be attributed to its capability to balance exploration and exploitation. JADE, CoDE, SHADE, LSHADE, and LSHADE-ε perform quite well on the expanded multimodal test functions (i.e., F13 and F14), which means history-based parameter and trial vector generation adaptation strategies can effectively improve the performance of advanced DE variants on the expanded multimodal functions.
In order to further validate the performance of the model-based reinitialization strategy, we compare HDEMR with HDE (hybrid differential evolution with random reinitialization). Table 3 shows the results of HDEMR and HDE on the CEC2005 benchmark functions. It can be seen that HDEMR performs significantly better than HDE on 8 test functions, HDE wins on 1 test function, and HDEMR obtains results similar to HDE on 12 test functions. Notably, HDEMR performs quite well on most of the hybrid composition functions (i.e., F15 and F18-F22), which further demonstrates that the model-based reinitialization strategy can effectively improve the performance of HDEMR on difficult functions.
To further detect the significant differences between HDEMR and the seven competitors, the multiple-problem Wilcoxon test and the Friedman test are carried out. As shown in Table 4, HDEMR obtains higher R+ values than R− values in all cases, where R+ is the sum of ranks for the functions on which HDEMR outperforms the compared algorithm and R− is the sum of ranks for the opposite [63]. According to the Wilcoxon test at α = 0.05, significant differences can be observed in three cases (i.e., HDEMR vs. JADE, HDEMR vs. jDE, and HDEMR vs. MPEDE). At α = 0.1, there are significant differences in five cases (i.e., HDEMR vs. JADE, HDEMR vs. jDE, HDEMR vs. MPEDE, HDEMR vs. SHADE, and HDEMR vs. LSHADE-ε), which means that HDEMR is significantly better than JADE, jDE, MPEDE, SHADE, and LSHADE-ε on the 21 test functions at α = 0.1. Moreover, the results shown in Figure 1 indicate that HDEMR and LSHADE have the best ranking (3.4762) among the eight compared algorithms. In general, the above comparison clearly demonstrates that HDEMR and LSHADE are significantly better than the other competitors on the CEC2005 benchmark functions.

Sensitivity in Relation to the Parameters E max and S max .
The improved contraction criterion in HDEMR contains two parameters, E max and S max. They are the triggering conditions for the local search. In order to investigate the sensitivity of these two parameters, we test HDEMR with 9 different combinations of E max and S max; the results are summarized in Table 5.
From Table 5, we can see that HDEMR achieves the best average ranking value at E max = 3.0 and S max = 2.0 among the 9 tested combinations. In general, we can conclude that it is better to set a large value for E max and a small value for S max. (When seven algorithms obtain the global optimum, the intermediate results are reported at NFEs = 10000; "+", "−", and "=" denote that the performance of an algorithm is, respectively, better than, worse than, and similar to HDEMR according to the Wilcoxon rank-sum test at α = 0.05.) The sensitivity of C max is investigated in Table 6, where the values of C max are set to C max ∈ {3, 5, 7, 9, 11, 13, 15}. All other parameters are kept unchanged as described in Section 5. In addition, all experiments are conducted for 25 independent runs for each function.
It can be seen from Table 6 that HDEMR with C max = 9 obtains a better average ranking value (3.3333) than the other six cases; C max = 15 is the second best choice. Some interesting phenomena can be observed in Table 6: the large C max values (i.e., 13 and 15) are better than the small ones (i.e., 3 and 5). Generally speaking, a large C max value such as 9 or 15 helps enhance the performance of HDEMR.

Performance Test of HDEMR on CEC2014 Benchmark Functions

In this section, we compare our approach with the same seven algorithms on the 30 benchmark problems of CEC2014 as in [61]. For each function, 51 independent runs are conducted, each limited to 10000 × D function evaluations. An average function error value smaller than 10^−8 is taken as zero.
The parameters of HDEMR are set as N p = 30, E max = 3.0, S max = 2.0, C max = 9, CR ∈ N(0.8, 0.1), and F ∈ N(0.5, 0.1). For the other seven well-known algorithms, we use the same parameter settings as in their original papers. Table 7 shows the mean error and standard deviation of the function error values. Table 8 shows the multiple-problem Wilcoxon test results, and Figure 2 shows the Friedman test results. Additionally, some representative convergence graphs of the eight algorithms are shown in Figures 3 and 4.
In addition, some interesting phenomena can be observed from the experimental results in Table 7. For the unimodal functions (i.e., F1, F2, and F3), HDEMR produces results similar to those of the other algorithms. For the simple multimodal and hybrid functions, HDEMR obtains better or similar results compared with JADE, CoDE, jDE, and MPEDE, while LSHADE and LSHADE-ε are very good at solving these functions. For the composition functions (i.e., F23-F30), HDEMR exhibits the best performance among the eight algorithms. The outstanding performance of HDEMR on the composition functions further demonstrates its capability to balance exploration and exploitation. The representative convergence graphs in Figures 3 and 4 show that HDEMR has a fast convergence speed on most of the functions, especially on the composition functions of the CEC2014 benchmark.

Performance Test of HDEMR on GKLS Test Classes.
To further validate the performance of HDEMR, the popular GKLS generator is used. This generator allows one to randomly generate classes of test problems with the same dimension, number of local minima, and difficulty, and each class contains 100 problems. Here, 1000 test functions are used in total, consisting of 10 classes of problems with dimensions D = 2, 3, 4, 5, and 10. The control parameters of the GKLS test classes used in the experiments are presented in Table 9. The control parameters of the GKLS generator include (1) the radius of the convergence region ρ, (2) the distance between the paraboloid vertex and the global minimizer r, and (3) the tolerance Δ (for details, see Sergeyev and Kvasov [14]). The parameters of HDEMR are set as N_p = 30, E_max = 3.0, S_max = 2.0, C_max = 9, CR ∈ N(0.8, 0.1), and F ∈ N(0.5, 0.1). To make the comparison more reliable, the parameters of DIRECT are set as recommended in its original papers. For each generated problem, HDEMR is executed for 100 independent runs and DIRECT is executed once. The maximal number of trials (or function evaluations) is MaxNFEs = 10^6.
To effectively analyze the results, operational characteristics and operational zones are used in the experiments, similarly to what is done in [13].
Because of the huge amount of data, only average results are included in Table 10. The record m(i) means that the algorithm fails to solve a global optimization problem i times out of 100 runs × 100 problems (i.e., out of 10,000 runs for HDEMR and 100 runs for DIRECT). Figures 5 and 6 show the operational characteristics of DIRECT and the operational zones of HDEMR for the different dimensions. Figure 5 shows the results on the 2-, 3-, and 4-dimensional simple and hard classes for HDEMR and DIRECT. It can be seen from Figure 5(a) that the operational characteristic of DIRECT is higher than the upper boundary of the HDEMR zone and, therefore, DIRECT performs better on this class. Figure 5(b) shows that the average performance within the zone of HDEMR (the red line) is higher than the characteristic of DIRECT, which means that HDEMR outperforms DIRECT. Figures 5(c) and 5(d) show that the average performance of HDEMR is better than that of DIRECT on both the 3-dimensional simple and hard classes. Figures 5(e) and 5(f) show that on the 4-dimensional simple and hard classes, the average performance of HDEMR is better than that of DIRECT when the number of function evaluations is less than 30000; above 30000 evaluations, DIRECT behaves better, since its characteristic is higher than the average line of the HDEMR zone. Similar results can be seen on the 5-dimensional simple and hard classes in Figures 6(a) and 6(b).
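An operational characteristic is simply the number of problems in a class solved within a given evaluation budget, plotted against the budget; for the stochastic HDEMR, computing this curve for each of the 100 runs and taking the minimum, mean (the red line), and maximum over the runs yields the operational zone. A sketch of the per-run curve (function and argument names are ours):

```python
import numpy as np

def operational_characteristic(nfes_to_solve, budgets):
    """Number of problems solved within each evaluation budget.

    nfes_to_solve: one entry per problem of the class -- the number of
    function evaluations the run needed to locate the global minimum,
    or None if the run failed within MaxNFEs.
    budgets: increasing evaluation budgets at which to sample the curve.
    """
    solved = np.array([n for n in nfes_to_solve if n is not None])
    return [int((solved <= b).sum()) for b in budgets]

# One run over a 4-problem class: the third problem was never solved.
curve = operational_characteristic([120, 480, None, 2500],
                                   [200, 1000, 10**6])
```

A characteristic lying above another over a budget range means the corresponding method solves more problems for the same cost on that range, which is how Figures 5 and 6 are read above.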
Figures 6(c) and 6(d) illustrate that the average performance of HDEMR is better than that of DIRECT on the 10-dimensional simple and hard classes. This makes HDEMR suitable for the ILAIS problem, since the dimension of the weight parameter identification of ILAIS is approximately 10.

Computational Intelligence and Neuroscience

In addition, some interesting phenomena can be observed from the experimental results in Figures 5 and 6. The red average lines indicate that HDEMR has a fast convergence speed on most of the classes, especially on the hard classes, the 2-dimensional simple class being the exception. However, DIRECT solves more problems than HDEMR on the 4-dimensional and 5-dimensional hard classes (see Figures 5(f) and 6(b)).
Finally, one can see that the performance of HDEMR is not very stable on the 4-, 5-, and 10-dimensional classes. The reason is that, in several of the 100 runs, HDEMR gets trapped in local minima and is not able to escape from their attraction regions, which produces operational zones with a wider interval.
From Table 10, DIRECT solves more problems than HDEMR on the GKLS test classes. For example, on the 10-dimensional hard class, DIRECT fails to solve 2 problems in 100 runs (a 98% success rate), whereas HDEMR fails to solve 599 problems in 10000 runs (a 94.01% success rate). On the other hand, the average number of trials (or function evaluations) for DIRECT is 21774.72, while for HDEMR it is 60191.
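The success rates quoted above follow directly from the failure counts m(i) in Table 10:

```python
def success_rate(unsolved, total_runs):
    """Percentage of successful runs given the number of failures m(i)."""
    return 100.0 * (total_runs - unsolved) / total_runs

# Table 10, 10-dimensional hard class:
direct_rate = success_rate(2, 100)       # DIRECT: 2 failures in 100 runs
hdemr_rate = success_rate(599, 10000)    # HDEMR: 599 failures in 10000 runs
```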

HDEMR for Weight Parameter Optimization of ILAIS
According to the results on the CEC2005, CEC2014, and GKLS test functions, HDEMR achieves better or similar results compared with the other eight algorithms. Hence, it is a good alternative for the weight parameter optimization of ILAIS. In this section, the experiments aim at using HDEMR to solve the weight parameter identification problem of ILAIS. A questionnaire survey and individual evaluations are conducted at China University of Geosciences.
There are 1000 questionnaires collected from students and 300 questionnaires from teachers. The parameters of HDEMR are set as N_p = 100, E_max = 3.0, S_max = 2.0, C_max = 9, CR ∈ N(0.8, 0.1), F ∈ N(0.5, 0.1), and MaxNFEs = 50000. HDEMR is conducted for 50 independent runs. The optimization objective function is defined in Equation (2), with t = 0.6 and s = 0.4. To effectively analyze the results, the mean value and standard deviation are also used in the experiments. Table 11 shows the weight results of the information literacy assessment indexes at CUG obtained by HDEMR. It can be seen that the weights of the first-level indexes Information Consciousness, Information Knowledge, Information Ability, and Information Morality are 0.1958, 0.2085, 0.3115, and 0.2958, respectively. Information Ability and Information Morality are thus more important than Information Consciousness and Information Knowledge according to the questionnaire survey, which means that they have a greater impact on the assessment of the information literacy of college students. Table 11 also shows the thirteen second-level index weights obtained by HDEMR. We can see that the weight of L11 is 0.4031 and that L12 and L13 are approximately …; mastering basic information science knowledge, clarifying one's own information needs, and describing keywords and terminology are very important for assessing the college students' Information Knowledge. Furthermore, the weights of L32, L33, and L34 are all close to 0.3, which means that these three indexes have the same effect on Information Ability. Finally, for Information Morality, the weight of L41 is close to 0.5, which means that the ability to master information law has a significant impact on evaluating the college students' Information Morality; the second most important factor, L42, has a weight of 0.2983. According to the results in Table 11, all the standard deviations are within 10^-3, from which we can conclude that HDEMR is stable in optimizing the weights of ILAIS. The optimal weights make it possible to objectively evaluate the college students' information literacy at CUG.
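Equation (2) itself is not reproduced in this section. Purely as an illustrative sketch, one plausible reading is a least-squares objective in which t = 0.6 and s = 0.4 mix the student and teacher questionnaire contributions; every name below is hypothetical and not the paper's notation:

```python
import numpy as np

T_STUDENT, S_TEACHER = 0.6, 0.4  # the t and s values used in the experiments

def fitness(weights, student_scores, teacher_scores, overall_ratings):
    """Hypothetical objective: candidate index weights should make the
    mixed, weighted index scores reproduce the overall ratings from the
    questionnaires (mean squared error, to be minimized by HDEMR)."""
    predicted = (T_STUDENT * student_scores
                 + S_TEACHER * teacher_scores) @ weights
    return float(np.mean((predicted - overall_ratings) ** 2))
```

Under this assumed form, a perfect set of weights drives the fitness to zero, and the small standard deviations over the 50 runs in Table 11 indicate that HDEMR repeatedly converges to nearly the same minimizer.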

Conclusion
Apart from developing a new weight parameter optimization model of ILAIS, this study proposes an efficient HDEMR method to solve the weight parameter identification problem of ILAIS at CUG with greater speed and accuracy. The HDEMR method uses an improved contraction criterion to decide when the local search starts. A model-based reinitialization strategy is also proposed to improve the diversity of the population and the performance of the global search. These improvements are simple yet efficient and make HDEMR a powerful alternative for real-world applications. The effectiveness and efficiency of HDEMR are validated on the CEC2005 and CEC2014 benchmark functions and on the GKLS test classes. The superiority of HDEMR is also verified by comparing it with seven well-known DE variants and the widely used deterministic algorithm DIRECT. Finally, HDEMR is successfully used to optimize the weight parameters of ILAIS at China University of Geosciences (CUG).
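The contraction criterion is only named in this excerpt (population entropy in objective space plus maximum distance in decision space, with the thresholds E_max and S_max tuned earlier). A minimal sketch under the assumption that the local search is triggered once both quantities fall below their thresholds; this is our reading, not the paper's exact formula:

```python
import numpy as np

def should_start_local_search(population, fvals,
                              e_max=3.0, s_max=2.0, bins=10):
    """Assumed contraction test: start the local search once the
    population has contracted both in objective space (low Shannon
    entropy of the fitness histogram) and in decision space (small
    maximum pairwise Euclidean distance)."""
    # Shannon entropy of the objective values over a fixed histogram
    hist, _ = np.histogram(fvals, bins=bins)
    probs = hist[hist > 0] / hist.sum()
    entropy = float(-(probs * np.log(probs)).sum())
    # largest pairwise Euclidean distance between individuals
    diffs = population[:, None, :] - population[None, :, :]
    max_dist = float(np.sqrt((diffs ** 2).sum(axis=-1)).max())
    return entropy < e_max and max_dist < s_max
```

A tightly clustered population (near-zero entropy and spread) triggers the switch, while a widely spread one keeps the global DE search running, which matches the qualitative role the criterion plays in HDEMR.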
In our future work, HDEMR will be tested on other real-world application problems. Moreover, we believe that other local search algorithms and parameter adaptation strategies can also be incorporated into HDEMR.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.