
Education is mandatory, and much research has been invested in this sector. An important aspect of education is evaluating learners’ progress, and multiple-choice tests are widely used for this purpose. For fair judgment, the tests given to learners in the same exam should be of equal difficulty. This requirement leads to the problem of generating tests of equal difficulty, also known as the specific case of generating tests with a single objective. In practice, however, multiple requirements (objectives) are enforced when making tests. For example, teachers may require the generated tests to have both the same difficulty and the same test duration. In this paper, we propose the use of Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) for generating multiple tests that satisfy multiple objectives.

In the education sector, evaluation of students’ study progress is important and mandatory. There are many methods, such as oral tests or written tests, to evaluate their knowledge and understanding of subjects. Because of their scalability and lighter demands on human resources, written tests are more widely used for the final checkpoints of assessment (e.g., final term tests), where a large number of students must be considered. Written tests can be either descriptive tests, in which students have to write their answers in full, or multiple-choice tests, in which students pick one or more choices for each question. Even though descriptive tests are easier to create at first, they consume a great deal of time and effort during the grading stage. Multiple-choice tests, on the other hand, are harder to create at first, as they require a large number of questions for security reasons, as in Ting et al. [

One of the challenges in generating multiple-choice tests is controlling the difficulty of the candidate tests. The tests for all students should have the same difficulty for fairness. However, generating tests that all have the same level of difficulty is an extremely hard task, even when questions are chosen manually from a question bank: the success rate of generating multiple-choice tests that satisfy a given difficulty is low, and the process is time-consuming. Therefore, to speed up the process, some authors chose to generate tests automatically with computers, approximating the required difficulties. This is also known as generating tests with a single objective, where the level of difficulty is the objective. For example, Bui et al. [

In this paper, we propose a new approach that uses Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) to extract multiple tests

The main contributions of this paper are as follows:

We propose a multiswarm multiobjective approach to deal with the problem of extracting multiple tests.

We propose the use of SA in combination with PSO for extracting tests. SA was selected as it is capable of escaping local optima.

We propose a parallel version of our serial algorithms. Using parallelism, we can control the overlap between extracted tests and save time.

The rest of this paper is organized as follows. Section

Recently, evolutionary algorithms have been applied to many fields for optimization problems. Some of the most well-known algorithms are Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). GAs were invented based on Darwin’s theory of evolution, and they seek solutions through progressions of generations. Heuristic information is used to navigate the search space for promising individuals, which can lead to globally optimal solutions. Many works have since used GAs in practice [

Particle swarm optimization is a swarm-based optimization technique developed by Eberhart and Kennedy [. Each particle moves in an n-dimensional search space and keeps track of two positions. The first is p_best, the best-known position the particle has visited in its past movements. The second is g_best, the best-known position of the whole swarm. In the original work proposed by Eberhart and Kennedy, particles traverse the search space by following the particles with strong fitness values. In particular, after discrete time steps, the velocity and position of each individual are updated with the following formulas:
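In the commonly cited form of these update rules (a sketch using the standard notation, with inertia weight ω, acceleration coefficients c_1 and c_2, and random factors r_1, r_2 ∈ [0, 1]; the paper’s own equation numbers are omitted here):

```latex
v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \bigl(p_{best,i} - x_i(t)\bigr) + c_2 r_2 \bigl(g_{best} - x_i(t)\bigr),
\qquad
x_i(t+1) = x_i(t) + v_i(t+1)
```

Here x_i and v_i denote the position and velocity of particle i.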

While PSO is mostly used for the continuous value domain, recently, some works have shown that PSO can also be highly effective for discrete optimization. For example, Sen and Krishnamoorthy [

To further improve the performance for real-life applications, some variants of PSO have been proposed and exploited such as multiswarm PSO. Peng et al. [

In practice, there exist a lot of optimization problems with multiple objectives instead of a single objective. Thus, a lot of work for multiobjective optimization has been proposed. For example, Li and Babak [

An extension of multiple objective optimization problems is the dynamic multiple objective optimization problems, in which each objective would change differently depending on the time or environment. To deal with this problem, Liu et al. [

To make it easier for readers, Table summarizes the different application domains in which PSO algorithms have been applied for different purposes.

Categories | References | Algorithm | Types of optimization problems | Modern optimization techniques | |||||||
---|---|---|---|---|---|---|---|---|---|---|---|

Unconstrained | Constrained | Continuous | Discrete | Single-objective | Multiobjective | Single-swarm | Multiswarm | ||||

Academia and scientometrics | [ | Particle swarm optimization | X | X | X | Multiobjective PSO with uniform design generates the initial population instead of traditional random methods | |||||

[ | X | X | X | The proposed approach used | |||||||

[ | An improved version, circular crowded sorting, combined with multiobjective PSO | X | X | X | The individuals of the initial population are spread across the search space to better cover the Pareto frontier | ||||||

[ | An adaptive local search method for multiobjective PSO | X | X | X | An adaptive local search method for multiobjective PSO using the time variance search space index to improve the diversity of solutions and convergence | ||||||

[ | Combining utopia point-guided search with multiobjective PSO | X | X | X | A strategy that selects the best individuals that are located near the utopia points | ||||||

[ | A novel MOPSO with enhanced local search ability and parameter-less sharing | X | X | X | The proposed approach estimates the density of the particles’ neighborhood in the search space. Initially, the proposed method accurately determines the crowding factor of the solutions; in later stages, it effectively guides the entire swarm to converge close to the true Pareto front | ||||||

[ | Chaotic particle swarm optimization | X | X | X | The work improves the diversity of the population and uses simplified mesh reduction and gene exchange to improve the performance of the algorithm | ||||||

[ | A coevolutionary technique based on multiswarm particle swarm optimization | X | X | X | X | The authors combined their proposed algorithm with special boundary constraint processing and a velocity update strategy to help with the diversity and convergence speed | |||||

[ | Particle swarm optimization algorithm based on dynamic boundary search for constrained optimization | X | X | X | The authors proposed a strategy based on dynamic search boundaries to help escape the local optima | ||||||

[ | A new PSO-based algorithm (FC-MOPSO) | X | X | X | X | X | X | The FC-MOPSO algorithm can work on a mix of constrained, unconstrained, continuous, and/or discrete, single-objective, and multiobjective optimization problems | |||

[ | A novel particle swarm optimization algorithm with multiple learning strategies (PSO-MLS) | X | X | X | X | The authors proposed an approach for multiswarm PSO that pairs the velocity update of some swarms with different methods such as the periodically stochastic learning strategy or random mutation learning strategy. | |||||

[ | Cellular Learning Automata (CLA) for multiswarm PSO | X | X | X | X | Each swarm is placed on a cell of the CLA, and each particle’s velocity is affected by some other particles. The connected particles are adjusted over time via periods of learning | |||||

[ | Improved particle swarm optimization algorithm based on dynamic topology and purposeful detecting | X | X | X | Three strategies are proposed to balance the search capabilities between swarms. The extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO | ||||||

[ | Particle swarm optimization with differential evolution (DE) strategy | X | X | X | X | The purpose is to achieve high-performance multiobjective optimization | |||||

[ | Coevolutionary multiswarm PSO | X | X | X | The velocity update equation is modified to increase search information and solution diversity and to avoid local Pareto fronts. The results show superior performance in solving optimization problems | ||||||

Application (artificial intelligence) | [ | Particle swarm optimization with local search | X | X | X | X | The authors proposed a strategy to improve the speed of convergence of multiswarm PSO for robots’ movements in a complex environment with obstacles. Additionally, the authors combine the local search strategy with multiswarm PSO to prevent the robots from converging at the same locations when they try to get to their targets | ||||

[ | Improved particle swarm optimization based on a new leader selection strategy | X | X | X | The algorithm uses triangular distance to select leader individuals that cover different regions of the Pareto frontier. The authors also included an update strategy for | ||||||

[ | Discrete PSO | X | X | X | For solving the problem of transmitting information on networks. The results show that the proposed discrete PSO outperforms Simulated Annealing (SA) | ||||||

Application (multichoice question test extraction) | [ | Novel approach of particle swarm optimization (PSO) | X | X | X | X | The dynamic question generation system is built to select tailored questions for each learner from the item bank to satisfy multiple assessment requirements. The experimental results show that the PSO approach is suitable for the selection of near-optimal questions from large-scale item banks | ||||

[ | Particle swarm optimization (PSO) | X | X | X | X | X | The authors used particle swarm optimization to generate tests with approximating difficulties to the required levels from users. The experiment result shows that PSO gives the best performance concerning most of the criteria | ||||

[ | Multiswarm single-objective particle swarm optimization | X | X | X | X | X | The authors use particle swarm optimization to generate multiple tests whose difficulties approximate the levels required by users. In the parallel stage, migration happens between swarms to exchange information between running threads to improve the convergence and diversity of solutions

The abovementioned works can be effective and efficient for the optimization problems in Table

In our previous works [

Let Q = {q_1, q_2, q_3, …, q_n} be a question bank with n questions, where each question q_i ∈ Q

The problem of generating multiple tests requires extracting tests T_i = {q_i1, q_i2, q_i3, …, q_im} (q_ij ∈ Q)

The objective difficulty of a test

Besides the aforementioned requirements, there are additional constraints each generated test must satisfy as follows:

_{ki} ∈ _{ki} ·OD ≠ 0.6.

The model for MMPSO for the problem of generating multiple tests can be represented as follows:

Assume that F is an objective function for the multiobjective problem; it can be formulated as follows, where each f_i is a single-objective function. In this paper, we use an evaluation of two functions, which are the average levels of the difficulty requirements of the tests

f_1 satisfies the conditions {

f_2 satisfies the conditions {

The objective function

In this case, the better the fitness, the smaller the value of the objective function F.

For example, provided that we have a question bank as in Table

The question banks.

QC | 01 | 02 | 03 | 04 | 05 | 06 | 07 | 08 | 09 | 10 |

OD | 0.3 | 0.2 | 0.8 | 0.7 | 0.4 | 0.6 | 0.5 | 0.8 | 0.2 | 0.3 |

QD | 100 | 110 | 35 | 40 | 65 | 60 | 60 | 35 | 110 | 100 |

An example of test results.

An individual that satisfies the requirement | QC | 05 | 08 | 01 | 04 | Fitness (1) (ODR = 0.6; TR = 300; |

OD | 0.4 | 0.8 | 0.3 | 0.7 | ||

QD | 65 | 35 | 100 | 40 |
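The exact weighted-sum formula is not reproduced above, but a minimal sketch consistent with the example can be written as follows, assuming F = w · |average OD − ODR| + (1 − w) · |total QD − TR| / TR; the function name and the normalization of the duration term are assumptions, not the paper’s definition:

```python
def fitness(test, odr, tr, w):
    """Hypothetical weighted-sum fitness for one candidate test.

    test: list of (od, qd) pairs -- objective difficulty and duration
          (seconds) of each chosen question.
    odr:  required average difficulty of the test.
    tr:   required total test duration (seconds).
    w:    weight constraint in [0.1, 0.9] trading off the two objectives.
    Smaller values mean better fitness.
    """
    avg_od = sum(od for od, _ in test) / len(test)
    total_qd = sum(qd for _, qd in test)
    f1 = abs(avg_od - odr)          # difficulty objective
    f2 = abs(total_qd - tr) / tr    # duration objective, normalized
    return w * f1 + (1 - w) * f2

# The example individual above: questions 05, 08, 01, 04
example = [(0.4, 65), (0.8, 35), (0.3, 100), (0.7, 40)]
print(fitness(example, odr=0.6, tr=300, w=0.5))
```

With these values the average difficulty is 0.55 and the total duration is 240 seconds, so both objectives contribute to the final fitness.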

This paper proposes a parallel multiswarm multiobjective PSO (MMPSO) for extracting multiple tests, based on the idea in Bui et al. [

The p_best particles move towards g_best by using the location information of g_best. The movement is the replacement of some questions in the candidate test according to the velocity

g_best moves towards the final optimal solution in random directions. The movement is achieved by replacing some of its content with random questions from the question bank. In a similar way to the other particles, if the new position is not better, the g_best value will not be updated.
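A discrete “movement” of this kind can be sketched as question replacement; the `ratio` parameter and the helper names below are illustrative assumptions, not the paper’s API:

```python
import random

def move_towards(particle, g_best, ratio=0.2):
    """Replace a fraction of the particle's questions with questions taken
    from g_best (a discrete analogue of a velocity step). `ratio` is an
    assumed parameter controlling how many positions are considered."""
    particle = particle[:]
    k = max(1, int(len(particle) * ratio))
    for pos in random.sample(range(len(particle)), k):
        candidate = g_best[pos]
        if candidate not in particle:      # keep questions unique in a test
            particle[pos] = candidate
    return particle

def random_move(particle, bank, ratio=0.2):
    """g_best explores by replacing some questions with random bank questions."""
    particle = particle[:]
    k = max(1, int(len(particle) * ratio))
    for pos in random.sample(range(len(particle)), k):
        candidate = random.choice(bank)
        if candidate not in particle:
            particle[pos] = candidate
    return particle
```

In a full implementation the new position would be kept only if its fitness improves, mirroring the rule that g_best is not updated after an unsuccessful random move.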

The algorithm ends when the fitness value is lower than the fitness threshold

Based on the idea in Nguyen et al. [

In the dual-sector model [, particles may migrate between swarms based on the g_best position of each swarm. However, when applying those theories, some adjustments are made so that the parallel MMPSO can yield better optimal solutions.

The direction of migration changes when individuals with strong p_best may be replaced by the incoming

Backward migration from the weak swarms to the strong swarms also happens alongside forward migration. For every individual that moves from a strong swarm to a weak swarm, there is always one that moves from the weak swarm back to the strong swarm. This ensures that the number of particles and the searching capabilities of the swarms do not significantly decrease.

The foremost condition for migration to happen is that there are changes in the fitness value of the current g_best compared to the previous g_best.

The probability for migration is denoted as

The number of migrating particles is equal to
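The forward/backward exchange described above can be sketched as a one-for-one swap between sorted swarms; the `rate` parameter below is an assumption standing in for the migration count given in the paper:

```python
def migrate(strong, weak, rate=0.1):
    """Swap particles between a strong swarm and a weak swarm.

    For every particle that moves strong -> weak, one moves weak -> strong,
    so swarm sizes stay constant. `rate` (the fraction of the swarm that
    migrates) is an assumed parameter. Each swarm is a list of
    (fitness, test) pairs; smaller fitness is better.
    """
    n = max(1, int(len(strong) * rate))
    strong.sort(key=lambda p: p[0])   # best (smallest fitness) first
    weak.sort(key=lambda p: p[0])
    for i in range(n):
        # send one of the strong swarm's best into the weak swarm and
        # take one of the weak swarm's particles back in exchange
        strong[i], weak[-1 - i] = weak[-1 - i], strong[i]
    return strong, weak
```

Because the exchange is symmetric, both swarms keep their original sizes, matching the backward-migration rule above.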

The migration parallel MMPSO-based approach to extract multiple tests is described in a form of a pseudocode in Algorithm

Generate the initial population with random questions;

Initialize g_best and all p_best;

_{best}

Move p_best towards g_best using the location information of g_best;

Update velocity using equation (

Update position using equation (

g_best moves in a random direction to search for the optimal solution;

Update g_best;

Choose a thread whose g_best is weaker than the one in the current thread;

Unlock the current thread and the chosen thread;

The particle updates its velocity (_{best}, with

The process of generating multiple tests at the same time in a single run using migration parallel MMPSO includes two stages. The first stage is generating tests using multiobjective PSO. In this stage, the algorithm proceeds to find tests that satisfy all requirements and constraints using multiple threads; each thread corresponds to a swarm that runs separately. The second stage is improving and diversifying tests. This stage happens when there is a change in the value of g_best of a swarm (in its thread) in the first stage. In this second stage, migration happens between swarms to exchange information between the running threads to improve the convergence and diversity of solutions based on the work of Nguyen et al. [
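A minimal threading skeleton of the two stages, with particles abstracted to bare fitness values and a lock guarding the exchange; everything here is an illustrative assumption rather than the paper’s implementation (which is in C#):

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import random

lock = threading.Lock()

def run_swarm(swarm_id, swarms, iterations=100):
    """Stage 1: each thread evolves its own swarm. Stage 2: whenever the
    swarm's g_best improves, exchange one particle with a randomly chosen
    other swarm under a lock (a simplified migration trigger)."""
    swarm = swarms[swarm_id]
    g_best = min(swarm)
    for _ in range(iterations):
        # placeholder "movement": nudge each particle's fitness downward
        swarm[:] = [max(0.0, f - random.random() * 0.01) for f in swarm]
        new_best = min(swarm)
        if new_best < g_best:            # g_best changed -> migration stage
            g_best = new_best
            other = random.randrange(len(swarms))
            if other != swarm_id:
                with lock:               # swap one particle between swarms
                    i = random.randrange(len(swarm))
                    j = random.randrange(len(swarms[other]))
                    swarm[i], swarms[other][j] = swarms[other][j], swarm[i]
    return g_best

swarms = [[random.random() for _ in range(10)] for _ in range(4)]
with ThreadPoolExecutor(max_workers=4) as ex:
    results = list(ex.map(lambda i: run_swarm(i, swarms), range(4)))
```

One swarm per thread mirrors the paper’s design; the lock prevents two threads from exchanging particles with the same swarm simultaneously.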

The flowchart of the MMPSO algorithm in migration parallel.

As mentioned above, the initial population affects the convergence speed and diversity of test solutions. The creation of the set of initial solutions (the population) is generally performed randomly in PSO. This is one of its drawbacks: since the search space is very wide, the probability of getting stuck in a local optimum is high. In order to improve the initial population, we apply SA in the initial population creation step of migration parallel MMPSO instead of the random method. SA was selected since it is capable of escaping local optima, as in Kharrat and Neji [. After the initial population is created, each particle moves towards g_best using the received information about the location of g_best (which is commonly used in PSO). The MMPSO with SA is described by a pseudocode in Algorithm

_{best}_{best}

Initialize g_best and all p_best;

_{best}

Move p_best towards g_best using the location information of g_best;

Apply velocity update using (

Apply position update using (

End for

g_best moves in a random direction to search for the optimal solution;
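The SA-based initialization step can be sketched as follows, using the SA parameters from the experimental setup (initial temperature 100, cooling rate 0.9, termination temperature 0.01, 100 iterations per temperature); the neighbour move and the simplified single-objective energy are assumptions:

```python
import math
import random

def sa_initial_test(bank, m, odr, t0=100.0, cool=0.9, t_end=0.01, iters=100):
    """Build one initial candidate test with simulated annealing instead
    of pure random sampling. The SA parameters match the paper's settings;
    the neighbour move and the energy |average OD - ODR| are simplifying
    assumptions (the duration objective is omitted here).

    bank: list of (OD, QD) questions; m: number of questions per test.
    """
    current = random.sample(range(len(bank)), m)

    def energy(indices):
        return abs(sum(bank[i][0] for i in indices) / m - odr)

    e = energy(current)
    best, best_e = current, e
    t = t0
    while t > t_end:
        for _ in range(iters):
            neighbour = current[:]
            swap = random.randrange(len(bank))
            if swap not in neighbour:          # keep questions unique
                neighbour[random.randrange(m)] = swap
            e2 = energy(neighbour)
            # always accept improvements; accept worse moves with
            # Boltzmann probability so the search can escape local optima
            if e2 < e or random.random() < math.exp((e - e2) / t):
                current, e = neighbour, e2
                if e < best_e:
                    best, best_e = current, e
        t *= cool
    return best
```

Seeding the swarms with such tests starts the PSO search closer to the required difficulty than purely random sampling would.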

Bui et al. [

All proposed algorithms are implemented in C# and run on two computers: a 2.5 GHz desktop PC (4 CPUs, 4 GB RAM, Windows 10) and a 2.9 GHz VPS (16 CPUs, 16 GB RAM, Windows Server 2012). The experimental data include two question banks: one with 998 different questions (the small question bank) and one with 12,000 different questions (the large question bank). The link to the data is

The allocation of the difficulty level and time of question in the small question bank.

The allocation of the difficulty level and time of question in the large question bank.

Experimental parameters.

The required level of difficulty (ODR) | 0.5 |

The total test time (TR) | 5400 (seconds) |

The value of | [0.1, 0.9] |

The number of required questions in a test | 100 |

The number of questions in each section in the test | 10 |

The number of simultaneously generated tests in each run | 100 |

The number of questions in the bank | 1000 and 12,000 |

The PSO’s parameters | The number of particles in each swarm: 10 |

Random value | |

The percentage of p_best individuals which receive position information from g_best ( | |

The percentage of g_best which moves to final goals ( | |

The percentage of | |

The percentage of | |

The stop condition: either when the tolerance fitness <0.001 or when the number of movement loops >1000 | |

The SA’s parameters | Initial temperature: 100 |

Cooling rate: 0.9 | |

Termination temperature: 0.01 | |

Number of iterations: 100 |

Experimental results in the small question bank.

Algorithms | Weight constraint ( | Number of runs | Successful times | Average runtime for extracting tests (second) | Average number of iteration loops | Average fitness | Average duplicate (%) | Standard deviation |
---|---|---|---|---|---|---|---|---|

Parallel multiswarm multiobjective PSO (parallel MMPSO) | 0.1 | 50 | 11 | 61.3658 | 999.75 | 0.003102 | 2.43 | 0.0007071 |

0.2 | 50 | 445 | 47.9793 | 981.03 | 0.003117 | 2.64 | 0.0014707 | |

0.3 | 50 | 425 | 35.8007 | 957.53 | 0.004150 | 2.73 | 0.0021772 | |

0.4 | 50 | 530 | 30.5070 | 928.10 | 0.004850 | 2.80 | 0.0027973 | |

0.5 | 50 | 774 | 29.5425 | 877.65 | 0.004922 | 2.85 | 0.0033383 | |

0.6 | 50 | 1410 | 22.6973 | 754.82 | 0.003965 | 2.91 | 0.0034005 | |

0.7 | 50 | 2900 | 14.9059 | 470.13 | 0.002026 | 2.97 | 0.0022461 | |

0.8 | 50 | 3005 | 16.7581 | 488.31 | 0.001709 | 3.01 | 0.0017271 | |

0.9 | 50 | 3019 | 28.5975 | 619.34 | 0.001358 | 3.04 | 0.0009634 | |

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA) | 0.1 | 50 | 4 | 142.2539 | 999.98 | 0.003080 | 2.98 | 0.0006496 |

0.2 | 50 | 2912 | 132.3828 | 900.42 | 0.001265 | 3.27 | 0.0007454 | |

0.3 | 50 | 3681 | 111.9513 | 650.42 | 0.001123 | 3.38 | 0.0008364 | |

0.4 | 50 | 3933 | 100.0204 | 474.91 | 0.001085 | 3.44 | 0.0009905 | |

0.5 | 50 | 4311 | 91.7621 | 318.75 | 0.000938 | 3.48 | 0.0008439 | |

0.6 | 50 | 4776 | 84.7441 | 161.23 | 0.000746 | 3.53 | 0.0005124 | |

0.7 | 50 | 4990 | 81.1127 | 76.75 | 0.000666 | 3.54 | 0.0002421 | |

0.8 | 50 | 4978 | 84.6747 | 131.32 | 0.000679 | 3.52 | 0.0002518 | |

0.9 | 50 | 4937 | 98.8690 | 339.89 | 0.000749 | 3.41 | 0.0002338 | |

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO) | 0.1 | 50 | 575 | 51.3890 | 959.29 | 0.002138 | 5.19 | 0.0008091 |

0.2 | 50 | 1426 | 33.2578 | 837.87 | 0.002119 | 5.60 | 0.0011804 | |

0.3 | 50 | 1518 | 25.3135 | 779.76 | 0.002587 | 5.82 | 0.0017130 | |

0.4 | 50 | 1545 | 21.0524 | 745.36 | 0.002977 | 5.89 | 0.0021845 | |

0.5 | 50 | 1650 | 17.9976 | 710.92 | 0.003177 | 5.95 | 0.0025374 | |

0.6 | 50 | 1751 | 16.0573 | 680.97 | 0.003272 | 5.92 | 0.0028531 | |

0.7 | 50 | 2463 | 12.9467 | 540.21 | 0.002243 | 5.94 | 0.0022161 | |

0.8 | 50 | 3315 | 12.7420 | 402.75 | 0.001374 | 5.90 | 0.0012852 | |

0.9 | 50 | 3631 | 19.1735 | 439.27 | 0.001067 | 5.85 | 0.0006259 | |

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA) | 0.1 | 50 | 816 | 139.6821 | 952.42 | 0.002183 | 3.82 | 0.0009039 |

0.2 | 50 | 3641 | 111.4438 | 638.08 | 0.001039 | 5.19 | 0.0005336 | |

0.3 | 50 | 3958 | 98.9966 | 463.53 | 0.000984 | 5.36 | 0.0006349 | |

0.4 | 50 | 4084 | 92.6536 | 357.13 | 0.000973 | 5.30 | 0.0007475 | |

0.5 | 50 | 4344 | 88.3098 | 255.46 | 0.000898 | 5.10 | 0.0007442 | |

0.6 | 50 | 4703 | 83.4776 | 144.00 | 0.000758 | 4.90 | 0.0004939 | |

0.7 | 50 | 4874 | 81.2981 | 84.57 | 0.000697 | 4.70 | 0.0003271 | |

0.8 | 50 | 4955 | 84.2094 | 106.68 | 0.000683 | 4.45 | 0.0002609 | |

0.9 | 50 | 4937 | 94.1685 | 267.30 | 0.000746 | 4.19 | 0.0002345 |

Experimental results in the large question bank.

Algorithms | Weight constraint ( | Number of runs | Successful times | Average runtime for extracting tests (second) | Average number of iteration loops | Average fitness | Average duplicate (%) | Standard deviation |
---|---|---|---|---|---|---|---|---|

Parallel multiswarm multiobjective PSO (parallel MMPSO) | 0.1 | 50 | 2931 | 23.50 | 888.85 | 0.001137 | 0.95 | 0.000476 |

0.2 | 50 | 4999 | 14.05 | 484.33 | 0.000725 | 1.03 | 0.000219 | |

0.3 | 50 | 4997 | 9.56 | 296.80 | 0.000689 | 1.04 | 0.000234 | |

0.4 | 50 | 4999 | 5.99 | 190.55 | 0.000676 | 1.04 | 0.000233 | |

0.5 | 50 | 5000 | 3.61 | 121.24 | 0.000668 | 1.05 | 0.000236 | |

0.6 | 50 | 5000 | 2.77 | 79.32 | 0.000663 | 1.05 | 0.000235 | |

0.7 | 50 | 5000 | 3.22 | 92.33 | 0.000669 | 1.05 | 0.000238 | |

0.8 | 50 | 5000 | 4.92 | 173.19 | 0.000673 | 1.04 | 0.000231 | |

0.9 | 50 | 5000 | 10.98 | 384.13 | 0.000738 | 1.02 | 0.000213 | |

Parallel multiswarm multiobjective PSO with SA (parallel MMPSO with SA) | 0.1 | 50 | 3055 | 99.75 | 890.40 | 0.001095 | 0.96 | 0.000432 |

0.2 | 50 | 5000 | 84.90 | 469.23 | 0.000709 | 1.04 | 0.000224 | |

0.3 | 50 | 5000 | 74.43 | 275.54 | 0.000686 | 1.05 | 0.000230 | |

0.4 | 50 | 5000 | 73.03 | 168.91 | 0.000668 | 1.06 | 0.000237 | |

0.5 | 50 | 5000 | 69.88 | 99.92 | 0.000663 | 1.07 | 0.000235 | |

0.6 | 50 | 5000 | 67.34 | 61.42 | 0.000662 | 1.07 | 0.000236 | |

0.7 | 50 | 5000 | 53.43 | 69.02 | 0.000661 | 1.07 | 0.000235 | |

0.8 | 50 | 5000 | 70.06 | 132.27 | 0.000676 | 1.06 | 0.000236 | |

0.9 | 50 | 5000 | 79.58 | 319.36 | 0.000734 | 1.03 | 0.000211 | |

Migration parallel multiswarm multiobjective PSO (migration parallel MMPSO) | 0.1 | 50 | 2943 | 33.52 | 886.90 | 0.001144 | 0.95 | 0.000482 |

0.2 | 50 | 4995 | 19.33 | 488.60 | 0.000724 | 1.02 | 0.000219 | |

0.3 | 50 | 4998 | 12.43 | 295.69 | 0.000688 | 1.04 | 0.000231 | |

0.4 | 50 | 5000 | 8.57 | 190.20 | 0.000667 | 1.04 | 0.000238 | |

0.5 | 50 | 4999 | 6.02 | 120.88 | 0.000665 | 1.05 | 0.000240 | |

0.6 | 50 | 5000 | 4.44 | 78.89 | 0.000669 | 1.04 | 0.000234 | |

0.7 | 50 | 5000 | 4.98 | 92.53 | 0.000668 | 1.05 | 0.000234 | |

0.8 | 50 | 5000 | 7.94 | 171.98 | 0.000669 | 1.04 | 0.000236 | |

0.9 | 50 | 5000 | 15.76 | 383.09 | 0.000738 | 1.02 | 0.000209 | |

Migration parallel multiswarm multiobjective PSO with SA (migration parallel MMPSO with SA) | 0.1 | 50 | 3122 | 102.00 | 888.48 | 0.001091 | 0.96 | 0.000436 |

0.2 | 50 | 5000 | 85.68 | 469.50 | 0.000716 | 1.04 | 0.000222 | |

0.3 | 50 | 5000 | 77.35 | 276.03 | 0.000678 | 1.05 | 0.000235 | |

0.4 | 50 | 5000 | 73.19 | 167.84 | 0.000674 | 1.06 | 0.000234 | |

0.5 | 50 | 5000 | 69.62 | 99.66 | 0.000665 | 1.06 | 0.000238 | |

0.6 | 50 | 5000 | 67.83 | 61.33 | 0.000660 | 1.07 | 0.000237 | |

0.7 | 50 | 5000 | 64.19 | 69.02 | 0.000666 | 1.07 | 0.000237 | |

0.8 | 50 | 5000 | 61.46 | 133.45 | 0.000673 | 1.06 | 0.000234 | |

0.9 | 50 | 5000 | 71.62 | 319.68 | 0.000731 | 1.03 | 0.000213 |

Experimental results of the runtime and fitness value are in Table


Our experiments focus on implementing formula (

In this part, we present the formula used to evaluate the stability of all algorithms in producing the required tests under various weight constraints (w), where f_i is the fitness value of the i-th run.
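In the usual form, with N runs and mean fitness f̄, this stability measure is the standard deviation of the per-run fitness values (a sketch of the standard definition; the paper’s own equation is not reproduced in this excerpt):

```latex
\bar{f} = \frac{1}{N}\sum_{i=1}^{N} f_i,
\qquad
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(f_i - \bar{f}\bigr)^{2}}
```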

The standard deviation is used to assess the stability of the algorithms: if its value is low, then the generated tests of each run do not differ much in fitness value. The weight constraint

The experiments are executed with the parameters following Ridge and Kudenko [

When

All algorithms can generate tests with acceptable percentages of duplicate questions among the generated tests. The proportion of duplicate questions between generated tests depends on the size of the question bank. For example, if the question bank contains 100 questions and we need to generate 50 tests of 30 questions each in a single run, then some generated tests must share questions with others, since the 50 × 30 = 1,500 question slots can only be filled from 100 distinct questions.
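One plausible reading of the duplicate metric, written as a sketch (the exact definition is not given in this excerpt, so the function below is an assumption):

```python
def duplicate_percentage(tests):
    """Average share of questions in each test that also appear in at
    least one other generated test, as a percentage. One plausible
    reading of the paper's 'average duplicate' column."""
    rates = []
    for i, test in enumerate(tests):
        others = set()
        for j, other in enumerate(tests):
            if j != i:
                others.update(other)
        shared = sum(1 for q in test if q in others)
        rates.append(shared / len(test))
    return 100.0 * sum(rates) / len(rates)
```

Under this reading, a larger question bank naturally drives the percentage down, consistent with the lower duplicate figures reported for the 12,000-question bank.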

Based on the standard deviation in Tables

Generating question papers from a question bank is an important activity in extracting multiple-choice tests, and it works best when the quality of the bank is good (a diverse range of question difficulties and a large number of questions). In this paper, we propose the use of MMPSO to solve the problem of generating multiple multiobjective

Future studies may focus on investigating the use of the proposed hybrid approach [

The data used in this study are available from the corresponding author upon request.

The authors declare that there are no conflicts of interest regarding the publication of this paper.