Study on the Intentional Choice Mechanism of Course Selection Based on Swarm Intelligence Algorithm

With the passage of time and recent advances in science and information technology, developments in criteria-based course selection have made the choice mechanism easier and more effective. In this paper, an innovative perspective is adopted: swarm intelligence is introduced for the intentional choice mechanism of course selection. The study takes course selection in English as an example. The swarm intelligence algorithm and integrative course selection were combined with the recommendation algorithm and the intent of course selection in English courses to discuss the relevant decision mechanism. Firstly, the comprehensive selection intention recommendation algorithm and the PSO algorithm were introduced, and the algorithm was initialized. Secondly, the operation process was described in detail, and the application process was analyzed. Then, the algorithm was introduced into the English course elective process. Finally, the experimental results led to the conclusion that the PSO algorithm has a higher degree of accuracy and can better judge individual behaviors, which contributes to the establishment of the course-choice mechanism. The effectiveness of the study was demonstrated through experiments.


Introduction
With the continuous deepening of economic globalization, society has become increasingly demanding of college students' English proficiency. It not only requires students to have a certain level of basic English knowledge but also requires them to have strong comprehensive English ability and cross-cultural communication skills [1]. This puts forward new requirements for college English teaching reform. At present, college English is influenced by traditional utilitarianism: students' motivation to learn English deviates greatly, and English has become a "tool" for employment [2]. In our country, higher education adopts both compulsory and elective methods in English education. In a certain sense, the quality of elective courses can better reflect the perfection of the credit system, because the credit system is based on the elective system [3]. However, with the reform of the curriculum in our country, the college entrance examination has canceled English subjects, and higher education still faces a crisis of lacking elective intention. The originally compulsory high school English has also become an elective course. The opening of English courses lacks normativity and curriculum diversity, and students lack guidance for elective courses. Meanwhile, elective course teaching and students' incorrect understanding of the relationship between elective courses and the college entrance exam need to be improved [4]. Therefore, studying the intention behind elective English courses can help students strengthen their enthusiasm for learning English. In-depth understanding of what influences students' choice of a particular course remains inadequate, and this imperfect understanding of modality choice has significant implications for institutions and students. The swarm intelligence evolutionary algorithm, a kind of optimization algorithm, has attracted increasing attention from researchers.
Both artificial life and evolutionary algorithms (EA) are intensively correlated in evolutionary strategy, especially in the domain of genetic algorithms [5]. There are mainly two kinds of algorithms in the field of swarm optimization theory: the ant colony algorithm and particle swarm optimization (PSO). The latter originated from the simulation of simple social systems; it initially modeled the foraging behavior of bird flocks, but it was later found to be a good optimization tool [6]. PSO was proposed by scholars in recent years [7]. The PSO algorithm and the simulated annealing algorithm are similar in many respects, and both of them are evolutionary algorithms. The PSO algorithm starts the iteration from a set of random solutions and then loops to obtain the optimal solution. It uses a fitness function to assess the quality of each solution; there are no operations such as "crossover" and "mutation," so its rules are simpler than those of GA. Importantly, it approaches the global optimum by following the optimal values found in the current search. For its advantages of easy implementation, high precision, and fast convergence, the algorithm has attracted increasing attention from the academic community, and it has demonstrated its superiority in solving practical problems [8]. PSO can also be computed in parallel. With the continuous progress of innovation and reform in English-major teaching, the traditional single teaching model has long been unsuitable for today's students.
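The iteration just described (random initial solutions, fitness evaluation, and following the personal and global bests, with no crossover or mutation) can be sketched in a few lines of Python. The inertia weight and acceleration coefficients below are common textbook defaults, not values taken from this study:

```python
import random

def pso(fitness, dim=2, n_particles=10, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO for minimization: start from random solutions,
    score them with the fitness function, and follow the personal
    and global bests (no crossover or mutation operators)."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                  # personal best positions
    pbest_f = [fitness(x) for x in xs]          # personal best fitness values
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity pulls toward both the personal and global bests
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f < pbest_f[i]:                  # update personal best
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:                 # update global best
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# example: minimize the sphere function f(x) = sum of squares
best, best_f = pso(lambda x: sum(v * v for v in x))
```

On the 2-D sphere function this converges close to the origin within 100 iterations, illustrating the "follow the current optimum" behavior the text attributes to PSO.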
Those boring and old-fashioned presentation methods make students feel that they can only passively accept content, far removed from interactive fun [9]. Therefore, it is important to improve interactivity and understand individual conditions in teaching reform and innovation. The contribution of the proposed research is to devise swarm intelligence for the intentional choice mechanism of course selection, taking English course selection as the example of the study. The swarm intelligence algorithm and integrative course selection were integrated with the recommendation algorithm and the intent of course selection in English courses to discuss the applicable decision mechanism. The effectiveness of the study was demonstrated through experiments.

Particle Swarm Algorithm Content Interpretation.
The locations of the Locators in space are shown as the dark rectangular positions in the figure, with coordinates L0(l0x, l0y, l0z), L1(l1x, l1y, l1z), L2(l2x, l2y, l2z), and L3(l3x, l3y, l3z). During the experiment, the four Locators emitted ultrasonic waves toward the Tag. When an ultrasonic wave reached the Tag, it was reflected back to the Locator. The propagation time of the ultrasonic wave was monitored, and the TOA method was used to calculate the distance from each Locator to the Tag. Based on this initial condition, the PSO algorithm is introduced as follows [10]. Within the problem space, a particle group consisting of four particles, P0, P1, P2, and P3, is placed at random initial positions. The steps are as follows. Firstly, ultrasonic sensing and communication between the Locators and the Tag were used to establish the ultrasonic transmission cycle. Secondly, the TOA method was used to find the distance between each Locator and the Tag [11]. The distances from Locators L0, L1, L2, and L3 to the Tag are denoted M0,0, M1,0, M2,0, and M3,0, respectively.
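As a small illustration of the TOA step: the one-way Locator-to-Tag distance follows from the monitored round-trip time of the reflected pulse and an assumed propagation speed (the speed of sound and the timing value below are illustrative, not from the study):

```python
# Time-of-arrival (TOA) ranging sketch: the Locator emits an ultrasonic
# pulse, the Tag reflects it, and the round-trip time gives the distance.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def toa_distance(round_trip_time_s):
    """One-way Locator-to-Tag distance from the round-trip time:
    the pulse travels the path twice, hence the division by 2."""
    return SPEED_OF_SOUND * round_trip_time_s / 2.0

# a 20 ms round trip corresponds to 3.43 m one-way
d = toa_distance(0.020)
```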
Then, under the initial conditions, the coordinates of the four Locators and the four particles are also known. Therefore, the distance between each Locator and each particle can be calculated separately. For example, the distance between Locator L0 and Particle P0 [12] is

D0,0 = √((l0x − p0x)² + (l0y − p0y)² + (l0z − p0z)²).

The distance between Locator L1 and Particle P0 is

D1,0 = √((l1x − p0x)² + (l1y − p0y)² + (l1z − p0z)²),

and likewise D2,0 and D3,0 for Locators L2 and L3. By the same formula, the distances between the Locators and the other three particles can be expressed: D0,1, D1,1, D2,1, and D3,1 represent the distances between L0 and P1, L1 and P1, L2 and P1, and L3 and P1, respectively; D0,2, D1,2, D2,2, and D3,2 the distances between L0–L3 and P2; and D0,3, D1,3, D2,3, and D3,3 the distances between L0–L3 and P3. With both the Tag-to-Locator distances and the particle-to-Locator distances known, the particle closest to the Tag among the four can be found [13] by computing, for each particle Pi,

fi = |D0,i − M0,0| + |D1,i − M1,0| + |D2,i − M2,0| + |D3,i − M3,0|.

Here D0,i, D1,i, D2,i, and D3,i represent the distances between particle Pi and the Locators, and M0,0, M1,0, M2,0, and M3,0 represent the distances from the Tag to the Locators. f0, f1, f2, and f3 are called distance degrees, whose size indicates the distance between the particle and the Tag. It can be seen from the formula that the smaller the distance degree, the smaller the distance between the particle and the Tag. So [14], among the four particles, the one with the smallest distance degree is closest to the Tag. For example, if the value of f1 is the smallest, then particle P1 is closest to the Tag.
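The selection of the closest particle can be sketched as follows. The sum-of-absolute-mismatches form of the distance degree and all coordinates are illustrative assumptions on our part, since the paper's formulas were lost in extraction:

```python
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def distance_degree(locators, particle, measured):
    """Distance degree f_i: taken here as the summed absolute mismatch
    between particle-to-Locator distances and the measured
    Tag-to-Locator distances (the exact form is our assumption)."""
    return sum(abs(dist(L, particle) - m) for L, m in zip(locators, measured))

def closest_particle(locators, particles, measured):
    """Index of the particle whose distance degree is smallest."""
    return min(range(len(particles)),
               key=lambda i: distance_degree(locators, particles[i], measured))

# four Locators and a Tag at illustrative coordinates
locators = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
tag = (3.0, 4.0, 2.0)
measured = [dist(L, tag) for L in locators]   # TOA-derived M_{j,0}
particles = [(1, 1, 1), (3.0, 4.0, 2.0), (8, 8, 8), (5, 0, 5)]
best = closest_particle(locators, particles, measured)
```

Here the second particle sits exactly on the Tag, so its distance degree is zero and it is selected, matching the paper's "smallest f" rule.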
This resolves the first key point of the PSO algorithm: clearly determining which particle in the swarm is closest to the Tag [15].

Spark Cluster Iteration Calculation
After describing the particle swarm algorithm, this paper uses a distributed platform: Spark cluster particle swarm optimization. With its advantages of speed, ease of use, and sophisticated analytics, Apache Spark is a computation platform that can handle big data. It was started in 2009 and open-sourced in 2010. Broadly speaking, Spark extends the MapReduce model to support various kinds of computation. Speed is always important on large data sets when processing interactive queries and streaming data; Spark computes in memory specifically [16], and in addition it can perform complex computation on disk. In general, Spark was designed to handle various computation scenarios, such as batch-processing applications, iterative algorithms, interactive queries, and stream processing [17]. The versatility of Spark not only enables simple and convenient processing in different application scenarios but also reduces the administrative burden. Spark provides interfaces for Python, Java, Scala, and SQL and ships with a rich set of default tool libraries. Spark also integrates readily with other tools. The driver program running on the master node controls the critical flow of the program [18] and defines operations such as map, reduce, and filter. Figure 1 is a working principle diagram [19].
Then, we elaborate on the specific implementation of the algorithm in the program. From the analysis above: firstly, in a Spark application the data file is read from a document platform (such as HDFS). Secondly, a resilient distributed dataset (RDD) is set up. Thirdly, the driver program parallelizes the RDD and distributes it across the nodes. If an RDD is frequently reused in the application, it can be cached for better performance. Once the RDD is in place, parallel operations can be performed on it. The algorithm operations used in the Spark cluster environment implementation are shown in Table 1 [20].
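The RDD workflow just outlined (read, parallelize into partitions, map/filter per partition, reduce on the driver) can be mimicked in plain Python. This is only an analogue of the control flow, not actual Spark code, and the record layout is invented for illustration:

```python
from functools import reduce

# toy check-in records: (user_id, course, score) -- invented layout
data = [("u1", "english", 3.2), ("u2", "english", 4.1),
        ("u1", "math", 2.5), ("u3", "english", 1.8)]

# "parallelize": Spark would split the data into partitions across nodes;
# here we fake two partitions by slicing the list
partitions = [data[0::2], data[1::2]]

# "map" + "filter" on each partition: keep English records, extract scores
mapped = [[r[2] for r in part if r[1] == "english"] for part in partitions]

# "reduce": combine the per-partition partial sums on the driver
total = reduce(lambda a, b: a + b, (sum(p) for p in mapped))
```

In real Spark the partitions would live on different executors and the per-partition work would run in parallel; the driver only sees the combined result, which is exactly the structure this sketch imitates.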
As the table shows, Spark provides rich functionality. The Spark ecosystem consists of a general execution module, a structured data module, a stream analysis module, a machine learning module, and a graph computation module [21]. The first is the execution system of the platform and the core of its functionality. Spark Core supports caching, a common execution model, and application programming interfaces for Java, Scala, and Python, which allows Spark to compute efficiently and support a wide range of applications. Spark processes structured data through Spark SQL, which provides DataFrames as a programming abstraction and acts as a distributed SQL query engine. Spark SQL enables native Hive queries in Hadoop clusters to run up to 100 times faster than existing deployments, and it integrates strongly with the other modules in the Spark ecosystem. The stream analysis module supports highly interactive analysis applications over streaming and historical data while retaining Spark's ease of use and fault tolerance; Spark Streaming integrates easily with all common types of data sources. The machine learning module (MLlib) is a scalable machine learning library that provides both high-quality and efficient algorithms, and it can be used as part of Spark applications in languages such as Java, Scala, and Python.

Particle Swarm Algorithm Validity Test.
After formulating the three course-selection intention schemes of the swarm algorithm for English courses, the three schemes were deployed to the Spark cluster environment to improve the efficiency and scalability of elective course recommendation. These experiments identify the performance of our proposal. All experiments were done in a lab Spark cluster environment. The data sources are based on the original foursquare check-in data set, and larger data sets were obtained by replication. The experiments verifying efficiency and scalability on large data volumes were performed on this basis [22].
These experiments were repeated to tune the influence-factor parameters of the linear combination and probability fusion methods. For the linear method, the selection-intention preference recommendation result is highest at weight values of 0.4 and 0.5. For the probability fusion method, the order follows social factors, time factors, and geographical factors. Besides, applying a coarse-grained and then a fine-grained pass in order gives the highest selection-intention preference effect.
In the probabilistic fusion recommendation method, when the values of λ and δ are 0.2 and 0.4, respectively, the selection-intention preference recommendation result is the highest. Among the three factors influencing course-selection intention, geographical and time factors have a greater influence than social factors, and geographical factors have a stronger influence than time factors. Then, we compare the results of our proposal with others to validate its advantages, as shown in Figure 2.
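A minimal sketch of the two fusion schemes follows. The assignment of the reported weights to particular factors (geographical > time > social, per the stated order of influence), the exact form of the probabilistic fusion, and the candidate scores are all our assumptions for illustration:

```python
def linear_fusion(geo, time_, social, w_geo=0.5, w_time=0.4, w_social=0.1):
    """Linear weighting of the three factor scores. The paper reports
    0.4 and 0.5 as the best setting; which factor gets which weight
    is assumed here, following the stated order of influence."""
    return w_geo * geo + w_time * time_ + w_social * social

def probabilistic_fusion(geo, time_, social, lam=0.2, delta=0.4):
    """One plausible form of the probabilistic fusion with the paper's
    lambda = 0.2 and delta = 0.4: the remaining probability mass
    goes to the geographical factor (assumption)."""
    return (1 - lam - delta) * geo + delta * time_ + lam * social

# rank candidate courses by fused preference score (invented scores:
# each tuple is (geographical, time, social))
candidates = {"course_a": (0.9, 0.3, 0.5), "course_b": (0.4, 0.8, 0.6)}
ranked = sorted(candidates,
                key=lambda c: linear_fusion(*candidates[c]), reverse=True)
```

Either fused score can then be used to sort candidate courses and recommend the top of the list, which is how such factor-fusion recommenders are typically applied.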
The experimental results show that introducing social factors, geographical factors, and time factors improves the recommendation of elective-course intention, which verifies the conclusions of existing research. The three integration methods proposed in this paper further improve the effectiveness of the choice-intention recommendation results. Specifically, we found that the linear-weighted recommendation effect was better than GT, indicating that social factors can improve the recommendation outcome. The recommendation effect of GT is in turn better than that of T and G alone, indicating that geographical and time factors also enhance the recommendation effect. Then, all the recommendation methods were analyzed. The results show that the linear weighting method has the highest F1_measure and provides the best comprehensive recommendation effect. In addition, the recall of the EL combination method is relatively low, while its accuracy is the highest among all methods, making it suitable for applications requiring high accuracy. The results of the probabilistic fusion method are second only to the linear weighting method. All results are better than those of the other methods.

Verifying the Preference of Particle Swarm Optimization.
In this experiment, the efficiency of the particle swarm algorithm was compared between the stand-alone environment and the Spark cluster environment. The extended foursquare check-in data set was used, gradually increasing the data volume from 1G to 32G, and the execution time was observed in the two environments. The comparison of the three comprehensive recommendation methods is reported for linear weighting. It shows that the particle swarm algorithm in the Spark cluster is the best; as the data size grows, the difference becomes even more significant. The result is shown in Figure 3 and Table 2.
Then, the scalability of the recommended method was verified. In these tests, the available memory of each executor was kept at the default value of 1G. The number of executors in the Spark cluster environment was changed by changing the number of cores (--total-executor-cores) available to all executors in the cluster. That is, we observed how the execution time of our proposal changes as the number of available cores and the memory change. The cached data sets were the foursquare check-in data set enlarged to 1G, 2G, and 4G. The scalability results of the three kinds of particle swarm optimization algorithms are as before: with more executors, and thus more available cores and memory, the execution time decreases roughly linearly. In addition, the stability of the algorithm was tested in two experiments. The algorithm shows good convergence and stability; its convergence in the two experiments is shown in Figure 4.
The above experiments verify the performance of the particle swarm algorithm. The results on the validity of the integration methods show that the linear one is the best, the second performs well on accuracy, and the third is second only to the linear-weighted method and performs well on sparseness problems. The efficiency results show that the particle swarm algorithm is more efficient in the Spark cluster environment than in the stand-alone environment, and the efficiency advantage becomes even more significant as the data set grows. They also show that, as the number of available cores and memory in the cluster increases linearly, the execution time decreases linearly within a certain range.

Conclusion
At present, computer and Internet technology is developing rapidly, and the combination of modern education and computing is growing ever closer. An effective and efficient way is needed to consider the intentional choice mechanism of course selection precisely. To achieve the aim of the proposed study, a recommendation model and algorithm for course-selection intention in English courses were constructed. Existing research on course-intention recommendation confirms that various reinforcement factors can indeed improve recommendation quality, but it does not consider recommendation methods for the three influencing factors together. In our proposal, building on the point-of-interest recommendation of existing research, the three influencing factors were integrated by three methods to improve elective-intention results. On the one hand, the scalability problem of collaborative filtering was solved; on the other hand, efficient and easily scalable solutions were supported by point-of-interest recommendation. The advantages and disadvantages of the three integration methods were discussed and verified, and conclusions were drawn. Besides, the performance of the point-of-interest intention selection method was verified, confirming that the linear one is the best choice. The experimental results perform well and show the effectiveness of the proposed study.

Data Availability
The data used to support the findings of this study are included within the article.