A Study on the Construction of Translation Curriculum System for English Majors from the Perspective of Human-Computer Interaction



Introduction
Textual materials have long dominated translation classes for English majors, while multimodal symbolic resources that engage the visual, aural, and tactile senses, such as photos and films, have been used far less [1, 2]. The multimodal context of information technology has brought new challenges to traditional translation teaching. With the advent of the digital era, new objects of translation have emerged: not only traditional paper texts but also texts combining various forms of symbols, such as writing, pictures, and sound images (i.e., hypertexts or virtual texts). Translating these texts goes beyond the concept of translation in the traditional sense [3]. Translation teachers should keep abreast of the impact of information-technology-based multimodality on translation teaching, reform their teaching accordingly, and improve its effectiveness to cultivate the new types of translation talent that modern society needs [4].
Multimodality refers to the multiple symbolic resources and sensory systems used to construct meaning in interactive activities between humans and between humans and machines. With the continuing growth of the translation discipline, translation education has become an increasingly important topic. Linking theoretical knowledge with teaching practice, some scholars have begun to study multimodal translation teaching systems.
The literature [5] considered the diversification of translation contents, forms, and tools under the new situation, along with the changing role of translators, and proposed that a translation curriculum should reflect the characteristics of multimodality. The literature [6] uses web-based networking and multimedia technologies to collect teaching materials in various modalities. It creates web-based interactive platforms and innovative teaching methods and fully engages students' multiple sensory systems in interaction, enabling optimal teaching effectiveness. This is crucial for developing appropriate online teaching materials and courses for English-major translation classes.
With the promotion of the "Internet + education" model and the development of artificial intelligence, big data, cloud computing, and other technologies [7-9], online education, as an essential means of education informatization, provides a convenient learning platform. It enables learners to access rich learning resources according to their interests and needs, free from the limitations of time and space [10]. National universities have launched many learning resources in the form of text, audio, and video on mainstream education platforms. Learners can use these course resources to supplement their learning, check gaps after class, and consolidate their foundations. However, the breadth of learning resources and the diversity of learning directions on education platforms [11] bring certain drawbacks: (1) Online education platforms lack the training programs of formal higher education institutions and cannot organize courses systematically and progressively; they cannot provide learners with effective guidance, which results in information overload and low course completion rates. (2) Duplicate content exists between courses, which can confuse learners. Therefore, to maximize the value of online courses, effective course recommendation methods must be studied [12].
Existing course recommendation methods are usually collaborative filtering (CF) methods [13], which combine students' course interaction histories with common preferences to make recommendations. The most significant benefit of CF recommendation systems is that they impose no special requirements on the recommended items and can handle items that are difficult to describe textually in a structured fashion, such as music and movies [14]. However, data sparsity is a recurring problem, because the number of user ratings is small relative to the total number of items.
Most information in recommender systems has a graph structure. The interactions of users and items can be viewed as a bipartite graph, to which graph learning methods can be applied to obtain embedded representations of users and items [15]. Among graph learning methods, graph neural networks are currently receiving a great deal of attention. Owing to their excellent results, recommendation models based on graph convolutional networks (GCN) are widely used in recommender systems [16].
Graph neural networks apply embedding propagation to iteratively aggregate neighborhood embeddings [17, 18]. By stacking propagation layers, each node can access higher-order neighbor information instead of only the first-order neighbor information used by traditional methods. The literature [19] proposes a new recommendation method, NGCF, which injects collaborative signals into the embedding process through a model-based connectivity graph. The technique exploits the higher-order connectivity of the interaction graph to obtain embedded representations that carry collaborative signals. A simplified GCN model, LightGCN, is proposed in the literature [20]. The model retains only the most essential parts of GCN for collaborative filtering, namely neighborhood aggregation and multilayer propagation, and removes feature transformations and nonlinear activation.
However, there is still relatively little research on course recommendation models that introduce human-computer interaction modules. In addition, the interaction data of online platforms is sparse and contains considerable noise. Existing recommendation methods cannot effectively solve these problems.
This study proposes a recommendation model for English-major translation courses with human-computer cooperative interaction in response to the issues mentioned above.
The innovations and contributions of this paper are as follows.
(1) This paper gives suggestions for the construction of a translation curriculum system for English majors in universities. (2) This paper consists of four main parts: the first part is the introduction, the second part is the methodology, the third part is the result analysis and discussion, and the fourth part is the conclusion.

Construction of Translation Curriculum System for English Majors in Colleges and Universities.
There are more than 1,000 colleges and universities offering English majors in China. The current teaching of translation courses for English majors still shows problems (see Figure 1): curriculum setting emphasizes theory over practice, teaching content emphasizes fundamentals over specialization, student training emphasizes language over culture, and the teaching team emphasizes scale over development. In response to these problems, this paper gives suggestions for constructing a translation curriculum system for English majors.
A systematic translation curriculum system should be constructed for English majors in colleges and universities. Qualified language service talents should not only learn and master translation theory but also accumulate translation practice, becoming skilled in applying translation skills and methods; that is, translating well requires the organic combination of translation theory, translation skills, and translation practice. In constructing a translation course system for English majors, attention must be paid to how the knowledge points of each course are built and how its knowledge structure articulates with the demands of the next course. Learning translation well requires theory and practice to support each other, and doing language services well also requires a "foreign language + major" foundation. Systematic mastery of knowledge is realized through the knowledge chain formed by the seamless connection of individual knowledge points, which constitutes a relatively complete knowledge system. Creating this knowledge chain systematizes and consolidates what is learned.
Mastering knowledge is like constructing a building: each floor rests on the foundation below it. Therefore, connecting knowledge points effectively into a knowledge chain relies on an overall structure formed by linking the knowledge structures of different courses and by strengthening the practical teaching links.
Building this overall structure is an urgent task; it must be well articulated to forge a firm knowledge chain and cultivate qualified talents.
The Dynamic Development of the English-Major Translation Curriculum System. The curriculum system is not static; its structure is closely related to the development of the social economy, technology, and culture. The demand for English translation talents in various fields is also constantly changing.
The continuous construction of English-major disciplines also keeps the curriculum system in a state of change and development.
The construction of the English-major translation curriculum system should not only build on previously summarized experience but also incorporate elements of contemporary development. For example, in an era of rapidly developing information technology, language services are moving quickly toward informatization, specialization, networking, and cloudization. This demands ever greater technical ability from language service personnel, and more and more companies impose specific requirements on translation technology and tool skills.
Constructing a Comprehensive Translation Curriculum System for English Majors. With the continuous emergence of interdisciplinary and borderline disciplines, the boundaries between disciplines are becoming increasingly blurred. Foreign language majors have gradually evolved from a single discipline into "foreign language + major" and "major + foreign language" models. Knowing only a foreign language without a specialization, or only a specialization without a foreign language, can no longer meet the current needs of social and national development. This is especially true for English translation talents; otherwise, they will not be able to solve the various problems of the language service industry effectively. To build a comprehensive translation curriculum system for English majors, the teaching contents, methods, and teaching staff should also be adjusted alongside the curriculum itself.

Constructing an Innovative Translation Curriculum System for English Majors. The innovative features of the translation curriculum system should be highlighted during its construction, and the influence of factors such as geographical location, economic development, and school characteristics on the cultivation of English translation talents should be considered comprehensively.
The talents thus cultivated will be more competitive in employment. In addition, the individual differences of students should be noted. Translation is not simply the mutual transformation of two languages but also the transformation of cultures. Translation ability is therefore a comprehensive ability comprising both language ability and cultural ability. Developing it takes time and is characterized by dynamic change. Thus, a corresponding translation curriculum can be constructed according to the developmental characteristics of students.
To improve learning proficiency in translation courses, a course recommendation model based on graph contrastive learning with human-computer interaction is proposed for the translation course system of English majors.
Given a graph A, its features consist of node features I_q = {i_q1, i_q2, ..., i_qw} for each node q and edge features I_l = {i_l1, i_l2, ..., i_lw} for each edge l connecting two nodes. The goal of GCN learning is to achieve feature extraction by aggregating the features of neighboring nodes through the graph structure.
The output of each hidden layer of graph A can be represented as a nonlinear mapping of the neighboring layers:

b^(n+1) = σ(D̃^(-1/2) G̃ D̃^(-1/2) b^(n) M^(n)), with G̃ = G + I,

where b^(n+1) denotes the feature output of the (n+1)th hidden layer, G is the node adjacency matrix, G̃ is the adjacency matrix with self-loops added, D̃ is the degree matrix of G̃, and M^(n) denotes the weight parameter of the nth layer. σ is the activation function; in this paper, the ReLU function is used.
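One propagation step of this rule can be sketched as follows. This is a minimal NumPy illustration; the toy chain graph, feature values, and identity weight matrix are invented for the example:

```python
import numpy as np

def gcn_layer(G, b, M):
    """One GCN step: b_{n+1} = ReLU(D~^-1/2 (G + I) D~^-1/2 b_n M_n)."""
    G_hat = G + np.eye(G.shape[0])             # add self-loops
    d = G_hat.sum(axis=1)                      # node degrees of G~
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D~^{-1/2}
    agg = D_inv_sqrt @ G_hat @ D_inv_sqrt @ b  # symmetric-normalized aggregation
    return np.maximum(agg @ M, 0.0)            # ReLU activation

# toy 3-node chain graph with 2-dimensional node features
G = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
b0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b1 = gcn_layer(G, b0, np.eye(2))               # output of the first hidden layer
```

Stacking calls to `gcn_layer` corresponds to the multilayer propagation described in the text.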
Then, the weight matrices M are trained layer by layer with gradient descent after stacking multiple layers, and finally the representation vector of each node is obtained.
GCN suffers from a single convolutional-layer structure and fixed convolution kernel sizes, and it has difficulty fully extracting the important feature parts of each sequence. This paper therefore designs a multiscale graph convolutional layer with convolution kernels of different numbers and sizes, together with a feature fusion architecture based on multiscale feature extraction and fusion. Figure 3 depicts the structure of the multiscale convolutional layer.
Figure 3(a) constructs a multiscale parallel graph convolution by processing several graph convolution operation units of different scales in parallel.
This structure extracts multiscale features in parallel using convolutions of different scales. Multiscale feature fusion is performed by a direct weighted-summation strategy that combines long-, medium-, and short-distance temporal features into one feature map, which is then further enhanced by a 2D convolution for feature extraction. This structure applies no additional processing to the extracted time-series features but unifies them directly into the same feature map during fusion. This simple fusion yields a more comprehensive feature representation, and the increase in computational effort is smaller than for the other convolutional unit structures designed.
Figure 3(b) shows a multiscale serial graph convolution constructed by serially processing multiple graph convolution units of different scales. The convolution kernels of the subunits are set from small to large sizes in order from front to back. Short-range features are acquired from the small-size convolutions, and long-range semantic dependencies are then acquired layer by layer using relatively larger convolutions. This lets the acquired semantic information build on node-internal information while capturing higher-level association information.
Figure 3(c) constructs a multiscale dense temporal graph convolution by borrowing the classical dense network structure and fusing the features of the multiscale parallel and multiscale serial graph convolutions. It can extract the rich temporal features in node-intrinsic and semantic information more comprehensively.
The multiscale temporal convolution under this structure increases the model's computation significantly. To reduce the number of operations, the model burden, and the number of temporal convolution subunit operations, this paper adopts a scheme that restricts the input to the multiscale dense convolution. In addition, to further reduce model complexity, a simplified version of the multiscale dense temporal convolution structure is designed. It limits the number of convolutional subunits in each layer to one and sets the input of each convolutional subunit to the sum of the outputs of all previous layers.

Course Recommendation with Graph Contrastive Learning.
This paper applies contrastive learning to online course recommendation and proposes a course recommendation model with graph contrastive learning. First, data augmentation is performed on the input bipartite graph of user-item interactions to obtain two subviews.
A modified LightGCN model is then used on the original bipartite graph and the two subviews for node representation extraction. A recommendation supervision task and a contrastive learning auxiliary task are constructed for joint optimization. Finally, the recommendation results are obtained.
The algorithm framework is shown in Figure 4.
The bipartite graph of user-item interaction is managed as a matrix, with R denoting the interaction matrix. If user p has completed course x, the value of the corresponding position R_px is 1; otherwise, it is 0. The adjacency matrix can be calculated as

G = [[0, R], [R^T, 0]].

The matrix-equivalent form of the graph convolution process can then be obtained as

E^(l+1) = D^(-1/2) G D^(-1/2) E^(l),

where D is the (M+N) × (M+N) diagonal degree matrix and the value D_xx on the diagonal indicates the number of nonzero entries in the xth row of the adjacency matrix G.
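The adjacency construction and one normalized propagation step can be sketched with NumPy. The interaction records, embedding size, and random initialization are invented for the example:

```python
import numpy as np

M_users, N_items = 3, 4
R = np.zeros((M_users, N_items))
R[0, 1] = R[1, 2] = R[2, 0] = R[2, 3] = 1.0   # completed-course records

# block adjacency of the user-item bipartite graph:
# G = [[0, R], [R^T, 0]], of size (M+N) x (M+N)
G = np.block([[np.zeros((M_users, M_users)), R],
              [R.T, np.zeros((N_items, N_items))]])

# D_xx counts the nonzero entries of row x of G; the graph convolution then
# propagates node embeddings E as E' = D^-1/2 G D^-1/2 E
d = G.sum(axis=1)
d[d == 0] = 1.0                                # guard against isolated nodes
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
E = np.random.default_rng(0).normal(size=(M_users + N_items, 8))
E_next = D_inv_sqrt @ G @ D_inv_sqrt @ E       # one propagation layer
```

Repeating the last line stacks propagation layers, as in the LightGCN-style model described above.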
Two methods of data augmentation are designed on the graph structure, random addition and random deletion, to create different node views.
The data augmentation operation is denoted as S, and two sampled operations are written in standard form as s_1, s_2 ∼ S.
Two completely independent data augmentation operations are performed on the bipartite graph A of user-item interactions to form two views s_1(A) and s_2(A). R_1^(l) and R_2^(l) are the two associated node representations obtained at layer l, and the neighborhood aggregation function B is used to update them. Each iteration starts from the two alternative views of each node. The random addition parameter w and the random deletion ratio u are kept constant across the two independent operations.
Random Addition. This method adds w randomly generated interaction records to the interaction data, controlled by the parameter w. Specifically, it adds some random edges to the bipartite graph of user-item interactions. The two separate operations are represented as

s_1(A) = (Q, θ ∪ θ′),  s_2(A) = (Q, θ ∪ θ″),

where Q denotes the set of nodes, θ the set of edges, and θ′, θ″ the sets of added edges. This augmentation is expected to identify, from the different augmented views, the node information that matters most for node representation learning.

Random Deletion. This method removes some user-item interaction records according to a random deletion ratio u. Specifically, it randomly removes some edges of the interaction bipartite graph. The two separate operations are expressed as

s_1(A) = (Q, W_1 ⊙ θ),  s_2(A) = (Q, W_2 ⊙ θ),

where W_1, W_2 are two mask vectors over the edge set, generated from the random deletion ratio u, and ⊙ denotes the element-wise product. Only some of the connections in a neighborhood participate in node representation learning, so the GCN does not rely too heavily on any particular edge.
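The two augmentation operators can be sketched as follows. This is an illustrative implementation on an explicit edge list; the toy edges, node counts, and random seed are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_addition(edges, num_users, num_items, w):
    """Add w randomly generated user-item edges to the edge set (theta ∪ theta')."""
    new = {(int(rng.integers(num_users)), int(rng.integers(num_items)))
           for _ in range(w)}
    return list(set(edges) | new)

def random_deletion(edges, u):
    """Drop each edge with probability u via a random 0/1 mask vector (W ⊙ theta)."""
    mask = rng.random(len(edges)) >= u          # True keeps the edge
    return [e for e, keep in zip(edges, mask) if keep]

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]        # user-item interaction records
view1 = random_addition(edges, 3, 4, w=2)       # first independent augmentation
view2 = random_addition(edges, 3, 4, w=2)       # second independent augmentation
dropped_view = random_deletion(edges, u=0.3)    # deletion-based view
```

Each augmented edge list would then be rebuilt into a bipartite adjacency matrix before the graph convolution step.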
After augmentation, views of different nodes within the same training batch are treated as negative pairs, and the views of the same node as positive pairs. Supervised learning on positive pairs promotes consistency, while learning on negative pairs strengthens the differences.
The user-side InfoNCE loss function is formulated as

L_user = Σ_p −log( exp(s(R′_p, R″_p)/T) / Σ_q exp(s(R′_p, R″_q)/T) ),

where s denotes the function used to compute the similarity between user vectors (here, cosine similarity) and T is the temperature hyperparameter.
The node representations of user p are obtained by graph convolution under the two views s_1(A) and s_2(A) and are denoted R′_p and R″_p; they are the vectors learned by the graph convolutional network for the same user p under different views. R″_q denotes the representation of a different user q (p ≠ q) learned under view s_2(A). The item-side InfoNCE loss L_item has the same form as the user-side loss, and the overall self-supervised loss is L_SSL = L_user + L_item. The goal of optimizing L_SSL is to maximize the similarity between representation vectors of the same node and minimize the similarity between representation vectors of different nodes.
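A batch version of the user-side InfoNCE loss with cosine similarity can be sketched as follows; the batch size, embedding size, and random embeddings are invented for the example:

```python
import numpy as np

def info_nce(R1, R2, T=0.2):
    """User-side InfoNCE: same-node views (matrix diagonal) are positive pairs,
    same-batch views of other nodes are negatives; s(.,.) is cosine similarity."""
    R1 = R1 / np.linalg.norm(R1, axis=1, keepdims=True)   # unit-normalize rows
    R2 = R2 / np.linalg.norm(R2, axis=1, keepdims=True)
    sim = R1 @ R2.T / T                                   # cosine similarities / T
    # -log( exp(sim_pp) / sum_q exp(sim_pq) ), averaged over the batch
    log_prob = sim.diagonal() - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
R1 = rng.normal(size=(5, 8))   # user representations under view s1(A)
R2 = rng.normal(size=(5, 8))   # user representations under view s2(A)
loss = info_nce(R1, R2)
```

The item-side loss is computed the same way on item embeddings, and the two are summed into L_SSL.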
A multitask training strategy is used to improve the recommendation method with the contrastive learning task. The objective function jointly optimizes the recommendation supervision task and the contrastive learning auxiliary task:

L = L_BPR + β_1 L_SSL + β_2 ‖C‖²₂,

where C denotes the trainable parameters in the model, ‖C‖²₂ is the L2 regularization term used to prevent overfitting, and β_1, β_2 are hyperparameters. L_BPR denotes the Bayesian personalized ranking loss commonly used in recommender systems.
The BPR loss is formulated as L_BPR = −Σ_{(p,x,y)∈O} ln σ(j_px − j_py), where O denotes the records of user-item interactions, p denotes a user, x and y denote items the user has and has not interacted with, respectively, and j_px and j_py denote the positive and negative sample scores.
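The BPR loss and the joint objective can be sketched as follows; the sample scores, β values, and parameter matrix are invented for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_loss(j_pos, j_neg):
    """L_BPR = -sum ln sigmoid(j_px - j_py) over (user, pos item, neg item) triples."""
    return -np.log(sigmoid(j_pos - j_neg)).sum()

def joint_loss(l_bpr, l_ssl, params, beta1=0.1, beta2=1e-4):
    """Joint objective: L = L_BPR + beta1 * L_SSL + beta2 * ||C||_2^2."""
    l2 = sum((p ** 2).sum() for p in params)   # L2 regularization over parameters C
    return l_bpr + beta1 * l_ssl + beta2 * l2

j_pos = np.array([2.1, 1.5, 0.9])   # scores of interacted (positive) courses
j_neg = np.array([0.3, 1.8, 0.2])   # scores of non-interacted (negative) courses
total = joint_loss(bpr_loss(j_pos, j_neg), l_ssl=0.5, params=[np.ones((2, 2))])
```

In training, `l_ssl` would come from the InfoNCE losses of the two augmented views and `params` from the embedding tables.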

Human-Computer Interaction Model.
A human-computer interaction model is designed in this paper to improve the relevance of the recommended courses to students' interests. The human-computer interaction process is shown in Figure 5. B and R denote the participant and the robot in the interaction. The zth input from the participant to the interactive system is C_B^z, and the robot's zth reply is denoted C_R^z. The emotional interaction friendliness generated between the participant and the robot is R(z). The knowledge graph A is introduced at system initialization. During the interaction, the participant's input is the zth conversation content C_B^z, and the system output is the zth robot response content C_R^z.

Human-computer interaction content-relevance assessment: human-to-human communication continuously awakens background knowledge, and there are correlations between the communicated contents. On this basis, the human-computer dialogue content is evaluated for relevance, and the participant's potentially interesting contents are discovered on the knowledge graph.
Figure 6 shows how dialogue entities activate the participant's potential entities of interest. Entities A and B are the entities involved in the participant's conversation during a given human-computer interaction.
The level-1 associated entities of entity A are numbered 1, 4, and 5, and those of entity B are numbered 1, 8, and 9; their common level-1 associated entity is number 1, shaded in Figure 6(b). The level-2 associated entities of entity A are numbered 2, 3, 6, and 7, and those of entity B are numbered 2 and 7, so their common level-2 associated entities are numbers 2 and 7, shaded in Figure 6(c). By the same reasoning, the associated entities of A or B at deeper levels can be derived (the weaker the association relationship, the higher the association level). The participant's potentially interesting contents are activated by the conversation entities and propagate along the hierarchical relationships of the knowledge graph from near to far and from strong to weak. As the association level increases, the user's interest weakens, which corresponds to the shading in Figure 6.
This process resembles the propagation of water ripples, from near to far and from strong to weak: the amplitude of a ripple gradually decreases as it propagates.
Similarly, the influence of dialogue entities on entities at deeper association levels gradually decreases.
During ripple propagation, an interference-superposition effect occurs at some entities, highlighting them (i.e., the commonly associated entities). Finally, the contents of the entities of interest are selected optimally.
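This ripple-style activation can be sketched as a breadth-first spread with per-level decay, where weights from multiple dialogue entities superpose at commonly associated entities. The toy graph below loosely follows Figure 6 (entities A and B share associated entity 1); the decay factor and level limit are invented for the example:

```python
from collections import deque

def activate_interests(graph, seed_entities, max_level=3, decay=0.5):
    """Spread interest weights outward from dialogue entities, decaying per
    association level; weights from multiple seeds add up (superposition)."""
    scores = {}
    for seed in seed_entities:
        visited = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            node, level = frontier.popleft()
            if level > 0:
                scores[node] = scores.get(node, 0.0) + decay ** level
            if level < max_level:
                for nb in graph.get(node, []):
                    if nb not in visited:
                        visited.add(nb)
                        frontier.append((nb, level + 1))
    return scores

# toy knowledge graph: A and B share level-1 neighbor 1, as in Figure 6(b)
kg = {"A": [1, 4, 5], "B": [1, 8, 9], 1: [2], 4: [3], 8: [7],
      5: [], 9: [], 2: [], 3: [], 7: []}
weights = activate_interests(kg, ["A", "B"])
best = max(weights, key=weights.get)   # the commonly associated entity wins
```

Entity 1 accumulates weight from both seeds, mirroring the interference-superposition effect that highlights commonly associated entities.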

Experimental Dataset and Parameter Settings.
The dataset selected for this paper, MOOC-English, consists of the English-category data from a MOOC platform. MOOC-English contains information on 779 courses and 130,656 student ratings of those courses. The basic statistics of the dataset are presented in Table 1.
The experiment is implemented on the popular open-source recommendation framework Burroughs. The model parameters are initialized with the Xavier method, the optimizer is Adam, the learning rate is 0.001, the batch size is 2048, and the number of graph convolutional layers is 3.
The experiment uses recall and normalized discounted cumulative gain (NDCG) for the top-K recommendation scenario as evaluation metrics. NDCG evaluates how accurately highly relevant results are placed near the top of the list: the earlier the relevant courses appear, the better the recommendation effect. NDCG@K = DCG@K / IDCG@K, with

DCG@K = Σ_{t=1}^{K} (2^{rel_t} − 1) / log₂(t + 1),

where rel_t represents the relevance of the recommended course at position t, K denotes the top K courses with the highest predicted probability, and |REL| indicates the set of the K courses with the highest relevance, over which the ideal DCG (IDCG) is computed. Here K = 5; that is, the top 5 courses with the highest predicted probability are recommended to the user.
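The NDCG@K metric can be computed as follows. This sketch takes a list of relevance grades in ranked order and, as a simplification, computes the ideal DCG by sorting that same list; the relevance values are invented for the example:

```python
import numpy as np

def ndcg_at_k(rel, k=5):
    """NDCG@K: DCG of the top-K ranked list divided by the ideal DCG (IDCG)
    of the same relevance grades in best order; rel[t-1] is relevance at rank t."""
    rel = np.asarray(rel, dtype=float)[:k]
    discounts = np.log2(np.arange(2, len(rel) + 2))   # log2(t + 1) for t = 1..K
    dcg = ((2 ** rel - 1) / discounts).sum()
    ideal = np.sort(rel)[::-1]                        # best possible ordering
    idcg = ((2 ** ideal - 1) / discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg_at_k([3, 2, 1, 0, 0])   # relevant courses ranked first -> 1.0
worse = ndcg_at_k([0, 0, 1, 2, 3])     # same courses, reversed order -> < 1.0
```

A higher NDCG@5 thus rewards models that place the most relevant courses at the top of the recommendation list.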

Results on the Dataset.
The performance results of the top-K recommendation are given in Figure 7. The experimental results demonstrate that the proposed model outperforms the other models even when student-course interactions are sparse.
The results in Figure 8 show that the proposed model achieves better experimental results than the comparative methods ([19-23]) on the dataset. Compared with the best-performing baseline, literature [20], the NDCG of the proposed model is improved by 5.6%. Analyzing the experimental results: literature [22] uses a GNN based on the message-passing paradigm to mine the bipartite graph and model the collaborative filtering signals directly into the node representations, improving performance over traditional neural network models. Literature [19] achieves better performance than literature [22] by explicitly modeling the higher-order connectivity between users and items to improve the quality of node representations. Literature [20] only performs normalized neighborhood embedding for the next layer; it removes the operations of literature [19] that are not significant for collaborative filtering recommendation and achieves better experimental results than literature [19]. However, literature [23] models each student-course interaction as an independent data instance without considering the relationships between them and fails to extract collaborative signals based on course attributes from students' collective behavior, so its recommendation effect is poorer than that of the proposed model.

Analysis of Hyperparameter Results.
The experimental results of this approach are compared at different hyperparameter values to test their effect. Figure 9 depicts the experimental outcomes.
The hyperparameter β_1 controls the proportion of the contrastive learning auxiliary task in the joint learning task. According to the experimental results in Figure 9, optimum performance is reached when β_1 = 0.1, and when β_1 = 0.4 the model performance decreases sharply. A suitable β_1 value balances the recommendation supervision task against the contrastive learning auxiliary task and achieves the desired performance. The experimental results are also compared under different values of the temperature hyperparameter to verify its effect on the algorithm's performance; Figure 10 depicts the outcomes.

Robustness Analysis of Noisy Data.
A certain proportion of randomly generated interaction records is added to obtain a dataset with added noise. Table 2 presents the experimental results achieved by the models on the noisy dataset.
As can be seen from Table 2, the other models perform poorly as more noisy data is added, whereas the proposed model still produces superior experimental results. By contrasting the two subviews obtained after data augmentation, the proposed model identifies the graph structure information of nodes from the bipartite graph of user-item interaction and reduces the dependence of node representation learning on particular edges. In short, the contrastive learning approach provides a different perspective on noisy interaction data in recommendation.

Conclusion
To improve on the single, monotonous modal teaching form of past English-major translation courses and to fully mobilize students' motivation and initiative for better classroom efficiency, this paper proposes a translation course recommendation model with human-computer collaborative interaction. The model constructs a multiscale graph convolution model to extract course association information and the structured features of key nodes. The effectiveness of course recommendation is further improved by a contrastive learning auxiliary task and by introducing student interest factors through human-computer interaction. Empirical experiments are conducted on online education platform datasets. The results show that the proposed model beats other existing models and improves the quality of translation course recommendations. However, the feature information of courses still contains rich correlation information, such as the knowledge points a course covers and the teachers and institutions offering it. Integrating this course feature information into the recommendation system to further improve the recommendation effect is a direction for future research.

Figure 3: Multiscale graph convolution structure. (a) The architecture of the multiple-parallel temporal block. (b) The architecture of the multiple-serial temporal block. (c) The architecture of the multiple-dense temporal block.

Figure 5: Schematic diagram of content input and output in the process of human-computer interaction.
Improvement of the GCN. GCN was created to handle graph-structured data that traditional CNNs cannot process. Like a traditional CNN, GCN achieves feature extraction from data samples through modules such as convolution and pooling. The difference is that GCN's data samples are graph data (i.e., node and edge features). The schematic diagram of its principle structure is shown in Figure 2.

Table 1: Basic statistical information of the dataset.

Table 2: Comparison of Recall@5 and NDCG@5 under different noise levels.