Human-Computer Interaction Design of Intelligent Vehicle-Mounted Products Based on the Internet of Things

With the development of science and technology, human-computer interaction technology has found increasingly wide application. This article aims to apply human-computer interaction technology to smart vehicle-mounted products through Internet of Things technology, in order to improve the realism and immersion of the user's interaction experience. The paper studies the concept and frame composition of Internet of Things technology in depth and analyzes the strengths and weaknesses of intelligent vehicle-mounted product development. Then, from the perspective of human-computer interaction design, the training response and training learning mechanisms of human-computer interaction are proposed, and a human-computer interaction system for intelligent vehicle-mounted products based on the Internet of Things is constructed, improving the user experience and broadening the range of applications. This article first analyzes and forecasts the market size of smart vehicle-mounted products and then analyzes the scene elements of vehicle-mounted products; when designing such products, the driver's control range should be fully considered. Finally, the user's human-computer interaction experience with smart vehicle-mounted products is analyzed. In the execution of navigation and telephone tasks, there is no significant difference in user satisfaction between the products, with P values greater than 0.05.


Introduction
China has become a country with large car ownership. According to statistics, by the end of 2013 the number of civilian cars in Beijing had exceeded 5 million, reaching 5.2 million, an increase of 10% over the previous year, and the average number of vehicles on the road each day is also increasing sharply. At the same time, China's mobile communication users have far surpassed fixed-line users, and mobile voice services, 3G networks, and digital broadcasting are developing in full swing. The emergence of human-computer interaction applications plays an important role in smart vehicle-mounted products. Human-computer interaction is used in many fields, including space, aerospace, medicine, deep-sea exploration, banking, hotels, and home services [1]. Human-computer interaction technology can give full play to the advantages of human intelligence, enhancing its application in smart vehicles and improving people's sense of interactive experience.
Intelligent products have brought great changes to our lives, which has also aroused the research interest of scholars. Today, electronics are being introduced into all areas of life. Smart cars, like a new generation of smartphones, are useful and even necessary for many people, which is why many manufacturers try to flourish in this field. In the past two decades, people have experienced rapid development in the telephone sector, and perhaps in the next two decades they will see the same in the automotive field. The question raised in [2] concerns the possibility of creating a mobile application for smart car management and control, but the experiment is still affected by many restrictive factors in real life. The inherent characteristics of IoT devices noted by Alrawais, such as limited storage space and limited computing power, require a new platform to process data efficiently. The concept of fog computing was introduced to bridge the gap between remote data centers and IoT devices. Fog computing brings a wide range of benefits, including enhanced security, reduced bandwidth, and reduced latency. These benefits make fog computing an appropriate paradigm for many Internet of Things services in various applications, such as connected vehicles and smart grids. However, fog devices (located at the edge of the Internet) face many of the same security and privacy threats as traditional data centers. The security and privacy issues in the Internet of Things environment were discussed, and a mechanism using fog nodes to improve the distribution of certificate revocation information among Internet of Things devices was proposed to improve their security. Potential research directions for using fog computing to enhance security in the IoT environment were also proposed.
However, the background of the Internet of Things environment is constantly changing, and these privacy and security response strategies lag slightly behind [3]. As Chakraborty notes, the ability of computer vision to recognize gestures is crucial to the advancement of human-computer interaction. Gesture recognition has many applications, such as sign language, medical assistance, and virtual reality. However, gesture recognition is extremely challenging, not only because of the diversity of its context, multiple interpretations, and temporal and spatial variation but also because of the complex and nonrigid nature of the hand. His research mainly studies the main constraints of vision-based gesture recognition in detection and preprocessing, representation, and feature extraction and recognition, and the current challenges are discussed in detail. However, this research is not yet mature in the direction of computer vision recognition, and its design concept is not perfect [4]. The innovations of this article are as follows: (1) using the technical architecture of the Internet of Things and smart vehicle-mounted products to design human-computer interaction system applications; (2) experimentally evaluating the human-computer interaction design experience by choosing the target users, selecting the experience scene, and then selecting the human-machine interactive device that provides a high-quality sense of interactive experience. Improving system performance and intelligence in all aspects of human-computer interaction, and expanding its application areas, is of great significance. There are many definitions of the Internet of Things, among which two are most common. First, the Internet of Things is a data and information platform based on the Internet and traditional telecommunications networks [5]. It enables specific physical objects in real life to be reflected in the network and physically addressed [6,7].
It has the important characteristics of the electronification of ordinary objects, the interconnection of autonomous terminals, and the intelligence of pervasive services. Second, the Internet of Things refers to the integration of ubiquitous terminal sensing equipment and facilities, including sensors with "intrinsic intelligence", mobile terminals, industrial systems, building control systems, home intelligent facilities, and video surveillance systems, as well as "externally enabled" things, such as individuals, vehicles, and various assets carrying attached wireless terminals ("smart objects or animals" or "smart dust"), interconnected and applied through various wireless and wired, long-distance and short-distance communication networks. Operating modes based on large-scale integration and cloud computing provide safe, controllable, and even personalized real-time online monitoring, positioning and traceability, alarm linkage, dispatch command, plan management, remote control, security protection, remote maintenance, online upgrades, statistical reporting, decision support, and centralized display for management. These management and service functions realize the integrated "management, control, and operation" of "everything" with "high efficiency, energy saving, safety, and environmental protection" [8,9].

Framework of the Internet of Things.
From the perspective of technical architecture, the Internet of Things is mainly reflected in three aspects: first, the characteristics of the Internet, which connects all items that need to be networked; second, the identification and communication characteristics, that is, the automatic identification of and communication between the things that belong to the Internet of Things; and third, the intelligence characteristic, that is, the use of automatic control theory to give the Internet of Things system fully intelligent, self-regulating feedback [10,11]. Therefore, the Internet of Things is currently considered to be separable into three layers: the perception layer, the network layer, and the application layer. The sensing layer, as the name suggests, is composed of various sensors and sensor gateways, including gas quality sensors, temperature and pressure sensors, bar code tags, RFID tags and readers, electronic monitors, GPS, and other sensing terminal modules. The perception layer is like the skin and nerves of the human body, able to feel various complex environmental changes. It transmits various data to the Internet of Things system, and its main functions are to identify objects and collect data [4,12]. The main products include Beiyang Group's fiber optic thermometers, RFID omnidirectional channels, acrylic channels, and other RFID hardware products.
At the network layer, we can borrow ideas from the network layer of a computer network. Similarly, the network layer of the Internet of Things is composed of various private networks, the Internet, wired and wireless communication networks, network management systems, and cloud computing platforms. These components transport and process the data and information obtained by the perception layer, much as the courier posts of ancient times relayed messages. For example, when a credit card is used to make a purchase, the reader collects the radio-frequency tag data in the card and forwards it to the Internet; the network layer queries the database, verifies that the tag is legal, and then the bank system is notified to pay the bill [13,14]. The currently emerging cloud computing platforms constitute a main part of the Internet of Things. Through data mining, database query, and analysis technology, the network layer provides the ability to turn perceived behavior into decision-making data. The application layer is the interface between the Internet of Things and its users (including people, organizations, and other systems), similar to an operating system's graphical interface. This layer is closely related to the user: whichever type of service the user subscribes to will be provided, realizing the intelligent application of the Internet of Things. It can be roughly divided into security systems, smart homes, charging systems, etc. [15,16]. The development of software and smart technology has benefited from the popularization of the Internet of Things, which brings huge benefits to people through network links [17]. These are used in practical applications such as household services to improve people's living standards.
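The three-layer division described above can be illustrated with a minimal sketch, mirroring the credit-card example: the perception layer collects tagged readings, the network layer validates and forwards them, and the application layer presents a service. All class and method names here are illustrative assumptions, not part of any real IoT framework.

```python
class PerceptionLayer:
    """Identifies objects and collects raw readings, like the sensors
    and RFID readers described in the text."""
    def collect(self):
        # A real deployment would poll sensors; return a fixed sample here.
        return {"tag_id": "RFID-001", "temperature_c": 21.5}

class NetworkLayer:
    """Transports and validates perception data before forwarding it."""
    def __init__(self, known_tags):
        self.known_tags = known_tags
    def forward(self, reading):
        # Analogous to the credit-card example: verify the tag is legal
        # before handing the data to the application layer.
        if reading["tag_id"] not in self.known_tags:
            raise ValueError("unknown tag")
        return reading

class ApplicationLayer:
    """User-facing services built on validated data."""
    def present(self, reading):
        return f"Tag {reading['tag_id']}: {reading['temperature_c']} C"

perception = PerceptionLayer()
network = NetworkLayer(known_tags={"RFID-001"})
app = ApplicationLayer()
print(app.present(network.forward(perception.collect())))
```

The point of the layering is that each class can change independently: swapping the sensor or the transport does not touch the application-layer service.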

Smart Car Products.
In recent years, the automobile as a consumer product is no longer a symbol of the rich; automobiles have entered ordinary homes. At the same time, people's requirements for automobiles are no longer limited to transportation. The quality of cars, and the experience and enjoyment they bring, have become more prominent concerns [18,19]. Therefore, automotive intelligence and technology have become the future development trend of the automobile industry.
Obstacles to and facilitating factors for the development of the smart vehicle equipment market are as follows. Obstacles: the product business model is not yet clear, which affects product profitability; product homogeneity is serious and innovation is still insufficient; and smart vehicle manufacturers' product rules and standards are not uniform, which seriously hinders improvement of the user experience. Facilitating factors: the rapid development of the Internet provides technical support for smart vehicles; national policies provide further support for smart vehicle devices; and as the pattern of the consumer market and consumers' consumption concepts change, they are more willing to accept auxiliary smart devices [20,21].

Human-Computer Interaction Design.
Human-computer interaction (HCI) refers to the mutual communication of information between humans and computers through a specific language and method, supported by interactive technology. The purpose of HCI is to enable computers to assist humans in completing functions such as data processing, information storage, and visualization services. The development of HCI not only improves the efficiency of human work but also meets the needs of human life [22,23].

MTBM-Based Training Response for Human-Computer Interaction.
In the process of human-computer interaction, the movement of the smart device is completely controlled by the user's gestures. The interaction is computed through iterative kinematic calculation, which can be expressed as equation (1). The derivation of (1) also reflects that the sensing range of the sensor is limited and the working range of the user's gestures is likewise limited, which requires the user to continuously control the computer end effector in a heterogeneous manner. On the one hand, the user's gesture operation is adopted to control the motion increment of the computer.
That is, the amount of movement change in the user's gesture operation is used as the computer's motion drive control command [24,25]. On the other hand, a transformation matrix is constructed from the operator's gesture workspace to the computer end effector's workspace, realizing the zooming and coordinate conversion of the user's gesture control instructions [26,27].
The MTBM-based training-response human-computer interaction method gives full play to human predictive and decision-making intelligence. The process of human-computer interaction can be composed of several subtasks, with the specific steps realized in training, which also divides the structure of human-computer interaction clearly [28]. Then, for each subtask, the user's gesture operation instructions are trained. This section represents the training process as a mapping in which T_ins^a(t′, t) represents the actual position set of the gesture guidance instructions when the user trains the subtask in the interval (t′, t), T_ins^dir(t′, t) represents the gesture guidance instruction set the user expects when training the subtask within the interval (t′, t), f_i represents the initial filtering of the user gesture instruction set, and h represents the optimization algorithm applied to the filtered user gesture operation instructions.
In order to realize the user's continuous control of the robot in the heterogeneous workspace, this section adopts the incremental-motion human-machine control method. Based on this method, the user's desired control command is expressed in terms of the collection of motion increments Δv_i. Correspondingly, the expected motion increment of the robot is obtained by scaling with β, the zoom factor from the user's gesture operation workspace to the computer workspace.
The user gesture operation instruction set uses the motion data of the center of the user's palm. When the user gestures in the "four fingers open" posture, the palm motion data is treated as an invalid instruction for the gesture operation and discarded, and the user then resets the hand to the starting position of the gesture operation, that is, an online reset of the user's gesture [29]. The effective continuous displacement increments of the center of the user's palm are used to construct the computer's desired motion commands. In this way, when the user's gesture workspace and the computer workspace are heterogeneous and the gesture workspace is limited, continuous control of the robot by the user's gesture operation is realized.
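The incremental control loop described above can be sketched as follows: palm displacement increments are scaled by a zoom factor β into end-effector motion commands, and increments produced while the "four fingers open" (reset) gesture is held are discarded. The value of β and the gesture labels are illustrative assumptions.

```python
BETA = 2.0  # zoom factor from gesture workspace to robot workspace (assumed)

def command_increments(palm_positions, gestures):
    """palm_positions: list of (x, y, z); gestures: parallel list of labels.
    Returns the accumulated end-effector displacement, skipping increments
    produced while the reset gesture is held."""
    total = [0.0, 0.0, 0.0]
    for i in range(1, len(palm_positions)):
        if gestures[i] == "four_fingers_open":
            continue  # online reset: discard this increment
        prev, cur = palm_positions[i - 1], palm_positions[i]
        for axis in range(3):
            total[axis] += BETA * (cur[axis] - prev[axis])
    return total

path = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0), (0.0, 0, 0), (0.1, 0, 0)]
gest = ["point", "point", "point", "four_fingers_open", "point"]
print(command_increments(path, gest))
```

Note how the reset at the fourth sample lets the hand return toward its starting position without the robot retracing that movement, which is exactly what makes a limited gesture workspace usable for continuous control.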
We divide the user's gesture operation data set into L ordered classes and define a probability density function for each directed class, in which x_i represents the i-th data object and p(k) represents the prior probability that x_i belongs to the k-th directed class. Accordingly, q(k) must satisfy normalization conditions, where μ_k and σ_k are the parameters of the k-th directed class, respectively representing the mean and covariance matrix of the user's gesture operation data in that class, and t represents the number of observed data points.
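A minimal sketch of this per-class probability model: each directed class k is modelled as a Gaussian with mean μ_k and (co)variance parameter σ_k, weighted by a prior p(k), and a data point is assigned posterior probabilities over the classes. For clarity this uses 1-D data; all parameter values below are illustrative assumptions.

```python
import math

def gaussian(x, mu, sigma):
    """1-D Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def class_posteriors(x, priors, mus, sigmas):
    """Posterior probability of each directed class k given data point x."""
    likelihoods = [p * gaussian(x, m, s) for p, m, s in zip(priors, mus, sigmas)]
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

# Two directed classes with equal priors; 0.9 lies near the second mean.
post = class_posteriors(0.9, priors=[0.5, 0.5], mus=[0.0, 1.0], sigmas=[0.3, 0.3])
print(post)
```

The posteriors always sum to 1, satisfying the normalization condition mentioned in the text.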
In the human-computer interaction system, due to the limitation of the sensor's perception range and the user's own constraints, the user's gesture workspace is limited, and the gesture workspace and the robotic arm workspace are usually heterogeneous. In this case, in order to expand the working space of user gestures and achieve continuous control of the robotic arm, this section uses the user's gesture displacement increments to control the movement of the robotic arm, built on the core intention data of the user's gesture operation set. In a practical human-computer interaction system based on the user's gesture operation, in order to reduce the interference of the sensor's perception error on the gesture data, the raw gesture operation data can be preliminarily optimized.
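One simple form of the preliminary optimization step mentioned above is a moving-average filter that suppresses sensor noise in the raw gesture stream. The text does not specify which filter is used, so the window length and the filter choice here are assumptions for illustration.

```python
def moving_average(samples, window=3):
    """Smooth a 1-D stream of gesture samples with a sliding mean.
    Early samples use a shorter window so the output has the same length."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

raw = [0.0, 0.1, 0.9, 0.2, 0.3]  # the 0.9 spike stands in for sensor noise
print(moving_average(raw))
```

The spike at the third sample is spread across its neighbours, so a single noisy reading no longer produces a large spurious motion increment downstream.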

Building the MTBM Memory Response Based on an Improved Radial Basis Function Neural Network.
The objective function of the MTBM memory response algorithm based on the improved radial basis function neural network is defined over g(q_n) ≜ [Δx_t, Δy_t, Δz_t; task_t(n_1, n_2, n_3)]^T, which represents the expected motion of the computer end effector together with the corresponding subtask attribute features. A subtask here refers to the user's online division of the interactive task into several subtasks during the human-computer interaction process, followed by gesture-based robot control for each subtask.
In the process of human-computer interaction, the attributes of subtasks and the robot's motion state are constantly changing; that is, the sample data of the minimum resource allocation network is constantly updated to detect whether the current number of hidden nodes in the network can meet the accuracy requirements of training and learning.
That is, if the robustness of the current network is insufficient, a new hidden node is added. The judgment rule for network robustness is defined in terms of ‖g_n(q) − ĝ_n(q)‖, the norm of the difference between the MRAN output and its estimate. The value of swn is an integer: the rule requires that the geometric mean of the swn consecutive network output error norms before the i-th data object is input be greater than a threshold, which is equivalent to a sliding window of length swn used to estimate the prediction error and ensure that hidden nodes grow smoothly. The value of the threshold is mainly based on experience and is also affected by the required accuracy of training and learning. The value of l_n is limited as well, where l_n represents the center of the hidden node closest to the input data object n_p and c is a control parameter coefficient used to control the width of the newly added hidden node. If formula (11) does not hold, it means that the robustness of the current network already meets the accuracy requirements of training and learning, so there is no need to add new hidden nodes at that time; however, some weights in the network still need to be predicted and updated. In the update calculation, H_n represents the gradient matrix, K_n the Kalman filter gain matrix, b_n the basis function of the radial basis neural network, and V_n and C_n the covariance matrices of the noise variance and error of the Kalman filter strategy, respectively. In the covariance matrix update, C_n is a positive definite symmetric matrix whose function is to determine the direction of the gradient vector. Then the contribution of each hidden node to the network output is detected, and hidden nodes whose contribution to the network output remains small are deleted online.
Calculate the network output of the hidden nodes; then, through calculation and comparison, the hidden nodes with small contributions to the network output are identified. If n consecutive normalized output values l_n of a hidden node all satisfy the deletion condition, the hidden node is deleted and the dimension of the extended Kalman filter is reduced. Therefore, the advantages of the MRAN-based memory response algorithm are reflected in its simple, adaptive network structure and the online addition and deletion of hidden nodes, which saves the algorithm's storage space and reduces both its space complexity and its time complexity.
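The growth rule above can be compressed into a small sketch: a new RBF hidden node is added when the sliding-window geometric mean of recent output-error norms exceeds a threshold. The threshold, window length, and basis width are illustrative assumptions; the extended Kalman filter weight update and the pruning step are omitted for brevity.

```python
import math

class TinyRBF:
    """Minimal MRAN-style RBF network that grows hidden nodes online."""
    def __init__(self, error_threshold=0.1, window=3):
        self.centers, self.weights = [], []
        self.errors = []              # recent error norms (sliding window)
        self.threshold = error_threshold
        self.window = window

    def predict(self, x):
        # Gaussian basis functions with unit width (assumed for simplicity).
        return sum(w * math.exp(-(x - c) ** 2)
                   for c, w in zip(self.centers, self.weights))

    def observe(self, x, y):
        err = abs(y - self.predict(x))
        self.errors = (self.errors + [err])[-self.window:]
        # Geometric mean of the windowed error norms (growth criterion).
        gm = math.exp(sum(math.log(e + 1e-12) for e in self.errors)
                      / len(self.errors))
        if gm > self.threshold:
            self.centers.append(x)
            self.weights.append(y - self.predict(x))  # fit the residual

net = TinyRBF()
for x, y in [(0.0, 1.0), (1.0, 0.5), (0.0, 1.0)]:
    net.observe(x, y)
print(len(net.centers), round(net.predict(0.0), 3))
```

Each added node absorbs the current residual, so repeated observations at the same input drive the prediction toward the target while the window keeps node growth smooth.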

Design of the Human-Computer Interaction System for Intelligent Vehicle-Mounted Products Based on the Internet of Things
With the in-depth development of human-computer interaction technology, its application fields are becoming more and more extensive, and the requirements of interactive tasks are becoming more and more complex. In order to expand the application range of smart car products and improve their adaptability to complex environments, the research of human-computer interaction technology has received more and more attention.

System Network Topology.
The realization of the system's various business functions requires a network environment, as shown in Figure 1. The whole system is divided into three parts: the human-computer interaction terminal, the intelligent vehicle-mounted terminal, and the information service platform. The intelligent vehicle-mounted terminal is a front-end device installed on each vehicle, composed of a communication unit (DSRC unit, 5G unit), a perception unit (GPS receiver, gyroscope, gravity sensor), and an information processing unit. On the vehicle's intelligent terminal, the GPS device receives position data via satellite, the DSRC unit provides communication between the vehicle and the roadside, and the 5G unit accesses the Internet through mobile base stations.
The human-machine interaction terminal is a system peripheral that connects the intelligent vehicle-mounted terminal and the information service platform. Figure 1 shows that the human-computer interaction terminal communicates with the vehicle's intelligent terminal via Wi-Fi and connects to the information service platform via mobile cellular data. The human-machine interaction terminal displays the status information of the vehicle-mounted terminal and information from the service platform in real time. At the same time, it continuously feeds user input into the terminal and the backend so that the system runs normally.
The information service platform serves as a data center for preserving and distributing information. Real-time status information is collected from road network devices and human-machine interface devices, and the status of vehicles is monitored for road traffic management departments and local government bodies so that information can be provided in accordance with particular road conditions.

System Hardware Architecture.
The intelligent terminal has multiple functional interfaces. For two-way transmission of business information, it is connected to the human-computer interaction unit through the Wi-Fi interface. The human-computer interaction terminal also includes a mobile cellular communication interface and a communication link between terminals, which is used to connect the human-computer interaction terminal and the information service platform through mobile cellular communication.
The modules between the terminals are connected to each other to form the terminal hardware composition. Based on the system structure described in this document, the smooth operation of the software functions and services is finally realized.

System Software Architecture.
According to the operating functions of the human-machine interface, it is necessary to design interactive software that is stable, strong in real-time performance, and clear in its functional division.
This paper uses the MVC framework pattern, which simplifies program design and improves software quality. In the MVC pattern, tasks are handled independently by the model, view, and controller and finally coordinated to complete the entire business logic.
In the layered software representation, each layer is composed of corresponding functional modules; each module independently completes its own function and coordinates with the others to complete the overall system function, thereby reducing the coupling between modules. The human-computer interaction software can be divided into a communication layer, a data layer, a business control layer, and a presentation layer. Each layer has corresponding functional modules, and the modules together complete the overall task of the system.
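The MVC division of labour described above can be sketched with a hypothetical in-vehicle status display: the model holds vehicle state, the view renders it, and the controller mediates updates. Class names are illustrative, not taken from the system described in the paper.

```python
class VehicleModel:
    """Model: holds the vehicle state data."""
    def __init__(self):
        self.speed_kmh = 0
    def set_speed(self, v):
        self.speed_kmh = v

class VehicleView:
    """View: renders the model for the human-computer interaction terminal."""
    def render(self, model):
        return f"Speed: {model.speed_kmh} km/h"

class VehicleController:
    """Controller: mediates sensor/user input and updates the model."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_speed_update(self, v):
        self.model.set_speed(v)
        return self.view.render(self.model)

ctrl = VehicleController(VehicleModel(), VehicleView())
print(ctrl.on_speed_update(62))  # → Speed: 62 km/h
```

Because the view never mutates the model and the model never formats output, each layer can be tested and replaced independently, which is the coupling reduction the text describes.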

Specific Functions and Composition of the Human-Computer Interaction Module.
The vehicle-mounted module terminal described in this chapter takes the processor as the main control part, with an embedded operating system as the software platform, and mainly completes the following service functions. Information collection: gathering various pieces of information from motor vehicles equipped with on-board modules, such as speed, number of passengers, and safety status. Value-added services: as a terminal, broadcasting news programs, short stories, songs, and so on. Passenger flow calculation: using infrared technology to detect and distinguish passengers from objects and count the passengers getting on and off the bus. Electronic stop reporting: a function equivalent to the bus stop sign that uses radio-frequency technology, with a reader reporting the stop on the electronic display of the bus sign, reducing the driver's workload. Wireless communication: on the basis of the Internet, all the specific data and business information of motor vehicles are uploaded to the traffic management department; the vehicle-mounted module is connected to the Internet, and a dedicated upper-layer protocol is designed to collect specific information. The entire vehicle-mounted module is composed of an antenna, a main control module, a power supply, a speaker, a microphone, and a display screen. We generally place the on-board module at the rear of the motor vehicle to collect the state of the vehicle and send it to the traffic management department.
We chose a Samsung-based ARM development board, which has complete functions and excellent performance and uses a reduced instruction set with the processor as its core. Advantages such as low cost, high yield, light weight, and low energy consumption are enough to meet the needs of the vehicle module design. It has a memory management unit and a cache architecture for instructions and data, and it provides many interfaces, which directly reduces the need for additional equipment. Table 1 shows the ARM development board specifications.
The main control module's processor and the radio-frequency reader are connected through an asynchronous transmission interface. Active tags can be used for the tags and readers of the on-board module; the readers can be deployed on motor vehicles, and protective devices must be added around the readers to prevent the reader signals of many vehicles from interfering with one another and producing erroneous readings. A metal cover can be added for shielding. There are many brands of tags to choose from, but as far as possible we choose those with low price, high sensitivity, low energy consumption, and a universal reader interface. The antenna must be able to distinguish each direction.

Market Scale and Forecast of Smart Car Products.
At present, China's smart vehicle equipment market is in a stage of rapid development. Figure 2 shows the growth of the market size of China's smart vehicle equipment, including central control, dashboard, head-up display, voice control (virtual personal assistant), gesture control, and other devices, all of which have increased to varying degrees; the total market is also on the rise.

Scene Elements.
In the design of the in-vehicle information system, scene elements mainly include the spatial layout of the equipment in the cabin and the related attributes of the equipment. The International Organization for Standardization has put forward standardized recommendations for the reachability and layout of traditional in-vehicle controls, including ISO 3958/SAE J287 and ISO 4040/SAE J680, 1138, and 1139. However, as touch control screens and HUD displays are gradually introduced into the cabin, the layout of the interior space has become more diversified, and there is not yet a complete standard for the layout and operating area of touch-panel-based interiors. We draw the following two conclusions: (1) most gestures are completed in the limited area between the steering wheel, the rearview mirror, and the shift lever, and media function equipment is placed in the upper part of the center console; (2) the control operation range for media functions is smaller than that for vehicle functions.
In a study by the British consulting firm SBD, drivers' touch-screen control was examined, as shown in Figure 3. If a driver in the most comfortable driving posture cannot reach the most remote locations of the touch screen, then after a period of time he will not want to make the extra movement needed to operate functions in those remote locations. Therefore, when considering the relationship between in-car equipment and the driver, the driver's control range should be considered. By reducing the operating range and depth of the touch screen and other devices, the driver can operate them in a comfortable state. As shown in Figure 4, 60% of drivers hope that the display device in the car can be more digital and intelligent, and drivers hope that the applications on their mobile phones can be used on the in-car display. Combined with the above data analysis, this shows that drivers want safer, better user experience devices that can replace mobile phones while still offering the phones' applications.

Experience Perception of Smart In-Vehicle Devices.
From the data analysis of the figure, it can be seen that 59% of users use WeChat as non-driving-related software, while only 24% of users use music software, a difference reflected in the heights of the columns. Although there is currently no in-car version of WeChat for driving scenarios, and the survey results also show that users recognize certain safety risks in using mobile phones while driving, drivers still need to use WeChat.

Statistics on the Duration of the Send-Current-Position Task under Human-Computer Interaction.
Figure 5 shows statistics for the duration of the task of sending the current location. We classify different smart on-board devices into grades A-F and conducted experimental tests on 6 subjects; the test results are shown in Figure 5. After users experience intelligent connected products, the automotive HMI experience is difficult to describe accurately. According to psychological research, all human cognition and experience are evaluative: when people perceive and experience an object, they judge it on a good/bad, positive/negative dimension, and this judgment is expressed through behavior and psychological evaluation. Traditional product experience evaluation uses subjective evaluation methods, whose results suffer from strong subjectivity and poor experimental repeatability. Figure 6 shows the variance and mean analysis of the duration of the send-location task, using facial expressions, finger movements, psychological scales, and other behavioral and psychological responses during the product experience as objective data for quantitative evaluation of the user experience. In order to ensure the accuracy of the evaluation results, user experience testing should follow strict test methods and procedures. As shown in Table 2, the average task duration for application A is about 18.19 seconds and for application B about 23.33 seconds; the single-factor analysis of variance also shows a significant difference between the two.
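The single-factor variance analysis mentioned above can be sketched as a one-way ANOVA comparing the task-duration samples of the two applications. Only the group means (about 18.19 s and 23.33 s) come from the text; the individual trial values below are invented for illustration.

```python
from statistics import mean

def one_way_anova_f(group_a, group_b):
    """F statistic for a one-factor ANOVA with two groups:
    between-group mean square over within-group mean square."""
    grand = mean(group_a + group_b)
    n_a, n_b = len(group_a), len(group_b)
    ss_between = (n_a * (mean(group_a) - grand) ** 2
                  + n_b * (mean(group_b) - grand) ** 2)
    ss_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b))
    df_between, df_within = 1, n_a + n_b - 2
    return (ss_between / df_between) / (ss_within / df_within)

app_a = [17.8, 18.4, 18.0, 18.6, 18.2, 18.1]  # hypothetical trials, mean ~18.19
app_b = [23.0, 23.7, 23.2, 23.6, 23.3, 23.2]  # hypothetical trials, mean ~23.33
print(one_way_anova_f(app_a, app_b))
```

A large F value (far above the critical value for df = 1, 10) is what the text's "significant gap" between the two applications corresponds to.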

Human-Computer Interaction User Experience Analysis of Smart In-Vehicle Products. It is difficult for users to accurately describe their feelings after experiencing the HMI products of an intelligent connected car. According to psychological research, all human cognition and experience are evaluative: when people perceive and experience an object, they judge it on good/bad and positive/negative dimensions, and this judgment is expressed through people's behavior and psychological evaluation. Traditional product experience evaluation uses subjective evaluation methods, whose results suffer from shortcomings such as strong subjectivity and poor experimental repeatability. In the process of experiencing the product, the user's behavioral and psychological responses, such as finger movements, facial expressions, and psychological-scale scores, can objectively reflect the user's true feelings, and the behavioral data and psychological feelings can corroborate each other.
Therefore, facial expressions, finger movements, psychological scales, and other behavioral and psychological responses recorded while users experience the product are used as objective data for the quantitative evaluation of the product user experience. To ensure the accuracy of the evaluation results, user experience testing should follow strict test methods and test procedures. To this end, this article describes the composition and technology of the user experience test evaluation system, as shown in Figure 7.
By calculating each participant's average ASQ score on each task, one can view the average ASQ value of a specific task and report the confidence interval to show the variability in the task data. To determine whether the two products differ significantly in satisfaction on different tasks, a paired two-sample t-test of means was performed.
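The paired two-sample t-test on per-task ASQ scores can be sketched in stdlib-only Python. The per-participant scores below are hypothetical examples for illustration, not the study's data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Paired two-sample t statistic for per-participant ASQ scores."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    # t = mean of paired differences / standard error of the differences
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical per-participant ASQ means for one task on two products.
asq_a = [5.2, 6.0, 4.8, 5.5]
asq_b = [6.0, 6.5, 5.6, 6.1]
t = paired_t(asq_a, asq_b)
# |t| is then compared against the critical value of the t distribution
# with n - 1 degrees of freedom to obtain the P value.
print(round(t, 2))  # → -9.0
```

A P value below 0.05 for a task would indicate a significant satisfaction difference between the two products on that task.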
The test results are shown in Table 3. After testing, the P value for the navigation and telephone tasks between the two products is greater than 0.05; therefore, there is no significant difference in task satisfaction in the execution of the navigation and telephone tasks. For the music task, the P value between the two products is less than 0.05, so task satisfaction differs significantly between the two products on that task, and calink performs better than carlife. The standardized data results are shown in Figure 8. The subjective and objective data, including the happiness ratio and the denoised data, are organized into Excel tables by task, and the percentage standardization method is applied to these data through MATLAB programming. In the standardization, because task success is nominal data while the other metrics are ratio data, task success is not considered for the time being and only the ratio-type data are standardized. After standardization, all data are converted to the range between 0 and 1 and have a positive relationship with the final performance result.
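The percentage (min–max) standardization described above, done by the authors in MATLAB, amounts to mapping each ratio-type metric onto [0, 1]. A minimal Python sketch, with hypothetical values:

```python
def minmax_standardize(values):
    """Percentage (min-max) standardization: map ratio data onto [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant column: no spread to scale
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical ratio-type metric (e.g. task durations in seconds).
durations = [18.19, 23.33, 20.00, 25.50]
print(minmax_standardize(durations))
```

For cost-type metrics such as duration, where smaller is better, the standardized value would additionally be inverted (1 − x) so that larger values consistently indicate better performance, preserving the positive relationship with the final performance result.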

Conclusion
This article mainly focuses on the human-computer interaction design of intelligent vehicle-mounted products based on the Internet of Things. The human-computer interaction design system for intelligent vehicle-mounted products designed in this paper has a good application effect, and users report a strong sense of experience for the navigation, telephone, and other functional applications of these products. The research in this paper applies human-computer interaction technology to smart car products through Internet of Things technology to improve the realism and immersion of the user's human-computer interaction experience. The innovation of this paper is the design of the experimental evaluation of the human-computer interaction experience: from the selection of user goals and experience scenes to the selection of human-computer interaction equipment, it provides a high-quality human-computer interaction experience in all respects, which helps improve people's sense of interactive experience. Some problems remain in the research of human-computer interaction, but the work is still valuable for expanding the application field; it needs to be improved and supplemented in future research so that a more ideal interactive system can be reached.

Data Availability
No data were used to support this study.

Conflicts of Interest
The author declares no conflicts of interest.