Virtual Reality Interactive Method and Device Based on Wireless Communication Tracking



Introduction
With the development of science and technology, virtual reality technology has gradually penetrated into many aspects of social life in recent years. With its lifelike performance, virtual reality gives people an immersive experience and is embraced by more and more people. It not only brings an unprecedented fresh experience but also brings great convenience to many industries. In March 2014, when the news broke that Facebook had spent nearly $2 billion to acquire Oculus VR, many technology giants rushed to enter this emerging market. To meet market demand, companies in gaming, film, entertainment, and other popular industries have invested heavily in developing virtual reality technology. Virtual reality technology essentially collects various kinds of data from real life, processes the signals with a computer, combines the raw data with various output devices, and transforms it into a virtual world that makes people feel present in the scene. At present, the basic principle of widely used virtual reality interaction technology is to obtain the user's three-dimensional spatial position through an interface, project that position into the virtual scene, collect the user's position information within the virtual scene, and then render and display the result in front of the user based on the collected location information. In virtual reality interaction based on wireless communication tracking technology, the sensor is therefore a critical link. There are two main types of sensors: sensor devices that users wear during human-computer interaction, such as helmets and gloves, and sensing devices that perceive and collect environmental information around the user, such as temperature and humidity sensors.
Therefore, the flexible use of sensors in virtual reality interaction technology studied in this article has both theoretical and practical significance.
In the information age, with the increase of communication data rates and capacity, the advantages of wireless communication tracking and virtual reality technology have attracted the attention of many countries. The United States was the first country in the world to conduct research into space optical communication technology, led mainly by two departments, NASA and the Department of Defense. The Lincoln Laboratory at the Massachusetts Institute of Technology (MIT) began researching free-space optical communication technology for the US Air Force decades ago [1]. In signal tracking technology, one of the core technologies is frequency identification. Radio frequency identification (RFID) is an older technology originally used by the KGB (the main security agency of the Soviet Union until 1991) and the US National Security Agency (NSA). Lumpkins came up with a similar idea: design an antenna that converts a specified field into electrical energy, like a coiled wire that generates charge when passing through a magnetic field, and then connect it to something that can record information for later playback [2]. Through unremitting efforts over the past ten years, outdoor tracking research and technology have developed rapidly. It is expected that in the near future a similar trend will appear in indoor scenes, where people spend more than 70% of their time. Dardari studied the indoor wireless tracking of mobile nodes from the perspective of signal processing [3]. As wireless communication tracking technology has matured, virtual reality technology has also developed vigorously. Maples-Keller reviewed existing evidence on the effectiveness of incorporating virtual reality technology into the treatment of various mental illnesses, with special attention to exposure-based interventions for anxiety disorders.
Sensory information is transmitted through head-mounted displays and dedicated interface devices. These devices track head movement so that the displayed images change naturally with it, creating a sense of immersion. Virtual reality technology allows controlled delivery of sensory stimulation by the therapist and is a convenient and cost-effective treatment method [4]. With the rapid increase in the use of virtual reality in the treatment of mental health problems, Valmaggia identified 24 controlled trials published between 2012 and 2015, finding that virtual reality technology is effective for individuals with a range of serious mental health problems and has great potential in mental health research [5]. Bastug observed that the concepts of wireless augmented and virtual reality (AR/VR) have swept the entire 5G ecosystem, stimulating unprecedented interest in academia, industry, and beyond. However, the success of an immersive virtual reality experience depends on solving a large number of major challenges spanning multiple disciplines. He described the main requirements of wirelessly interconnected virtual reality, selected some key driving factors, and then introduced the research approach and its potential major challenges [6]. As virtual reality technology has matured, the recognition of visual cues in facial images has been extensively explored in the field of computer vision. However, due to factors such as inconsistent robustness, low efficiency, high computational overhead, or strong dependence on complex hardware, theoretical analysis usually does not translate into widely deployed assistive human-computer interaction (HCI) systems.
Zhang proposed a novel gender recognition algorithm, a modular eye-center positioning method, and a gaze gesture recognition method, aiming to improve the intelligence, adaptability, and interactivity of HCI systems by combining demographic data (gender) and behavioral data (gaze) to enable a range of real-world assistive technology applications [7]. Li established a list of 23 human-computer interaction (HCI) visualization methods, identified 5 selection methods, demonstrated and provided evidence of how to classify visualization methods under each selection method, and discussed the limitations and advantages of each [8].
The innovations of this article are as follows. Traditional wired communication requires different transmission media, and transmission efficiency is often limited by the medium. Wireless communication technology greatly reduces the dependence on transmission media; signal transmission stations and electromagnetic waves alone can meet the needs of wireless communication. Wireless communication is fundamentally a broad class of signal transmission processes and is not limited by factors such as time, location, and capacity during transmission, which gives wireless communication technology its diverse modes. In wireless sensor networks, to provide location information for monitoring applications, the sensor nodes themselves need accurate positioning. As one of the important technologies of wireless sensor networks, node localization can not only report the location of an event when an emergency occurs but also provide services for other modules in the network [9]. Wireless sensor network node localization algorithms can reduce cost and improve accuracy but are time consuming; localization algorithms based on mobile nodes are also time consuming, and their accuracy needs improvement. According to the current state of research on positioning algorithms, there are roughly two types of positioning methods: ranging-based algorithms and non-ranging algorithms, along with a small number that involve mobile node-assisted positioning.
(1) Location Algorithm Based on Ranging. As the name indicates, ranging is a prerequisite for running this algorithm [10,11]. We first measure the distance or angle between nodes and then use this information to estimate the location of the unknown node. Figure 1 shows a schematic diagram of ranging-based positioning.
There are two commonly used ranging methods, namely, the calculation method based on RSSI and the method based on TOA.
RSSI-based ranging is inexpensive to implement and places relatively low requirements on experimental equipment, but because RSSI measurements show poor regularity in practical applications and are affected by many factors, the results can contain large errors [12]. RSSI ranging measures the strength of the radio frequency signal received from the target node, whose transmission power is known, and then estimates the distance to the sending node according to the law of signal propagation and the channel model.
Assuming that the transmitting antenna is omnidirectional, the transmit power density at distance a from the transmitting point in free space is the effective transmitted power spread over the unit area of the sphere, and the received power is the product of this power density and the effective antenna area. Therefore, the received power of the antenna at distance a from the transmitting point in free space can be obtained as

P_d(a) = (P_f G_f / 4πa²) · S,

where P_d(a) is the received power, P_f is the transmit power, G_f is the transmit antenna gain, a is the distance between the transceivers, and S is the effective area of the receiving antenna. S can be calculated by the following formula:

S = G_j λ² / 4π,

where G_j is the gain of the receiving antenna and λ is the wavelength of the received signal.
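As a quick numerical check, the two free-space formulas above can be combined in a short sketch. The variable names mirror the symbols in the text, and the example values (unit gains, λ = 0.125 m) are illustrative only:

```python
import math

def friis_received_power(p_f, g_f, g_j, lam, a):
    """Free-space received power P_d(a) = (P_f * G_f / (4*pi*a^2)) * S,
    with effective receive aperture S = G_j * lam**2 / (4*pi)."""
    s = g_j * lam ** 2 / (4 * math.pi)               # effective antenna area
    power_density = p_f * g_f / (4 * math.pi * a ** 2)
    return power_density * s

# Doubling the distance quarters the received power (inverse-square law).
p_10m = friis_received_power(1.0, 1.0, 1.0, 0.125, 10.0)
p_20m = friis_received_power(1.0, 1.0, 1.0, 0.125, 20.0)
```

The ratio p_10m / p_20m equals 4, confirming the inverse-square dependence on the transceiver distance a.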
(2) Nonranging Positioning Algorithm. Compared with the ranging-based positioning algorithm, the nonranging positioning algorithm does not need very accurate distance or angle information or other physical measurements; instead, it uses the neighbor relationships between nodes and the network connectivity. This greatly reduces the hardware requirements for nodes, but compared with the ranging-based algorithm, its positioning accuracy is also relatively low [13,14]. Figure 2 is a schematic diagram of nonranging positioning.
Nonranging localization algorithms, as the name suggests, do not require distance measurements to perform localization; they utilize the network connections between nodes, and each node's information must be analyzed before accurate localization calculations can be performed.

Location Algorithm Based on Mobile Node
The two positioning algorithms introduced above require fixed target nodes. Generally, the more target nodes are deployed, the higher the positioning accuracy. But once all positioning work is completed, these target nodes become ordinary nodes, which causes great waste and increases cost [15]. Therefore, by combining the ranging and non-ranging methods and performing location estimation through database matching, mobile node localization can carry out location measurement at low cost and high efficiency. The grid diagram is drawn on the basis of an area fully covered by the cell grid, reducing waste and increasing efficiency when nodes become common points.
Divide the target area into a grid graph as shown in Figure 3. If the moving target signal presents a virtual signal point at each grid vertex, then a complete traversal of the full coverage area of the grid can be achieved [16].

Wireless Communications and Mobile Computing
Assume that the area of a grid unit is S = l_w², where l_w is the side length of a grid coverage unit; l_w can be increased or decreased according to the communication coverage radius R of the target node.
V represents the position of the node and m represents the area covered by the node.
To ensure that all target nodes can receive high-quality signals, the moving target signal point needs to traverse the vertices of all cells in the grid, as shown in Figure 4 [17,18]. Here, we use the undirected graph PB = (T, G) to represent the path of the moving target signal traversing the cells, where T represents the set of all vertices in the grid and G represents the set of distances from the vertex where the target signal point currently sits to the remaining vertices. The shortest complete traversal path of a mobile beacon is therefore defined as the path that visits every vertex while minimizing the total distance Σ f(g), where f(g) represents the distance between the vertices of two mesh units.
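The complete traversal described above can be sketched as follows. Finding the exact shortest traversal is a travelling-salesman-type problem, so this sketch uses a greedy nearest-neighbour walk as an illustrative approximation, not the true minimizer; the function names and the example 2 m × 1 m area are assumptions:

```python
import math

def grid_vertices(width, height, l_w):
    """All vertices T of a grid covering a width x height area with cell side l_w."""
    nx = int(width // l_w) + 1
    ny = int(height // l_w) + 1
    return [(i * l_w, j * l_w) for i in range(nx) for j in range(ny)]

def traversal_length(path):
    """Total traversed distance: the sum of f(g) along the visiting order."""
    return sum(math.dist(path[k], path[k + 1]) for k in range(len(path) - 1))

def nearest_neighbour_path(vertices):
    """Greedy approximation: always move to the closest unvisited vertex."""
    remaining = list(vertices)
    path = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: math.dist(path[-1], v))
        remaining.remove(nxt)
        path.append(nxt)
    return path

verts = grid_vertices(2.0, 1.0, 1.0)    # 3 x 2 = 6 grid vertices
path = nearest_neighbour_path(verts)    # visits every vertex exactly once
```

On this small example the greedy walk happens to find a boustrophedon-style route of total length 5 cell sides; on larger grids it only approximates the optimum.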

Wireless Sensor Network Target Tracking Technology.
Because traditional wireless sensor network technology does not take into account that nodes may move, the node position must be repeatedly refreshed for positioning whenever a node moves. This kind of periodic, repetitive work leads to extremely large positioning energy consumption while reducing the network's service life [19]. For this reason, coordinated signaling between nodes in the wireless sensor network is used to complete the positioning and tracking of moving targets. Figure 5 is a schematic diagram of target tracking.
Since the current position of the moving target can be known, cooperative processing can be carried out with each node simultaneously, and the positioning state of the target node at the next moment can be predicted, thereby obtaining the moving trajectory of the target [20,21]. Compared with traditional tracking algorithms, this approach has a huge advantage and has played a large role in both military and civilian fields since its development. In recent years, more and more scholars have analyzed and studied the distribution characteristics of wireless sensor networks, hoping to reduce network energy consumption while improving tracking accuracy [22]. This type of research can be divided into the five types shown in Figure 6: particle filters and binary targets are related to each other but differ in which targets are tracked; quantity transfer, as the name implies, means tracking by transferring the number of targets; predictive targets means tracking by prediction in advance; and clustered targets means tracking targets of the same class. A portable sensor helmet is a device that can be removed and worn at any time. It can locate the user's head promptly, monitor the user in real time, and refresh the user's position and direction in time [23]. This enables users to adjust their direction and field of view in order to observe the surroundings in a virtual environment. This type of helmet is also equipped with a position detector, a direction detector, and an ultrasonic detector. The display device usually uses an LED or CRT. When a user wears a helmet, not only is a real-time response required, but the weight of the helmet must also be limited, and there are stricter requirements on the width of the field of view.

Virtual Reality Human-Computer Interaction
The data glove mainly serves to enhance the user's actual experience. The user can operate directly by changing the position of a finger or hand, so that the relevant virtual elements in the virtual environment can be manipulated without using a mouse and keyboard. Currently, there are four common types of data gloves, and their cost varies with performance. While replacing the mouse and keyboard, a data glove must be light in weight while ensuring control accuracy and good real-time feedback [24].
The body monitoring sensor mainly detects and collects data on the user's body motion and posture, so as to drive the corresponding movements in the virtual environment in time. At this stage, human tracking equipment mainly includes two types: human tracking clothing and data clothing [25,26].
The voice interaction detector mainly collects and recognizes the user's voice information and then activates corresponding commands or events based on this information. In addition, the collected data can also convey the sounds produced in the environment to the user. It mainly includes speech recognition, sound detection, and speech synthesis.

Environmental Sensing Equipment in Virtual Reality Human-Computer Interaction Technology
The virtual environment created by the computer is not necessarily a scene that already exists in real life; it can also be a virtual scene created by human imagination. To realize the user's perception of this environment, environment-sensing equipment is needed. At present, familiar environmental sensing devices generally include visual sensors, auditory sensors, tactile sensors, force sensors, and olfactory sensors. Among them, the visual sensor presents the three-dimensional scene created by the computer before the user's eyes for direct observation, which requires that the visual characteristics of the sensor remain consistent with the principles of human vision; otherwise, the visual information seen by the user will be distorted. The auditory sensor feeds back the sound wave information emitted in the virtual environment to the user, so that the user can judge, analyze, and choose based on hearing. This function requires the auditory sensor to cover the same frequency range as the human ear. Sensors for the other sensory systems, such as touch, force, and smell, enable users to perceive the virtual environment from all angles, including pressure distribution, temperature, and odor. To move these flexibly into practice, the sensors must be very light and small so as to be easy to carry. Human-computer interaction is the science of designing, evaluating, and implementing interactive computer systems for use by people, together with research on the main phenomena involved.
Engineering science is the theoretical basis of HCI technology, while multimedia technology, virtual reality technology, and HCI technology are intertwined and interpenetrate one another. Environment-aware sensing devices are interactive computer systems used by people that respond to changes in the external environment.

Sensor System Design
3.1.1. System Structure. Generally speaking, a wireless sensor network target tracking system includes sensor nodes, base stations, and a remote monitoring center, and the system is generally divided into three layers: the perception layer, the network layer, and the application layer. Perception layer: there are two main types of perception devices, namely automatic perception devices, which can automatically perceive external physical information and include RFID, sensors, and smart appliances; and manual information-generating devices, which include smartphones, personal digital assistants (PDAs), and computers. Network layer: the network layer, also known as the transport layer, includes the access layer, aggregation layer, and core switching layer. Application layer: the application layer is divided into a managed services layer and an industry application layer. After the network initialization steps are completed and a target appears, the sensor measurement nodes send signals to the moving target. Through the target tracking calculation, the target is tracked and predicted, and the current target node status information is sent to the base station in real time. The estimated target node location is transmitted back to the monitoring center by the WSN network manager via the GPRS network, and the motion track of the target node can be displayed in the monitoring center in real time, realizing real-time monitoring and tracking of the target node's position. The WSN target node location tracking network structure is shown in Figure 7.
3.1.2. System Function.
The wireless sensor network (WSN) target tracking system can flexibly and quickly form a wireless network; it can then obtain information on the target node's positioning status in all directions in real time, quickly calculate the target node's position while predicting its state, statistically analyze the target node's movement trajectory, and accurately display its state information in the monitoring center. The functions of the WSN target tracking system are shown in Table 1.

RSSI Tracking and Positioning Algorithm Based on Ranging
Suppose the distance a_1 from the transmitter is taken as the initial reference position, the received power of a certain antenna at a_1 is P_d(a_1), and the received power at distance a from the transmitter is P_d(a); then, according to (4), the following relationship can be obtained:

P_d(a) / P_d(a_1) = (a_1 / a)²,

where a_1 is the reference distance and a is the distance between the transceivers.
The above relationship shows that in free space the signal decreases as the propagation distance increases; in actual situations the attenuation is significantly stronger, so we can improve (5) to

P_d(a) = P_d(a_1) · (a_1 / a)^n,

where n represents the degree of signal attenuation under actual conditions, which can be obtained statistically from experience or field measurement. Usually 2 < n < 4, and the effect is more obvious in an indoor environment.
In engineering applications, signal power is often expressed logarithmically with dBm as the unit, so equation (6) can be changed to

P_d(a) [dBm] = P_d(a_1) [dBm] − 10 n log10(a / a_1).

When applying this model to actual engineering, it is found that even when the distance between the transmitting and receiving machines is the same, different transmitting and receiving directions or measurement times yield different signal strengths. After collecting and analyzing part of the data, it is found that the deviation is usually normally distributed; that is, the path loss at a given distance is a normally distributed random variable. Therefore, we improve (7) to obtain the following model:

P_d(a) [dBm] = P_d(a_1) [dBm] − 10 n log10(a / a_1) + X_σ,

where n is the path loss exponent, indicating how quickly path loss grows with distance; this exponent depends on the specific propagation environment, and X_σ is Gaussian random noise with zero mean.
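The log-distance model and its inversion for distance estimation can be sketched as follows. This is a minimal sketch under assumed parameter values (reference power −40 dBm at 1 m, exponent n = 3); the function names are illustrative, and the round trip is shown without shadowing so the distance is recovered exactly:

```python
import math
import random

def rssi_dbm(p1_dbm, a, a1=1.0, n=3.0, sigma=0.0):
    """Log-distance model: P(a) = P(a1) - 10*n*log10(a/a1) + X_sigma (dBm).
    sigma is the standard deviation of the zero-mean Gaussian shadowing term."""
    shadowing = random.gauss(0.0, sigma) if sigma > 0 else 0.0
    return p1_dbm - 10 * n * math.log10(a / a1) + shadowing

def distance_from_rssi(p_dbm, p1_dbm, a1=1.0, n=3.0):
    """Invert the noise-free model to estimate the transceiver distance."""
    return a1 * 10 ** ((p1_dbm - p_dbm) / (10 * n))

reading = rssi_dbm(-40.0, 8.0)                 # simulated reading at 8 m, no shadowing
estimate = distance_from_rssi(reading, -40.0)  # recovers the 8 m distance
```

With sigma > 0 the same inversion yields a noisy estimate, which is why the text models path loss as a normally distributed random variable rather than a deterministic value.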

Positioning Error.
When the calculated node position of the target signal identification point lies elsewhere in the section, a positioning distance difference is produced, and the farther the target signal identification point is from the communication circle, the greater the error. Because environmental changes make the RSS distribution irregular, the node error cannot be reduced well under certain conditions. Next, the error data will be collected and sorted to analyze the maximum error and identify the factors that affect it.
Nodes P_1 and P_2 shown in Figure 8 are the selected target signal identification points. Suppose their coordinates are (−a, 0) and (a, 0), respectively, the communication radius is r, and the distance between target signals is l. The target signal identification points immediately before and after the node's communication range are then P_0(−a − l, 0) and P_3(a + l, 0), respectively. Since nodes P_1 and P_2 are the target signal identification points for entering and leaving the communication circle, the position of the node within the communication circle must lie between P_0 and P_1 and between P_2 and P_3. Taking the four points as centers, the intersection formed by the four circles of radius r is the area where the node is located.
The intersection area is determined by the following conditions:

(x + a + l)² + y² > r²,
(x + a)² + y² ≤ r²,
(x − a)² + y² ≤ r²,
(x − a − l)² + y² > r².

When 0 < a < r − l, the four vertices of the intersection area where the node is located are F_1, F_2, F_3, F_4, as shown in Figure 8, and their coordinates follow from the intersections of the four circles. Since P_1 is the first target signal identification point to enter the node's communication circle and P_2 is the last, when we use the positioning algorithm to estimate the node's position, we select P_1 and P_2 as the target signal identification points. Assume the node position obtained through the RRML algorithm is the point F_e in Figure 8. By the above analysis, F_e must fall in the area whose vertices are F_1, F_2, F_3, F_4, and the maximum positioning error of the node is the maximum distance to the four vertices. Therefore, the maximum positioning error is expressed as max(F_e F_1, F_e F_2, F_e F_3, F_e F_4).
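Membership in this intersection region can be checked pointwise in a short sketch. Note that only the two left-hand conditions appear explicitly in the text; the two symmetric conditions for the circles at P_2(a, 0) and P_3(a + l, 0) are inferred from the geometry, and the example values are arbitrary:

```python
def in_intersection(x, y, a, r, l):
    """Inside the radius-r circles centred at P1(-a, 0) and P2(a, 0),
    outside those centred at P0(-a-l, 0) and P3(a+l, 0)."""
    def inside(cx):
        return (x - cx) ** 2 + y ** 2 <= r ** 2
    return inside(-a) and inside(a) and not inside(-a - l) and not inside(a + l)

hit = in_intersection(0.0, 2.5, a=1.0, r=3.0, l=1.0)   # within both inner circles, beyond both outer ones
miss = in_intersection(0.0, 0.0, a=1.0, r=3.0, l=1.0)  # origin is inside all four circles
```

Such a predicate can be used to sample the feasible region numerically and so estimate the vertex positions F_1 through F_4 when a closed form is inconvenient.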
The RRML algorithm estimates that the node position may appear anywhere within the range enclosed by F_1, F_2, F_3, F_4. Therefore, to simplify the calculation of the maximum error, the maximum node positioning error on the x-axis can be expressed as the distance between F_2 and F_4, and the maximum error on the y-axis as the distance between F_1 and F_3.
When 0 < a < r − l, the maximum error in the x-axis direction is l, and the maximum error in the y-axis direction is given by the distance between F_1 and F_3. As the value of a grows, the intersection area enclosed by F_1, F_2, F_3, F_4 changes, and when r − l ≤ a < r − l/2, F_3 no longer exists. At this time, the maximum error in the x-axis direction is still l, and the maximum error in the y-axis direction is √(r² − a²). When a grows further, points P_1 and P_2 no longer exist; the intersections of the ring with the x-axis are then (−a + r, 0) and (a − r, 0), so the maximum error in the x-axis direction is 2(r − a), and the maximum error in the y-axis direction is √(r² − a²). In summary, by formula (12), the maximum predicted error of the node position on the x-axis is l. By formula (8), the estimated error of the node position on the y-axis is largest at a = r − l, with maximum value √(2rl − l²). As a result, the maximum node position prediction error decreases as the distance between the target signal identification points decreases. Therefore, to obtain higher accuracy, the distance l between the moving target signal identification points should be reduced.
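The piecewise maximum-error bounds above can be collected into a small helper. The case boundaries follow the text (x-error l while P_1 and P_2 exist, 2(r − a) once they vanish; y-error √(r² − a²)); the variable names and the example values r = 5, l = 1 are illustrative:

```python
import math

def max_error_x(a, r, l):
    """Maximum x-axis positioning error: l while P1 and P2 exist
    (a < r - l/2), and 2*(r - a) once they vanish."""
    return l if a < r - l / 2 else 2 * (r - a)

def max_error_y(a, r):
    """Maximum y-axis positioning error sqrt(r^2 - a^2)."""
    return math.sqrt(r * r - a * a)

r, l = 5.0, 1.0
peak_y = max_error_y(r - l, r)   # sqrt(r^2 - (r-l)^2) = sqrt(2*r*l - l**2) = 3.0
```

Evaluating max_error_y at a = r − l reproduces the √(2rl − l²) bound stated in the text, and both bounds shrink as the beacon spacing l is reduced, matching the conclusion above.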

System Performance Test.
In the RRML positioning algorithm, mobile signal identification points can be used to form network reference coordinate information. The location of the mobile signal identification point can be obtained through GPS or other devices capable of positioning. This experiment uses the Pioneer3 robot as the mobile signal identification point. The Pioneer3 robot can collect surrounding environment information through its laser rangefinder and sonar sensors, draw an environment map, and establish a relative coordinate system based on the data obtained, realizing relative positioning of the nodes to be located; this can meet the needs of both indoor and outdoor positioning.
The Pioneer3 robot has sensors such as an odometer, gyroscope, sonar ring, and laser rangefinder. It adopts a client-server (C-S) structure and is composed of an industrial computer and an H8S microcontroller. An independent server is responsible for the underlying control and for obtaining sensor information, such as motor control, sonar data collection, and code disk data reading. The camera can be rotated. Auxiliary equipment such as laser rangefinders and manipulators can be queried for data on the client. The Pioneer3 robot can also use its laser rangefinder and sonar sensors to perform all-round collection of the environment, thereby constructing complete map information.
In the experiment, 40 sensor nodes were randomly deployed, and Pioneer3 was used as the movable carrier for the mobile signal identification point, with a sensor node mounted on it. The mobile signal identification point moves randomly within the range of the deployed nodes and broadcasts its current position information; nodes perform self-positioning after receiving the information. The experimental parameters are shown in Table 2. Each experiment was repeated 30 times, and the positioning result is the average over the 30 runs.
The experimental results are shown in Figure 9. When the mobile signal identification point broadcasts its current position information at an interval of 6 s, that is, a mobile signal identification spacing of 0.6 m, the average positioning error of the node is about 1.6 m; when the broadcast interval is 3 s, that is, a spacing of 0.11 m, the average positioning error of the node is about 1.76 m. The mobile signal recognition point interval in the experiment represents the change in the external environment, and the node's positioning error changes as the external environment changes. The positioning error of the node in the experiment deviates slightly from the simulation result, which is due to excessive interference factors such as environmental noise. At the same time, the positioning error of the node increases as the interval between mobile signal identification points increases, which is consistent with the simulation result.

Discussion
In the human-computer interaction of virtual reality, there are two main ways for the user to interact with the created virtual environment. One is to guide the user to perceive and observe the surrounding virtual environment through the sensory system and to operate and control the virtual objects in front of them through actions or voice instructions. The second is that, when the virtual environment receives the user's request, the environment information is fed back to the user through the sensor devices. From these two methods it can be seen that if virtual reality is to realize human-computer interaction, the computer must be able to detect the user's actions or voice instructions through a variety of sensor devices, so that the terminal can analyze and process the collected information and quickly feed the results back to the user through the sensors, giving the user a strong sense of immersion and desire to operate in the virtual environment. Therefore, to realize human-computer interaction in virtual reality well, the sensors at the core of wireless communication tracking technology must be improved. The program will conduct original research on key scientific issues such as multimodal input, perception, fusion, interaction, and presentation in human-computer interaction and human-computer collaboration, make key breakthroughs in the fusion and coexistence of multiple interaction models and the deep understanding of fused multimodal interaction intentions, and form research features and advantages in the field of gesture and somatosensory interaction.

Conclusions
At this stage, researchers at home and abroad are paying more and more attention to virtual reality human-computer interaction technology, which is developing by leaps and bounds. Many research results can be applied to the military, commerce, the entertainment industry, the construction industry, and people's livelihood, promoting the development of these industries and in turn encouraging deeper research on virtual reality human-computer interaction technology.
There are more and more types of sensors at the core of the technology, and their functional performance is improving day by day. In China, virtual reality human-computer interaction technology has also been highly valued in recent years, and substantial funds have been invested in its research and development. With the continuous development of science and technology, virtual reality interaction technology based on wireless communication tracking sensor devices will play a huge role in the development and progress of human society. Although this article explores virtual reality technology to a certain extent, there are still shortcomings: (1) although the article uses many diagrams for analysis, only simple tables are provided, and experimental materials and equipment parameters for comparison are lacking; (2) the type of experiment in the test section is too limited, and other conditions were not taken into account.

Data Availability
No data were used to support this study.

Conflicts of Interest
The author states that this article has no conflict of interest.