Embedded Design of 3D Image Intelligent Display System Based on Virtual Reality Technology

Previous 3D image intelligent display systems are large in size and poor in accuracy, and their display quality and synchronization cannot meet users' needs. To address these problems, a 3D image embedded intelligent display system based on virtual reality technology is designed. In recent years, with the rapid development of computer technology, three-dimensional measurement technology has matured, and three-dimensional information technology has emerged in many fields of daily life. With the development of computer science and automatic control technology, more and more intelligent robots appear in production and life, and the vision system, an important subsystem of intelligent robot systems, is receiving increasing attention. The three-dimensional imaging system projects a structured grating onto the surface of the target object, uses a digital sensor to collect the deformed structured-light image modulated by the object's surface, and combines image processing technology with precision measurement technology to achieve noncontact three-dimensional measurement of the object. This method of acquiring three-dimensional coordinates has the advantages of being nondestructive, efficient, highly automated, and low in cost, and it holds great significance and broad application prospects for improving product quality and manufacturing efficiency while reducing production costs. Through investigation and analysis of existing embedded 3D image intelligent display systems, this paper designs and realizes an embedded 3D image intelligent display system, which includes a high-speed structured light projection module, an embedded image acquisition and processing module, and a 3D reconstruction module.
Among them, the embedded 3D imaging image acquisition and processing system is an important part of the embedded 3D measurement system and provides a theoretical and practical basis for future in-depth research. The experimental results in this article show that the acquisition module takes 1.934 s to collect the fringe images of the measured object, while phase demodulation and unwrapping take 2.068 s. Therefore, the algorithm needs further optimization to speed up image processing and realize the real-time performance of the system.


Introduction
With the rapid development of computer software and hardware, multimedia and network communication technology, and the concept of the "digital earth," more and more geographic information systems are deployed on embedded hardware platforms. Traditional imaging technology can only measure an object's size information, not its depth information. However, with the development of the manufacturing industry, more and more industrial measurement tasks need to detect the 3D coordinate information of objects, and the emergence of 3D digital imaging technology meets this demand. 3D imaging not only breaks the monotonous spatial constraints of the 2D level but also supports interpretation and analysis of the information, providing a better way to process spatial images. 2D images require the most basic spatial data processing functions, such as data collection, data organization, data processing, and data analysis. Compared with 2D images, 3D images have the following advantages: (1) The display of spatial information is more intuitive. A 2D graphical interface displays space very abstractly, while 3D graphics provide a more realistic spatial display platform and make abstract, hard-to-visualize spatial information visible. (2) Multidimensional spatial analysis functions are becoming more and more powerful. The process of analyzing spatial information is usually complex, dynamic, and abstract. Machine vision, a high-tech field that draws on advances in artificial intelligence, automatic control, and other disciplines, is one of the key technologies for realizing intelligent robots and intelligent weapons, and it is widely used in fields such as military, aerospace, surveillance, biomedicine, and robotics.
For example, in industrial design, it can be applied to the rapid design and testing of products and to product quality control; in the medical industry, it can assist in the design of comfortable orthotics or prostheses and in trauma care; in cultural relics protection, three-dimensional measurement technology can quickly build models for antique restoration and for the repair and preservation of damaged relics; in the consumer industry, animation production, three-dimensional advertising, and the increasingly popular 3D movies all require three-dimensional data to build animation models; in the aerospace field, three-dimensional measurement technology helps complete the design and inspection of aircraft components, performance evaluation, and auxiliary aerodynamic analysis. In addition, three-dimensional measurement technology has huge application potential in reverse engineering, biomedicine, industrial automation, and virtual reality. At the same time, continuously rising requirements for personal and property safety have increased the demand for video surveillance systems. From the security monitoring of communities and important facilities, to the traffic monitoring of cities and highways, to the monitoring of security-critical sites, the application of video monitoring has penetrated all aspects of society.
The vision system is a very complex system. It must not only collect images accurately but also respond in real time to changes in the outside world and track external moving targets in real time. Therefore, the vision system places higher requirements on both hardware and software. Ohno et al. use mathematical modeling methods to reconstruct a 3D structure from a single two-dimensional (2D) training image (TI). Among the many reconstruction algorithms, optimization-based algorithms are well developed and have strong stability. However, such algorithms usually use an autocorrelation function, which cannot accurately describe the morphological characteristics of porous media, as the objective function; this hinders further study of porous media. In order to reconstruct 3D porous media accurately, they proposed a pattern density function based on random variables to characterize image patterns, together with an optimization-based algorithm called pattern density function simulation [1]. But this algorithm still has certain flaws. Gao et al. formulated recommendations for high-risk clinical target volume (CTVHR) contours based on computed tomography (CT) for 3D image-guided brachytherapy (3D-IGBT) of cervical cancer. The CT-based CTVHR boundary was defined on each anatomical plane, regardless of whether the tumor had progressed beyond the cervix. Because magnetic resonance imaging (MRI) with an applicator is currently of limited availability in 3D planning, T2-weighted MRI obtained at the time of diagnosis and before brachytherapy without an applicator is used as a reference for accurate tumor assessment [2]. Because the experimental process was not closed, there is an error in the experimental process. François et al. proposed a new design of embedded strain sensor.
Based on Eshelby's inclusion model, it can be used to measure the complete 3D strain or stress tensor in any solid body. Currently, ordinary embedded strain sensors can only perform one-dimensional measurements. The spherical shape of the sensor allows Eshelby's theorem (in the case of elastic behavior) to be used to calculate the stress or strain that would exist in the structure without the sensor. They used fiber Bragg gratings to measure the deformation of the sphere, although other techniques can also be used. Their experimental tests were carried out under two conditions. The first test measured the performance of a polymethyl methacrylate prototype under fluid pressure loading. In the second test, a steel prototype sensor was embedded in a standard concrete sample under axial compression and successfully used for 3D strain measurement under actual conditions [3]. Maples-Keller et al. evaluated the use of exposure-based interventions for anxiety disorders. Virtual reality (VR) allows users to experience presence in a computer-generated 3D environment. Their review focuses on the existing literature on the effectiveness of including VR in the treatment of various mental illnesses, with particular attention to exposure-based interventions for anxiety disorders [4]. In Jeon and Jo's research, a new noise evaluation method was proposed, in which virtual reality (VR) technology in the form of head-related transfer functions (HRTF) and head-mounted displays (HMD) is used to evaluate the noise inside a residential building. First, before and after applying the HRTF, the sound pressure level and frequency characteristics of road traffic noise recorded in a living room were determined.
Second, four different test environments were compared, namely, without HRTF or HMD, with HRTF only, with HMD only, and with both HRTF and HMD [5]. Ruiz-Moya et al.'s report discusses the basic knowledge, reconstruction techniques, practicality, and limitations of three-dimensional (3D) fusion imaging methods. The 3D fusion imaging method uses multiple modalities, but due to the limitations of computer hardware, image processing software, and reconstruction methods, it is still in the early stages of development. Reconstructing standard 3D images by relatively simple processing of a single kind of image data is widely used and offers excellent spatial information and simple construction. However, due to the relatively low resolution of the anatomical information, it is not sufficient for preoperative simulation [6]. However, the experimental process is flawed, leading to discrepancies in the experimental results. Computer vision has become one of the most popular and competitive topics in the field of artificial intelligence; together with expert systems and natural language understanding, it is one of the three most active areas of artificial intelligence. It is one of the core technologies for highly automated industrial production, intelligent robots, autonomous car navigation, target tracking, and various other industrial inspection, medical, and military applications, and it is a key factor for realizing machine intelligence. The purpose of computer vision is to use computers instead of human eyes and brains to perceive, interpret, and understand the environment. In this paper, the phase recovery algorithm is used to complete the design of the image acquisition and processing system, the related circuit models are simulated according to the needs of this research, and an embedded processor, together with virtual reality technology, is fully utilized to design the image acquisition and other functions of the intelligent display system.
The innovations of this article are as follows: (1) design and implement a digital image acquisition and processing system based on the embedded processor TMS320DM648, including system power supply design, DDR2 high-speed storage interface design, FLASH circuit design, and Gigabit Ethernet circuit design, and complete the test of each function of the hardware platform. The system has a high degree of automation, strong processing capacity, and high reliability, making it suitable for industrial online inspection and similar applications. To meet the interface-bandwidth and storage-capacity demands of high-speed image processing, the design method of an FPGA with external DDR2 SDRAM is adopted, and a DDR2 SDRAM controller scheme based on the VHDL language is proposed.
(2) Design and complete the CCD analog front-end circuit design based on ICX618 chip, including circuit design, FPGA programming, and imaging test. The results show that the CCD analog front-end design is reasonable, and the work is stable and reliable. (3) Based on the DSP/BIOS operating system, complete the software design of the image acquisition and processing system, including system initialization, image acquisition, phase calculation, and Gigabit Ethernet data transmission. (4) Set up an image acquisition and processing experimental system, complete the test of all modules, and realize the three-dimensional measurement of the object. After the device initialization is completed, the video image can be collected. There are two methods: one is read() to read directly; the other is mmap() memory mapping. read() reads data through the kernel buffer; mmap() bypasses the kernel buffer by mapping the device file to the memory.
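The two capture paths mentioned above can be sketched in Python, with an ordinary temporary file standing in for the video device node. This is a simplification: a real V4L2 capture device also requires ioctl()-based format and buffer negotiation before either read() or mmap() can be used.

```python
import mmap
import os
import tempfile

# A temporary file stands in for the video device node.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x10\x20\x30\x40" * 4)  # 16 bytes of stand-in frame data
os.lseek(fd, 0, os.SEEK_SET)

# Path 1: read() copies the frame through the kernel buffer.
frame_read = os.read(fd, 16)

# Path 2: mmap() maps the file into user space, avoiding that extra copy.
with mmap.mmap(fd, 16, access=mmap.ACCESS_READ) as mm:
    frame_mmap = bytes(mm[:16])

os.close(fd)
os.remove(path)

# Both paths deliver the same frame bytes; mmap() saves one copy.
assert frame_read == frame_mmap
```

At video frame rates the mmap() path is generally preferred, precisely because it avoids copying every frame through the kernel buffer.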
This article is mainly divided into three parts plus a conclusion. The first part is the introduction, which draws on relevant references and uses data retrieval techniques and examples to describe the research background and significance of virtual reality technology and intelligent display systems. The second part is a combined analysis of the algorithms used in this research and the research objects; these algorithms include the phase recovery algorithm and the four-step phase shifting method. The third part mainly covers data acquisition and experiments: it analyzes and designs the data acquisition and adjustment module and uses the image processing module based on virtual reality technology to conduct three-dimensional imaging experiments. Finally, the last part is the conclusion of this research.

Phase Recovery Algorithm Implementation
The phase recovery algorithm is mainly used to obtain the unfolded phase image of the measured object. In this system, the unfolded phase image is obtained by frequency conversion and phase shift [7, 8]. When the M-step phase shift is adopted, the phase shift between successive sinusoidal fringe images is 2π/M, and the light intensity distribution received by the camera can be expressed as follows:

I_k(m, n) = c(m, n) + g(m, n) cos[2πχ₀m + φ(m, n) + 2πk/M], k = 0, 1, ⋯, M − 1, (1)

where I_k(m, n) is the light intensity distribution of the target object collected by the camera, c(m, n) is the background light intensity distribution, g(m, n) is the local contrast of the fringes, χ₀ is the carrier frequency, and φ(m, n) is the phase factor containing the depth information of the object. The surface height information of the measured object can be obtained by solving for φ(m, n) in the phase function [9, 10]. From formula (1), the phase distribution φ(m, n) on the surface of the measured object can be solved, namely,

φ(m, n) = arctan[−Σ_{k=0}^{M−1} I_k(m, n) sin(2πk/M) / Σ_{k=0}^{M−1} I_k(m, n) cos(2πk/M)]. (2)

The phase shift method was proposed to solve the problem of measuring phase difference, so that the optical phenomenon of interference fringes could truly be used for precision measurement in engineering. The idea of the interferometer is to solve for the phase difference by introducing known phase shift amounts and then obtain the surface shape. This system adopts a 4-step phase shift method (M = 4), so that the phase shifts are 0, π/2, π, and 3π/2 in sequence, and by formula (2),
φ(m, n) can be obtained as follows:

φ(m, n) = arctan[(I_4(m, n) − I_2(m, n)) / (I_1(m, n) − I_3(m, n))], (3)

where I_1, I_2, I_3, and I_4 denote the four fringe images with phase shifts 0, π/2, π, and 3π/2 (k = 0, 1, 2, 3 in formula (1)), respectively. The phase function obtained by formula (3) cannot distinguish fringes of the same period: the calculated phase value is folded into (−π, π], so its monotonic increase cannot be guaranteed. Therefore, it is necessary to unfold and recover the folded phase to obtain the continuously changing phase. This process is called phase unfolding or phase reconstruction [11, 12]. This system uses a generalized temporal phase unwrapping algorithm to solve for the unwrapped phase map of the measured object.
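The four-step phase recovery of formula (3) can be checked numerically with a short sketch. The grid size, background, and contrast values below are arbitrary illustration choices, not taken from the paper:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shift, formula (3): the background c(m,n) and
    contrast g(m,n) cancel, leaving the folded phase in (-pi, pi]."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes following formula (1) with M = 4 (carrier omitted
# for clarity; it simply adds to the recovered phase).
m, n = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
phi_true = 2.0 * (m + n) - 1.0        # stand-in object phase, inside (-pi, pi]
c, g = 100.0, 50.0                    # arbitrary background and contrast
i1, i2, i3, i4 = (c + g * np.cos(phi_true + k * np.pi / 2) for k in range(4))

phi = wrapped_phase(i1, i2, i3, i4)
assert np.allclose(phi, phi_true, atol=1e-9)  # phase recovered
```

Because arctan2 only returns values in (−π, π], any true phase outside this range comes back folded, which is exactly why the unwrapping step is needed.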
The augmented virtual reality system not only allows the user to see the real world but also superimposes virtual objects on it. It is a system that combines the real environment and the virtual environment, which can reduce the complexity of scene composition; computations on the real environment can also operate on actual objects, truly blending reality and the virtual. According to the relationship between the phase value and the fringe frequency,

φ(u_2)/φ(u_1) = u_2/u_1, (4)

where u_1 and u_2 represent the two fringe frequencies and φ(u_1) and φ(u_2) represent the unfolded phases at the two frequencies. Formula (5) is then

φ(u_2) = bφ(u_1), (5)

where b = u_2/u_1 is the ratio of the numbers of fringes.
Therefore, using the unfolded phase φ_t(u_1) at fringe number u_1, the linear relationship above, and the folded phase φ_w(u_2) at fringe number u_2, the unwrapped phase is obtained as

φ(u_2) = φ_w(u_2) + 2π · round[(bφ_t(u_1) − φ_w(u_2)) / (2π)], (6)

where b = u_2/u_1; the ratio of the numbers of fringes in this experimental system is 4, that is, b = 4.
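The temporal unwrapping relation above can be sketched as follows; the fringe counts and phase ramp are illustrative values, with the ratio b = 4 as in this system:

```python
import numpy as np

def temporal_unwrap(phi_t_u1, phi_w_u2, b):
    """Generalized temporal phase unwrapping: the scaled low-frequency
    unwrapped phase b*phi_t(u1) predicts phi(u2) to within one fringe
    order, and rounding selects the correct 2*pi multiple."""
    k = np.round((b * phi_t_u1 - phi_w_u2) / (2 * np.pi))
    return phi_w_u2 + 2 * np.pi * k

# Synthetic check with fringe ratio b = u2/u1 = 4, as in this system.
phi_u1 = np.linspace(0, 2 * np.pi, 256)              # unwrapped low-freq phase
phi_u2_true = 4 * phi_u1                             # phi(u2) = b * phi(u1)
phi_u2_wrapped = np.angle(np.exp(1j * phi_u2_true))  # fold into (-pi, pi]

phi_u2 = temporal_unwrap(phi_u1, phi_u2_wrapped, b=4)
assert np.allclose(phi_u2, phi_u2_true, atol=1e-9)
```

In the real system, phi_t(u1) itself comes from the four-step phase shift at the lower fringe frequency, and phase noise must stay below half a period of the scaled prediction for the rounding to pick the correct fringe order.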

Embedded Design of 3D Image Intelligent Display System

3.1. Data Acquisition and Debugging Module Design. This experiment is mainly divided into three parts: preparation, experiment, and discussion. In order to ensure the accuracy and authenticity of the data, the analysis and comparison of general three-dimensional measurement technology and embedded processors consider multiple factors in the selection of the core processor and the image sensor. The design of the analog front end paves the way for the image processing module, and the system of this research is then built on this basis. The experiment is carried out indoors, and devices such as core processors and front-end interfaces are compared and selected based on the collected investigation and analysis data.
3.1.1. Selection of Core Processor. A DSP chip internally adopts the Harvard architecture with separate program and data buses, has a dedicated hardware multiplier, makes extensive use of pipelined operation, and provides special DSP instructions, so it can quickly realize various digital signal processing algorithms. Based on an analysis and comparison of general three-dimensional measurement technology and embedded processors, this system uses the Texas Instruments (TI) DaVinci-series DSP chip TMS320DM648 as the core processor to construct a DSP-based image acquisition and processing system, which obtains the unfolded phase map of the measured object through the algorithm and provides data for the subsequent three-dimensional reconstruction [13]. Its functional structure diagram is shown in Figure 1. Virtual reality technology uses a computer to generate a simulation environment; sensor devices immerse the user in that environment and let the user interact with it directly. The DM648 adopts the advanced C64x+ core: compared with the C64x core, performance is increased by 20% and code density by about 25%; the enhanced EDMA3 can copy and move three-dimensional data; and C64x programs can be easily ported to the C64x+ with a reuse rate of 100% [14]. In addition, the DM648 integrates common peripherals such as I2C, PCI, and GPIO, which helps users reduce development difficulty and system cost. On comprehensive consideration, this system uses the TMS320DM648 as the core processor of the image acquisition and processing system. The platform is constructed with the DM648 chip as the core and includes the power supply circuit, clock and reset circuit, memory circuit, and emulator circuit. The DM648's on-chip level-one program cache is 32 Kbytes, its level-one data cache is 32 Kbytes, and its level-two cache is 512 Kbytes.

3.1.2. Image Sensor Selection.
At present, the widely used image sensors mainly include CIS, CMOS, and CCD [15]. A CIS is a contact image sensor [16]; since this type of sensor performs contact scanning, it is not suitable for the noncontact measurement in this system. A CMOS sensor is a complementary metal oxide semiconductor image sensor [17]. It uses a semiconductor substrate doped to form negatively charged (N) and positively charged (P) regions; the current generated by the photoelectric effect is recorded by the processing circuitry and converted into a picture. However, a traditional CMOS chip has a low aperture ratio, and its sensitivity is lower than that of a CCD at the same pixel size. A CCD is a charge coupled device image sensor [18], a semiconductor device that uses the photoelectric effect to convert photons into signal charges. The structure of the CCD photosensitive unit is simpler than that of the CMOS; for the same area, the CCD photosensitive unit has a higher degree of integration and a higher resolution than CMOS. At the same time, in the CCD photosensitive unit, the photosensitive part occupies a larger area, so the image obtained by the sensor is brighter. In addition, because the CCD photosensitive unit processes signals uniformly, the image noise is small and the signal-to-noise ratio is high. From the above analysis of the advantages and disadvantages of the different sensor types, and since the main imaging target of this system is the deformed sinusoidal fringe pattern projected on the object's surface, which places high requirements on sensitivity, linearity, and signal-to-noise ratio, the system finally uses a CCD sensor [19].

3.1.3. CCD Analog Front-End Design Based on ICX618. As shown in Figure 2, the CCD analog front-end circuit is mainly composed of a driving power module, a CCD driving peripheral module, and an AD conversion module. With the rapid development of computer technology, many high technologies based on it have emerged, and virtual reality technology (VR for short) is one of them. It is a high technology widely used in all walks of life: a high-end human-machine interface engaging vision, hearing, touch, smell, and taste. The chip used in the CCD drive power supply circuit has a built-in precision feedback current source of 25 μA; the positive and negative output voltages are adjusted by modifying the external resistor values, with V_pos being +15 V and V_neg being −5.5 V, from which the resistance value of R1 can be determined.
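The paper's original resistor formula was lost in extraction, so the following is only a hypothetical worked example: it assumes the common adjustable-regulator relation V_out = I_FB × R for a 25 μA feedback current, which may differ from the actual chip's formula, and the names I_FB, r_pos, and r_neg are illustrative.

```python
# Hypothetical calculation, NOT the paper's lost formula: we assume
# V_out = I_FB * R, a relation used by some 25 uA feedback-current
# regulators. Consult the actual regulator datasheet before use.
I_FB = 25e-6          # feedback current, amperes
V_POS = 15.0          # positive rail, volts
V_NEG = -5.5          # negative rail, volts

r_pos = V_POS / I_FB         # resistor setting the +15 V rail
r_neg = abs(V_NEG) / I_FB    # resistor setting the -5.5 V rail

assert abs(r_pos - 600e3) < 1e-3   # 600 kOhm under the assumed relation
assert abs(r_neg - 220e3) < 1e-3   # 220 kOhm under the assumed relation
```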

3.2. Design of Image Processing Module Based on DSP.
Virtual reality technology embodies the adaptation of computers to humans: humans participate in the information processing environment through the most ordinary means of communication and obtain an immersive experience. Virtual reality technology has three characteristics: immersion, interactivity, and conception. In order to balance flexibility and efficiency and to facilitate algorithm upgrades, after detailed investigation and comparison of schemes, this article uses a combination of FPGA and DSP to build the hardware processing platform. The block diagram of the system structure is shown in Figure 3.
Among them, the DM648 module is an important part of the entire system, and the image processing module is composed of the following parts: (1) FPGA Part. The FPGA mainly realizes the interconnection of the various modules. It can collect data from the ICX618-based CCD analog front end introduced above and connect to the DSP through the Video Port video interface, or serve as a DSP coprocessor responsible for algorithms with high real-time requirements.
(2) Central Processing Unit TMS320DM648. The DM648 achieves a seamless connection with the CCD analog front end through the video interface, serves as the design core of the embedded 3D imaging acquisition and processing system, realizes phase recovery of the measured object with the chip support library, and finally outputs the results through Gigabit Ethernet.
(3) Storage Device. DM648 has a dedicated DDR2 memory expansion interface, which can be connected to a DDR2 storage device that meets the JESD79D-2A standard. And in order to solidify user data and save the output result conveniently, this system connects a 512 Mbit parallel FLASH chip through the EMIF interface.
(4) High-Speed Data Transmission Interface. The DM648 has a three-port Gigabit Ethernet transmission module, which can realize high-speed data transmission between this system and other embedded devices or computers. This system connects the SGMII interface of the DM648 through a PHY chip to realize dual Gigabit network port communication.
(5) Other Communication Interfaces. Communication interfaces include SPI interface, I2C interface, and GPIO interface. It is very convenient to use these interfaces for data communication and interaction with external devices.
The above five parts build an embedded image processing platform with the DSP as the processing core. The interconnection of the various modules is realized by the FPGA, which gives the hardware platform great flexibility and real-time performance [21]. From a functional point of view, the hardware system consists of three parts: the image acquisition module, the image processing module, and the data transmission module. The optical image is converted into a video data stream by the image acquisition module; the FPGA collects and buffers the data and sends it to the DM648 through the Video Port interface; the DM648 then calls the phase recovery algorithm to solve for the unfolded phase of the measured object and finally sends the result to the computer through the Gigabit Ethernet interface.

3.3. PCB Design.
In high-speed PCB design, to ensure stable and reliable operation of the system, it is not enough to consider only the electrical connections; signal integrity issues such as impedance matching, board layer design, and electromagnetic compatibility must also be considered. PCB component layout means placing components on the PCB according to certain principles. Good component layout reduces the difficulty of subsequent wiring, which helps shorten design time and reduce design cost, so it is necessary to understand some methods and principles of PCB layout [22]. First of all, the circuit board size should be moderate. Too large a size not only increases cost but also reduces the stability of the circuit board; too small a size crowds the devices together, making signals prone to crosstalk and worsening heat dissipation [23].
The distributed virtual reality system is a product of the combined development of virtual reality technology and network technology. It is a system in which multiple users, or multiple virtual worlds located in different physical locations, are connected over the network to share information in a networked virtual world. The final physical diagram of the PCB is shown in Figures 4 and 5.

Experimental Analysis
This system adopts the TMS320DM648 as the main control platform. Figure 6 is the physical map of the system. The first part is the image acquisition and processing system designed in this paper, and the second part is the high-speed structured light system environment designed by the research team.

4.1. Circuit Test.
It has been only a little more than ten years since the concept of virtual reality was proposed, but the technology has developed by leaps and bounds: it has opened up a vast world for human-computer interaction and brought huge social and economic benefits. According to the system hardware design in Chapter 3, the power consumption of this system is estimated to be 5 W, so the experiment uses a 5 V, 2 A power adapter as input, which is converted through various buck and boost circuits into the required +14 V, +4.4 V, +3.2 V, +2.3 V, and −6.4 V rails. The actual voltages measured by the oscilloscope after the power-on test are shown in Table 1.
It can be seen from the above table that the voltage values of the power supply are accurate and the ripple is small, meeting the system requirements; long-term testing shows that the stability of the power supply is very high.

4.2. Three-Dimensional Imaging Test. Image tracking is an emerging technical field that organically combines image processing, automatic control, and information science into a technology that recognizes targets from image signals in real time, extracts target position information, and automatically tracks target movement. In chronological order, sinusoidal fringe patterns with 4 frequencies and a 4-step phase shift are projected on the measured object through the DMD projector, collecting a total of 16 images; the folded phase of the measured object is obtained through phase demodulation, and the generalized temporal phase unwrapping algorithm then obtains the unfolded phase of the object. The system time consumption is shown in Table 2.
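The projection sequence described above (4 fringe frequencies × 4 phase steps = 16 patterns) can be sketched as follows; the pattern width and the frequency ladder 1, 4, 16, 64 (ratio 4 between neighbors, matching the system's fringe ratio) are assumptions for illustration, not values from the paper:

```python
import numpy as np

WIDTH = 1024                                # assumed projector line width
x = np.arange(WIDTH) / WIDTH

patterns = []
for freq in (1, 4, 16, 64):                 # fringes per field of view (assumed)
    for step in range(4):                   # 4-step phase shift
        shift = step * np.pi / 2            # 0, pi/2, pi, 3pi/2
        patterns.append(0.5 + 0.5 * np.cos(2 * np.pi * freq * x + shift))

assert len(patterns) == 16                  # 4 frequencies x 4 steps
assert all(0.0 <= p.min() and p.max() <= 1.0 for p in patterns)
```

Each group of four patterns yields one wrapped phase via formula (3), and the wrapped phases are then chained together by the temporal unwrapping relation from the lowest frequency upward.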
After the development of the image tracker is completed, the tracking performance needs to be evaluated. Through the development of the image tracker test and evaluation system, it can be evaluated in the laboratory by simulating various actual scenes and tracking the target and the interference situation.
It can be seen from Table 2 that the time consumption of the algorithm processing part and the acquisition part is basically the same, so the system can continuously collect images and output unwrapped phase diagrams. Finally, the research group's existing three-dimensional reconstruction algorithm is used to reconstruct the three-dimensional topography of the measured object to verify the accuracy of the unwrapped phase map. The reconstruction result of the plaster model is shown in Figure 7.
The above data demonstrate that the embedded system constructed in this research achieves clear progress and improvement in image processing, tracking test evaluation, circuit-related performance, data transmission, and real-time operation. By combining the DM648 DSP with high-speed PCB design, an image intelligent display system with strong advantages is created.

Conclusions
This research analyzes an intelligent image display embedded system based on virtual reality technology. The main purpose is to design a system with high-speed image processing and as little power consumption as possible to optimize the 3D imaging process. With the development of 3D measurement technology, the hardware requirements of image acquisition and processing systems for 3D imaging will become higher and higher. Considering image acquisition quality, processor requirements, cost, power consumption, and stability, existing processing systems cannot meet the requirements of portability, low power consumption, low cost, and stable operation. Addressing these shortcomings, this paper studies the key technologies required for high-precision, high-speed 3D digital imaging and completes the construction and debugging of the entire platform. Timely testing and evaluation of image tracking can, on the one hand, verify the performance requirements of the image tracker and, on the other hand, help discover tracking problems and improve the tracker's performance during its development. This experiment strictly monitored data acquisition and used computer technology to analyze the errors in the data. In addition, the components selected for this research are all components whose feasibility has been fully demonstrated, and a corresponding model was constructed to maintain the authenticity of the experiment. Although the analysis of image acquisition quality details and the application of high-speed 3D digital imaging technology is not fully sufficient, the research is complete as a whole.
In the 3D imaging experiments of this study, the parameters of the power supply control circuit, the heat dissipation between devices, and the central processing unit must be designed rationally, and multiple experiments are required to make the errors in the relevant data as small as possible. Because the number of tests in this study is relatively small while the parameters involved are relatively numerous, parts of the research remain rough and insufficiently deep. The next research direction is to build on this work and further analyze the applications of the constructed embedded system.

Data Availability
No data were used to support this study.

Conflicts of Interest
The author states that this article has no conflict of interest.