Today, high-tech image format technology has become highly mature in contemporary visual experience and film and television production. In the era that combines modern technology with the Internet, virtual digital technology connects the past with the future, merges reality with myth, and even synchronizes the primitive and modern worlds. This article adopts experimental analysis and comparative analysis, setting up an experimental group and a reference group, with the aim of using high-tech image formats, from the perspective of a full-frame sensor, to realize perspective and three-dimensional displays of indoor scenes for observers. In addition, the process of reconstructing a 3D model with high-precision geometric information and realistic color information is described. The experimental results show that the sharpness threshold cannot be too small; otherwise, some clear images are misjudged as blurred. If the threshold is too large, the missed detection of blurred images increases. Combined with the subjective evaluation of the images, when the threshold is 0.8, the experimental result is close to the subjective evaluation, and the missed detection rate is 2.41%. This shows that the ASODVS three-dimensional digital scene constructed in this article can meet the needs of real-time image processing and can effectively evaluate the clarity of realistic analog images, and that controlling the size of the sharpness threshold is the key to accurate clarity evaluation.
Computer-based 3D reconstruction is an emerging application technology with huge growth potential and practical value. It can be widely used in urban planning, medical investigation, construction sites, geographic research, bone remodeling, cultural relic investigation, crime investigation, and data collection. The reconstruction of 3D models with high-precision geometric information and realistic color information has always been a focus of research in computer interaction, intelligence, scanning and molding, graphic drawing, and map drawing. 3D model reconstruction technology can be applied to different objects. The purpose of 3D reconstruction of a specific object is to obtain information such as its 3D shape, curved surface, and actual color. 3D scene reconstruction is mainly used to capture the distribution, shape, and actual color of objects in a scene, for example, the reconstruction of cultural and historical sites, urban scenes, and urban landscapes, natural environment map modeling, and underground scene modeling. 3D scene reconstruction plays an important role in intelligent robot distance sensing, crime scene reconstruction, virtual displays, and many other fields. Its main purpose is to objectively reproduce real scenes and display them on mobile and client displays.
Based on the data acquisition method, commonly used 3D scene reconstruction methods can be divided into passive and active approaches. When performing 3D reconstruction of scenes based on laser scanning, the calculation is complicated, and the reconstruction process cannot be fully automated. Commercial 3D laser scanners are generally expensive and complicated to operate, which limits their application to a certain extent. Reconstructing the scene based on active vision has the characteristics of low cost and high efficiency and is regarded as the most effective method of 3D scene reconstruction.
Existing passive 3D scene reconstruction requires a large amount of computing resources for point matching and stereo measurement, and it involves some ill-conditioned calculation problems. Although the obtained model has texture information, its accuracy is low, and depth information is lacking. Existing laser scanners must perform functions such as recording and combining 3D point data during reconstruction. In order to obtain point cloud data with color information, a one-to-one correspondence between the three-dimensional coordinates and the color information of the spatial points is usually necessary. Existing data input methods are relatively strict, and some methods require prior knowledge of the scene.
In the early years, Yu used a sequence of images taken by an ordinary camera to reconstruct the scene. He proposed a factorization method to solve for the geometric information of the scene when the object is far away and to address the problem of calculating camera movement. However, the parallel projection reconstruction method cannot fully restore the realism of the scene when applied to real scenes.
The innovations of this article are as follows. (1) A new ASODVS scanning program is used. The program simplifies most of the calculation work in the experimental design of this article. The input data are loaded in an orderly manner according to the order set by the experimenter. In the process from the field to the cloud and from the cloud to the mobile terminal, the data are neither lost nor scattered but remain ordered in matrix form. Therefore, the two sets of lattice data collected can be automatically matched in a fixed order, which greatly reduces the amount of calculation and manual operation on the machine side and greatly improves efficiency. (2) This article directly simulates the picture observed by the human eye when building the scene, which raises the requirements and standards of the experiment. It also gives the experimenter an immersive observation experience similar to a 3D movie. Through platform construction, parameter setting, and algorithm optimization, the process of indoor 3D scene reconstruction is successfully completed.
As shown in Figure
Four coordinate systems and their associations.
In terms of hardware composition, the moving surface laser generator and the ODVS rear-view sensor form the ASODVS. The overall structure of the system is shown in Figure
Relationship among the various coordinate systems.
Establish a coordinate system with the single viewpoint Om as the origin. The conversion from the catadioptric mirror to the sensor plane is given by the corresponding projection formula, and the sensor plane is then mapped to the image plane by a second transformation. A single function can be used to replace these two mappings. Since errors are inevitably introduced during the actual machining and assembly of the omnidirectional vision sensor, it can be assumed that the ODVS conforms to the ideal single-viewpoint model, and the combined formula is used for calibration.
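As a hedged illustration only, the combination of the two mappings (mirror to sensor plane, sensor plane to image) is often expressed as a single radially symmetric polynomial in single-viewpoint catadioptric models. The sketch below uses such a generic polynomial back-projection; the coefficients and image center are made-up placeholder values, not the calibration results reported in this paper.

```python
import numpy as np

# Illustrative sketch of a single-viewpoint catadioptric back-projection.
# The polynomial coefficients and image center below are hypothetical
# placeholders, not this paper's calibration values.
A = (-100.0, 0.0, 0.002, -0.001)   # assumed coefficients a0, a1, a2, a3
U0, V0 = 320.0, 240.0              # assumed image center in pixels

def pixel_to_ray(u, v):
    """Map image pixel (u, v) to a unit 3D ray through the single viewpoint Om."""
    x, y = u - U0, v - V0                          # sensor-plane coordinates
    rho = np.hypot(x, y)                           # radial distance from center
    z = sum(a * rho**i for i, a in enumerate(A))   # polynomial mirror profile
    ray = np.array([x, y, z], dtype=float)
    return ray / np.linalg.norm(ray)               # normalize to a unit vector
```

Given a calibrated coefficient vector, each panoramic pixel thus determines one incident-ray direction, which is the basis for the later triangulation of laser points.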
ODVS’s calibration result.
Calibration object | Center point | |
---|---|---|---
ODVS | -106.597 | 0.0021 | -0.001
In the design of a 3D perception and information reconstruction system based on active vision, the effective projection of the light source plays a vital role in the structure and accuracy of the point cloud data. To match the ability of the ODVS to simultaneously capture 360-degree panoramic images of the scene, the moving surface laser generator must project a laser light source that covers 360 degrees in the horizontal direction of the scene and can move up and down in the vertical direction to complete the scan of the scene.
Commonly used laser ranging principles include the triangulation method, the time-of-flight method, and the phase method.
Triangulation ranging method.
With the time-of-flight and phase ranging methods, the transmitting and receiving devices of the laser rangefinder must be laser transmitters and receivers, which perform point-to-point measurement. The receiver of the triangulation method, in contrast, is a CCD imaging chip. Because the CCD imaging chip can obtain planar image information, the triangulation method can use a line laser or surface laser light source. This not only improves the scanning efficiency of the laser but also effectively reduces the complexity of the system.
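The similar-triangles relation behind laser triangulation ranging can be sketched in a few lines: with a baseline b between the laser and the camera and a lens of focal length f, the offset x of the imaged laser spot on the CCD gives the object distance d = f · b / x. The f and b values in this toy example are hypothetical, not the parameters of the system in this paper.

```python
# Toy illustration of the laser triangulation principle: distance from the
# imaged spot offset via similar triangles. Focal length and baseline are
# hypothetical example values.
def triangulation_distance(offset_mm, focal_mm=8.0, baseline_mm=60.0):
    """Return object distance (mm) from the imaged laser-spot offset (mm)."""
    if offset_mm <= 0:
        raise ValueError("spot offset must be positive")
    return focal_mm * baseline_mm / offset_mm
```

Note the inverse relationship: the farther the object, the smaller the spot offset on the CCD, which is why measurement resolution degrades with distance in triangulation systems.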
The line laser generator is an important part of the moving surface laser generator. Laser light sources of different power levels have different effects on the user's personal safety. For this reason, this article analyzes the selection of laser emission power and the related precautions. Under normal circumstances, the greater the laser power, the wider the emission range, but high-power laser generators usually pose safety hazards.
When designing the hardware, the ODVS and the moving surface laser generator are fixed on the same axis. The ideal assembly is one in which the axis through the single viewpoint Om of the ODVS is perpendicular to the scanning planes of the 360° laser emitted by the moving surface laser generator. To achieve this goal, we used a hollow cylinder to calibrate the ASODVS during assembly. The specific method is as follows: (1) put the ASODVS into the hollow cylinder vertically so that the axis of the ASODVS coincides with the axis of the hollow cylinder, (2) constantly change the distance between the moving surface laser generator and the viewpoint Om of the ODVS, and (3) observe the panoramic sequence images obtained from the ODVS.
If the apertures produced by the projection of the moving surface laser generator in the panoramic sequence images form a series of perfect circles centered on the panoramic image, then the ASODVS configuration is complete; otherwise, fine tuning is required to make the ASODVS meet the ideal design requirements. In addition to the observation method, the panoramic images with laser information generated in the above process can be saved and analyzed algorithmically to determine whether the ASODVS has reached the assembly requirements.
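The algorithmic check mentioned above can be sketched as follows: fit the detected laser ring, then test whether its center coincides with the panorama center and whether its radius is uniform. The tolerance values here are illustrative assumptions, not the thresholds used in the paper.

```python
import numpy as np

# Sketch of the assembly check: does the projected laser aperture form a
# circle centered on the panoramic image? Tolerances are illustrative.
def is_concentric(points, panorama_center, center_tol=2.0, radius_tol=0.02):
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                      # centroid of the laser ring
    radii = np.linalg.norm(pts - center, axis=1)   # per-point radius
    center_offset = np.linalg.norm(center - np.asarray(panorama_center, dtype=float))
    radius_spread = radii.std() / radii.mean()     # relative radius variation
    return bool(center_offset <= center_tol and radius_spread <= radius_tol)
```

If the check fails, the relative pose of the laser generator and the ODVS is fine-tuned and the test is repeated, mirroring the manual observation procedure.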
Figure
System flow chart.
When the ASODVS calculates the three-dimensional coordinates of a space point, it first needs to extract the laser points from the panoramic image. Laser point extraction algorithms include those based on the HSI color model, the interframe difference algorithm, and the three-frame difference algorithm; the choice is also closely related to the sampling speed of the ASODVS. The ASODVS inevitably has some errors in the calculation of the three-dimensional coordinates of space points; statistically, the maximum error of the distance from a point in the cloud to the single viewpoint is within 3%. In most cases, the laser generator, the measured object, and the camera are in different spatial positions, which is why the four coordinate systems mentioned above cannot form a coincident, parallel, or perpendicular relationship. Since the four coordinate systems cannot form a simple relationship, registration and conversion between coordinate systems is a necessary step in the process of measurement and reconstruction. Objects in the world coordinate system pass through the camera coordinate system, the ideal image coordinate system, and the real image coordinate system and are finally imaged in the digital image coordinate system.
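The three-frame difference idea mentioned above can be sketched in a few lines: a pixel is kept as a candidate laser point only if it changes against both the previous and the next frame, which suppresses static background. The threshold T below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Minimal sketch of three-frame differencing for isolating the moving laser
# points in a panoramic image sequence. Threshold T is illustrative.
def three_frame_difference(prev_f, curr_f, next_f, T=25):
    d1 = np.abs(curr_f.astype(np.int32) - prev_f.astype(np.int32)) > T
    d2 = np.abs(next_f.astype(np.int32) - curr_f.astype(np.int32)) > T
    return d1 & d2   # binary mask of candidate laser pixels
```

The resulting mask can then be refined (e.g., by the HSI color criterion) before the surviving pixels are triangulated into 3D coordinates.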
The ASODVS can scan the indoor scene volume and realize the reconstruction of the scene, and it is the main platform used in this experiment. This section mainly elaborates the method of acquiring and modeling 3D information of indoor scenes. With the research and development of computer three-dimensional graphics, point-based three-dimensional models have attracted the attention of many researchers. Therefore, a relatively simple point model method was chosen.
The 3D mesh model is obtained by processing the point cloud data. Generally, it is necessary to construct the topological structure of the 3D point cloud in order to perform neighborhood operations on each point; on this basis, algorithms can be used to obtain the corresponding 3D mesh model. The 3D mesh model is currently the mainstream method of 3D modeling. It contains the topological relationships between points and can better reflect the geometric information of the object's surface. Based on the above considerations, this work uses both a 3D point cloud model and a mesh model to reconstruct the indoor 3D scene. Moreover, because the three-dimensional point cloud data obtained by the ASODVS are ordered, the system can better meet real-time requirements without constructing a topological structure.
The ODVS in this article uses a
In this paper, an experiment was conducted in a scene of about 25 square meters arranged in the lobby of a teaching building of a university, and the indoor scene was captured with a full-frame sensor for data collection.
The ASODVS software is used to implement the algorithm calculations; different parameters are applied to observe the modeling effect on the display and to evaluate the impact of different image formats on the visual experience under different parameters.
In the 3D display and drawing of the scene, virtual reality technology is adopted because the experimental effect requires the three characteristics of immersion, interaction, and imagination. This technology is realized by integrating multiple techniques, such as graphics, digital image processing, and three-dimensional modeling, and it enables people to observe and experience realistic environmental scenes and interact with them.
Before acquiring the point cloud data, the ASODVS is first placed in the center of the scene. The sharpness evaluation method is not only an important link in measuring the quality of a digital image but also the basis for realizing the automatic focus of a digital imaging system, as well as an important means of judging the imaging quality of a digital imaging device. A good evaluation system affects the quality and efficiency of data collection: it eliminates the influence of subjective factors among different evaluators, ensures the consistency of evaluation standards, and facilitates comparison and optimization between different algorithms, so that subjective and objective evaluations are consistent, which has high application value. Based on this, this article studies a class of sharpness evaluation problems for document images collected by high-speed scanners.
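One common no-reference sharpness measure, shown here purely to illustrate score-based clarity evaluation, is the variance of a Laplacian response: sharp images have strong local intensity changes and hence a large variance. This is a generic sketch, not necessarily the evaluation function used in this paper.

```python
import numpy as np

# Variance-of-Laplacian sharpness score (4-neighbor Laplacian via np.roll).
# A generic illustration of clarity scoring, not this paper's exact metric.
def laplacian_variance(gray):
    img = np.asarray(gray, dtype=float)
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())   # larger variance -> sharper image
```

Applying a fixed threshold to such a score then separates "clear" from "blurred" images, which is the thresholding scheme whose trade-offs the experiments examine.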
The azimuth of the
Figures
3D point cloud model.
Arc part
Gate part
Color obstacle part
Virtual display technology.
Compared with Figure
In this paper, an experiment was conducted on the method of establishing a grid model from an ordered point cloud. From the steps of the algorithm, it is known that the points on each scan slice need to be connected with their neighboring points in the specific implementation. In this paper, the line geometry in Java3D is used to connect the point cloud data to construct a grid. The grid model established by this method needs neither normal vector calculation nor topological structure establishment, and it is generated from the point cloud data in real time. Since there are several ways to split quadrilateral meshes into triangular meshes, two triangular mesh models of the indoor scene can be obtained.
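The neighbor-connection idea described above can be sketched language-independently: in an ordered point cloud laid out as rows of scan slices, each quadrilateral cell is split into two triangles by connecting a point with its right and lower neighbors. This Python sketch mirrors that connectivity rule; it is not the paper's Java3D code, and choosing the other diagonal yields the second triangular mesh mentioned above.

```python
# Connect an ordered (rows x cols) point grid into a triangle mesh:
# each quad cell is split into two triangles along one diagonal.
def grid_to_triangles(rows, cols):
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j          # indices of the quad's four corners
            b = a + 1                 # right neighbor on the same slice
            c = a + cols              # same column on the next slice
            d = c + 1
            tris.append((a, b, c))    # split the quad along the b-c diagonal
            tris.append((b, d, c))
    return tris
```

Because the connectivity follows directly from the scan order, no normal vectors or topological search structures are needed, which is what makes real-time generation feasible.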
Virtual reality technology must have the three characteristics of immersion, interactivity, and imagination, and the relationship is shown in Figure
According to ergonomic principles, the right-viewpoint image can be calculated by using the perspective view as the left-viewpoint image, and vice versa; alternatively, if the generated perspective view is used as the central-eye image, both left and right viewpoint images need to be generated to produce a stereo image pair. This article adopts the first method: the abovementioned perspective view is taken as the left viewpoint, and the disparity map is then constructed by obtaining the right-viewpoint perspective view to realize stereo display.
Taking the single viewpoint Om (0,0,0) of ASODVS as the coordinates of the left eye viewpoint, applying formula (
According to the previous description of perspective drawing with the observer as the center, we first need to determine the initial azimuth angle corresponding to each viewpoint. Through the spatial relationship between the left and right viewpoints, when the initial azimuth angle corresponding to the left viewpoint is known, the initial azimuth angle corresponding to the right viewpoint can be calculated. The observer-centered stereo map drawing algorithm is described as follows:
Determine the respective initial azimuth angles of the left and right viewpoints and read the minimum incident angle
Perform perspective display calculation on the point cloud data corresponding to the left and right viewpoints to obtain two perspective display matrices PL1 and PR1. The calculation method is not repeated here.
Judge whether to display the grid; if so, construct the grid model; otherwise, continue.
Determine whether
Assign the color value
End; a new matrix PS of the corresponding size is obtained.
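The final composition step, producing the matrix PS from the two perspective display matrices, can be sketched as a red-blue channel merge. The red-left / blue-right channel assignment below is the common anaglyph convention and is assumed here rather than stated in the paper.

```python
import numpy as np

# Sketch of red-blue anaglyph composition: merge the left and right
# perspective display matrices into one RGB matrix PS for viewing with
# red-blue glasses. Channel assignment is an assumed common convention.
def compose_anaglyph(pl, pr):
    """Combine two grayscale perspective matrices into an RGB anaglyph."""
    ps = np.zeros(pl.shape + (3,), dtype=pl.dtype)
    ps[..., 0] = pl   # red channel: left-viewpoint image
    ps[..., 2] = pr   # blue channel: right-viewpoint image
    return ps
```

Viewed through red-blue glasses, each eye receives only its own viewpoint image, and the disparity between the two channels produces the depth impression.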
Based on the perspective display drawing of the indoor scene model, a 3D stereo display rendering of the indoor scene can be obtained through red-blue stereo display technology, similar to the perspective display drawing. It is also possible to perform a perspective transformation on the basis of the stereoscopic display to realize a stereoscopic display of the panoramic scene. Observation with red-blue 3D glasses achieves a 3D stereoscopic display of indoor scenes. The stereoscopic display in this article can achieve a certain 3D effect, but it cannot completely simulate fine stereoscopic effects such as those of movies.
In order to better test the impact of high-tech image formats on visual experience and film and television production, we designed the experimental group (
As shown in Figure
Operation of 3D point cloud model (data
As shown in Figures
Operation of 3D point cloud model (data
In order to explore the impact of different lattice parameters on the clarity of 3D modeling, we set a total of five sets of 3D point cloud data as shown in Table
3D point cloud data parameter setting.
Type | x | y | z | R | G | B | Clarity point
---|---|---|---|---|---|---|---
Data 1 | 252.78 | 11.13 | 30.0 | 0.5490196078431474 | 0.5491196078631383 | 0.6470588236294119 | 0.779844
Data 2 | 252.87 | 12.77 | 30.0 | 0.6470588234296118 | 0.6470589235274112 | 0.7690176078441379 | 0.659842
Data 3 | 252.78 | 14.37 | 30.0 | 0.6470788235294118 | 0.6970588235294118 | 0.7497196078431373 | 0.485497
Data 4 | 252.78 | 15.97 | 30.0 | 0.516078431372549 | 0.5382352941176471 | 0.6731960784313725 | 0.171234
Data 5 | 252.78 | 17.57 | 30.0 | 0.5872352941176471 | 0.5676274509803921 | 0.6789235294117647 | 0.021564
In the stored data, two rows represent one point: the first row gives the three-dimensional space coordinates of the point, and the second row gives its three color components.
All six attribute values are saved in one class when stored. It is worth noting that, owing to the limitation of the site space when selecting points, the laser line emitted by the ASODVS lies on the horizontal plane, so the three-dimensional coordinates of spatial points on that horizontal plane cannot be calculated.
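The storage scheme described above (three coordinates plus three color components kept together in one class) can be sketched as follows; the class, field, and method names are illustrative assumptions, since the paper does not give its identifiers.

```python
from dataclasses import dataclass

# Sketch of the six-attribute point storage described above.
# Names are illustrative; the paper does not specify its field names.
@dataclass
class ScenePoint:
    x: float
    y: float
    z: float
    r: float
    g: float
    b: float

    @classmethod
    def from_rows(cls, coord_row, color_row):
        """Build a point from the two stored rows: (x, y, z) then (r, g, b)."""
        return cls(*coord_row, *color_row)
```

Keeping coordinates and color in one record preserves the one-to-one correspondence between geometry and color that the reconstruction requires.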
After putting the set data into the matrix calculation and comparison, a statistical graph is made from the software clarity score statistics, as shown in Figure
Software clarity point.
252.76, 11.12, 30.0,
0.5490196078431373, 0.5490196078431373, 0.6470588235294118
252.86, 12.72, 30.0,
0.6470588235294118, 0.6470588235294118, 0.7490196078431373
252.77, 14.31, 30.0,
0.6470588235294118, 0.6470588235294118, 0.7490196078431373
...
It can be seen from the statistical graph that the sharpness data is closely related to the
When
Based on the above research results, we ignore the two parts of coordinate
After rearranging the data, make a statistical graph based on the software clarity score again, as shown in Figure
The relationship between
Analyzing Figure
This shows that changing the value of the
In order to test the effectiveness of the evaluation algorithm, we evaluated 4000 onsite document images. The images were scanned using Belllink's copi8000 at 100 dpi with 256-level grayscale, and the set contained 267 blurred images. The experiment obtained relatively ideal results. Taking the detection of blurred images as an example, different threshold parameters can be selected to evaluate the sharpness of the images. The experimental results are shown in Table
Clarity test result.
Quantity | Threshold | Detection amount | Subjective evaluation | Missed
---|---|---|---|---
4000 | 0.50 | 742 | 267 | 0
4000 | 0.80 | 322 | 267 | 6
4000 | 0.90 | 104 | 267 | 144
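The trade-off in the table above can be expressed with two small helpers: lowering the threshold flags more images as blurred (over-detection), while raising it lets truly blurred images slip through (missed detection). The helper names and example scores are illustrative, not taken from the paper's implementation.

```python
# Threshold-based clarity grading and missed-detection accounting,
# sketched with illustrative helper names.
def classify(scores, threshold):
    """Scores below the threshold are judged blurred; the rest are clear."""
    blurred = [s for s in scores if s < threshold]
    clear = [s for s in scores if s >= threshold]
    return clear, blurred

def missed_rate(missed_count, blurred_total):
    """Fraction of truly blurred images that the detector failed to flag."""
    return missed_count / blurred_total
```

Choosing the threshold thus amounts to balancing the false-alarm count against the missed-detection rate, which is what the comparison against subjective evaluation settles.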
As shown in Table
In order to further illustrate the superiority of the method proposed in this article, it is compared with other general no-reference quality assessment methods, including BIQI, BRISQUE, DESIQUE, CORNIA, DIIVINE, QAC, and SSEQ. The test data are plotted in Figure
Performance comparison with the state-of-the-art general purpose NR quality metrics.
It can be seen from Figure
Experimental research shows that the ASODVS designed in this paper can quickly obtain the three-dimensional coordinate data and color data of the surfaces of all measured objects in the panoramic range at one time and can obtain the three-dimensional point cloud model and grid model of the scene from this information. When displaying the reconstruction results of the scene, the observer-centered method is adopted so that the observer can experience the scene immersively. Experiments show that the sharpness is only related to the abscissa.
No data were used to support this study.
The author declares that there are no conflicts of interest regarding the publication of this article.