Three-Dimensional Reconstruction of Mechanical Parts Based on Digital Holography

In the reconstruction of off-axis digital holograms of diffusely reflecting objects, the position of the positive first-order image cannot be located accurately because of the low quality of the reconstructed image. This paper addresses this problem and proposes a method for marking the first-order image on the reconstruction plane of the single fast Fourier transform (1-FFT) method. The influence of the illumination tilt angle was investigated, and the maximum relative measurement error obtained with standard objects is 5.6%. The multiaperture stitching technique in cylindrical coordinates is applied to digital holography, and the particle swarm optimization algorithm is used to transform the nonlinear equations into an optimization problem to solve for the stitching parameters. Finally, the 3D display of a typical rotary three-dimensional mechanical part is successfully realized by holographic stitching using the above methods.


Introduction
With the development of computers and charge-coupled devices (CCDs), three-dimensional object display based on digital holography has developed accordingly. Conventional optical methods for obtaining the three-dimensional contour of an object include the grating projection method and the optical-knife scanning method [1][2][3]. These methods have their own shortcomings, such as the need for a projection system or a scanning system. In contrast, the advantages of digital holography are obvious: it does not require a cumbersome process such as the film development of conventional holography, and it quantitatively obtains the phase information of the object, from which the three-dimensional contour can be derived. Yamaguchi et al. [4,5] obtained the three-dimensional contour of an object by using phase-shifting digital holography. This method eliminates the conjugate and zero-order images and can measure millimeter-scale objects, but it requires a highly stable environment and a high-precision phase shifter. In digital holography, the dual-light-source method makes a slight change in the angle of the illumination light [6][7][8][9] and obtains the three-dimensional contour of the object by calculating the phase difference between the reconstructed images before and after the change. However, two main problems remain in the current measurement process: (1) the reconstructed image of a weakly diffusely reflecting object has poor contrast because of its low light intensity, so it is difficult to accurately locate the positive first-order image of the object on the reconstructed image plane, which affects the subsequent acquisition of the interference phase map; (2) the imaging angle is limited by the size of the CCD array and the total number of pixels, so additional methods are needed to fuse the multiview contours.
The off-axis digital holographic reconstruction of diffusely reflecting objects is studied in this paper. Aiming at the problem that the light emitted by a weakly diffusely reflecting object has low intensity and poor contrast, so that the position of the positive first-order image in the reconstruction plane cannot be located accurately, a premarking method for the reconstructed image position of diffusely reflecting objects is proposed. By combining changes of the illumination tilt with the filtering-image four-fast-Fourier-transform (FIMG4FFT) method [10], the three-dimensional contour of the object at a given viewing angle is obtained. A calibration target with a height difference of 9 mm is used in a demonstration test. The results show that the measurement error is between 0.1 mm and 0.5 mm, and the maximum relative measurement error is 5.6%. The multiaperture stitching technique in cylindrical coordinates is applied to accomplish digital holographic 3D stitching. The particle swarm optimization algorithm is used to transform the nonlinear equations into an optimization problem to solve for the stitching parameters. The contours from single viewpoints are stitched together to realize the three-dimensional display of the whole object. Finally, the three-dimensional stitching of a rotary object in the cylindrical coordinate system is carried out and verified experimentally.

Principle of the Inclined Illumination Method.
Li and Peng [11] derived the basic formula for three-dimensional shape detection based on digital holography. Figure 1 shows the principle of digital holographic three-dimensional topography measurement. In Cartesian coordinates Oxyz, the plane z = 0 is defined as the reconstruction plane, which is parallel to the CCD window plane at a distance d_1 and tangent to the surface of the measured object.
According to statistical optics theory [12], when a non-optically-smooth surface is illuminated with coherent light, the scattered light field of the object surface can be regarded as the scattered light of a large number of scattering primitives, that is, the superposition of all the primitive scattered waves. The reconstructed light wave field of the entire object in the z = 0 plane can be expressed as [13]

O(x, y, 0) = C_R o(x, y, z) exp[jφ(x, y, z) + jφ_r − jkz].  (1)

In formula (1), |C_R o(x, y, z)|² denotes the intensity of the diffusely reflected light at the corresponding point on the object surface; since the intensity information is not the main concern here, it is not discussed further. j = √−1 and k = 2π/λ, where λ is the wavelength of the light; φ_r denotes the random phase, which varies over [−π, π]; and z is the reconstruction distance.
If the illumination direction in Figure 1 is parallel to the xOz plane and the angle between the illumination and the z axis is θ, the phase can be expressed as

φ(x, y, z) = k(x sin θ + z cos θ).  (2)

In formula (2), θ is the tilt angle of the illumination light. The study of absolute phase measurement shows that if a single wavelength is used, the illumination angle can be changed to obtain two reconstructed fields whose superposition forms contour lines, which realizes the measurement of the three-dimensional shape.
According to formula (2), when a light wave of wavelength λ illuminates the object as parallel light at the tilt angles θ and θ + Δθ, the phase distributions of the light field in the reconstructed object plane are, respectively,

φ₁(x, y, z) = k(x sin θ + z cos θ) + φ_r1,  (3)

φ₂(x, y, z) = k[x sin(θ + Δθ) + z cos(θ + Δθ)] + φ_r2.  (4)

In formulas (3) and (4), φ_r1 and φ_r2 denote random phases uniformly distributed over [−π, π].
Subtracting the two formulas gives

Δφ = φ₂ − φ₁ = k{x[sin(θ + Δθ) − sin θ] + z[cos(θ + Δθ) − cos θ]} + (φ_r2 − φ_r1).  (5)

When Δθ is very small, the same light wave irradiates the surface of the object and the scattering characteristics of the surface do not change much, so φ_r2 − φ_r1 is no longer a random phase uniformly distributed over [−π, π]. Because Δθ is small, cos Δθ ≈ 1, sin Δθ ≈ Δθ, and φ_r2 − φ_r1 = ε·2π (0 < ε < 1), which is the phase measurement noise, so formula (5) can be rewritten as

Δφ ≈ kΔθ x cos θ − kΔθ z sin θ + ε·2π.  (6)

Formula (6) shows that the phase difference consists of two parts. The second term is highly correlated with the surface of the object. The first term is linearly related to the spatial coordinate x and is called the linear tilt term; its value changes with the coordinate, and the phase difference it causes varies much more rapidly than that caused by the height of the object itself, which makes the phase difference Δφ severely wrapped. The phase-difference image of the object can therefore be obtained by removing the linear tilt term before phase unwrapping. After removal of the linear tilt term and denoising, the height of the object surface as a function of the coordinates can be written as

z(x, y) = λ Δφ′(x, y) / (2π Δθ sin θ),  (7)

where Δφ′ is the phase difference after removal of the linear tilt term. The quantity Λ_θ = λ/(Δθ sin θ) represents the change in object height corresponding to a 2π change of phase difference, and it is inversely proportional to the system sensitivity. When Λ_θ is greater than or equal to the maximum depth of the measured surface, the phase map is not wrapped.
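The height recovery of formula (7) and the contour interval Λ_θ can be sketched in a few lines of Python. This is a minimal illustration; the function and variable names are ours, not the authors':

```python
import numpy as np

def height_from_phase(dphi, wavelength, theta, dtheta):
    """Formula (7): surface height from the unwrapped phase difference,
    assuming the linear tilt term has already been removed."""
    return wavelength * dphi / (2 * np.pi * dtheta * np.sin(theta))

def contour_interval(wavelength, theta, dtheta):
    """Lambda_theta: the height change that corresponds to a 2*pi change
    of phase difference; smaller values mean higher sensitivity."""
    return wavelength / (dtheta * np.sin(theta))
```

As a consistency check, a phase step of exactly 2π must map onto one contour interval, i.e., height_from_phase(2π, …) equals contour_interval(…).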

FIMG4FFT Wavefront Reconstruction Method.
According to scalar diffraction theory [14] and the paraxial approximation, the Fresnel diffraction integral is often used in the wavefront reconstruction of digital holography. The Fresnel diffraction integral can be evaluated in two forms: Fourier transform and convolution. The Fourier-transform form can be computed with a single fast Fourier transform (FFT) and is called the 1-FFT method for short, but the area of the image reconstructed by the 1-FFT method is relatively small [15]. In order to obtain a high-quality reconstructed image in which the selected region of the object is displayed completely on the reconstruction plane, so that a high-quality interference phase can be obtained later, image-plane filtering [16] is applied to the 1-FFT reconstruction to free it from interference by other diffraction orders. To keep the resolution of the reconstructed image as large as that of the original hologram, a zero-filling operation around the image is necessary. Then an inverse diffraction operation propagates the interference-free field back to the hologram plane, a spherical wave is used as the reconstruction wave, and the angular spectrum diffraction method is used to reconstruct a high-quality digital holographic image. This method requires four FFT calculations (FIMG4FFT method for short) [17][18][19].
In the digital holographic recording system, take a Cartesian coordinate system Oxyz with the hologram in the z = 0 plane, the object plane at a distance z_0 from the hologram plane, and the digital hologram recorded by the CCD denoted I(x, y). The 1-FFT reconstruction light wave field can then be expressed by the following Fresnel diffraction integral [14]:

U(x, y) = [exp(jkz_0)/(jλz_0)] ∬ I(x_0, y_0) exp{jk[(x − x_0)² + (y − y_0)²]/(2z_0)} dx_0 dy_0.  (8)

In formula (8), λ is the wavelength of the light wave and k = 2π/λ. A reconstruction spherical wave with radius z_c is used to illuminate the hologram:

R_c(x, y) = exp[jk(x² + y²)/(2z_c)].  (9)

The light wave field of the interference-free digital hologram after propagating a distance z_i is

U_i(x, y) = F⁻¹{ F[ω(x, y) U*(x, y) R_c(x, y)] exp[jkz_i (1 − (λf_x)² − (λf_y)²)^{1/2}] }.  (10)

In formula (10), ω(x, y) denotes the window function of the hologram, U*(x, y) denotes the object field in the z = 0 plane corresponding to the selected local image of the object on the image plane, and f_x, f_y denote the frequency coordinates corresponding to x, y.
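The single-FFT evaluation of the Fresnel integral of formula (8) can be sketched as follows. This is a minimal illustration with our own names; the full FIMG4FFT chain additionally zero-pads the filtered image, propagates it back to the hologram plane, and re-propagates with the angular spectrum method, for four FFTs in total:

```python
import numpy as np

def fresnel_1fft(hologram, wavelength, z0, pitch):
    """Evaluate the Fresnel integral of formula (8) with one FFT:
    multiply the hologram by a quadratic phase chirp, then take a single
    2-D FFT. Constant phase factors in front of the integral are dropped."""
    k = 2 * np.pi / wavelength
    ny, nx = hologram.shape
    y, x = np.mgrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    chirp = np.exp(1j * k * ((x * pitch) ** 2 + (y * pitch) ** 2) / (2 * z0))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
```

Since the chirp has unit modulus, the transform conserves the hologram's energy (up to the usual FFT normalization), which is a convenient sanity check on an implementation.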
In the above formula, if the reconstruction distance z_i satisfies

1/z_i = 1/z_c + 1/z_r,  (11)

where z_r is the radius of the reference wave surface, then the angular spectrum diffraction of formula (10) yields the reconstructed image of the object field with magnification M = z_i/z_0.
In the off-axis digital holographic reconstruction system, two basic conditions must be met: the Nyquist sampling theorem and the separation condition of the positive and negative first-order images [19]. When these two conditions are met, the positive first-order image in the digital hologram can be reconstructed. The marking method for the positive first-order image on the 1-FFT plane is based on these two conditions and is as follows, taking the virtual nut shown in Figure 2 as the measured object.
First, the surface of the nut, which reflects mainly specularly, is recorded by the CCD, and the position of the positive first-order image on the 1-FFT plane is adjusted so that the positive and negative first-order images are separated from the zero-order image; the position of the first-order image on the 1-FFT plane is then recorded. The hologram captured with specularly reflected light is shown in Figure 2(a). Then the nut is rotated slightly without changing its position, so that the specularly reflected light misses the CCD and the diffusely reflected light from the nut enters it. The hologram captured with diffusely reflected light is shown in Figure 2(b). Because the nut is fixed on the rotating stage, it only rotates and its spatial position does not change, so the angle of the reference light does not change, and the diffuse-reflection image is therefore obtained at the same position on the 1-FFT reconstruction plane.

Multiaperture Stitching Model in Cylindrical Coordinates.
Rotary three-dimensional objects are not easy to express in Cartesian coordinates, and the range of the measured aperture is limited: the central angle of the maximum measurable surface cannot exceed 180 degrees. As shown in Figure 3, a cylindrical coordinate system (Oρθx) corresponds to the rectangular coordinate system (Oxyz). Assume that a point P in space has coordinates P(x, y, z) in the rectangular system and corresponding coordinates P(ρ, θ, x) in the cylindrical system, where ρ is the perpendicular distance from point P to the X axis and θ is the angle between the projection of that perpendicular onto the yOz plane and the Z axis. The relationship between the two can be expressed as [20,21]

ρ = (y² + z²)^{1/2},  θ = arctan(y/z),  x = x.  (12)

In both coordinate systems, the core of multiaperture stitching is coordinate transformation, but in the cylindrical coordinate system there is rotation between adjacent areas, so the least-squares solution of a system of nonlinear equations must be found when solving for the relative position relationship between two adjacent viewpoints. When the object rotates, there are motion errors in the mechanical equipment, so when it moves from one perspective to the next, the adjacent overlapping areas fail to coincide and the coordinate origin and axes are offset. The coordinate axes do not coincide, as shown in Figure 4, where the X₂ axis of the second coordinate system has direction cosines (cos α, cos β, cos γ) with respect to the first coordinate system; the coordinate origins also do not coincide, as shown in Figure 5.
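The coordinate correspondence of formula (12) is a direct transcription; the helper names below are illustrative, not from the paper:

```python
import math

def cart_to_cyl(x, y, z):
    """Formula (12): rho is the distance from P to the X axis; theta is the
    angle of the perpendicular's projection in the yOz plane, measured
    from the Z axis."""
    return math.hypot(y, z), math.atan2(y, z), x

def cyl_to_cart(rho, theta, x):
    """Inverse mapping back to rectangular coordinates."""
    return x, rho * math.sin(theta), rho * math.cos(theta)
```

The two functions are exact inverses, so a round trip reproduces the original point.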
To describe this offset, project the two cylindrical surfaces onto a plane of constant x, giving two circles with different centers, say (y₁, z₁) and (y₂, z₂). The distance between the two centers can be expressed as

ρ_0 = [(y₂ − y₁)² + (z₂ − z₁)²]^{1/2},  (13)

and the orientation between the two centers is

θ_0 = arctan[(y₂ − y₁)/(z₂ − z₁)].  (14)

In formula (14), ρ_0 and θ_0 denote the distance and orientation between the centers of the two circles formed by the intersection of the two cylinders with the plane x₁ = 0, as shown in Figure 5.
According to the overlapping area between two subapertures and the geometric relationship between the cylindrical coordinate systems, the transformation between the two coordinate systems can be obtained as formula (15), in which φ is the angle of rotation about the X₁ axis between the two coordinate systems. The parameters ρ_0, θ_0 and any two of the direction cosines (cos α, cos β, cos γ) (since cos²α + cos²β + cos²γ = 1) represent the relative positional relationship of the two measurement views in their respective coordinate systems. Because there are four unknowns, four points in the overlapping region suffice for a solution. However, in order to reduce the influence of noise, several points of the overlapping region are usually substituted into the transformation to form a system of nonlinear equations, and by solving for the least-squares solution we obtain the parameters ρ_0, θ_0 and the direction cosines (cos α, cos β, cos γ). The measurement results from the different viewing angles, i.e., the local coordinates, can then be transferred into the global coordinate system by coordinate transformation to achieve multiview stitching. The stitching model in cylindrical coordinates is suitable for measuring rotary parts, but its main disadvantage is that the equations are severely nonlinear, which makes the solving process complicated. To simplify the process while preserving the above advantages, this paper adopts a multiaperture stitching technique in the cylindrical coordinate system based on the particle swarm optimization algorithm, which converts the nonlinear problem into an optimization problem: it establishes the objective function, determines the relevant parameters, and solves for them, after which the stitching parameters are obtained.

Principle of Particle Swarm Optimization.
Ideally, the surface shapes of the two overlapping subaperture areas are identical, but in the actual measurement process they are not equal, owing to mechanical motion errors. Assume that the datum shape of the overlapping area is ρ_base and that the surface shape to be stitched to the datum is ρ = f(ρ_0, θ_0, α, β); the difference between the two is the surface-shape residual:

Δρ = ρ_base − f(ρ_0, θ_0, α, β).  (16)

In order to reduce the influence of the error, the least-squares cylinder is used to describe the actual cylinder. The least-squares cylinder is the one for which the sum of the squares of the radial distances from the points on the actual cylinder is a minimum, that is,

min Σ_{i=1}^{N} Δρ_i².  (17)

In formula (17), N is the number of points in the overlapping region. When the sum approaches its minimum value of 0, the corresponding (ρ_0, θ_0, α, β) is the solution of equation (16).
Some parameters of the particle swarm optimization algorithm must be determined. From the relevant literature [22], when c₁ and c₂ are nonzero the algorithm is called a complete particle swarm algorithm, and such values more easily maintain the balance between convergence speed and search quality, which is the better choice, so this paper sets c₁ and c₂ to random numbers in [0, 1]. Shi and Eberhart discussed the influence of the inertia weight over [0, 1.4] [23] and concluded that when ω ∈ [0.8, 1.2] the convergence speed increases, while when ω > 1.2 convergence fails. Therefore, ω is chosen as 0.96 in this paper. The flowchart of the particle swarm algorithm is shown in Figure 6.
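The optimizer described above can be sketched as follows. This is our own minimal implementation, not the authors' code: c₁ and c₂ are redrawn from [0, 1] at every update and the inertia weight is fixed at ω = 0.96, as in the text. For the stitching problem the objective would be the sum of squared residuals of formula (17); in the usage test a simple quadratic bowl stands in for it.

```python
import numpy as np

def pso(objective, lo, hi, n_particles=30, n_iter=500, w=0.96, seed=1):
    """Minimal particle swarm optimiser (minimisation) over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        c1, c2 = rng.uniform(0, 1, 2)             # random c1, c2 in [0, 1]
        r1, r2 = rng.uniform(0, 1, (2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                # keep particles inside bounds
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

For the real stitching problem, objective(p) would evaluate Σ Δρᵢ² over the overlap points for a candidate parameter vector p = (ρ₀, θ₀, cos α, cos β).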
Through the analysis of the principle and the overall idea of this paper, the flowchart of the whole framework is shown in Figure 7, and it was verified experimentally. Figure 8 shows the optical path for measuring the three-dimensional topography of an object by the tilted-illumination method, and Figure 9 shows a photograph of the setup. MO is the microscope objective and PH is the pinhole; together they form a pinhole spatial filter that turns the laser beam into an ideal point source. BS1 and BS2 are beam splitters, M1, M2, and M3 are mirrors, BA is a beam attenuator, and L1 and L2 are lenses with focal lengths of 100 mm and 200 mm, respectively. Lens L2 is mounted on a high-precision displacement platform (PI) with a translation accuracy of 20 μm; moving it perpendicular to the illumination direction changes the angle of the illumination light slightly. The light path is as follows: the laser light passes through MO and PH to become uniform parallel light. The parallel light is divided into two paths by BS1; one path passes through BA and mirror M3 and enters the CCD as the reference light, while the other is reflected by M1 and M2 and focused and collimated by lenses L1 and L2 on the displacement platform before illuminating the object; finally, the CCD receives the light scattered from the object surface. The experiment uses a green laser with a wavelength of 532 nm. The CCD has 2748(H) × 3664(V) pixels with a pixel pitch of 1.67 μm, giving a sensor size of 4.6 mm(H) × 6.1 mm(V); the illumination tilt angle is θ = 11°, the recording distance is 290 mm, the travel of lens L2 on the translation stage is 30 μm, and the resulting change of the illumination angle is Δθ = 0.03/200 rad ≈ 0.0086°.
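With the experimental parameters listed above, the angle change and the resulting contour interval Λ_θ = λ/(Δθ sin θ) can be checked directly; this is plain arithmetic, with no assumptions beyond the quoted values:

```python
import math

wavelength = 532e-9            # m, green laser
theta = math.radians(11)       # illumination tilt angle
dtheta = 30e-6 / 0.200         # 30 um lens travel over the 200 mm focal length
print(math.degrees(dtheta))    # ~0.0086 degrees, matching the quoted value
Lambda_theta = wavelength / (dtheta * math.sin(theta))
print(Lambda_theta * 1e3)      # ~18.6 mm contour interval
```

Λ_θ ≈ 18.6 mm exceeds the 9 mm height of the calibration target used later, consistent with the condition on Λ_θ stated in the principle section.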
The relation between the reference light and the object light meets the spectral separation condition and the sampling theorem [24] for off-axis digital holograms, and the intensity ratio of the reference light to the object light meets the optimum value of 1:1.

The Effect of Specular Reflection and Diffuse Reflection on Reconstructed Image Quality.
In this experiment, a nut and a stud are used. The nut is screwed onto the stud and the two are fixed together on the rotary stage, which is mounted on the pneumatic platform. Because the nut surface produces both specular and diffuse reflection, the quality of the reconstructed image is affected, and this quality directly affects the subsequent interference fringe image and hence the three-dimensional shape measurement. When the reconstructed image quality is low and the contrast poor, it is not easy to find the image formed by the object on the 1-FFT plane even with the image-plane filtering technique. Figure 10 shows the reconstruction on the 1-FFT image plane using the specular and the diffuse reflection of the nut, respectively; the angular-spectrum reconstruction is obtained after image filtering. The image of the diffusely reflecting object on the 1-FFT plane is very unclear, so the proposed marking method for the 1-FFT reconstruction plane is applied. In this experiment, the position of the first-order image on the 1-FFT reconstruction plane is easily found using the specular reflection of the nut. The nut is then rotated so that the specularly reflected light misses the CCD and the diffusely reflected light enters it. Because the nut is fixed on the rotating stage, it only rotates and its spatial position does not change, so the angle of the reference light does not change, and the diffuse-reflection image is obtained at the same position on the 1-FFT reconstruction plane. Figure 11 shows the flowchart of the marking method for the positive first-order image on the 1-FFT reconstruction plane. Figure 12 compares the reconstructed images from specular and diffuse reflection.
It can be seen that when the specular reflection is strong, only one face of the nut is visible in the reconstructed image. As the specularly reflected light gradually weakens, the nut reconstructed from diffuse reflection shows a better result. The quality of the reconstructed image directly affects the quality of the phase-difference image, because the fringes of the phase-difference image are produced by subtracting the phases of the two reconstructed images.

Calibration of Height Measurement Error.
In order to estimate the height measurement error of the digital holographic tilted-illumination method, two gauge blocks are combined to form a stepped object with a height difference of 9 mm; that is, two parallel planes form an object whose heights differ by 9 mm. As shown in Figure 13, a magnet is attached to the back of one block to hold the two blocks together. Because these are standard gauge blocks, their tolerance is very small, accurate to the submicron level; nevertheless, the height difference must still be verified by measurement. An electronic vernier caliper with an accuracy of 0.01 mm is chosen as the measuring tool, as shown in Figure 13. Figure 14(a) shows an enlarged view of the reading dial in Figure 14(b); the caliper's accuracy of 0.01 mm is sufficient to verify the 9 mm height difference between gauge block 1 and gauge block 2. The measurement procedure with the caliper is as follows.
First, the electronic vernier caliper is used to measure distance A in Figure 13, which is the total length.
Second, the caliper is similarly used to measure distance B in Figure 13, which is the partial length.
Third, subtracting B from A gives the height difference between gauge block 1 and gauge block 2.
Since the upper and lower surfaces of gauge blocks 1 and 2 are both polished, they can be measured as standard objects. The area marked by the red line in Figure 13 is the measured section; the whole height C (the height difference between gauge block 1 and gauge block 2) is measured. The measurement data are shown in Table 1. From the average of the measurements, the error of the standard gauge block is only 0.044%, so it can be used as a standard for calibration. The tilted-illumination method is then applied. First, two digital holograms are taken before and after tilting the illumination, and the 1-FFT reconstruction images are obtained from each. The interference-free FIMG4FFT reconstruction images are obtained by image-plane filtering. The phase-difference image is obtained by subtracting the phase images before and after the tilt; then the linear tilt term related to x is removed, and the result is filtered. Figures 15(a)–15(f) show this process. In order to display the result visually, the height-difference portion is extracted for analysis. The contour images corresponding to the red rectangular frame and the red line inside it in Figure 15(f) are shown in Figures 16 and 17.
As can be seen from Figure 17, the height difference between the two planes is about 8.891 mm. In order to obtain more reliable results, phase curves at four randomly selected positions in the phase image are evaluated; the height measurement error ranges from 0.1 mm to 0.5 mm, and the maximum relative error is 5.6%.
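The quoted error figures are easy to verify with plain arithmetic on the numbers reported above:

```python
nominal = 9.0               # mm, calibrated height difference
measured = 8.891            # mm, value read from Figure 17
error = abs(nominal - measured)
print(round(error, 3))      # 0.109 mm, inside the reported 0.1-0.5 mm range
rel_max = 0.5 / nominal * 100
print(round(rel_max, 1))    # 5.6, the maximum relative error in percent
```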

Contour Acquisition and Mosaic of Rotary Three-Dimensional Objects.
The threaded part below the nut, a double-headed stud of size M6, is measured and stitched in three-dimensional digital holography. The flowchart for obtaining the object contour is shown in Figure 18. Because the stud is similar to a cylinder, a single viewing angle can capture half of its surface, corresponding to a circumferential angle of 180°. The whole stud is recorded four times: the first view is taken at 0°, and the rotating stage is then turned by 90° for the second view, so that the circumferential angle of the overlapping region between two adjacent views is 90° and the overlap is 50%. Following the steps of the flowchart, the contour of the double-headed stud is obtained. Figure 19 shows the contour of the stud measured by the tilted-illumination method, Figure 20 shows the phase distributions of the three-dimensional reconstruction of the stud from the different perspectives, and Figure 21 shows the three-dimensional morphology of the stud after stitching; the result is good.
As shown in Figure 21, the three-dimensional stitched contour map of the stud is complete, and the stitched shape is basically consistent with the original object. Although a quantitative error analysis between the real object and the stitched result has not been performed, visual comparison shows that the stitched object closely resembles the real one, which supports the correctness of the stitching model described above.
This technique is of great significance for the 3D reconstruction and stitching of digital holography.

Conclusions
The reconstructed image of a weakly diffusely reflecting object has poor contrast because of its low light intensity, so the position of the positive first-order image cannot be located accurately, which degrades the subsequent acquisition of the interference phase map. A premarking method is proposed in this study, which combines the FIMG4FFT method with changes of the illumination tilt. A calibration target with a height difference of 9 mm is used in a demonstration test; the results show that the measurement error is between 0.1 mm and 0.5 mm, and the maximum relative measurement error is 5.6%. The multiaperture stitching technique in cylindrical coordinates is applied to accomplish digital holographic 3D stitching, and the particle swarm optimization algorithm is used to transform the nonlinear equations into an optimization problem to solve for the stitching parameters. The contours from single viewpoints are stitched together to realize the three-dimensional display of the whole object. Finally, the three-dimensional stitching of a rotary object in the cylindrical coordinate system is carried out, and the experiment on a mechanical part has been conducted.

Conflicts of Interest
The authors declare that they have no conflicts of interest.