It remains a great challenge for structured light techniques to deal with surfaces exhibiting large reflectivity variations or specular reflection. This paper proposes a flexible, adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm. Multiple mask images are fused to obtain the required saturation threshold, and the interpolated prediction search algorithm is used to calculate the optimal projection gray-level intensity. Then the projection intensity is reduced to achieve coordinate matching under unsaturated conditions, and adaptive digital fringes with the optimal projection intensity are subsequently projected for phase calculation using the heterodyne multifrequency phase-shifted method. Experiments demonstrate that the proposed method is effective for measuring high-reflective surfaces and completely unwrapping the phase in locally overexposed regions. Compared with traditional structured light measurement methods, our method decreases the number of projected and captured images while achieving higher modulation and better contrast. In addition, the measurement process needs only two prior steps and avoids extra hardware complexity, which makes it more convenient to apply in industry.

The structured light technique has been widely used in academic research and industrial fields because of its advantages of full-field inspection, noncontact operation, low cost, and high precision [

Many methods have been developed for the 3D shape measurement of high-reflective surfaces [

In this paper, we propose an adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm to achieve 3D measurement of high-reflective surfaces with large reflectivity variations. According to the reflectivity characteristics of the measured surface, the captured images with valid uniform gray level are fused with the mask images, and the interpolated prediction search algorithm is used to calculate the optimal projection intensity at each pixel. Our method can therefore adaptively adjust the gray-level intensity, avoid saturation in the captured images, and maintain higher intensity modulation. Compared with traditional optical 3D measurement methods, our method achieves optimal fringe contrast, enabling complete reconstruction in overexposed regions and effectively solving the problem of 3D shape measurement of high-reflective surfaces. Moreover, it avoids additional hardware complexity and projector nonlinear gamma compensation.

The rest of the paper is arranged as follows. Section

Figure

The coordinate system of the phase-shifted fringe projection system.

The multifrequency heterodyne method is used to unwrap the phase and measure complex surfaces, since it works pixel by pixel. The measurement system sequentially projects phase-shifted fringe images at three different frequencies, and the phases φ_1, φ_2, and φ_3 are calculated by means of the four-step phase-shifted method. Then, the superposition phases φ_12 and φ_23 are obtained, and the unwrapped phase φ_123 is ultimately calculated from φ_12 and φ_23 by using the heterodyne algorithm. The schematic diagram is shown in Figure

Schematic diagram of multifrequency heterodyne.
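The phase calculation and heterodyne superposition described above can be sketched numerically as follows. This is a minimal illustration assuming the standard four-step model I_k = A + B·cos(φ + kπ/2); it is not the paper's exact implementation, and the values A = 100 and B = 80 are arbitrary.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    # Wrapped phase from four images I_k = A + B*cos(phi + k*pi/2), k = 0..3.
    return np.arctan2(I4 - I2, I1 - I3)

def heterodyne(phi_a, phi_b):
    # Superpose two wrapped phases; the result wraps with the longer
    # equivalent period T_ab = Ta*Tb/(Ta - Tb).
    return np.mod(phi_a - phi_b, 2 * np.pi)

# Example: recover a known phase from four synthetic fringe samples.
phi = np.linspace(0.1, 6.0, 50)                     # true phase in (0, 2*pi)
I = [100 + 80 * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = np.mod(four_step_phase(*I), 2 * np.pi)
```

In the pipeline above, φ_1, φ_2, and φ_3 would each be computed this way and then combined pairwise into the superposition phases φ_12 and φ_23, and finally into φ_123.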

In this case, “adaptive” refers to 3D shape measurement of surfaces with large reflectivity variations, under the influence of ambient light and complex surface interreflections; the method can calculate the optimal projection intensity at the pixel level [

The schematic diagram is shown in Figure

Schematic diagram of the adaptive digital fringe projection method.

The processing flow is further decomposed into six steps, as shown in Figure

Step 1: projecting uniform gray-level image sequences: a series of image sequences in uniform gray level are projected onto the measured surface, and the captured image sequences and the mask image sequences are obtained after acquisition

Step 2: obtaining valid gray-level images and mask images: the image sequences with valid uniform gray level are extracted by the mask image sequences to recalculate the gray level at each pixel and composite the final images

Step 3: determining the saturation threshold: the saturation threshold is set by calculating the maximum pixel gray level in the composite image

Step 4: calculating the optimal projection gray level: the optimal projection gray level is determined through the required saturation threshold and the interpolated prediction search algorithm

Step 5: camera-projector coordinate matching: the horizontal and vertical fringe sequences are projected onto the measured surface, and the absolute phase is calculated by the modulated fringe image under the low projection intensity to carry out camera-projector coordinate matching

Step 6: 3D reconstruction: the adaptive digital fringe sequences are projected onto the measured surface, and the unwrapped phase and the 3D reconstruction are achieved by using the phase-shifted method based on heterodyne multifrequency

Flowchart of the adaptive digital fringe projection method.
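Steps 1–4 of the flow above can be sketched as below. This is a simplified stand-in for the mask-fusion procedure, not the paper's exact algorithm: the names fuse_unsaturated and sat_thresh are illustrative, and the transition-compensation refinement described later is omitted.

```python
import numpy as np

def fuse_unsaturated(captured, sat_thresh=250):
    # `captured`: list of images taken under different projection
    # intensities. Pixels at or above sat_thresh are masked out as
    # saturated; the composite keeps the brightest valid gray level.
    stack = np.stack([np.asarray(im, dtype=float) for im in captured])
    mask = stack < sat_thresh           # inverse-binary mask sequences
    valid = np.where(mask, stack, 0.0)  # zero out saturated pixels
    return valid.max(axis=0)            # composite image
```

The maximum gray level of the resulting composite then serves as the saturation threshold used in the optimal-gray-level search.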

Since the projection intensity images have different brightness, the optimal projection gray level is obtained only if the captured fringe images are unsaturated. In this paper, the saturation threshold is obtained by the image fusion method, and the interpolated prediction search algorithm is used to obtain the optimal projection gray level. Our method can be described as follows:

Step 1: project the uniform gray-level image sequences I_i and their inverse sequences 255 − I_i, and capture the corresponding image sequences I_k(u^c, v^c)

Step 2: apply the threshold segmentation method. A pixel is set to zero if its gray level in the captured image I_k(u^c, v^c) is greater than the segmentation threshold; otherwise, the corresponding pixel of I_i(u^c, v^c) is retained

Among them, the segmentation threshold is determined as follows:

Step 3: design the inverse binary threshold and obtain the valid mask image sequences M_i(u^c, v^c)

where the largest threshold is determined as follows:

Furthermore, the mask image algorithm is improved by applying transition compensation in the saturation region: a transition compensation region is added around the local saturation region, within which the compensation value is gradually reduced, which in turn reduces the reconstruction error at the saturation region boundary. Suppose that I_{n−max} and I_{s−max} are the maximum projection gray levels of the saturated region and the adjacent region, respectively. The minimum surrounding ellipse algorithm is applied to the transition region. If the length of the overcompensated region is

In formula (

Step 4: composite the projection intensity images: the composite projection intensity image I(u^c, v^c) is obtained by fusing the valid gray-level image sequences I_i(u^c, v^c) with the mask image sequences M_i(u^c, v^c)

Step 5: calculate the optimal projection gray level. Because of the limitations of the threshold segmentation algorithm and of pixel-value acquisition in the overexposed region, the maximum gray level in the composited images is not yet the optimal gray level of the final adaptive fringe projection images. Therefore, the gray-level response curve of the camera-projector system needs to be computed, which can be written as

I_cam(u^c, v^c) = k_1 · I_pro(u^p, v^p) + k_2 + I_n,

where k_1 is the modulation factor among the projector, camera, and objects for the projection intensity; k_2 represents factors tied to the specific measurement conditions, such as ambient light; and I_n is the noise intensity. I_max denotes the maximum gray level in the composited image from Step 4, i.e., the camera response I_cam(u^c, v^c) at that pixel; [I_L] is the minimum value in the gray-level array, and [I_H] is the maximum value in the gray-level array. In each computational cycle, I_cam(u^c, v^c), [I_H], and [I_L] can be calculated. The Lagrange interpolation polynomial is constructed from the known parameters [I_H] and [I_L], and the position of the middle element (mid) is then predicted from the value to be looked up; this is the core idea of the interpolated prediction search algorithm. By comparing the target element I_max with the value of the middle element in the sequence, the search interval is narrowed until the optimal projection gray level I_optimal, which equals I_pro(u^p, v^p), is obtained.
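The interpolated prediction search over the monotonic gray-level response can be sketched as follows: a first-order (linear) Lagrange interpolation between the current interval bounds [I_L] and [I_H] predicts the probe index at each cycle. This is an illustrative reconstruction under that assumption, not the paper's exact code.

```python
def interp_predict_search(response, target):
    # `response`: monotonically increasing camera gray levels indexed by
    # projection gray level. Returns the index at which the search for
    # `target` converges.
    lo, hi = 0, len(response) - 1
    while lo < hi and response[lo] != response[hi]:
        # Predicted (mid) position from linear interpolation between
        # the interval bounds [I_L] and [I_H].
        mid = lo + int((target - response[lo]) * (hi - lo)
                       / (response[hi] - response[lo]))
        mid = min(max(mid, lo), hi)   # clamp the prediction into range
        if response[mid] < target:
            lo = mid + 1
        elif response[mid] > target:
            hi = mid - 1
        else:
            return mid
    return min(lo, len(response) - 1)
```

For a near-linear response the predicted probe lands almost exactly on the target, so far fewer cycles are needed than with a plain binary search.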

Two kinds of phase errors are mainly considered in the phase-shifted fringe. One is caused by additive random noise from the system, and the other is caused by the measured object with high reflectivity.

When the gray level is in the normal range, factors such as projection intensity, surface reflectivity, ambient light, and noise all affect the gray level captured by the camera. In practice, all of these factors are considered to remain constant except the noise, which is a random error and degrades the final unwrapped-phase quality. The relationship between the gray-level random noise I_n and the phase error it induces can be expressed as the following equation [ ], where I_m is the modulation intensity and I_n is the random noise of the camera. It can be concluded from this equation that the phase error induced by random noise decreases as the modulation intensity I_m increases.

Furthermore, we analyze the phase error caused by saturation of the gray-level image. For a camera with a bit depth of b bits, the maximum gray level of its captured image is limited to 2^b − 1.

In equation (

It can be concluded from this equation that the saturation-induced phase error decreases when the number of phase-shifted steps and the modulation intensity I_m are increased.

From the above analysis, we conclude that the phase error caused by saturation derives from the truncated gray levels, whereas the phase error at low gray levels derives from noise and is much smaller than the saturation-induced error. Therefore, if the phase can be calculated from unsaturated captured images, it can be used to achieve high-precision camera-projector coordinate matching.
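This conclusion can be illustrated numerically. The Monte Carlo sketch below (with illustrative values for the offset, modulation, and noise level, not values from the paper) compares the RMS phase error of the four-step algorithm under additive noise alone with the error when the fringes are clipped at the sensor's saturation level.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.linspace(0.1, 6.0, 2000)   # ground-truth phase samples
sigma = 2.0                          # camera noise std (gray levels)

def wrapped_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

def fringe_set(A, B, clip=None):
    # Four noisy phase-shifted fringes, optionally clipped at saturation.
    imgs = [A + B * np.cos(phi + k * np.pi / 2)
            + rng.normal(0.0, sigma, phi.size) for k in range(4)]
    if clip is not None:
        imgs = [np.minimum(im, clip) for im in imgs]   # sensor saturation
    return imgs

def rms_error(est):
    e = np.angle(np.exp(1j * (est - phi)))   # wrapped phase difference
    return np.sqrt(np.mean(e ** 2))

err_noise = rms_error(wrapped_phase(*fringe_set(120.0, 100.0)))            # unsaturated
err_sat = rms_error(wrapped_phase(*fringe_set(150.0, 130.0, clip=255.0)))  # clipped
```

With these values the saturated case clips the top of each fringe cycle, and its phase error clearly exceeds the noise-only error, consistent with the analysis above.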

As mentioned above, the optimal projection intensity only specifies the magnitude of the adapted intensity; its position in the projector pixel coordinate system is not yet determined. Therefore, resolving the absolute phase is the key step in mapping the camera pixel coordinate system to the projector pixel coordinate system.

In our case, a coordinate matching method for the projector-camera system is proposed, in which the overall projection intensity is reduced so that the captured images remain unsaturated. The optimal projection intensity can be found with the optimal gray-level calculation method of Section

The projector projects the adaptive fringe image sequences onto the measured surface, and the deformed fringe images contain the 3D shape information of the measured object. The phase-shifted method is applied to calculate and unwrap the vertical and horizontal absolute phases, where T_v and T_h are the periods of the adaptive transverse and longitudinal fringes, respectively.
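Under the usual convention that the absolute phase grows linearly across the projector (e.g., u^p = φ_v·T_v/2π), the per-pixel mapping can be sketched as follows. The symbol convention is an assumption, since the paper's exact mapping equation is elided.

```python
import numpy as np

def camera_to_projector(phi_v, phi_h, T_v, T_h):
    # Map each camera pixel to projector coordinates from the absolute
    # transverse/longitudinal phases and the fringe periods
    # (periods given in projector pixels).
    u_p = phi_v * T_v / (2.0 * np.pi)
    v_p = phi_h * T_h / (2.0 * np.pi)
    return u_p, v_p
```

Because both phases are absolute (fully unwrapped), the mapping is one-to-one for every camera pixel.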

The matching process is shown in Figure

Schematic diagram of coordinate matching between the projector and the camera.

The mapping accuracy depends on the accuracy of corner extraction and the quality of the absolute phase, rather than on the calibration accuracy of the camera parameters. Standard extraction functions are used to extract the corner coordinates, and we compensated the absolute phase error and accounted for miscalculated points at the fringe edges in order to ensure a high-quality absolute phase. These measures ensure a one-to-one correspondence between the camera and projector pixel coordinates.

Its matching accuracy is shown in Table

Reprojection error of camera pixels and projection pixels by our proposed method for Figure

Metric | Camera | Projector | System
Reprojection error/pixel | 0.1435 | 0.0953 | 0.1346

Since both the contour pixel extraction and the pixel gray level of a certain point are complex in an actual optical measuring system, several pixel points around the contour will affect the process. According to the method proposed above, the optimal projection gray level I_optimal can be obtained, which lies within the dynamic range of the camera, i.e., the highest gray level of the captured image will be unsaturated. Then, the adaptive fringe images I_i (i = 1, 2, …) with the optimal intensity are generated for projection.

The adaptive fringe images can be generated and calculated after mapping the coordinate system. In this case, our proposed method can precisely adjust the pixelwise projection intensity, avoid image saturation, and maintain higher intensity modulation for the high-reflective surface.
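The generation of the adaptive fringe images can be sketched as below, assuming the common pattern form I_k = (I_optimal/2)·[1 + cos(2πu/T + 2πk/N)]; the paper's exact pattern equation is elided, so this form is an assumption.

```python
import numpy as np

def adaptive_fringes(I_optimal, T, n_steps=4):
    # I_optimal: per-pixel optimal projection gray level (H x W array in
    # projector coordinates); T: fringe period in projector pixels.
    H, W = I_optimal.shape
    u = np.arange(W)[None, :]   # projector column index
    return [I_optimal / 2.0
            * (1.0 + np.cos(2.0 * np.pi * u / T + 2.0 * np.pi * k / n_steps))
            for k in range(n_steps)]
```

Every pattern stays within [0, I_optimal] at each pixel, so the captured fringes remain unsaturated while the modulation is kept as high as possible.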

To verify the feasibility and utility of the proposed method, the adaptive digital fringe 3D shape measurement system is set up, as shown in Figure

3D measurement system based on fringe projection.

Firstly, a series of uniform gray-level images I_i was projected onto the measured surface.

Then, the image sequences with valid uniform gray level and the corresponding mask image sequences could be calculated by equations (_{i} (^{c},

Schematic diagram of the synthetic image process.

The gray level of the composited image was taken as the set saturation threshold by the interpolated prediction search algorithm. Combining equations (

Contrast result of the fringe projection effect: (a) conventional method; (b) our method.

As the gray histogram could effectively reflect the frequency of each gray level in the image, it was commonly used to calculate the distribution of overexposed pixels. In order to highlight the local details of the image, we extracted only the distribution histogram of pixel gray levels within the small red frame selection in Figures

Comparison result of the gray histogram: (a) gray histogram of the image in Figure

In our experiment, the phase-shifted method based on heterodyne multifrequency was used to carry out the phase unwrapping. Three groups of fringes were projected, each consisting of four sinusoidal fringes with different phase shifts. We set the fringe frequencies to f_1 = 1/70, f_2 = 1/64, and f_3 = 1/59, and the corresponding wrapped phases are φ_1, φ_2, and φ_3, respectively. According to the heterodyne principle, the phases φ_12 and φ_23 were obtained by superimposing φ_1 with φ_2 and φ_2 with φ_3. Then, φ_12 and φ_23 were superimposed to obtain the unwrapped phase φ_123 with only one periodic phase over the whole field [
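With T1 = 70, T2 = 64, and T3 = 59 pixels (the reciprocals of the frequencies above), the equivalent beat periods follow from the heterodyne relation T_ab = Ta·Tb/|Ta − Tb|:

```python
def beat_period(Ta, Tb):
    # Equivalent period of the superposition (heterodyne) of two fringe
    # sets, in the same units as Ta and Tb.
    return Ta * Tb / abs(Ta - Tb)

T12 = beat_period(70, 64)     # 4480/6, about 746.7 pixels
T23 = beat_period(64, 59)     # 3776/5 = 755.2 pixels
T123 = beat_period(T12, T23)  # far longer than either beat period
```

T123 comfortably exceeds the projected field, which is why φ_123 can have only one period over the whole field.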

Flowchart of the heterodyne multifrequency phase-shifted method.

We analyzed the experimental results by extracting rows 0 to 945 from the final phase-unwrapping diagram. As can be seen from Figure

Partial line extraction phase unwrapping.

Furthermore, the 3D reconstruction was generated by applying the phase-height mapping relationships. The traditional 3D measurement method based on fringe projection and the proposed method were used to measure the same metal workpiece with high-reflective surface. The high-reflective objects with different materials and reflection coefficients were used to realize 3D reconstruction, and the reflection coefficient range was from 0.40 to 0.90 in our experiment.

The reflection coefficient of the first metal workpiece was from 0.40 to 0.50 [

Experimental results of the 3D reconstruction with reflection coefficient from 0.40 to 0.50: (a) traditional phase-shifted method; (b) our method.

3D point cloud of the depth map with reflection coefficient from 0.40 to 0.50: (a) traditional phase-shifted method; (b) our method.

The measurement output comprised the 3D point cloud, so the quality of the point cloud was the most important indicator for evaluating the performance of the measurement system. Accordingly, we measured the workpiece, merged the aligned point clouds by using the cloud-based triangular-mesh reconstruction algorithm, and then compared and analyzed the quality of the model data against the original point-cloud data. The deviation value was defined to be negative on one side of the reference plane and positive on the other. The maximum and minimum distances from the points to the plane were calculated with respect to the least-squares fitted plane. The average error and the standard deviation are calculated as shown in Tables
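The plane-based evaluation described above can be sketched as follows: a least-squares reference plane is fitted (here via SVD, one standard choice; the paper's exact fitting routine is not given) and the signed point-to-plane deviations are collected for the error statistics.

```python
import numpy as np

def plane_deviations(points):
    # points: (N, 3) array. Fit a least-squares plane through the
    # centroid; the singular vector with the smallest singular value is
    # the plane normal. Returns signed point-to-plane distances
    # (positive on one side of the plane, negative on the other).
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return (points - centroid) @ normal

def deviation_stats(d):
    # Extremes, average error, and standard deviation, mirroring the
    # quantities reported in the tables.
    return {"min": d.min(), "max": d.max(),
            "mean": d.mean(), "std": d.std()}
```

The signed convention makes the negative- and positive-direction extremes directly comparable to the tabulated values.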

Evaluation and analysis of the point cloud by the traditional phase-shifted method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −4.9650 | −0.6236 | 0.7501
Absolute direction | 4.9650 | 0.0163 | 0.1549
Forward direction | 3.6578 | 0.0089 | 0.1134

Evaluation and analysis of the point cloud by our proposed method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −3.3964 | −0.6236 | 1.2337
Absolute direction | 4.9624 | 0.0047 | 0.1104
Forward direction | 4.9624 | 0.0017 | 0.0692

From the quantitative analysis of Tables

We also measured a piece of machined aluminum with a reflection coefficient from 0.60 to 0.75 using the traditional ADFP method [

Comparison of experimental results of the 3D reconstruction with reflection coefficient from 0.60 to 0.75: (a) traditional ADFP method; (b) our method.

3D point cloud of the depth map with reflection coefficient from 0.60 to 0.75: (a) traditional ADFP method; (b) our method.

From the quantitative analysis of Tables

Evaluation and analysis of the point cloud by the traditional ADFP method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −3.9525 | −0.4587 | 0.4720
Absolute direction | 3.9525 | 0.0088 | 0.0843
Forward direction | 3.8243 | 0.0043 | 0.0544

Evaluation and analysis of the point cloud by our proposed method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −0.9415 | −0.3116 | 0.1994
Absolute direction | 0.9415 | 0.0014 | 0.0239
Forward direction | 0.8150 | 0.0007 | 0.0166

In addition, we measured a machined metal object with a reflection coefficient of 0.75–0.90. The raw measured object is shown in Figure

3D point cloud of the depth map with reflection coefficient from 0.75 to 0.90: (a) measured metal object; (b) traditional ADFP method; (c) our method.

From the quantitative analysis of Tables

Evaluation and analysis of the point cloud by the traditional ADFP method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −2.4104 | −0.6388 | 0.4115
Absolute direction | 2.7558 | 0.0019 | 0.0470
Forward direction | 2.7558 | 0.0011 | 0.0394

Evaluation and analysis of the point cloud by our proposed method for Figure

Direction | Maximal value (mm) | Average error (mm) | Standard deviation (mm)
Negative direction | −2.7106 | −0.4296 | 0.4486
Absolute direction | 2.7106 | 0.0014 | 0.0239
Forward direction | 0.9457 | 0.0007 | 0.0197

Overall, the above experiments show that the method proposed in this paper has a wide range of applications and can effectively solve the problem of 3D shape measurement of surfaces with large reflectivity variations.

An adaptive digital fringe projection method based on image fusion and an interpolated prediction search algorithm is proposed to solve the point-cloud-missing problem of 3D shape measurement over a wide range of reflectivity. For overexposed pixels on high-reflective surfaces, an appropriate gray-level intensity of the composite image is computed as the optimal projection intensity to avoid image saturation. Simultaneously, for dark pixels with low surface reflectivity, a high gray-level intensity is selected as the optimal projection intensity, which maintains a higher SNR. The experiments show that the proposed method achieves high measurement accuracy for high-reflective surfaces: the average error in the absolute direction is reduced by 84.1%, and the forward standard deviation is reduced by 69.4%. Our method needs only two prior steps for measuring a high-reflective surface, without projecting and capturing a large number of fringe images at multiple intensities or exposure times; it thereby avoids additional hardware complexity and makes the whole measurement easier to carry out and less laborious. However, the proposed method cannot yet be used in dynamic measurements; predicting the optimal projection intensity adaptively for dynamic scenes while maintaining high measurement accuracy is left for future work.

The data used to support the findings of this study are included within the article.

The authors declare no conflicts of interest.

This research was funded by the National Natural Science Foundation of China (Grant nos. 51805153 and 51675166) and the State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University (pilab1801).