Method of Combining Spectrophotometer and Optical Imaging Equipment to Extract Optical Parameters for Material Rendering

Optical parameters of materials are used for implementing cinematic rendering in the field of graphics. Critical elements for extracting these optical characteristics are the accuracy of the extracted parameters and the time required for extraction. In this paper, a novel method for improving both elements, together with a comparison to existing methodology, is presented. By using a spectrophotometer and custom-designed optical imaging equipment (OIE), our method enhances accuracy and accelerates computation by reducing the number of unknowns in the fitting equations. Further, we validate the superiority of the extracted optical characteristic parameters with a rendering simulation.


Introduction
Optical parameters, including absorption and reduced scattering coefficients, have played a key role in many diagnostic and therapeutic optical techniques [1,2]. Recently, a method for the practical physical reproduction and design of homogeneous materials with desired subsurface scattering has been presented. Pigment concentrations that best reproduce the appearance and subsurface scattering of a target material must be determined to achieve the reproduction and design goals [3]. Accurately conveying optical material properties when synthesizing artificial objects is a critical task. For instance, we have to consider optical parameters when rendering translucent materials such as soap, skin, and stones. Additionally, the optical parameters of a material are used for implementing cinematic rendering in the field of graphics [4-7]. The interreflections between objects and their environments and the subsurface scattering through the materials have to be considered to create realistic visual effects [8]. To do this, a prerequisite is to extract optical parameters that represent accurate optical properties, and this process needs to be fast as well. A combined model of diffuse interreflections and isotropic subsurface scattering has been presented [9]. The final appearance of a material is determined by the amount of light captured by our eyes after light enters the material and is scattered or absorbed by it. Thus, we need an accurate process to measure the amount of light that is observed. The critical unknown variables of the bidirectional subsurface scattering reflectance distribution function (BSSRDF) are the index of refraction, the reduced albedo, the absorption coefficient, and the reduced scattering coefficient. Since permeated light is not measurable in some materials, it is critical to be able to acquire the reflectance distribution of the reflected light using an optical device. By fitting the measured sample data obtained from an image-based measurement system, Lee et al. suggested a method to determine the reduced scattering coefficient and the absorption coefficient [10]. However, their method requires solving for two unknown variables.
Other new image-based measurement techniques have been proposed in which the index of refraction and the reduced albedo are estimated [11,12]. The two remaining unknowns (the absorption and reduced scattering coefficients) must then be solved for using Monte Carlo numerical methods. However, this approach requires a long fitting time. In addition, it is not clear whether these methods physically measure the index of refraction.
Hence, we need a time-efficient method that also outputs high-quality results using both the optical parameters and the index of refraction.
In this paper, we propose a method for extracting the optical parameters in less time than existing methods by removing an unknown variable, thereby also enhancing accuracy, while additionally acquiring the index of refraction.
To this end, we used a spectrophotometer and optical imaging equipment (OIE) designed and manufactured by the authors. First, the parameters of the BSSRDF were optimized for fitting in order to extract optical parameters from a high dynamic range image (HDRI) acquired by the OIE, thereby improving the fitting performance by measuring the reduced albedo directly with the spectrophotometer. We conducted experiments on a total of 32 different materials with various optical characteristics. Further, in order to verify the reliability of the measured parameters, we compared the rendering results obtained using the measured optical coefficients. We also validated our proposed method for extracting optical parameters using the rendering results of two different equations. In short, we show that our rendering results and mean squared error (MSE) are superior to those of existing methods.
The remainder of this paper is organized as follows. In Section 2, related works are discussed. In Section 3, we explain the measurement equipment used. In Section 4, comparative rendering results obtained with two rendering methods are shown in order to validate the extracted parameters. Finally, the conclusions are stated in Section 5.

Related Works
The optical characteristics discussed include the absorption coefficient σ_a, the reduced scattering coefficient σ′_s, and the index of refraction η. These parameters are used for determining the reflectance and permeability of a material along with the subsurface scattering distribution of incident light [9].
The general measurement process of the optical characteristics is as follows: a laser beam is incident normally (perpendicular) to the material, and then the permeability distribution of the permeated light is determined using numerical analysis or an analytic model [13,14].
A novel high-speed approach for the acquisition of bidirectional reflectance distribution functions (BRDFs) has been proposed. In this work the authors present a theory for directly measuring BRDFs in a basis representation by projecting incident light as a sequence of basis functions from a spherical zone of directions. By deriving an orthonormal basis over spherical zones, they obtain a process ideally suited to this task. BRDF values are deduced by reprojecting the zonal measurements into a spherical harmonics basis or by fitting analytical reflection models to the data [15].
To extract skin reflectance parameters, custom-built devices have been used. Three-dimensional face geometry, skin reflectance, and subsurface scattering were measured for 149 subjects varying in age, gender, and race. The authors present a novel skin reflectance model whose parameters are estimated from the measurements. The model fits the measured skin data with a spatially varying analytic BRDF, a diffuse albedo map, and diffuse subsurface scattering [16].
A steady-state imaging technique using non-normally incident light to determine the anisotropy parameter has also been presented. To achieve this, the authors fit Monte Carlo simulations to high dynamic range images of the intensity profiles of samples [12].
However, because permeated light cannot be used with some materials, there are methods that measure the reflectance distribution of the reflected light using an optical imaging device.These methods determine the reflectance coefficient by fitting the measured reflectance distribution to the BSSRDF, a rendering model that takes into account optical characteristics [10].

Equipment for Measurement.
In this section, we explain the measurement equipment used in this work. Two pieces of equipment were used: the spectrophotometer (CM-600D, Konica Minolta) and the optical imaging equipment (OIE) used to make the HDRI.
The refractive index η and the reduced albedo α′ were measured using the spectrophotometer. Figure 1(a) shows the spectrophotometer used, and Figure 1(b) illustrates the spectral reflectance graphs acquired by it. This equipment measures two types of reflectance at wavelength intervals of 10 nm: the specular component included (SCI) reflectance, which includes the specular component, and the specular component excluded (SCE) reflectance, which excludes it.
The scattering reflectance distribution could be obtained from the pictures taken with the OIE. Figure 2 shows the data obtained by the OIE designed by the authors and its specifications. This equipment uses lenses to focus various light sources to a point less than 1 mm in diameter on the material, and the focused light point is filmed in HDR by the CMOS camera. HDR filming controls the amount of light that the camera takes in while capturing the images; this can be done by controlling the camera's ISO value, aperture, or shutter speed. If the ISO value is increased, the camera's optical and light sensitivity increase, making it possible to take bright pictures at a low aperture and a low shutter speed as though they were taken in a considerable amount of light. However, the noise in the pictures increases correspondingly with the ISO value, so it is advantageous to set a low ISO value when imaging optical characteristics. In this work, the ISO value and the aperture were fixed, and HDR filming was implemented by taking multiple images while controlling only the shutter speed. In order to obtain the HDR image, multiple images were taken under the same conditions at varying exposure levels. An LED was used as the light source, and a total of 13 images were taken with the aperture set to 8.0 and the ISO value to 400; only the shutter speed was changed, from 1/600 s to 3 s. The obtained images were merged into an HDRI with 16 bits per RGB channel [17]. Then, according to the lighting characteristics, the level of each RGB channel was adjusted (Figure 3).

The process of extracting the optical characteristic parameters implemented in this study is outlined briefly as follows. First, the refractive index η is calculated with the spectrophotometer. Then the reduced albedo α′ is derived using Monte Carlo numerical methods. The third step involves merging the HDRI obtained with the OIE. The pixel values of the merged HDRI form the scattering reflectivity distribution, which can be expressed as R_d(r) according to the distance r from the focus-point pixel. However, since the pixels at a given distance from the focus point lie on a concentric circle, like the red circle in Figure 3(b), the pixels on each circle have to be averaged and converted into one value of R_d(r). The reflectivity is thus a function of r, and R_d(r) is used as the fitting data. Furthermore, in order to use the diffusion approximation as the fitting equation, fitting is performed after optimization using α′.
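The conversion of the merged HDRI into the radial profile R_d(r) can be sketched as follows; `radial_profile` is a hypothetical helper name, and the integer-radius binning is one reasonable way to average the pixels on each concentric circle:

```python
import numpy as np

def radial_profile(hdr, center, r_max):
    """Average the pixels lying on each concentric circle around the focus
    point, converting the 2D image into one reflectance value R_d(r) per
    integer radius r."""
    h, w = hdr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Round each pixel's distance to the focus point to the nearest integer
    # radius; all pixels sharing a radius form one sampling circle.
    r = np.rint(np.hypot(yy - center[0], xx - center[1])).astype(int)
    prof = np.zeros(r_max)
    for k in range(r_max):
        mask = r == k
        if mask.any():
            prof[k] = hdr[mask].mean()
    return prof
```

In practice this would be applied per RGB channel, and the profile truncated at r = 300 as described below.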
The following is a more detailed description of the optical parameter derivation process. First, we derive the refractive index η of the material from the SCI and SCE reflectance values obtained using the spectrophotometer. The specular intensity at normal incidence can be calculated as the difference of the two values, and the refractive index η can be calculated by applying this to Schlick's approximation [18]. Schlick's approximation relates the specular intensity at perpendicular incidence to the refractive index, and since the refractive index of air is known, the refractive index η between air and the material can be calculated. Second, in order to acquire the reduced albedo α′, the material color L*a*b*, measured using the spectrophotometer, is converted into RGB. This RGB is the same as the BRDF approximation's R_d value and can be expressed by [11]

R_d = (α′/2) (1 + e^{-(4/3) A sqrt(3(1-α′))}) e^{-sqrt(3(1-α′))},   (1)

where A = (1 + F_dr)/(1 - F_dr) and F_dr = -1.440/η^2 + 0.710/η + 0.668 + 0.0636η. If the refractive index η obtained above is applied, (1) has a single unknown, the reduced albedo α′. This value can be extracted inversely by using the Monte Carlo method. Third, σ′_s is derived by applying the fitting equation to the HDRI that was obtained from the OIE and merged. The fitting algorithm used is the Levenberg-Marquardt algorithm (LMA), a model for solving nonlinear least-squares problems [19]. Fitting minimizes the sum of squared errors between the function values and the measured data points; the nonlinear least-squares method finds a local minimum by repeatedly improving the parameter values of the function.
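The first two steps can be sketched as follows. The function names are hypothetical, the SCI - SCE difference is taken directly as the normal-incidence specular reflectance as described above, and a deterministic bisection on (1) stands in for the Monte Carlo inversion used in the paper (R_d in (1) is monotone in α′):

```python
import math

def eta_from_sci_sce(sci, sce):
    # SCI - SCE approximates the specular reflectance R0 at normal incidence;
    # inverting Schlick's R0 = ((eta - 1)/(eta + 1))^2, with air outside,
    # gives the refractive index eta.
    s = math.sqrt(max(sci - sce, 0.0))
    return (1.0 + s) / (1.0 - s)

def total_diffuse_reflectance(alpha_p, eta):
    # R_d as a function of the reduced albedo alpha' and refractive index
    # eta, as in eq. (1).
    fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a = (1.0 + fdr) / (1.0 - fdr)
    s = math.sqrt(3.0 * (1.0 - alpha_p))
    return 0.5 * alpha_p * (1.0 + math.exp(-4.0 / 3.0 * a * s)) * math.exp(-s)

def reduced_albedo(rd, eta, tol=1e-9):
    # R_d is monotone increasing in alpha', so bisection inverts (1);
    # a stand-in for the Monte Carlo inversion of the paper.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_diffuse_reflectance(mid, eta) < rd:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, SCI = 0.09 and SCE = 0.05 give R0 = 0.04 and hence η = 1.5, a typical value for plastics.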
In order to minimize the fitting time and to improve accuracy, σ_a was expressed as a function of σ′_s through the reduced albedo,

σ_a = σ′_s (1 - α′)/α′,   (2)

where we used the value of α′ calculated above. We then substituted the right-hand side of (2) for σ_a in the BSSRDF equation. Thus, the BSSRDF equation could be written with σ′_s as the only unknown by also substituting the η value calculated above; this optimized equation was fitted to the merged HDRI, and the value of σ′_s was calculated. The fitting was performed on the reflectivity distribution R_d as a function of the distance r. When the fitting was performed using the optimized formula in the Levenberg-Marquardt algorithm (LMA), the optimal σ′_s value was returned. Figure 4 shows the fitting process for a pink plastic. The three channels in the figure are red, green, and blue; the black dotted line indicates the sample spectrum, and the blue line is associated with the fitted variables. σ_a was then calculated by substituting the obtained σ′_s value and the previously obtained α′ into (2). Through the experiments, we determined that the optimal pixel distance converges to 300 pixels; hence, no reflectance appears after r = 300.
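The one-unknown fit can be sketched as follows. The standard dipole diffusion term (Jensen et al.) stands in for the paper's optimized BSSRDF fitting equation, and a coarse-to-fine grid search stands in for the Levenberg-Marquardt step; both substitutions, and the function names, are assumptions for illustration:

```python
import numpy as np

def dipole_rd(r, sigma_s_p, sigma_a, eta):
    # Standard dipole diffusion term R_d(r), a stand-in for the paper's
    # optimized fitting equation.
    sigma_t_p = sigma_s_p + sigma_a
    alpha_p = sigma_s_p / sigma_t_p
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_p)
    fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a = (1.0 + fdr) / (1.0 - fdr)
    z_r = 1.0 / sigma_t_p              # real source depth
    z_v = z_r * (1.0 + 4.0 / 3.0 * a)  # virtual source height
    d_r = np.sqrt(r**2 + z_r**2)
    d_v = np.sqrt(r**2 + z_v**2)
    return alpha_p / (4.0 * np.pi) * (
        z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3)

def fit_sigma_s(r, rd_meas, alpha_p, eta):
    # With alpha' measured, eq. (2) gives sigma_a = sigma_s'(1 - alpha')/alpha',
    # leaving sigma_s' as the single unknown. A coarse-to-fine scan stands in
    # for the Levenberg-Marquardt iteration of the paper.
    best, best_err = 1.0, np.inf
    grid = np.linspace(0.1, 10.0, 200)
    for _ in range(4):
        for s in grid:
            model = dipole_rd(r, s, s * (1.0 - alpha_p) / alpha_p, eta)
            err = np.sum((model - rd_meas) ** 2)
            if err < best_err:
                best, best_err = s, err
        grid = np.linspace(best * 0.8, best * 1.2, 200)
    return best
```

On noiseless synthetic data the single-parameter search recovers σ′_s to well under 1%, illustrating why removing the second unknown both stabilizes and accelerates the fit.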
A total of 32 samples were tested with the OIE and the spectrophotometer using the method proposed in this study. The extracted parameters were η, α′, and σ′_s. Table 1 presents the results for 7 of the 32 samples, including rock, plastic, and wood materials. The time spent on the fitting process was 0.2738 s on average, which is considerably faster than that required by existing methods [11,12].

Subsurface Color Method.
The color-rendering model was used for entering the optical parameters, after which the material's ultimate color was determined [20]. In this model, the integral of the phase function over the backward directions, the depth d, and the diffuse Fresnel reflectance at the bottom boundary appear [9]. The total diffuse reflectance R(d) is defined in terms of the diffuse Fresnel reflectance at the top boundary and the diffuse Fresnel reflectance at the bottom boundary. Lastly, the pixel value needed for rendering was calculated by multiplying the light intensity, computed as follows, by the result of the color model: I_r is the intensity of the reflected light, and k_d (diffuse coefficient), k_s (specular coefficient), and k_ts (translucent coefficient) can be optimized experimentally. The subsurface diffuse light coefficients differ according to the center of the object, the distance to the light source, and the size of the object. L represents the light intensity, while θ is the angle between the normal vector and the light vector. σ_x and σ_y are the standard deviations of the surface slope in the x and y directions, and δ is the angle between the normal vector N and the half vector H. φ is the azimuth angle of the half vector projected onto the tangent plane.
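The combination of the coefficients above into a final pixel value might look like the following minimal sketch. The full model of [9,20] is not reproduced here: the Blinn-Phong specular lobe, the simple back-lit translucent lobe, and the function name `shade` are all illustrative assumptions standing in for the paper's anisotropic terms:

```python
import numpy as np

def shade(normal, light_dir, view_dir, color, k_d, k_s, k_ts, shininess, sss):
    """Hedged sketch: the subsurface color model output `color` is modulated
    by a light intensity built from diffuse, specular, and translucent
    contributions weighted by k_d, k_s, and k_ts."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = l + v
    h /= np.linalg.norm(h)                               # half vector H
    diffuse = k_d * max(np.dot(n, l), 0.0)               # cos(theta) term
    specular = k_s * max(np.dot(n, h), 0.0) ** shininess # lobe around H
    translucent = k_ts * sss * max(np.dot(-n, l), 0.0)   # light through the back
    return np.asarray(color) * (diffuse + translucent) + specular
```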

Texture Space Method.
By first converting the integration over the 3D model surface into an integration over a 2D texture space, the method acquires the results of subsurface scattering. Irradiance values stored in the texture make the implementation feasible [7].
The approach is composed of a precomputation process and a three-stage GPU-based translucent rendering process that includes rendering the irradiance to a texture map, carrying out critical sampling, and evaluating the outgoing radiance.
The final expression for computing the outgoing radiance contains the diffusion function R_d(x_i, x_o) and the Fresnel transmittance F_t(η, ω_o), which involve trigonometric functions, square roots, and exponential functions; these are computationally expensive instructions.
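The texture-space gather can be sketched as follows. For brevity the full dipole R_d and the Fresnel transmittance terms are folded into a simplified exp(-σ_tr·r)/r kernel; that kernel, and the function name, are assumptions for illustration only:

```python
import numpy as np

def outgoing_radiance(xo, texels, irradiance, areas, sigma_tr):
    """Sketch of the texture-space method: outgoing radiance at surface
    point xo as the area-weighted sum of irradiance samples stored in the
    texture map, attenuated by a diffusion kernel over the distance
    |xi - xo|."""
    r = np.maximum(np.linalg.norm(texels - xo, axis=1), 1e-4)
    kernel = np.exp(-sigma_tr * r) / r   # simplified stand-in for R_d(xi, xo)
    return float(np.sum(kernel * irradiance * areas) / np.pi)
```

On the GPU this sum is evaluated per fragment over sampled texels, which is why the per-sample transcendental instructions dominate the cost.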

Validation by Rendering.
To validate our proposed method, we carried out two experiments: one implementing only the optical imaging equipment, as in the existing methods [11,12], and one using the proposed method. The rendering equations defined in Sections 4.1 and 4.2 were used to create the reflectance of the materials.
OpenGL was used to verify the extracted parameters, which were implemented with shader code. A statue of Athena was used for rendering; it is a model provided as freeware (http://graphics.im.ntu.edu.tw/~robin/courses/cg03/model/).
Figure 5 compares the rendering results. For a pink plastic, Figures 5(a1) and 5(a3) show the subsurface color method and the texture space method with the existing method, while Figures 5(a2) and 5(a4) show the two methods with the proposed method. Similarly, Figures 5(b1) and 5(b3) show the subsurface color method and the texture space method with the existing method for a blue coating, and Figures 5(b2) and 5(b4) show the two methods with the proposed method. Figures 5(c1) and 5(c3) show the corresponding results with the existing method for brown leather, and Figures 5(c2) and 5(c4) show those with the proposed method.
Figures 5(a1), 5(a3), 5(b1), 5(b3), 5(c1), and 5(c3) use parameters from the existing method, while Figures 5(a2), 5(a4), 5(b2), 5(b4), 5(c2), and 5(c4) use the parameters given in Table 1. For the final rendering, the texture was changed into gray tone so that only its pattern, and not its color information, was applied. As a consequence, when the rendering results in Figure 5 were compared with the naked eye, the colors rendered with the optical characteristics extracted using the proposed method, in (a2), (b2), (c2), (a4), (b4), and (c4), were relatively closer to the actual material colors.

3.2. Measurement Methods.
Measurements were conducted in a darkroom without ambient light. The focus was set by controlling the distance of the lens and the light source to minimize the diameter of the focus point, depending on material thickness and shape. Figure 3(a) shows 12 images out of the 13 HDR images, and Figure 3(b) shows the merged HDRI.

Figure 3: (a) Twelve HDR shooting images and (b) the red circle marking the pixels that are at the same distance from the focus point as the center of the merged HDRI.
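The exposure-bracketed merge shown in Figure 3 can be sketched as follows. The hat weighting and the assumption of a linear camera response are simplifications (a real pipeline such as [17] would recover the response curve first); the function name is hypothetical:

```python
import numpy as np

def merge_hdr(images, shutter_times):
    """Merge differently exposed LDR images (pixel values in [0, 1],
    assumed linear) into one HDR radiance map: each image is divided by
    its exposure time and averaged with a hat weight that trusts
    mid-range pixels over under- and over-exposed ones."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, shutter_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        acc += w * (img / t)               # divide out the exposure time
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```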