Color measurement by a colorized vision system is a superior method for evaluating color objectively and continuously. However, the accuracy of color measurement is affected by the spectral responses of the digital sensor and the spectral mismatch of the illumination. In this paper, a colorized vision system with a digital sensor, calibrated under an artificial D65 illuminator and an LED array illuminator, respectively, is presented. The Polynomial-Based Regression method is applied to solve the color calibration problem in the sRGB and CIEL∗a∗b∗ color spaces. By mapping the tristimulus values from the RGB to the sRGB color space, the color difference between the estimated values and the reference values is reduced to less than 3ΔE. Additionally, the mapping matrix Φ_{RGB→sRGB} shows a better performance in reducing the color difference, and it is subsequently introduced into the proposed colorized vision system for improved color measurement. Printed matter of clothes and colored ceramic tile are chosen as application samples for the colorized vision system. As the experimental data show, the average color difference of the images is less than 6ΔE, indicating that the proposed colorized vision system achieves improved color measurement.
Funding: National Natural Science Foundation of China (51305137); National Science Foundation of Jiangxi Province (20151BBE50116, GJJ14388).

1. Introduction
Color measurement is essential for a very wide range of industrial applications, including paint, paper, oil, skin, printing, food, plastics, and ceramic tile [1–8]. The superficial appearance and the color are regarded as the most important elements evaluated by consumers and are critical factors in the acceptance of products. Traditionally, color measurement is performed by a commercial colorimeter or by human observers. Colorimeter instruments can only measure small, nonrepresentative areas of a few square centimeters, and they cannot provide automatic measurement over pixel-resolved images [7]. Worse, these devices are expensive and can only be used to calculate average tristimulus values. The colors of product images are usually complex and nonuniform, and they cannot be measured accurately by a colorimeter. On the other hand, automatic visual measurement of color in an industrial production process can improve the overall quality of the products. The advantage of color measurement by a colorized vision system is that machines can evaluate color continuously and objectively [1]. This paper discusses automatic color measurement performed by a proposed colorized vision system with a digital sensor and an LED array illuminator.
In our previous work, a colorized vision system with a digital sensor and an optimized LED array illuminator was investigated for color vision [9–11]. Nevertheless, the uncertainty of the LED array illuminator is a critical factor for color measurement because of stray light errors, instabilities of the colorimetric system, and the spectral mismatch of the illumination. Compared with the standardized D65 illuminator, the average color reproduction error of our LED array illuminator is up to 7.63, which introduces a considerable error into color measurement [12]. Therefore, in order to achieve a lower uncertainty of color measurement, color calibration to compensate for the illumination is necessary. In this paper, color calibration is performed under the standardized D65 illuminator and the LED array illuminator, respectively, applied to the same scenes captured by the same digital sensor.
Furthermore, the color images generated by a digital sensor are usually device-dependent. The standard spectral tristimulus values, established by CIE 1931 via the CIE 1931 2° color matching functions (CMFs), are used for device-independent color representation on the basis of the properties of the human visual system. Different digital sensors, however, produce different RGB responses. Figures 1(a) and 1(b) show the CIE 2° color matching functions and the spectral response functions of our digital sensor, respectively; there is a remarkable difference between the two. Since color measurement is performed on tristimulus instruments, even small deviations of the spectral response functions from the CMFs can produce significant color measurement errors [13]. Hence the tristimulus values produced by the digital sensor and the standard tristimulus values of the same scene will differ markedly, and color calibration of the digital sensor is definitely necessary. Many researchers have used optimal tristimulus filters to design surface color measurement devices [13–17]. Wolski et al. [14] formulated a nonlinear optimization of sensor response functions to minimize the error in CIEL∗a∗b∗ color space for colorimetry of reflective and emissive surfaces. Ng et al. [16] developed an imaging colorimeter to obtain CIEXYZ coordinates for measuring tooth color. Kosztyán et al. [13] put forward a matrix-based color correction to reduce the systematic errors of color measurement; they investigated the matrix elements that minimize the spectral mismatch errors of the digital sensor under different illumination distributions. However, although these works designed optimal filters to realize the CIE color matching functions, such filters are expensive and impractical for low-cost real-time industrial color measurement.
In this paper, simple methods are proposed to decrease the tristimulus errors resulting from the spectral mismatch between the digital sensor and the CIE standardized color matching functions.
Comparison of spectral response: (a) CIE 2° color matching functions and (b) spectral response functions of our digital sensor.
Several color calibration algorithms have been proposed for various tasks in different device-independent color spaces. The most commonly used methods include Polynomial-Based Regression, Neural Network mapping algorithms, Support Vector Regression, and Ridge Regression [18–29]. These color calibration methods have been applied to general imaging devices [16, 18–33], such as digital cameras [16, 18–21, 25–27], digital colposcopes [22], scanners [23, 24, 30], printers and cathode ray tube/liquid crystal display (CRT/LCD) monitors [23, 28], and tristimulus colorimeters [31–33]. Most related research emphasizes the comparison of color calibration methods for a given task. Wang and Zhang [21] put forward an optimized calibration scheme for tongue images; in a comparison with several popular color calibration algorithms, Polynomial-Based Regression was selected as the most suitable method for tongue image calibration in the sRGB color space. Shi and Healey [24] used the Polynomial-Based Regression approach to calibrate a color scanner; by treating scanner calibration as reflectance estimation, linear reflectance models were applied in the calibration process. Hong et al. [25] used a polynomial model to derive the colorimetric mapping between camera RGB signals and CIE tristimulus values, studying how the number of reference samples and the number of calibration matrix terms affect the calibration accuracy. Li et al. [22] presented a calibration system for digital colposcopy in CIEL∗a∗b∗ color space, using a polynomial transformation matrix to calibrate the colposcopy images. Cheung et al. [20] performed a comparative study between artificial neural networks and polynomial transformation for camera characterization, concluding that polynomial transformation offers a better alternative owing to its simple principle and shorter training time.
Thus, Polynomial-Based Regression is selected as a simple calibration method in this paper. However, the studies mentioned above merely selected one device-independent color space and justified that choice; they overlooked the comparison of calibration results across different device-independent color spaces, and no actual data comparison after color calibration was given. Hence, this paper compares two common device-independent color spaces with respect to the calibration accuracy, the convenience of image display, and the complexity of the calibration algorithms for the proposed colorized vision system. The objective of this paper is to present a novel color calibration method that decreases the tristimulus value errors [34] caused by the spectral mismatch of the digital sensor and the color reproduction error of the LED illuminator.
This paper is organized as follows. The choice of two commonly used device-independent color spaces for the colorized vision system and the transformation relationship between them are discussed in Section 2. The proposed colorized vision system, the experimental conditions, and the training samples are described in Section 3. A mathematical model of the colorized vision system, on which the color calibration procedure is based, and the Polynomial-Based Regression method are discussed in Section 4, together with a detailed explanation of the Polynomial-Based Regression transformation from device-dependent to device-independent tristimulus values. Section 5 elaborates the comparison of the sRGB and CIEL∗a∗b∗ color spaces in the color calibration process; besides the calibration accuracy, the average color difference, the chromaticity diagram, and the average chromaticity coordinates of images before and after calibration are discussed. Printed matter of clothes and colored ceramic tile are used as samples in Section 6, where an application experiment confirms the validity of the colorized vision system for color measurement. Finally, the discussion and conclusion are given in Section 7.
2. Color Space for Calibration
In this section, the selection of two commonly used device-independent color spaces is described. Color space plays a central role in the development of a color vision system and has a significant influence on the calibration process as well.
A color space is a mathematical model that describes colors under a particular standard by three or four color-component values; examples include the RGB, CMYK, and HSV color spaces. Color spaces can be broadly split into two basic types: device-independent and device-dependent. A device-independent color space produces the same color whatever color input or output device is used, whereas a device-dependent color space describes tristimulus values defined by the characteristics of a specific imaging device. The purpose of color calibration is to transform device-dependent into device-independent tristimulus values. The commonly used color spaces are listed in Table 1, but not all of them perform equally well when used for color calibration. Many studies have selected sRGB [19, 21, 27], CIEXYZ [20, 25, 28, 30], or CIEL∗a∗b∗ [22] as the device-independent color space. One of the objectives of this paper is to compare the color calibration results obtained in different device-independent color spaces.
Table 1. The commonly used color spaces.

| Type | Color spaces |
| Device-dependent | RGB, HSV, HSI, YUV, and so forth |
| Device-independent | sRGB, CIE XYZ, CIE L∗a∗b∗, CIE L∗u∗v∗, and so forth |
The sRGB color space was developed by an IEC technical committee and has been endorsed by many CRT (cathode ray tube) manufacturers. It offers several advantages for color calibration. First, it has been adopted by many CRT-based display devices, computer monitors, printers, and the Internet, which means that images calibrated in the sRGB color space can be displayed on a computer monitor immediately. Second, the standardized D65 illuminant (color temperature 6500 K) has been recommended by the International Commission on Illumination (CIE) to simulate daylight; under D65 illumination, images captured by the digital sensor show an optimal display effect, and sRGB conveniently has the D65 white point in the middle of its color gamut [12]. Finally, the sRGB tristimulus values have a constant transformational relationship with CIEXYZ, expressed as follows:

\[
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
= \Gamma_{sRGB\to XYZ}\cdot\begin{bmatrix} R \\ G \\ B \end{bmatrix}
= \begin{bmatrix}
0.412453 & 0.357580 & 0.180432 \\
0.212671 & 0.715160 & 0.072169 \\
0.019334 & 0.119193 & 0.950227
\end{bmatrix}\cdot\begin{bmatrix} R \\ G \\ B \end{bmatrix}, \tag{1}
\]

where Γ_{sRGB→XYZ} represents the transformation from the sRGB color space to the CIEXYZ color space.
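As a sanity check, the linear transform in (1) can be sketched in a few lines of Python. This is a minimal sketch using NumPy; the function name is ours, and it assumes linear (non-gamma-encoded) sRGB input in [0, 1], which the paper does not state explicitly:

```python
import numpy as np

# 3x3 matrix from (1): linear sRGB -> CIEXYZ
M_SRGB_TO_XYZ = np.array([
    [0.412453, 0.357580, 0.180432],
    [0.212671, 0.715160, 0.072169],
    [0.019334, 0.119193, 0.950227],
])

def srgb_linear_to_xyz(rgb):
    """Map linear sRGB tristimulus values (shape (..., 3)) to CIEXYZ."""
    return np.asarray(rgb) @ M_SRGB_TO_XYZ.T

# The sRGB white (1, 1, 1) maps to the row sums of the matrix,
# i.e. the XYZ coordinates of the D65 white point.
white = srgb_linear_to_xyz([1.0, 1.0, 1.0])
```

A quick design note: because each row of the matrix sums to the corresponding D65 white-point coordinate, feeding in (1, 1, 1) is a convenient self-test of the transform.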
The color difference is the calibration evaluation criterion in the CIEL∗a∗b∗ color space. The CIEXYZ tristimulus values are related to CIEL∗a∗b∗ by a nonlinear transformation [35], defined as follows:

\[
L^{*} = 116\cdot f\!\left(\frac{Y}{Y_0}\right) - 16,\quad
a^{*} = 500\cdot\left[f\!\left(\frac{X}{X_0}\right) - f\!\left(\frac{Y}{Y_0}\right)\right],\quad
b^{*} = 200\cdot\left[f\!\left(\frac{Y}{Y_0}\right) - f\!\left(\frac{Z}{Z_0}\right)\right],
\qquad
f(x) = \begin{cases} x^{1/3}, & x > 0.008856, \\[2pt] 7.787\,x + \dfrac{16}{116}, & x \le 0.008856, \end{cases} \tag{2}
\]

where (X_0, Y_0, Z_0) is the reference white point of D65 in the sRGB color space, assigned the value (94.81, 100.0, 107.32). The color difference ΔE of two specific colors (L_1^∗, a_1^∗, b_1^∗) and (L_2^∗, a_2^∗, b_2^∗) is calculated as the Euclidean distance:

\[
\Delta E = \sqrt{(L_1^{*}-L_2^{*})^{2} + (a_1^{*}-a_2^{*})^{2} + (b_1^{*}-b_2^{*})^{2}}. \tag{3}
\]
Generally speaking, a difference of about 1ΔE is the theoretical just-noticeable difference, a difference below 3ΔE goes unnoticed, and a difference between 3ΔE and 6ΔE is only slightly observable by human beings. In the subsequent analysis, (3) is adopted to calculate the color difference and the chromatic aberration. For convenience, the nonlinear transformation Γ_{sRGB→L∗a∗b∗} from sRGB to CIEL∗a∗b∗ is introduced, defined as follows:

\[
\begin{bmatrix} L^{*} \\ a^{*} \\ b^{*} \end{bmatrix}
= \Gamma_{sRGB\to L^{*}a^{*}b^{*}}\!\left(\begin{bmatrix} R \\ G \\ B \end{bmatrix}\right)
= \Gamma_{XYZ\to L^{*}a^{*}b^{*}}\!\left(\Gamma_{sRGB\to XYZ}\cdot\begin{bmatrix} R \\ G \\ B \end{bmatrix}\right). \tag{4}
\]
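Equations (2) and (3) translate directly into code. The sketch below is our own Python (function names are ours), using the paper's D65 white point (94.81, 100.0, 107.32):

```python
import numpy as np

WHITE_D65 = (94.81, 100.0, 107.32)  # reference white point used in the paper

def _f(x):
    # Piecewise function from (2)
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.008856, np.cbrt(x), 7.787 * x + 16.0 / 116.0)

def xyz_to_lab(xyz, white=WHITE_D65):
    """CIEXYZ -> CIEL*a*b*, equation (2)."""
    X, Y, Z = xyz
    X0, Y0, Z0 = white
    fx, fy, fz = _f(X / X0), _f(Y / Y0), _f(Z / Z0)
    return np.array([116.0 * fy - 16.0,   # L*
                     500.0 * (fx - fy),   # a*
                     200.0 * (fy - fz)])  # b*

def delta_e(lab1, lab2):
    """CIE76 color difference, equation (3)."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))
```

By construction, feeding the reference white itself into `xyz_to_lab` yields L∗ = 100 and a∗ = b∗ = 0, which is a convenient self-test of the implementation.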
This transformation is perfectly invertible, with Γ_{L∗a∗b∗→sRGB} = Γ_{sRGB→L∗a∗b∗}^{-1}. CIEXYZ, a color space based on human vision characteristics, is the colorimetric system commonly utilized to describe a specific color for the CIE standard observer. The CIE-like device-independent color spaces all have a one-to-one mapping into the CIEXYZ color space; therefore, many researchers have selected it as the device-independent color space for color calibration. CIEL∗a∗b∗, a uniform color space, was also recommended by the CIE in 1976 and is used for evaluating color differences. Currently, it is the most popular color space for analyzing differences between colors, and it has been shown to outperform RGB in color texture analysis and image segmentation [22]. It is therefore convenient to perform image processing and understanding in this color space.
In conclusion, the CIEXYZ and CIEL∗a∗b∗ color spaces are both defined by the CIE. The color difference can be calculated rapidly in the CIEL∗a∗b∗ color space, whereas the CIEXYZ color space is less convenient for subsequent image processing. Therefore, we select the sRGB and CIEL∗a∗b∗ color spaces, which represent the RGB-like and CIE-like color spaces, respectively.
3. Color Vision System and Condition
The colorized vision system presented in this paper, together with the experimental conditions and the training samples, is explained in detail in this section. Munsell ColorChecker charts and standardized color patches are often selected as training samples for color calibration. On account of the limitations of the digital camera, and in order to limit the influence of nonuniform images on the calibration, this study selects Pantone color patches, whose colors are similar to those of the Munsell ColorChecker, as the training and testing samples. The standard tristimulus values of the Pantone color patches are measured by an X-Rite SP60 colorimeter and assembled into the matrices of standardized tristimulus values T_s(sR_i, sG_i, sB_i) and T_l(sL_i^∗, sa_i^∗, sb_i^∗), i = 1, 2, …, N. Specifically, the colorized vision system comprises an IMPERX digital camera and the LED array illuminator.
The IMPERX (IPX-2M30-GCCI) digital camera provides 12-bit resolution for each channel and is coupled with a standard C-mount zoom lens (LM35HC); the spatial resolution is 1600 by 1200 pixels. The camera has high sensitivity in the spectral range of 400–1000 nm. To extend the application of this system, the calibration is performed under two illumination conditions: the artificial D65 illuminator and the LED array illuminator [35]. When the proportion of the three primary colors of the LED illuminator is set to (R:G:B) = (254 : 237 : 90), the correlated color temperature of 6504 K is very close to D65 [12].
In order to ensure the accuracy of color calibration, the illumination and viewing geometry should be approximately 45/0° as recommended by the CIE, which means that the angle between the optical axis of the illumination and the normal direction of the color patches is 45° ± 5°, and the angle between the camera viewing axis and the normal direction of the color patches is less than 10°. In the colorized vision system for color calibration, the color patches are placed on a 45° grey board and the camera is fixed on a tripod perpendicular to the grey board. Figure 2 illustrates the applied artificial D65 illumination and the calibration scene of the colorized vision system with the LED array illuminator. The LED illuminator has 94.70% lateral uniformity of irradiance distribution within a diameter of 80 mm at a panel-target distance of 200 mm [9]. To minimize the impact of the external environment on the illumination, the whole imaging system is installed in a dark cell to capture the color patch images. The image acquisition system is operated by a PC equipped with GigE software.
Colorized vision system setup: (a) the artificial D65 illuminator and (b) the scene of calibration for the color vision system.
4. Calibration Methods

4.1. Theoretical Basis
In this subsection, the mathematical model of color vision in device-independent and device-dependent color spaces is discussed, and the model of the calibration process of the colorized vision system is then explained. A color image is always the result of a complex interaction among three components: the physical content of the scene, the illumination incident on the scene, and the characteristics of the digital camera [35–37]. The tristimulus values of image pixel (R_i, G_i, B_i)^T in the device-dependent color space are obtained by the digital sensor as follows:

\[
\begin{bmatrix} R_i \\ G_i \\ B_i \end{bmatrix}
= k_i\cdot\int_{380}^{700} E(\lambda)\cdot S_i(\lambda)\cdot
\begin{bmatrix} f_R(\lambda) \\ f_G(\lambda) \\ f_B(\lambda) \end{bmatrix}\,\mathrm{d}\lambda, \tag{5}
\]

where E(λ) is the spectral distribution of the illumination, S_i(λ) is the surface reflectance of the object at pixel i (i = 1, 2, …, N), f_R(λ), f_G(λ), and f_B(λ) are the camera spectral response functions for the red, green, and blue color bands, respectively, and k_i is a scale factor. To simplify the integral operation, it is sufficient to approximate the continuous spectra by their values at a number of discrete sample points; thus (5) becomes

\[
\begin{bmatrix} R_i \\ G_i \\ B_i \end{bmatrix}
= \sum_{\lambda} k_i\cdot E(\lambda)\cdot S_i(\lambda)\cdot
\begin{bmatrix} f_R(\lambda) \\ f_G(\lambda) \\ f_B(\lambda) \end{bmatrix}. \tag{6}
\]

When the camera spectral response functions and the surface reflectance of the object are represented along the wavelength axis by the n×3 matrix q and the n×1 vector s, respectively, (6) can be expressed in matrix notation as

\[
\rho = s^{T}\cdot E\cdot q, \tag{7}
\]

where E is an n×n diagonal matrix containing the samples E(λ) along the diagonal. This simplified mathematical model has been widely used [17, 38, 39]. The tristimulus values T in the device-independent color space can likewise be expressed in matrix notation:

\[
T = s^{T}\cdot L\cdot r, \tag{8}
\]

where s is the same as in (7), r is the n×3 matrix of the color matching functions illustrated in Figure 1(a), and L is an n×n diagonal matrix whose diagonal elements are the samples of the standardized CIE illuminant.
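The discrete imaging model in (6)–(8) is easy to simulate numerically. The sketch below is illustrative Python with made-up Gaussian spectra; the sampled wavelengths, reflectance, illuminant, and response curves are placeholders of our own, not the paper's measured data. It contrasts the camera response ρ = sᵀ·E·q with the standard tristimulus T = sᵀ·L·r under the same (here flat) illuminant:

```python
import numpy as np

n = 33                                 # spectral samples: 380-700 nm at 10 nm steps
wl = np.linspace(380.0, 700.0, n)

def gauss(center, width):
    # Placeholder bell-shaped spectrum (NOT measured data)
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

s = gauss(550, 80)                     # surface reflectance, n-vector
E = np.diag(np.ones(n))                # illuminant samples on the diagonal (flat here)
# Camera responses q and CMF-like curves r deliberately differ: spectral mismatch
q = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)], axis=1)  # n x 3
r = np.stack([gauss(595, 35), gauss(545, 35), gauss(450, 35)], axis=1)  # n x 3

rho = s @ E @ q                        # device-dependent response, (7)
T = s @ E @ r                          # device-independent tristimulus, (8)
```

Even with identical illumination, `rho` and `T` differ because q ≠ r, which is exactly the sensor/CMF mismatch that motivates the calibration mapping of (9).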
Comparing (7) and (8), there is an obvious difference between the tristimulus values in the device-independent and device-dependent color spaces, and the color difference results from both the illuminant and the spectral response functions of the digital sensor. Although an artificial standardized CIE D65 illuminator is adopted, its color temperature is about 6675 K (standard D65 is 6500 K) and its average color reproduction error is up to 3.25, which implies that illumination calibration on the matrix ρ is also needed. Hence the calibration of the proposed colorized vision system is performed under the artificial standardized CIE D65 illuminator and the LED array illuminator, respectively. The calibration process can be viewed as a mapping problem whose purpose is to find the optimal mapping:

\[
\Phi = \arg\min_{\Phi}\sum_{i=1}^{N}\left\lVert \Phi(\rho_i) - T_i \right\rVert^{2}, \tag{9}
\]

where T_i = s^T·L_{D65}·r (i = 1, 2, …, N) is the matrix of standardized tristimulus values, L_{D65} denotes the D65 illuminant, and ‖·‖² is the squared error computed in the CIEL∗a∗b∗ color space. When the artificial D65 or the LED illuminator is used for the colorized vision system, the matrix ρ_i (i = 1, 2, …, N) is defined as follows:

\[
\rho_{ad65} = s^{T}\cdot E_{ad65}\cdot q,\qquad
\rho_{LED} = s^{T}\cdot E_{LED}\cdot q, \tag{10}
\]

where E_{ad65} and E_{LED} are diagonal matrices of the spectral distribution samples of the artificial D65 and LED illuminators, respectively. Substituting (10) into (9), the profiles of the colorized vision system under the artificial D65 and LED illuminators can be expressed as Φ(ad65) and Φ(LED), respectively. Many optimization regression algorithms can be applied to solve this mapping problem; in the next subsection, the Polynomial-Based Regression method is adopted as the solution algorithm.
4.2. The Polynomial-Based Regression
The commonly used color calibration algorithms, including Polynomial-Based Regression, Neural Network mapping algorithms, Support Vector Regression, and Ridge Regression, are widely applied to obtain the mapping matrix, and comparisons among these regression algorithms have been performed in many studies [20, 21, 31]. Polynomial-Based Regression is the most commonly used method owing to its simple execution and short training time. In this subsection, the Polynomial-Based Regression method is applied to obtain Φ_OPT in the sRGB and CIEL∗a∗b∗ color spaces.
The mapping coefficient matrix Φ_OPT is denoted Φ_{RGB→sRGB} in the sRGB color space. The basic principle of the polynomial transform is as follows. Assume that the number of reference Pantone color patches is N. According to the analysis of Sections 3 and 4.1, the average tristimulus values of the images captured by the digital camera are represented by a matrix ρ(R_i, G_i, B_i) (i = 1, 2, …, N), and the matrix of standard tristimulus values measured by the SP60 in sRGB is T_s(sR_i, sG_i, sB_i) (i = 1, 2, …, N). The mapping from RGB to sRGB with m polynomial terms can then be written as

\[
\begin{bmatrix} sR & sG & sB \end{bmatrix}
= \Theta_m(R,G,B)\cdot\Phi_{RGB\to sRGB}^{T}
= \Theta_m(R,G,B)\cdot
\begin{bmatrix}
a_{1R} & a_{1G} & a_{1B} \\
a_{2R} & a_{2G} & a_{2B} \\
\vdots & \vdots & \vdots \\
a_{mR} & a_{mG} & a_{mB}
\end{bmatrix}. \tag{11}
\]

When m = 3, the transformation is only a linear transform; but the spectral response functions of the digital camera are not a linear combination of the color matching functions, so the purpose of the polynomial transformation is to add more terms to increase the transformation accuracy. The polynomial bases with 4, 6, and 11 elements are

\[
\Theta_4(R,G,B) = \begin{bmatrix} 1 & R & G & B \end{bmatrix},\qquad
\Theta_6(R,G,B) = \begin{bmatrix} R & G & B & RG & GB & BR \end{bmatrix},\qquad
\Theta_{11}(R,G,B) = \begin{bmatrix} 1 & R & G & B & RG & GB & BR & R^2 & G^2 & B^2 & RGB \end{bmatrix}. \tag{12}
\]
Taking the polynomial combination Θ_4 as an example, the transformation model can be expressed as

\[
\begin{bmatrix}
sR_1 & sG_1 & sB_1 \\
sR_2 & sG_2 & sB_2 \\
\vdots & \vdots & \vdots \\
sR_N & sG_N & sB_N
\end{bmatrix}
=
\begin{bmatrix}
1 & R_1 & G_1 & B_1 \\
1 & R_2 & G_2 & B_2 \\
\vdots & \vdots & \vdots & \vdots \\
1 & R_N & G_N & B_N
\end{bmatrix}
\cdot
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{bmatrix}, \tag{13}
\]

which can also be written in matrix format as

\[
T_s = X\cdot\Phi_{RGB\to sRGB}^{T}, \tag{14}
\]

where Φ_{RGB→sRGB}^T is the transpose of the mapping coefficient matrix Φ_{RGB→sRGB} and X is the matrix generated by the polynomial combination Θ_m. This equation can easily be solved in a least-squares sense, so (9) becomes

\[
\Phi = \arg\min_{\Phi_{RGB\to sRGB}}\sum_{i=1}^{N}\left\lVert X_i\cdot\Phi_{RGB\to sRGB}^{T} - T_i \right\rVert^{2}. \tag{15}
\]
According to the batch least-squares algorithm, the solution to (14) and (15) is

\[
\Phi_{RGB\to sRGB}^{T} = \left(X^{T}\cdot X\right)^{-1}\cdot X^{T}\cdot T_s, \tag{16}
\]

where (X^T·X)^{-1} is the inverse of X^T·X. The polynomial combination Θ_11 is recommended by several studies [19, 25]. Hence, in this paper, Θ_4 and Θ_11 are both used to calibrate the colorized vision system and their accuracies are compared. By means of the mapping Φ_{RGB→sRGB}, the tristimulus values of the captured color patches ρ(R_i, G_i, B_i) (i = 1, 2, …, N) in the RGB color space are transformed to Pρ(R_i, G_i, B_i) (i = 1, 2, …, N) in the sRGB color space:

\[
\begin{bmatrix} P\rho_1 \\ P\rho_2 \\ \vdots \\ P\rho_N \end{bmatrix}
= \Theta_m\!\begin{pmatrix}
R_1 & G_1 & B_1 \\
R_2 & G_2 & B_2 \\
\vdots & \vdots & \vdots \\
R_N & G_N & B_N
\end{pmatrix}
\cdot
\begin{bmatrix}
a_{1R} & a_{1G} & a_{1B} \\
a_{2R} & a_{2G} & a_{2B} \\
\vdots & \vdots & \vdots \\
a_{mR} & a_{mG} & a_{mB}
\end{bmatrix}. \tag{17}
\]
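The fitting step in (11)–(17) amounts to an ordinary least-squares solve. Below is a minimal Python sketch of our own: the 24 training "patches" and the ground-truth mapping are synthetic placeholders, and a numerically stable least-squares routine stands in for the explicit normal-equation inverse of (16):

```python
import numpy as np

def theta11(rgb):
    """Expand an (N, 3) RGB matrix into the 11-term polynomial basis of (12)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    one = np.ones_like(R)
    return np.stack([one, R, G, B, R * G, G * B, B * R,
                     R ** 2, G ** 2, B ** 2, R * G * B], axis=1)

def fit_mapping(rgb, target):
    """Solve (16), Phi^T = (X^T X)^{-1} X^T Ts, via least squares."""
    X = theta11(rgb)
    phi_t, *_ = np.linalg.lstsq(X, target, rcond=None)
    return phi_t                       # shape (11, 3)

def apply_mapping(rgb, phi_t):
    """Apply (17): map camera RGB into the calibrated space."""
    return theta11(rgb) @ phi_t

# Synthetic round-trip check with 24 "patches" and a made-up true mapping
rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, size=(24, 3))
true_phi_t = rng.normal(size=(11, 3))
target = theta11(rgb) @ true_phi_t
phi_t = fit_mapping(rgb, target)
```

Using `np.linalg.lstsq` rather than forming (XᵀX)⁻¹ explicitly gives the same solution as (16) but avoids the numerical conditioning problems of the normal equations when the 24×11 design matrix is nearly rank-deficient.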
In the sRGB color space, however, the mapping error of the calibrated images does not directly reflect the difference perceived by human observers.
The sRGB values Pρ(R_i, G_i, B_i) (i = 1, 2, …, N) are therefore transformed to the CIEL∗a∗b∗ values Lρ(L_i^∗, a_i^∗, b_i^∗) under the fixed transformation Γ_{sRGB→L∗a∗b∗} for calculating the color difference and performing the subsequent image processing:

\[
L\rho\,(L_i^{*}, a_i^{*}, b_i^{*}) = \Gamma_{sRGB\to L^{*}a^{*}b^{*}}\!\left(P\rho\,(R_i, G_i, B_i)\right). \tag{18}
\]
The mapping error ΔE_i(Φ_{RGB→sRGB}) between the standardized CIEL∗a∗b∗ tristimulus values T_l(sL_i^∗, sa_i^∗, sb_i^∗) (i = 1, 2, …, N) and the estimated values Lρ(L_i^∗, a_i^∗, b_i^∗) (i = 1, 2, …, N) is calculated as

\[
\Delta E_i(\Phi_{RGB\to sRGB}, m) = \sqrt{(sL_i^{*}-L_i^{*})^{2} + (sa_i^{*}-a_i^{*})^{2} + (sb_i^{*}-b_i^{*})^{2}}. \tag{19}
\]
There is no single appropriate formula for measuring the color difference of two images; (19) only computes the color difference of two specific tristimulus values. Since the color of a calibration image is generally not uniform, the color difference between the mean value of the image and the standard value is commonly utilized as the evaluation criterion, but representing an image by its mean tristimulus values is not comprehensive enough. Therefore, in this paper, the difference between the chromaticity coordinate of each pixel of an image and the chromaticity coordinate of the standardized value is also calculated to evaluate the chromaticity differences of the calibrated images. Using the fixed transformation Γ_{sRGB→XYZ}, the XYZ values of the images before calibration, I_b(X_j, Y_j, Z_j), and after calibration, I_a(X_j, Y_j, Z_j), are obtained, where j denotes the jth pixel of the image. The chromaticity coordinates C_b(x_j, y_j) and C_a(x_j, y_j) are then

\[
x_j = \frac{X_j}{X_j + Y_j + Z_j},\qquad
y_j = \frac{Y_j}{X_j + Y_j + Z_j}. \tag{20}
\]
The chromaticity diagrams of the images are used to compare the calibration results intuitively before and after calibration. The average Euclidean distance between C_b(x_j, y_j) or C_a(x_j, y_j) and the standard chromaticity coordinate C(x_s, y_s) is used as the evaluation criterion of chromaticity difference, defined as

\[
\Delta C = \frac{1}{k}\cdot\sum_{j=1}^{k}\sqrt{(x_j - x_s)^{2} + (y_j - y_s)^{2}}, \tag{21}
\]

where k is the number of image pixels. In the subsequent sections, the average chromaticity distances ΔC_b and ΔC_a and the color difference ΔE_ave(Image) are used to assess the calibration quantitatively.
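Equations (20) and (21) can be coded compactly. In the sketch below (our own Python; the pixel values are synthetic), each pixel's XYZ triplet is projected to its (x, y) chromaticity and the mean Euclidean distance to the standard coordinate is taken as ΔC:

```python
import numpy as np

def chromaticity(xyz):
    """Project (N, 3) XYZ values to (x, y) chromaticity coordinates, (20)."""
    xyz = np.asarray(xyz, dtype=float)
    return xyz[:, :2] / xyz.sum(axis=1, keepdims=True)

def delta_c(xyz_pixels, standard_xy):
    """Average Euclidean distance from each pixel's (x, y) to the standard, (21)."""
    d = chromaticity(xyz_pixels) - np.asarray(standard_xy, dtype=float)
    return float(np.mean(np.linalg.norm(d, axis=1)))

# Two identical synthetic "pixels": their chromaticity matches the standard exactly,
# so Delta C comes out as zero.
pixels = np.array([[41.24, 21.26, 1.93],
                   [41.24, 21.26, 1.93]])
std_xy = chromaticity(pixels[:1])[0]
```

Note that (x, y) normalizes out intensity, so ΔC measures only hue/saturation drift; it complements, rather than replaces, the lightness-sensitive ΔE of (19).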
This completes the color calibration process for the colorized vision system in the sRGB color space; the schematic diagram of the calibration algorithm in the sRGB color space is explained in detail in Figure 3. Under the different illumination conditions, the mapping coefficient matrices are denoted Φ_{RGB→sRGB}(LED) and Φ_{RGB→sRGB}(ad65). The color calibration process in the CIEL∗a∗b∗ color space is analogous. Assuming the RGB value matrix of the captured color patch images to be in the sRGB color space, ρ can be written as ρ_s(R_i, G_i, B_i) (i = 1, 2, …, N). The assumed sRGB values ρ_s are transformed into the CIEL∗a∗b∗ color space as ρ_l(L_i^∗, a_i^∗, b_i^∗) (i = 1, 2, …, N) under the fixed transformation Γ_{sRGB→L∗a∗b∗}. The mapping coefficient matrix Φ_{RGB→L∗a∗b∗} with 4 and 11 terms is then obtained by polynomial regression using (12)–(18). Using the mapping matrix Φ_{RGB→L∗a∗b∗}, the assumed tristimulus values of the captured color patches are transformed to Lρ_s(L_i^∗, a_i^∗, b_i^∗) (i = 1, 2, …, N) in the CIEL∗a∗b∗ color space:

\[
L\rho_s(L_i^{*}, a_i^{*}, b_i^{*}) = \Theta_m\!\left(\rho_l(L_i^{*}, a_i^{*}, b_i^{*})\right)\cdot\Phi_{RGB\to L^{*}a^{*}b^{*}}^{T}. \tag{22}
\]
Schematic diagram of the calibration algorithm in sRGB color space.
The mapping error ΔE_i(Φ_{RGB→L∗a∗b∗}) is determined by (19). According to the analysis of Section 4.1, the mappings Φ(LED) and Φ(ad65) denote the mapping matrices of the color vision system under the LED illuminator and the artificial D65 illuminator, respectively. Figure 4 illustrates the schematic diagram of the calibration algorithm in the CIE-L∗a∗b∗ color space. The evaluation of the mapping coefficients Φ_{RGB→sRGB} and Φ_{RGB→L∗a∗b∗} with 4 and 11 terms, and the final choice for our color vision system, are reviewed when the experimental results are discussed.
Schematic diagram of the calibration algorithm in CIE-L∗a∗b∗ color space.
5. Experiment Results and Discussion
Comparing Figure 3 with Figure 4, each calibration process has advantages and disadvantages. In the sRGB color space it is very convenient to perform the color calibration, because the captured images need no preprocessing before calibration and the calibrated images can be displayed immediately on a CRT monitor for human perception. The sRGB color space is unsuitable for later image processing because each color channel is correlated with luminosity, but the transformation to the CIEL∗a∗b∗ color space is simply the fixed transformation Γ_{sRGB→L∗a∗b∗}. The CIEL∗a∗b∗ color space is a perceptually uniform color space in which the mapping error can be detected conveniently and color texture analysis performs better than in the sRGB color space [40]. However, the images must be transformed before calibration, and the calibration accuracy is then influenced by the accuracy of this digital conversion.
The color calibration of the colorized vision system is performed in the CIEL∗a∗b∗ and sRGB color spaces under the LED array illuminator and the artificial D65 illuminator, respectively, with the experimental conditions described in Section 3. All the algorithms are implemented in MATLAB. The mappings Φ_{RGB→sRGB}(LED), Φ_{RGB→sRGB}(ad65), Φ_{RGB→L∗a∗b∗}(LED), and Φ_{RGB→L∗a∗b∗}(ad65) are obtained by Polynomial-Based Regression with 4 and 11 terms. With accurate calibration, the color measurements made by the proposed colorized vision system are close to those of the reference instrument, the X-Rite SP60. To quantify the color calibration accuracy, the maximum mapping error ΔE_max and the average mapping error ΔE_ave over the twenty-four color patches are calculated. Tables 2 and 3 illustrate the accuracy comparison in the sRGB color space and the CIEL∗a∗b∗ color space [41].
Table 2. The accuracy comparison ΔE_i(Φ_{RGB→sRGB}) in the sRGB color space obtained by the mapping coefficient matrix Φ_{RGB→sRGB}.

| Terms m | Φ_{RGB→sRGB}(ad65): max error ΔE_max | Φ_{RGB→sRGB}(ad65): average error ΔE_ave | Φ_{RGB→sRGB}(LED): max error ΔE_max | Φ_{RGB→sRGB}(LED): average error ΔE_ave |
| 4 | 43.41 | 10.92 | 42.54 | 11.85 |
| 11 | 4.46 | 2.56 | 5.89 | 2.39 |
Table 3. The accuracy comparison ΔE_i(Φ_{RGB→L∗a∗b∗}) in the CIE-L∗a∗b∗ color space obtained by the mapping coefficient matrix Φ_{RGB→L∗a∗b∗}.

| Terms m | Φ_{RGB→L∗a∗b∗}(ad65): max error ΔE_max | Φ_{RGB→L∗a∗b∗}(ad65): average error ΔE_ave | Φ_{RGB→L∗a∗b∗}(LED): max error ΔE_max | Φ_{RGB→L∗a∗b∗}(LED): average error ΔE_ave |
| 4 | 45.02 | 11.17 | 43.32 | 12.69 |
| 11 | 9.54 | 4.92 | 5.89 | 4.94 |
Comparing Table 2 with Table 3, it can be seen that, whether Polynomial-Based Regression uses 4 or 11 terms and whether the color vision system is illuminated under the artificial D65 illuminator or under the white field (R:G:B) = (254 : 237 : 90) of the LED illuminator, the maximum mapping error ΔE_max and the average mapping error ΔE_ave of the twenty-four color patches show higher accuracy in the sRGB color space than in the CIEL∗a∗b∗ color space. The average mapping errors of the mapping Φ_{RGB→sRGB} with 11 terms are 2.56 and 2.39, which lie within the unnoticeable range, whereas the mapping errors of Φ_{RGB→L∗a∗b∗} are 4.92 and 4.94; these errors are noticeably higher owing to the redundant transformation process required before calibration in the CIEL∗a∗b∗ color space. Integrating the analysis of the color calibration schematic diagrams, the sRGB color space is strongly recommended as the device-independent color space for color calibration because of the smaller amount of preprocessing before calibration, the higher accuracy, and the greater convenience of displaying images after calibration. Moreover, the sRGB tristimulus values can be conveniently transformed to the CIE-XYZ or CIEL∗a∗b∗ color space under the fixed transformation Γ_{sRGB→XYZ→L∗a∗b∗} for evaluating the calibrated images and performing later image processing.
Comparing the Polynomial-Based Regression with 4 and 11 terms, the mapping errors of the 4-term versions of ΦRGB→sRGB and ΦRGB→L∗a∗b∗ remain higher than those of the 11-term versions. With the 4-term vector [R, G, B, 1], finding the optimal mapping Φ is treated as a simple linear transformation. The experiments show that the problem cannot be treated as linear, because the spectral response functions of the digital camera are not a linear combination of the color matching functions. A higher-order polynomial transformation is therefore more suitable for solving the optimal mapping Φ. In this paper, the polynomial regression with 11 terms in sRGB color space is adopted for the subsequent applications.
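The least-squares fit behind the mapping Φ can be sketched as below. The exact 11-term set used in the paper is not listed, so the expansion [R, G, B, RG, RB, GB, R², G², B², RGB, 1] shown here is an assumption (a common choice); the camera response and reference data are synthetic toy values.

```python
import numpy as np

def expand(rgb, terms):
    """Polynomial term vector: 4 terms [R, G, B, 1], or an assumed
    11-term expansion [R, G, B, RG, RB, GB, R^2, G^2, B^2, RGB, 1]."""
    r, g, b = rgb
    if terms == 4:
        return np.array([r, g, b, 1.0])
    return np.array([r, g, b, r * g, r * b, g * b,
                     r * r, g * g, b * b, r * g * b, 1.0])

def fit_mapping(camera_rgb, reference, terms):
    """Least-squares fit of Phi so that Phi @ expand(rgb) ~= reference."""
    A = np.array([expand(p, terms) for p in camera_rgb])   # N x terms
    phi_t, *_ = np.linalg.lstsq(A, reference, rcond=None)  # terms x 3
    return phi_t.T                                         # 3 x terms

# Toy data: a nonlinear "camera" response with known reference values,
# e.g. 24 color-checker patches.
rng = np.random.default_rng(0)
ref = rng.random((24, 3))
cam = ref ** 1.8                        # synthetic nonlinear distortion

phi11 = fit_mapping(cam, ref, 11)
est = np.array([phi11 @ expand(p, 11) for p in cam])
print("max abs error:", np.abs(est - ref).max())
```

Because the 11-term set contains the 4-term set, its least-squares residual on the training patches can never be larger, which mirrors the error reduction seen in Tables 2 and 3.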
Figures 5(a) and 5(b) compare the color differences of the twenty-four color patches before color calibration, ΔEi(LED) and ΔEi(ad65), with the mapping errors ΔEi(ΦRGB→sRGB(LED)) and ΔEi(ΦRGB→sRGB(ad65)) for the colorized vision system illuminated by the LED illuminator and by the artificial D65 illuminator. In the literature [21], images captured under standardized CIE lighting conditions with an industrial camera serve as the reference images. In our experiments, however, the average color difference of the images captured by the proposed colorized vision system under the artificial D65 illuminator is still as high as 16.09, owing to the mismatch of the spectral response functions of the digital sensor and the average color reproduction error of the artificial D65 illuminator. It is therefore essential to perform color calibration even on these "reference" images. Because of the narrow spectral bandwidth of the LED illuminator, the average color difference before calibration reaches 27.68, much higher than under the artificial D65 illuminator (see Figures 5(a) and 5(b)). After mapping the RGB tristimulus values to sRGB with ΦRGB→sRGB, the average color difference under the artificial D65 illuminator falls from 16.09 to 2.56, so the color calibration of the digital sensor is successful. Under the LED illuminator the value falls from 27.68 to 2.39, which means the color vision system can also be applied to automatic color measurement.
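The per-patch and average color differences reported above can be computed as sketched below, assuming the common CIE76 definition of ΔE (Euclidean distance in L∗a∗b∗); the sample patch values are hypothetical.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

# Hypothetical estimated vs. reference L*a*b* values for two patches
est = np.array([[52.0, 10.1, -3.2], [70.5, -20.0, 15.8]])
ref = np.array([[50.0, 12.0, -3.0], [71.0, -19.5, 16.0]])

per_patch = delta_e(est, ref)      # one ΔE per patch
print("average ΔE:", per_patch.mean())
```

Averaging the per-patch ΔE over all twenty-four color-checker patches gives the ΔEave values quoted in the text (e.g. 16.09 before and 2.56 after calibration under D65).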
The comparison of color difference of color patches. (a) Mapping ΦRGB→sRGB(LED) under the LED illuminator and (b) mapping ΦRGB→sRGB(ad65) under the artificial D65 illuminator.
Evaluating the calibrated images is also an important goal, because the mean tristimulus values cannot represent whole images for color measurement. Two color patches of the captured images, 225 M and 368 M, are calibrated to evaluate the calibrated images; 225 M and 368 M are the identification serial numbers of Pantone color patches.
Figures 6(a), 6(b), 7(a), and 7(b) show the images of 225 M and 368 M captured by the same digital sensor under the artificial D65 illuminator and the LED illuminator before calibration, respectively. The color temperature and chromaticity coordinate of the D65 and the LED illuminator are 6675 K (0.3081, 0.3374) and 6505 K (0.3132, 0.3244), respectively; both are very close to the standardized D65 illuminant (6504 K (0.3127, 0.3290)). Nevertheless, an obvious color discrepancy exists in the images before calibration: the color differences between the average color of the images and the standardized tristimulus values reach 13.73, 18.54 and 31.59, 50.96 for the two color patches. Figures 6(c), 6(d) and 7(c), 7(d) show the color images obtained by the mappings ΦRGB→sRGB(ad65) and ΦRGB→sRGB(LED), respectively, while Figures 6(e) and 7(e) show the virtual images generated from the standardized tristimulus values. To the human eye, the calibrated and standardized images look very similar. As shown in Table 4, the color differences ΔEave(Image) between the mean color values of the calibrated images and the standard tristimulus values are 1.42, 0.74 and 1.76, 1.55, all located in the totally unnoticeable or unnoticeable range.
The comparison of ΔEave(Image), ΔC between captured images and theoretical images before and after calibration.

Serial number                    | 368 M            |        | 225 M            |
Standard chromaticity coordinate | (0.3385, 0.4260) |        | (0.3458, 0.2715) |
Illuminant                       | D65    | LED    | D65    | LED
Precalibration ΔEave(Image)      | 13.73  | 18.54  | 31.59  | 50.96
Postcalibration ΔEave(Image)     | 1.42   | 0.74   | 1.76   | 1.55
Precalibration ΔCb               | 0.0539 | 0.0501 | 0.1558 | 0.0954
Postcalibration ΔCa              | 0.0098 | 0.0105 | 0.0135 | 0.0044
Improved ratio                   | 81.82% | 79.04% | 91.33% | 95.39%
The comparison of 368 M color patches. (a) Before calibration under the D65 illuminator; (b) before calibration under the LED illuminator; (c) obtained by ΦRGB→sRGB(ad65); (d) obtained by ΦRGB→sRGB(LED); (e) theoretical image of the standardized tristimulus values.
The comparison of 225 M color patches. (a) Before calibration under the D65 illuminator; (b) before calibration under the LED illuminator; (c) obtained by ΦRGB→sRGB(ad65); (d) obtained by ΦRGB→sRGB(LED); (e) theoretical image of the standardized tristimulus values.
The colors of actual calibrated images are usually not uniform and cannot be represented by mean values, so using only the average color difference of images ΔEave(Image) as the evaluation criterion is not objective enough. Therefore, the chromaticity diagram corresponding to all the pixels of the images before and after color calibration is also used to compare the calibration results, and the average Euclidean distance between the chromaticity coordinate of every pixel and the standard chromaticity coordinate is used for quantitative analysis. Figures 8(a) and 8(b) show the x-y chromaticity diagrams of the two color patches before and after color calibration, obtained by transforming the pixels of Figures 6(a)–6(e) and 7(a)–7(e) by means of (1)–(19).
Chromaticity diagram x-y before and after calibration. (a) 225 M color patch and (b) 368 M color patch.
Although the color of the captured color patches is uniform, Figure 8 shows that nonuniformity appears in the chromaticity diagram, caused by nonuniform illumination, irregularities of the color patches, and so forth. Figures 8(a) and 8(b) show that, before calibration, the x-y chromaticity coordinates of the images captured under the D65 illuminator lie closer to the standardized chromaticity coordinate than those captured under the LED illuminator. The average chromaticity-coordinate distances ΔCb for the 368 M and 225 M color patches are 0.0539, 0.0501 and 0.1558, 0.0954, respectively, confirming that, with the same digital sensor, the images captured under the D65 illuminator are better than those under the LED illuminator. As can be seen, the chromaticity coordinate distributions of the images calibrated by the mappings ΦRGB→sRGB(ad65) and ΦRGB→sRGB(LED) almost overlap each other. Figure 8 also shows that all the chromaticity coordinates of the calibrated images cluster around the standard chromaticity coordinate as the central point of the chromaticity diagram. The average chromaticity-coordinate distances of the captured images after color calibration decrease to 0.0098, 0.0105 and 0.0135, 0.0044, respectively. Such small distances cannot even be perceived by a human being, and the color differences ΔEave(Image) confirm these results. The improved ratios all reach approximately 80% or more. All the results are listed in Table 4. These evaluation criteria will therefore be applied to evaluate the results of the further applications.
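The ΔC criterion and the improved ratio used above can be sketched as follows; the function names are illustrative, and the numbers in the usage line are the Table 4 values for the 225 M patch under the D65 illuminator (ΔCb = 0.1558, ΔCa = 0.0135, reported improvement 91.33%).

```python
import numpy as np

def xy_chromaticity(xyz):
    """Per-pixel (x, y) chromaticity from CIE XYZ; xyz has shape (N, 3)."""
    s = xyz.sum(axis=1, keepdims=True)
    return xyz[:, :2] / s

def delta_c(xyz_pixels, xy_standard):
    """Mean Euclidean distance between each pixel's (x, y) coordinate
    and the standard chromaticity coordinate."""
    xy = xy_chromaticity(xyz_pixels)
    return np.linalg.norm(xy - xy_standard, axis=1).mean()

def improved_ratio(dc_before, dc_after):
    """Relative reduction of ΔC achieved by calibration."""
    return (dc_before - dc_after) / dc_before

# 225 M patch under D65 (Table 4): ΔCb = 0.1558 falls to ΔCa = 0.0135
print(improved_ratio(0.1558, 0.0135))
```

Applying `delta_c` to every pixel, rather than to the image mean alone, is what makes this criterion sensitive to the nonuniformity visible in Figure 8.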
sRGB and CIEL∗a∗b∗, as device-independent color spaces, are used to conduct the color calibration experiments, and the Polynomial-Based Regression method is adopted to compute the mapping and the resulting color differences. As shown in Tables 2 and 3, the color differences are large when m=4 but fall considerably when m=11, and the sRGB color space yields the greater improvement from calibration. Moreover, for images taken under different illuminators, the better calibration performance is obtained by the mapping ΦRGB→sRGB.
The experimental results show that sRGB is a better choice of device-independent color space than CIEL∗a∗b∗, whether the polynomial regression uses 4 or 11 terms and whether the colorized vision system captures images under the artificial D65 illuminator or the LED illuminator. The images captured under the artificial D65 illuminator cannot serve as reference images, since their average color difference is as high as 16.09. The images acquired under various illumination conditions with the same digital sensor are successfully transformed by the mapping coefficient matrix ΦRGB→sRGB in sRGB color space. The average Euclidean distance ΔC of the image pixels, the average color difference of the images ΔEave(Image), and the x-y chromaticity diagram are adopted as evaluation criteria. These results confirm that the calibration of the proposed colorized vision system is useful and beneficial.
In this section, two mapping matrices, ΦRGB→sRGB and ΦRGB→L∗a∗b∗, were introduced and color calibration experiments were conducted under different illuminators. Using the Polynomial-Based Regression with 4 and 11 terms, the mapping matrix ΦRGB→sRGB showed the better performance and is therefore applied in the application experiments.
6. Application Experiments
To further verify that the calibrated colorized vision system can be applied to color measurement, the printed matter of clothes and a ceramic tile are chosen as experimental samples. Unlike the comparison and selection experiments, in the real applications all images are acquired by the colorized vision system under the LED illuminator, because the results in Section 5 show that the calibration accuracy under the LED illuminator is as high as under the D65 illuminator. Beyond calibration accuracy, the LED illuminator offers superior luminous efficacy, compactness, durability, and convenience for the industrial scene. The white balance of the LED illuminator, (R:G:B) = (254 : 237 : 90), is used to capture the images.
Figure 9 shows the images of the clothes printed matter before and after calibration. An obvious difference can be seen between them. Several observers with normal color vision were invited to compare the actual clothes color with the calibrated color in Figure 9; they all judged the color of Figure 9(b) to be close to the color of the clothes. The average color difference ΔEave and the average Euclidean distance ΔC of the images of the blue printed matter in Figure 9 are also computed. Before calibration the values are 17.63 and 0.0236, while after calibration they fall to 4.92 and 0.0092. The average color difference ΔEave(Image) lies in the range of little difference. Figure 12(a) shows the chromaticity diagram of the blue printed matter of the clothes; the chromaticity coordinate of the standardized value lies at the edge of the chromaticity distribution of the calibrated images. The standard tristimulus value is measured by an X-Rite SP60 colorimeter and is only an average value over a circle of three-centimeter diameter, whereas the colors of the measured samples are not uniform. Hence, the standardized tristimulus value is not very accurate and serves only as a reference.
The images of printed matter of clothes. (a) Before calibration and (b) after calibration.
Colored ceramic tile is an important and widely used construction material. The color of ceramic tile easily becomes nonuniform because of inaccurate control of the kiln temperature and the conveying speed. Nowadays, the color of ceramic tile is still measured by human vision or by colorimeter. Figure 10 shows the ceramic tile images before and after color calibration. From Figures 10(a) and 10(b), we can see that the mapping coefficient matrix ΦRGB→sRGB(LED) is sensitive to specular reflection, because ceramic tile is a material with high specular reflectance: many black pixels appear in the middle of the calibrated image, and the whole image is distorted. Hence, a linear polarizer is placed in front of the digital sensor to capture the image, and the mapping ΦRGB→sRGB(LED) is then used to calibrate it. As shown in Figure 11, the specular reflection is eliminated and the calibrated image shows the normal color. We again invited several observers with normal color vision to compare the ceramic tile color; they agreed unanimously that the color of the calibrated image is close to that of the ceramic tile.
The images of ceramic tile. (a) Before calibration and (b) after calibration.
The images captured by a linear polarizer before and after calibration. (a) Before calibration and (b) after calibration.
The x-y chromaticity diagram. (a) Blue printed matter and (b) middle region of ceramic tile.
We select the middle of the images to calculate the average color difference and to draw the chromaticity diagram. As shown in Table 5, the average color difference and the average Euclidean distance of the chromaticity coordinates are 34.57 and 0.0166 before calibration, falling to 5.69 and 0.0115 after color calibration. Figure 12(b) shows the x-y chromaticity diagram of the middle region of the ceramic tile. It can be seen that the improvement of the chromaticity coordinates is not in accordance with that of the average color difference ΔEave(Image): the Euclidean distance between the average RGB tristimulus values before and after calibration is as large as 106, but the chromaticity coordinates, which represent only the proportions of the tristimulus values, remain close to each other. In this situation, comparing chromaticity coordinates says little about the calibration results, and the average color difference ΔEave(Image) is the more meaningful measure of the change before and after calibration.
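The point above, that a large RGB distance can coexist with nearly identical chromaticity coordinates, follows because chromaticity encodes only proportions. A minimal numerical sketch with hypothetical values:

```python
import numpy as np

def chromaticity(v):
    """(x, y)-style chromaticity: first two components over the sum."""
    v = np.asarray(v, dtype=float)
    return v[:2] / v.sum()

# Two hypothetical tristimulus values with the same proportions but
# different intensity (e.g. before vs. after a brightness change).
dark = np.array([40.0, 60.0, 30.0])
bright = 2.5 * dark

rgb_distance = np.linalg.norm(bright - dark)        # large (> 100)
chroma_distance = np.linalg.norm(chromaticity(bright) - chromaticity(dark))
print(rgb_distance, chroma_distance)                # large vs. ~0
```

This is why the ceramic tile shows an RGB distance of about 106 while its chromaticity-based improved ratio is only 30.72%: calibration here mostly corrects intensity, which the chromaticity diagram cannot reflect.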
The comparison of ΔEave(Image), ΔC between captured images and the reference value measured by SP60 before and after calibration.

Obtained by mapping ΦRGB→sRGB(LED)       | Blue printed matter | Middle region of ceramic tile
Chromaticity coordinate measured by SP60 | (0.28, 0.3135)      | (0.3313, 0.3456)
Precalibration ΔEave(Image)              | 17.63               | 30.93
Postcalibration ΔEave(Image)             | 4.92                | 5.82
Precalibration ΔCb                       | 0.0236              | 0.0166
Postcalibration ΔCa                      | 0.0092              | 0.0115
Improved ratio                           | 61.02%              | 30.72%
From the discussion above, we conclude that the tristimulus values obtained by the mapping ΦRGB→sRGB(LED) are close to the values measured by the SP60 colorimeter. The calibration accuracy in the real applications is lower than in the calibration experiments because the reference values measured by the SP60 are not that accurate. After calibration, the colorized vision system can replace the colorimeter for pixel-level color measurement under certain conditions. The mapping algorithm Φ is sensitive to specular reflection; a linear polarizer should be added to the proposed colorized vision system when the sample is a material with high specular reflectance.
7. Conclusion
This paper presents a colorized vision system with a digital sensor and an LED array illuminator for color measurement. To improve the measurement accuracy, color calibration is implemented for the color vision system illuminated under the artificial D65 illuminator and under the LED array illuminator. First, the mathematical model of the calibration process is derived from the tristimulus principle of CIE 1931, converting calibration into an optimal mapping problem. Second, the Polynomial-Based Regression is used to obtain the mapping coefficient matrix; with this matrix, the tristimulus values in a device-independent color space are obtained from the RGB values of images captured by the color vision system. Third, color calibration of the proposed colorized vision system is performed by the polynomial regression method in two commonly used color spaces, CIEL∗a∗b∗ and sRGB. The sRGB color space is recommended as the device-independent color space because it requires fewer transformations before calibration, achieves higher accuracy, and is more convenient for displaying images after calibration. The images illuminated by the artificial D65 illuminator cannot serve as reference images because the average color difference of the color patches before calibration is as high as 16.09ΔE. The mapping matrix has proved effective in reducing the color difference to less than 3ΔE and improving the average chromaticity-coordinate distance ΔC of the images by more than 80%.
Finally, the printed matter of clothes and colored ceramic tile are used as application samples for color measurement. The mapping matrix is sensitive to pixels with specular reflection; when the sample is a material with high specular reflectance, a linear polarizer should be added to reduce the specular component for our color vision system. The average color differences between the estimated values and the values measured by the SP60 colorimeter are less than 6ΔE. The experimental results show that the colorized vision system is feasible for color measurement.
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This research is supported by the National Natural Science Foundation of China (51305137), the National Science Foundation of Jiangxi Province (20151BBE50116), and the National Science Foundation of Jiangxi Province (GJJ14388).
References
[1] H. M. G. Stokman, T. Gevers, and J. J. Koenderink, "Color measurement by imaging spectrometry."
[2] K. Kılıç, B. Onal-Ulusoy, and I. H. Boyacı, "A novel method for color determination of edible oils in L∗a∗b∗ format."
[3] A. C. M. de Oliveira and M. O. Balaban, "Comparison of a colorimeter with a machine vision system in measuring color of Gulf of Mexico sturgeon fillets."
[4] C. Boukouvalas, J. Kittler, R. Marik, and M. Petrou, "Automatic color grading of ceramic tiles using machine vision."
[5] C. Boukouvalas, J. Kittler, R. Marik, and M. Petrou, "Color grading of randomly textured ceramic tiles using color histograms."
[6] J. Pladellorens, A. Pintó, A. J. Segura, C. Cadevall, J. Antó, J. Pujol, M. Vilaseca, and J. Coll, "A device for the color measurement and detection of spots on the skin."
[7] K. León, D. Mery, F. Pedreschi, and J. León, "Color measurement in L∗a∗b∗ units from RGB digital images."
[8] J. Pospíšil, J. Hrdý, and J. Hrdý Jr., "Basic methods for measuring the reflectance color of iron oxides."
[9] Z.-M. Zhu, X.-H. Qu, G.-X. Jia, and J.-F. Ouyang, "Uniform illumination design by configuration of LED array and diffuse reflection surface for color vision application."
[10] Z. M. Zhu, X. H. Qu, H. Y. Liang, and G. X. Jia, "Effect of color illumination on color contrast in color vision application," in Optical Metrology and Inspection for Industrial Applications, vol. 7855 of Proceedings of SPIE, 785510, November 2010.
[11] Z. Zhu, X. Qu, and G.-X. Jia, "Wavelength intervals selection of illumination for separating objects from backgrounds in color vision applications."
[12] Z.-M. Zhu, X.-G. Qu, B. Chao, G.-X. Jia, and F.-M. Zhang, "Study on colorimetric properties of LED array sources for color vision application."
[13] Z. T. Kosztyán, G. P. Eppeldauer, and J. D. Schanda, "Matrix-based color measurement corrections of tristimulus colorimeters."
[14] M. Wolski, C. A. Bouman, J. P. Allebach, and E. Walowit, "Optimization of sensor response functions for colorimetry of reflective and emissive objects."
[15] M. J. Vrhel, H. J. Trussell, and J. Bosch, "Design and realization of optimal color filters for correction."
[16] D. Ng, J. P. Allebach, M. Analoui, and Z. Pizlo, "Non-contact imaging colorimeter for human tooth color assessment using a digital camera."
[17] D.-Y. Ng and J. P. Allebach, "A subspace matching color filter design methodology for a multispectral imaging system."
[18] L.-C. Chiu and C.-S. Fuh, "Dynamic color restoration method in real time image system equipped with digital image sensors."
[19] Y. V. Haeghen, J. Naeyaert, I. Lemahieu, and W. Philips, "An imaging system with calibrated color image acquisition for use in dermatology."
[20] V. Cheung, S. Westland, D. Connah, and C. Ripamonti, "A comparative study of the characterisation of colour cameras by means of neural networks and polynomial transforms."
[21] X. Wang and D. Zhang, "An optimized tongue image color correction scheme."
[22] W. Li, M. Soto-Thompson, and U. Gustafsson, "A new image calibration system in digital colposcopy."
[23] M. J. Vrhel and H. J. Trussell, "Color device calibration: a mathematical formulation."
[24] M. Shi and G. Healey, "Using reflectance models for color scanner calibration."
[25] G. Hong, M. R. Luo, and P. A. Rhodes, "A study of digital camera colorimetric characterization based on polynomial modeling."
[26] Y.-C. Chang and J. F. Reid, "RGB calibration for color image analysis in machine vision."
[27] W.-C. Kao, S.-H. Wang, C.-C. Kao, C.-W. Huang, and S.-Y. Lin, "Color reproduction for digital imaging systems," in Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS '06), Island of Kos, Greece, May 2006, pp. 4599–4602.
[28] B. Bastani, B. Cressman, and B. Funt, "Calibrated color mapping between LCD and CRT displays: a case study."
[29] H. D. Cheng, X. Cai, and R. Min, "A novel approach to color normalization using neural network."
[30] G. Sharma, "Target-less scanner color calibration."
[31] J. L. Gardner, "Comparison of calibration methods for tri-stimulus colorimeters."
[32] J. L. Gardner, "Tristimulus colorimeter calibration matrix uncertainties."
[33] G. Eppeldauer, "Spectral response based calibration method of tristimulus colorimeters."
[34] D. D. Testa and M. Rossi, "Lightweight lossy compression of biometric patterns via denoising autoencoders."
[35] M. Jackowski, A. Goshtasby, S. Bines, D. Roseman, and C. Yu, "Correcting the geometry and color of digital images."
[36] L. G. Corzo, J. A. Peñaranda, and P. Peer, "Estimation of a fluorescent lamp spectral distribution for color image in machine vision."
[37] G. E. Healey and R. Kondepudy, "Radiometric CCD camera calibration and noise estimation."
[38] G. Sapiro, "Color and illuminant voting."
[39] P. L. Vora and H. J. Trussell, "Mathematical methods for the design of color scanning filters."
[40] M. J. Vrhel and H. J. Trussell, "Optimal color filters in the presence of noise."
[41] G. Paschos, "Perceptually uniform color spaces for color texture analysis: an empirical evaluation."