Although computer color matching can reduce the influence of technicians' subjective factors, matching the color of a natural tooth with a ceramic restoration remains one of the most challenging topics in esthetic prosthodontics. The back propagation neural network (BPNN) has already been introduced into computer color matching in dentistry, but it has disadvantages such as instability and low accuracy. In our study, we adopt a genetic algorithm (GA) to optimize the initial weights and threshold values of the BPNN in order to improve the matching precision. To our knowledge, this is the first work to combine the BPNN with GA for computer color matching in dentistry. Extensive experiments demonstrate that the proposed method improves the precision and prediction robustness of color matching in restorative dentistry.
With the rapid development of technology, various new materials have been brought into dentistry. People no longer pay attention only to functional recovery, such as chewing and durability; instead, they pay increasing attention to aesthetics [
Color difference between target tooth and a shade guide tab.
The computer color matching (CCM) technique provides a broad new approach to the research and application of color matching for tooth restorations. Following the Kubelka-Munk theory put forward in 1931, computer color matching has been widely used in the dyeing and printing industry. In a series of studies from 1992 to 1994, Ishikawa-Nagai et al. realized computer color matching of the opaque layer for the color of porcelain-fused-to-metal (PFM) restorations using a spectrophotometer [
It is worth mentioning that there is an obvious color difference between some of the porcelain specimens and the natural dentition in the CCM-based experiments by Ishikawa-Nagai et al. [
The BP neural network is currently one of the most popular neural network methods [
The artificial neural network (ANN) is accepted as a technology offering an alternative way to simulate complex and ill-defined problems. The back propagation neural network (BPNN) is a typical ANN that has been widely used in many medical fields, such as medical image analysis, expert systems for clinical diagnosis and treatment, and medical signal analysis and processing. It has successfully solved many complicated nonlinear problems. The BPNN has a hierarchical feed-forward network architecture, in which the outputs of each layer are sent directly to each neuron of the next layer. A BPNN can have many layers, but almost all pattern recognition and classification tasks can be accomplished with a three-layer BPNN, as shown in Figure
The BPNN structure with 3 layers.
According to the Kolmogorov theorem, a three-layer BP network with a nonlinear excitation function can approximate any nonlinear function to arbitrary precision. The multilayer perceptron is widely employed because of this remarkable advantage. However, the standard BP algorithm has the following defects. Mathematically, its training is a nonlinear gradient optimization problem; therefore, it easily falls into local minima and may fail to reach the global optimal solution. Excessive training slows convergence. The number of hidden layer nodes is difficult to determine for lack of theoretical guidance. Finally, the network tends to forget old samples during training with new samples.
To address these problems, three commonly used methods have been proposed.
The formula shows that a fraction of the previous weight adjustment is added to the current weight update. The
An initial learning rate should be set. After a round of weight adjustment, if the total error increases, the current adjustment is regarded as invalid, and the learning rate is adjusted according to formula
Conversely, if the total error decreases, the current adjustment is regarded as valid, and the learning rate is adjusted according to formula
The training is considered to have entered a flat area when
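As a rough illustration of the first two improvements, the momentum and adaptive-learning-rate updates can be sketched as below. The coefficient names (`alpha`, `up`, `down`) and their values are illustrative assumptions; the paper's exact expressions are the formulas referenced above.

```python
import numpy as np

def update_with_momentum(w, grad, prev_delta, lr, alpha=0.9):
    """Gradient-descent step with a momentum term: a fraction `alpha`
    of the previous weight change is added to the current change."""
    delta = -lr * grad + alpha * prev_delta
    return w + delta, delta

def adapt_learning_rate(lr, err, prev_err, up=1.05, down=0.7):
    """Adaptive learning rate: shrink lr when the total error grew
    (the last adjustment is treated as invalid), grow it when the
    error decreased.  `up` and `down` are illustrative factors."""
    return lr * down if err > prev_err else lr * up
```

The momentum term smooths successive updates, which helps the search cross flat regions of the error surface instead of stalling in them.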
The initial weights and thresholds of a traditional neural network are randomly generated, and the overall distribution of the network's connection weights and thresholds influences the quality of data fitting. Improper initial parameters can lead to non-convergence or to falling into a local extremum, which worsens the accuracy of the final prediction.
In clinical applications, low error and high stability are needed to provide better service to patients. A genetic algorithm (GA) is therefore adopted to improve the accuracy of computer color matching of restorations. The GA optimizes the initial weights and threshold values, which effectively reduces the randomness of the initial parameters. By combining GA with the neural network, the local-optimum defect of the BP algorithm is overcome and the predictive effect becomes more stable.
The genetic algorithm is a method that simulates the process of evolution: it follows the principle of evolution and takes the best-evolved individual as the optimal solution. The flowchart of the genetic algorithm is shown in Figure
Flow chart of genetic algorithm.
Each step of the genetic algorithm is explained as follows.
The probability of individual
We mixed VITA VMK95 dentin porcelain powders in different proportions. The powder was molded in a homemade stainless steel mold (diameter 15 mm; thickness 3 mm). Then we fired the porcelain powder in a porcelain furnace to manufacture the porcelain specimens. Finally, the color database of porcelain restorations was generated by measuring the shade of each specimen with a Crystaleye dental spectrophotometer [
Measuring the shade of a specimen with the Crystaleye spectrophotometer.
A total of 119 sets of data have been obtained using the above method. 75% of the data is used as the training data set, while the remaining 25% is used as the test data set. An example of the experimental data is shown in Table
Example of experimental data.
L* | a* | b* | Powder 1 | Powder 2 | Powder 3 | Powder 4 | Powder 5 |
---|---|---|---|---|---|---|---|
66.89667 | −0.01 | 14.92667 | 0.32 | 0 | 0 | 0.08 | 0 |
72.46333 | −1.33333 | 14.58667 | 0.32 | 0 | 0 | 0 | 0.08 |
65.74667 | 1.33 | 19.24333 | 0.16 | 0 | 0.16 | 0.08 | 0 |
69.91667 | 1.136667 | 18.86667 | 0.16 | 0 | 0.08 | 0.08 | 0.08 |
65.10667 | 1.513333 | 19.40333 | 0.16 | 0.08 | 0 | 0.08 | 0.08 |
67.76333 | 0.643333 | 20.15667 | 0.16 | 0.08 | 0.08 | 0 | 0.08 |
65.49667 | 1.73 | 20.37333 | 0.16 | 0.08 | 0.08 | 0.08 | 0 |
63.86667 | 1.81 | 20.13 | 0.08 | 0 | 0.16 | 0.16 | 0 |
65.42 | 1.366667 | 21.55 | 0.08 | 0 | 0.16 | 0.08 | 0.08 |
64.71667 | 1.663333 | 18.87333 | 0.08 | 0 | 0.08 | 0.16 | 0.08 |
65.76667 | 1.56 | 20.55333 | 0.08 | 0 | 0.08 | 0.08 | 0.16 |
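The data handling described above (119 samples, three measured color coordinates as inputs, five powder proportions as outputs, split 75/25) can be sketched as follows. The random stand-in values and the value ranges are assumptions for illustration, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the 119 measured samples: 3 color coordinates per
# specimen as inputs, 5 powder proportions as outputs (random stand-ins,
# with ranges loosely matching the table above, not the real data).
X = rng.uniform([60.0, -2.0, 14.0], [75.0, 2.0, 22.0], size=(119, 3))
Y = rng.uniform(0.0, 0.4, size=(119, 5))

# 75% of the data for training, 25% for testing.
idx = rng.permutation(119)
n_train = int(0.75 * 119)               # 89 training samples
train_idx, test_idx = idx[:n_train], idx[n_train:]
X_train, Y_train = X[train_idx], Y[train_idx]
X_test,  Y_test  = X[test_idx],  Y[test_idx]
```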
According to the actual situation described in the previous section, the number of nodes in the input layer is 3 and in the output layer is 5. Since a network with multiple hidden layers has a more complicated structure, and a three-layer neural network can implement almost all pattern recognition and classification tasks, a three-layer neural network is employed.
There is no good analytical expression for choosing the number of hidden layer nodes; it is usually determined by experience or by testing.
Formula (
In formula (
To determine the specific number of hidden layer nodes, we adopted the idea of trial and error and conducted a series of 10 trials, each performing 20 predictions. The experimental data is the training data set referred to in the previous section. The trials differ only in the number of hidden layer nodes; all other parameters are kept consistent. The experimental results are shown in Figure
Comparisons of predictive ability of BPNN with different number of hidden layer nodes.
As shown in Figure
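The trial-and-error search over hidden-layer sizes can be sketched as follows. The range of sizes (3 to 12) and the random `prediction_error` stand-in are assumptions for illustration; in the actual experiment, each trial trains the BPNN and measures its real prediction error.

```python
import numpy as np

def prediction_error(hidden_nodes, seed):
    """Stand-in for one train-and-predict run of a 3-hidden-5 network;
    returns a random error value purely for illustration."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.02, 0.06)

# 10 trials, each with a different hidden-layer size; every trial
# repeats the prediction 20 times and the errors are aggregated.
results = {}
for trial, hidden in enumerate(range(3, 13)):
    errs = [prediction_error(hidden, seed=trial * 20 + rep) for rep in range(20)]
    results[hidden] = (np.mean(errs), np.std(errs))

# Pick the hidden-layer size with the lowest mean error.
best = min(results, key=lambda h: results[h][0])
```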
After the network structure is identified, we conducted BPNN prediction experiments using the MATLAB neural network toolbox, which provides a variety of improved algorithms. Our statement for building the training model is as follows: net = newff(inputn, outputn, hiddennum, {"tansig", "tansig"}, "traingd").
As the above statement shows, the two excitation functions are both tangent sigmoid ("tansig") functions, and the training function is gradient descent ("traingd").
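A minimal NumPy stand-in for this three-layer model is sketched below, assuming tangent-sigmoid activations in both layers and plain gradient descent, matching the newff statement above. The hidden-layer size, learning rate, and initialization range are illustrative assumptions, not the toolbox defaults.

```python
import numpy as np

def tansig(x):
    """Tangent sigmoid activation, as in MATLAB's 'tansig'."""
    return np.tanh(x)

class ThreeLayerBPNN:
    """Minimal 3-input / hidden / 5-output network trained with plain
    batch gradient descent ('traingd'); a stand-in for the toolbox model."""

    def __init__(self, n_in=3, n_hidden=6, n_out=5, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.uniform(-0.5, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = tansig(X @ self.W1 + self.b1)
        self.o = tansig(self.h @ self.W2 + self.b2)
        return self.o

    def backward(self, X, Y):
        # Back-propagate the squared-error gradient through both tanh layers.
        d_out = (self.o - Y) * (1 - self.o ** 2)
        d_hid = (d_out @ self.W2.T) * (1 - self.h ** 2)
        self.W2 -= self.lr * self.h.T @ d_out
        self.b2 -= self.lr * d_out.sum(axis=0)
        self.W1 -= self.lr * X.T @ d_hid
        self.b1 -= self.lr * d_hid.sum(axis=0)
```

Each training epoch is one `forward` pass followed by one `backward` pass over the training batch.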
Examples of actual output and expected output of experiment.
Powder 1 (actual) | Powder 2 (actual) | Powder 3 (actual) | Powder 4 (actual) | Powder 5 (actual) | Powder 1 (expected) | Powder 2 (expected) | Powder 3 (expected) | Powder 4 (expected) | Powder 5 (expected) |
---|---|---|---|---|---|---|---|---|---|
0.1605 | 0.0566 | 0.0262 | 0.0412 | 0.0576 | 0.24 | 0.08 | 0.08 | 0 | 0 |
0.0009 | 0.1468 | 0.1755 | 0.0317 | 0.044 | 0 | 0 | 0 | 0.4 | 0 |
0.0045 | 0.1113 | 0.0952 | 0.024 | 0.0419 | 0.16 | 0.08 | 0 | 0.16 | 0 |
0.3388 | 0.1398 | 0.0169 | 0.051 | 0.0468 | 0.16 | 0.08 | 0 | 0 | 0.16 |
0.0208 | 0.125 | 0.3111 | 0.2054 | 0.1481 | 0.08 | 0 | 0 | 0.08 | 0.24 |
0.2196 | 0.0578 | 0.0212 | 0.0427 | 0.0545 | 0.24 | 0 | 0.08 | 0 | 0.08 |
0.0014 | 0.1297 | 0.1619 | 0.026 | 0.0443 | 0.08 | 0.24 | 0 | 0.08 | 0 |
0.0067 | 0.0875 | 0.058 | 0.0116 | 0.0523 | 0.08 | 0.16 | 0.16 | 0 | 0 |
0.0345 | 0.0798 | 0.0281 | 0.0139 | 0.0425 | 0 | 0 | 0 | 0.08 | 0.32 |
0.0008 | 0.1492 | 0.1702 | 0.0285 | 0.0462 | 0 | 0 | 0.08 | 0.32 | 0 |
0.0227 | 0.0912 | 0.0413 | 0.0231 | 0.0424 | 0.16 | 0.16 | 0 | 0 | 0.08 |
0.0077 | 0.1009 | 0.074 | 0.0213 | 0.0421 | 0 | 0 | 0.16 | 0 | 0.24 |
0.0022 | 0.1204 | 0.094 | 0.0174 | 0.0465 | 0 | 0.24 | 0 | 0 | 0.16 |
0.0017 | 0.1287 | 0.1474 | 0.0258 | 0.0438 | 0 | 0.32 | 0 | 0.08 | 0 |
0.1147 | 0.0658 | 0.0167 | 0.0162 | 0.0393 | 0.32 | 0 | 0.08 | 0 | 0 |
0.0028 | 0.1242 | 0.1194 | 0.0321 | 0.0417 | 0 | 0 | 0 | 0.32 | 0.08 |
0.0922 | 0.0716 | 0.0166 | 0.0124 | 0.0395 | 0.08 | 0 | 0.24 | 0 | 0.08 |
We can use formula (
Finally, the mean square error (MSE) is used to represent the total error of this structure. MSE is calculated by using formula
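Assuming the standard definition of MSE (the paper's exact expression is the formula referenced above), a minimal sketch:

```python
import numpy as np

def mse(actual, expected):
    """Mean square error averaged over all outputs and samples
    (standard definition, assumed here)."""
    actual, expected = np.asarray(actual), np.asarray(expected)
    return np.mean((actual - expected) ** 2)
```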
We conducted a series of 10 tests, all with the same parameters; the MSE of each test is shown in Table
The MSE of BPNN.
Serial number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
MSE | 0.0289 | 0.0417 | 0.0406 | 0.0346 | 0.0387 | 0.0416 | 0.035 | 0.0368 | 0.0366 | 0.0408 |
The fitness function of the GA is the BP algorithm provided by the MATLAB neural network toolbox, and we chose the Levenberg-Marquardt algorithm as the training function [
The initialization parameters, namely, the thresholds and weights, are obtained after the GA process, and the BPNN model is then constructed with them. In this section, we adopted the additional momentum term and introduced the gradient factor to improve the BPNN. Similarly, we conducted a series of 10 trials and obtained the MSE of each group; the MSE of each trial is shown in Table
The MSE of GA+BP.
Serial number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
---|---|---|---|---|---|---|---|---|---|---|
MSE | 0.0335 | 0.0327 | 0.0333 | 0.0349 | 0.03 | 0.0324 | 0.0318 | 0.0338 | 0.0309 | 0.0316 |
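The GA+BP pipeline, in which the GA searches for initial weights and thresholds that are then handed to the BPNN, can be sketched as follows. This is a simplified illustration: the toy data, population size, crossover and mutation rates, and the omission of the inner BP training inside the fitness function are all assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the training data: 3 color inputs, 5 proportion outputs.
X = rng.uniform(-1, 1, (30, 3))
Y = rng.uniform(-0.3, 0.3, (30, 5))

N_HID = 6                                  # assumed hidden-layer size
DIM = 3 * N_HID + N_HID + N_HID * 5 + 5    # all weights and thresholds

def decode(chrom):
    """Split a flat chromosome into the network's weights and thresholds."""
    i = 0
    W1 = chrom[i:i + 3 * N_HID].reshape(3, N_HID); i += 3 * N_HID
    b1 = chrom[i:i + N_HID]; i += N_HID
    W2 = chrom[i:i + N_HID * 5].reshape(N_HID, 5); i += N_HID * 5
    b2 = chrom[i:]
    return W1, b1, W2, b2

def fitness(chrom):
    """Network error under the candidate initial parameters.  (The paper
    evaluates the BP training result here; the inner training loop is
    omitted in this sketch.)"""
    W1, b1, W2, b2 = decode(chrom)
    out = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - Y) ** 2)

def evolve(pop_size=40, gens=50, pc=0.7, pm=0.05):
    """Roulette-wheel selection, single-point crossover, Gaussian
    mutation, with the best individual carried over each generation."""
    pop = rng.uniform(-1, 1, (pop_size, DIM))
    for _ in range(gens):
        errs = np.array([fitness(c) for c in pop])
        elite = pop[errs.argmin()].copy()
        fit = 1.0 / (errs + 1e-9)          # lower error -> higher fitness
        parents = pop[rng.choice(pop_size, pop_size, p=fit / fit.sum())]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:          # single-point crossover
                cut = rng.integers(1, DIM)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        mask = rng.random(children.shape) < pm
        children[mask] += rng.normal(0, 0.1, mask.sum())
        children[0] = elite                # elitism
        pop = children
    return pop[np.argmin([fitness(c) for c in pop])]

best_init = evolve()   # initial weights/thresholds handed to the BPNN
```

The chromosome decoded by `decode` is then used as the BPNN's starting point instead of random initialization, which is the core idea of the GA+BP model.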
Comparisons between BPNN and GA+BP are shown in Figure
Comparisons of experimental results.
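Reading the MSE values from the two tables above, a short script can summarize the comparison: GA+BP attains both a lower mean error and a smaller spread across the 10 trials, which is consistent with the stability claim.

```python
import numpy as np

# MSE values of the 10 trials, taken from the two tables above.
mse_bp = np.array([0.0289, 0.0417, 0.0406, 0.0346, 0.0387,
                   0.0416, 0.0350, 0.0368, 0.0366, 0.0408])
mse_gabp = np.array([0.0335, 0.0327, 0.0333, 0.0349, 0.0300,
                     0.0324, 0.0318, 0.0338, 0.0309, 0.0316])

print(f"BPNN : mean={mse_bp.mean():.4f}, std={mse_bp.std():.4f}")
print(f"GA+BP: mean={mse_gabp.mean():.4f}, std={mse_gabp.std():.4f}")
```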
An improved forecasting model for dental porcelain computer color matching, called GA+BP, is proposed. Based on the study and comprehensive discussion of the traditional BPNN, the initial weights and thresholds are first optimized by GA. Experiments show that determining appropriate initial parameters, instead of selecting them randomly, enhances the convergence performance and stability of the BPNN, making the color matching of restorations more objective and accurate.
The GA+BP model can reach the prediction goal of CCM in practical research; it therefore has high practical application value and can play a guiding role in CCM. As computer science is further introduced into the medical field, stomatological hospitals will be able to provide better services to patients in the future.
The authors declare that there is no conflict of interest regarding the publication of this paper.
This research is partially supported by the National Natural Science Foundation of China (no. 81200805) and the Ph.D. Programs Foundation of the Ministry of Education of China (no. 20120001120080).