A Novel Edge Feature Description Method for Blur Detection in Manufacturing Processes

A novel inspection sensor using an edge feature description (EFD) algorithm based on a support vector machine (SVM) is proposed for the industrial inspection of images. The method detects and adaptively segments blurred images; the EFD effectively classifies blurred samples and improves on conventional methods of inspecting blurred objects by selecting and optimally tuning suitable features. The proposed sensor applies a suitable feature-extraction strategy on the basis of the sensing results. Experimental results demonstrate that the proposed method outperforms existing methods.


Introduction
Vision-based techniques are being increasingly used in industrial inspection. Effective inspection of blurred objects has long been a challenge, and object blurring is one of the prime causes of poor performance in vision-based inspection. Object blurring occurs mainly because of object movement and the defocusing and shaking of the camera. Several deblurring methods have been proposed to address object motion [1] and camera shakes [2]. Xu et al. [3] proposed an optical flow-based model to simultaneously correct both types of blur. However, the optical flow algorithm requires a basic assumption: the pixel intensity or color does not change when the pixel flows from one image to another. Therefore, a technique that effectively recognizes blurred objects without any assumptions is necessary. This paper proposes a feature description-based sensor and an adaptive image segmentation method for inspecting blurred objects. Seeded region growing (SRG) [4] is one of several image segmentation techniques currently available. The proposed method comprises an edge feature description (EFD) and a support vector machine (SVM) that use the regions produced by the adaptive SRG-based algorithm to effectively categorize the objects.
Edge detection is an essential preprocessing technique in vision-based applications. Evaluations and comparisons of edge detection techniques are found in the literature [5]. Some frequently used methods detect edges on the basis of abrupt changes in the gray level [6, 7]; however, orienting the edges with these methods is difficult. Recently, Liu and Fang [8] proposed an ant colony optimization- (ACO-) based approach for edge detection; they adopted a user-defined threshold in the pheromone update process to suppress noise in the detected image. Silva Jr. et al. [9] modified the gravitational edge detection technique [10] with a nonstandard neighborhood configuration [11] to reduce speckle noise in synthetic aperture radar images. For real-time edge detection, Khan et al. [12] integrated a range sensor on field programmable gate arrays (FPGA) and successfully executed image normalization along with edge detection for real-time video processing. In manufacturing, blurred edge detection is also a critical issue. Thus, the EFD-based method is proposed to handle blurred edge detection for the industrial inspection of blurred images.
Several relevant studies have explored vision-based inspection for industrial applications [13, 14]. Gracia et al. [13] developed an inspection system and process for herb flowers. Weyrich et al. [14] developed an industrial vision-based automatic inspection system for welded nuts on support hinges. In addition, image segmentation is a fundamental problem that must be addressed in vision-based inspection. Aiteanu et al. [15] proposed content-based threshold adaptation for segmenting images to discover the optimal threshold value; however, inaccuracies in this procedure distort the results. Thus, this paper proposes an adaptive method based on SVM assessments and an EFD algorithm in the region growing sequence. This strategy segments images without computing the assessment function for every pixel added to the region.
Considering previous studies, the proposed method uses adaptive region growing (ARG) segmentation and an SVM combined with an EFD algorithm for inspection in manufacturing. The proposed method contributes the following to the literature. During industrial inspection, the proposed sensor senses blurred objects and, based on the sensing results, suitably tunes the selection of a feature-extraction strategy. In object classification, the EFD-based inspection sensor effectively recognizes blurred objects without any assumptions. Finally, the EFD-based algorithm improves on conventional methods of inspecting blurred objects.
The remainder of this paper is organized as follows. Section 2 describes the proposed EFD-based method, including the ARG segmentation and EFD algorithm, and then introduces the EFD-based algorithm for industrial inspection. Section 3 presents the experimental results obtained using various samples and comparisons with existing methods. The final section presents the conclusion.

ARG Segmentation and EFD Algorithm

An ARG-based algorithm uses a set of initial seeds and groups neighboring pixels with each initial seed into growth regions when the pixels satisfy the selection criterion |x_i - x_s| <= T, where x_i is the normalized gray level of the i-th pixel in one of the regions, x_s is the gray level of the region's seed, and T is the adjustable threshold of the region, which depends on the result of the proposed recognition.
For inclusion in one of the regions, a pixel must be eight-connected to at least one pixel in that region. The regions are merged if a pixel is connected to more than one region. This paper proposes a new method for the EFD of a segmented image. According to the 3 × 3 mask depicted in (2), edge pixels in the segmented images typically belong to one of eight possible edge patterns in (3)-(10).
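The growth step described above can be sketched as follows. This is a minimal illustration under the assumption that the criterion compares each candidate pixel's normalized gray level with its seed's gray level; the region-merging step is omitted, and the function name and signature are illustrative rather than taken from the paper.

```python
from collections import deque

import numpy as np

def region_grow(image, seeds, threshold):
    """Grow regions from seed pixels: an unlabeled neighbor joins a region
    when the absolute difference between its normalized gray level and the
    seed's gray level stays within the adjustable threshold T."""
    labels = np.zeros(image.shape, dtype=int)
    for region_id, (sr, sc) in enumerate(seeds, start=1):
        queue = deque([(sr, sc)])
        labels[sr, sc] = region_id
        while queue:
            r, c = queue.popleft()
            # Membership requires 8-connectivity to an existing region pixel
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                            and labels[nr, nc] == 0
                            and abs(image[nr, nc] - image[sr, sc]) <= threshold):
                        labels[nr, nc] = region_id
                        queue.append((nr, nc))
    return labels
```

With a normalized gray-level image and a small threshold, only pixels close in intensity to the seed are absorbed; raising the threshold grows larger regions.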
In an edge pattern, the nine pixels are divided into two groups, G0 and G1. For Edge Patterns 1-4, the feature vector V1 = (v1, v2, v3), where v1 = p1 + p4 + p7, v2 = p2 + p5 + p8, and v3 = p3 + p6 + p9, was used for edge description. For Edge Patterns 5-8, the two feature vectors V1 and V2 = (v4, v5, v6), where v4 = p1 + p2 + p3, v5 = p4 + p5 + p6, and v6 = p7 + p8 + p9, were used. For example, a value of 1 is set for the initial seeds in the ARG-based algorithm, and the values of the pixels in G0 and G1 are 1 and 0, respectively. Thus, for Edge Patterns 1, 2, 3, and 4, V1 is (3, 3, 0), (2, 2, 2), (0, 3, 3), and (2, 2, 2), respectively, and the values for Edge Patterns 5-8 follow similarly from V2. After all pixels in an image are processed using the aforementioned procedure, the edges are classified using the feature vectors V1 and V2; an SVM was used for the classification [16]. The edge descriptor D = {d_k} from the feature description comprises the seven coefficients of the normalized edge counts from Edge Patterns 1, 2 (merged with 4), 3, 5, 6, 7, and 8, where each d_k ranges from 0 to 1.
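The column-sum and row-sum feature vectors above can be computed directly from a binary 3 × 3 patch. This is a small sketch assuming region pixels are encoded as 1 and background pixels as 0; the function name is illustrative.

```python
import numpy as np

def edge_feature_vectors(patch):
    """Return (V1, V2) for a binary 3x3 patch: V1 = column sums
    (v1, v2, v3) describing vertical-leaning edge patterns, and
    V2 = row sums (v4, v5, v6) describing horizontal-leaning ones."""
    patch = np.asarray(patch)
    v1_v3 = tuple(patch.sum(axis=0))  # (v1, v2, v3): sums down each column
    v4_v6 = tuple(patch.sum(axis=1))  # (v4, v5, v6): sums across each row
    return v1_v3, v4_v6
```

For a vertical edge patch with the two left columns set to 1, this yields V1 = (3, 3, 0) and V2 = (2, 2, 2), matching the Edge Pattern 1 values given in the text.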
Table 1 illustrates the edge counts (1-8) for the descriptors D1, D2, and D3. The edge counts were normalized into the seven coefficients, and D1, D2, and D3 were {1, 0.46, 0.82, 0.32, 0.30, 0.09, 0.07}, {1, 0.57, 0.89, 0.26, 0.24, 0.18, 0.20}, and {0.06, 1, 0.06, 0.75, 0.13, 0.15, 0.14}, respectively. For SVM classifiers, two parameters must be optimized: the regularization parameter C and the RBF kernel parameter γ. Parameter C is a user-specified positive value that controls the tradeoff between training error and model complexity. This study adopted the hold-out procedure for determining the two parameters, in which the samples were split into training samples, on which the classifiers were trained, and test samples, on which the classifier accuracy was evaluated. Table 2 presents the class labels of the samples. The edge descriptor {d_k} extracted from each image was used as the input data set. In this study, 140 data sets of each sample were used; the SVM was trained and tested with these data sets. Forty data sets were randomly selected as training samples, and the others were used to evaluate the SVM classifier accuracy. Table 3 records the testing accuracy at various combinations of the two parameters. Superior testing accuracy was realized using C = 2^13 and γ = 2^-3 in the SVM.
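The hold-out tuning of C and γ can be sketched with scikit-learn. This is an illustrative implementation, not the paper's code: the grid of powers of two is an assumption based on the reported values C = 2^13 and γ = 2^-3, and the 40-sample training split follows the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def tune_svm(descriptors, labels, train_size=40, seed=0):
    """Hold-out tuning: train RBF SVMs on a random 40-sample subset and
    keep the (C, gamma) pair with the best accuracy on the held-out rest."""
    X_train, X_test, y_train, y_test = train_test_split(
        descriptors, labels, train_size=train_size,
        random_state=seed, stratify=labels)
    best_c, best_gamma, best_acc = None, None, 0.0
    for log_c in range(-5, 16, 2):        # C = 2^-5 ... 2^15
        for log_g in range(-15, 4, 2):    # gamma = 2^-15 ... 2^3
            clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_g, kernel="rbf")
            clf.fit(X_train, y_train)
            acc = clf.score(X_test, y_test)
            if acc > best_acc:
                best_c, best_gamma, best_acc = 2.0 ** log_c, 2.0 ** log_g, acc
    return best_c, best_gamma, best_acc
```

On well-separated descriptor clusters, the sweep quickly identifies a high-accuracy parameter pair.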

EFD-Based Algorithm for Industrial Inspection

This section describes the inspection sensor and the EFD-based approach. As an example of industrial inspection, the EFD method was applied to inspect eyeglass lenses; the tested technology determines the degree of the lenses. This study used the EFD-based inspection sensor to effectively recognize blurred objects. Inspecting object blurring in the manufacturing process is difficult because the inspector may fail to focus the blurred image on the target panel. Therefore, developing an inspection sensor that bridges the gap between the blurred image and the inspector is essential. Figure 1 is the block diagram of the inspection sensor. The proposed sensor senses blurred objects and applies a suitable feature-extraction strategy on the basis of the sensing results. The sensor's procedure is summarized as follows: (1) input the sample images from the image queue; (2) determine whether the image satisfies the condition for blurred-image processing; (3) perform EFD-based extraction; (4) perform SVM classification; and (5) determine whether any image remains in the image queue. As shown in Table 2, for inspecting the 200 Class B sample images, the optimal threshold T_B and the optimal edge descriptors D_B for Class B samples were set to 0.78 and {d5, d6, d7}, respectively. The input images were converted to 1024 × 768 pixel images with an 8-bit gray level; the 256 gray levels were normalized to the range 0-1, and the images were segmented using ARG. Step (2) determines the image processing according to the condition |s| > s_T, where s is the displacement of the sensing device and s_T is the displacement threshold (0.5 mm in this study); images that satisfy the condition undergo the blurred-image (EFD-based) processing, and the others undergo the DWT-based processing described in a previous vision-based study [17]. The procedure is complete when no image remains in the image queue.
T_B and D_B were then reset, and the other samples were inspected similarly. The inspection is complete when the image queue is empty. Figure 2 depicts the experimental setup. The target panel, installed on a platform, indicated the degrees of the lens. The lens was mounted on the support frame of a sensing telescope 10.67 m from the target panel. Data on the test lens are listed in Table 2. The lens was selected from the 100 validation samples. During inspection, an eyeglass lens of unknown degree was mounted on the telescope, and the surface light from the platform illuminated the target panel. To produce camera shakes, a spring-dashpot system was positioned below the telescope, and vibrations were manually induced. An accelerometer placed behind the telescope measured the strength of the vibrations and forwarded the signals to an industrial computer, which converted them to displacements and processed the image signal generated by a charge-coupled device (CCD) camera together with the displacement information. Conventionally, the telescope lens is manually focused on the target panel. The proposed method processes the telescopic images captured by the CCD camera according to the object-sensing results, an approach that quickly determines the degrees of the lens and solves the object-blur problem when focusing the sensing telescope.
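The sensor's five-step loop over the image queue can be sketched as a dispatcher keyed on the accelerometer-derived displacement. This is a minimal outline under the stated 0.5 mm threshold; the function names and the extraction/classification callbacks are illustrative placeholders, not the paper's API.

```python
DISPLACEMENT_THRESHOLD_MM = 0.5  # s_T: displacement threshold from the text

def inspect_queue(images, displacements, efd_extract, dwt_extract, classify):
    """Process each queued image: a displacement above the threshold marks
    the frame as blurred and routes it to EFD-based extraction; otherwise
    DWT-based extraction is used. SVM classification follows either path."""
    results = []
    for image, s in zip(images, displacements):
        if abs(s) > DISPLACEMENT_THRESHOLD_MM:
            features = efd_extract(image)   # blurred-image processing
        else:
            features = dwt_extract(image)   # conventional DWT processing
        results.append(classify(features))
    return results
```

The loop terminates naturally when the queue is exhausted, matching step (5) of the procedure.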
As shown in Figure 3, the proposed algorithm applies the following steps to automatically obtain a suitable threshold T and descriptor set D.
Step 1. The sample images and the given descriptors are input.

Step 2. A value of 1 is set for the initial seeds in the gray image, and the threshold values T are set in the range 0.01-0.99.

Step 3. ARG segmentation is implemented.

Step 4. The edge features are extracted using the EFD.
Step 5. Images are classified using SVM.
Step 6. The recognition rate of each adjustable threshold T is determined for the given images; it is defined as R = N_c / N, where N_c is the number of correctly classified images during the test run and N is the total number of test data sets (in this study, N was 100). If the recognition rate exceeds a given value R_g, Step 7 commences; otherwise, Steps 2-6 are repeated.
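The recognition rate in Step 6 is a simple ratio; a one-line sketch (with an illustrative function name) makes the definition concrete:

```python
def recognition_rate(predicted, actual):
    """R = N_c / N: the fraction of test images classified correctly,
    where N_c counts matches between predictions and true labels."""
    n_correct = sum(p == a for p, a in zip(predicted, actual))
    return n_correct / len(actual)
```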
Step 7. The process terminates if the sample images for all cases of the given descriptor sets have already been tested; otherwise, Steps 1-6 are repeated. In addition, the algorithm stops if no threshold T satisfies the condition in Step 6.
For example, Step 1 inputs the sample images and the descriptors d5, d6, and d7. Step 2 sequentially sets T in the range 0.01-0.99, and the ARG segmentation is implemented (Step 3). Step 4 extracts the edge features, which are classified in Step 5. When R_g is 0.9 (i.e., a 90% accuracy rate), Steps 2-6 are repeated until T exceeds 0.77. Step 7 determines whether to stop the algorithm for the {d5, d6, d7} case. The final value of the threshold in this test was 0.78. Thus, the algorithm automatically obtained the suitable T and D as 0.78 and {d5, d6, d7}, respectively.
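The threshold sweep of Steps 2-7 can be sketched as a single loop. This is an illustrative outline: the `evaluate` callback stands in for the bundled ARG segmentation, EFD extraction, and SVM scoring of Steps 3-6, and is an assumption rather than the paper's interface.

```python
def find_threshold(evaluate, target_rate=0.9):
    """Sweep the adjustable threshold over 0.01-0.99 in 0.01 steps and
    return the first value whose recognition rate reaches the target
    (Step 6 condition); None means no threshold qualified (Step 7 stop)."""
    for step in range(1, 100):
        t = round(step * 0.01, 2)
        if evaluate(t) >= target_rate:
            return t
    return None
```

With an evaluator that first reaches 90% accuracy at T = 0.78, the loop reproduces the example in the text.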

Results Obtained Using the EFD-Based Sensor

First, the study employed a general classification to test the proposed method. Table 5 presents the class labels of the wrench samples used for the general classification. The edge descriptors (D4, D5, and D6) of the wrench samples are listed in Table 6; the latter two are {0.03, 0.24, 0.03, 0.25, 0.86, 0.28, 1} and {0.03, 0.22, 0.03, 0.32, 0.80, 0.40, 1}, respectively. For the test case, 40 blurry wrench images were randomly selected as training samples, and 100 blurry wrench images were used for the classification. Figure 4 shows an example of blurred-image segmentation using the EFD-based method. The edge counts (1-8) of the segmentation are {503, 3429, 445, 4982, 12484, 6235, 15607}, and the corresponding edge descriptors are {0.03, 0.22, 0.03, 0.32, 0.80, 0.40, 1}. On the basis of the edge descriptors (Table 6), the blurry image in Figure 4 belongs to the corresponding sample class. Table 7 lists the classification results obtained using the EFD-based method and demonstrates an average accuracy rate of 93%. The experiment was then conducted to test the accuracy and performance of the proposed method in manufacturing. As depicted in Figure 2, the experimental setup for this test mainly comprises a target panel, a telescope, a CCD camera, and an industrial computer. Figures 5 and 6 show the blurred-image segmentation yielded by the EFD-based and DWT-based inspection methods, respectively. Figure 5 shows that the images with optimal edge descriptors processed using the proposed method are visibly different and can be classified on the basis of their differences; however, the segmented images obtained using the DWT are nearly identical (Figure 6).
Figure 7 displays the segmentation results for samples A, B, C, and D with the given descriptors {d_k} = {d7}, {d6, d7}, and {d5, d6, d7}. The most suitable descriptors are {d5, d6, d7} because the corresponding images are visibly different. Tables 8 and 9 list the classification results obtained using the proposed sensor with and without camera shakes, respectively. The EFD yielded more accurate results (an average accuracy rate of 93%) than the DWT did for samples A, B, C, and D in the inspection with camera shakes. However, the DWT was more appropriate in the other cases (Table 9). The results demonstrate that the proposed sensor can select a suitable feature-extraction strategy depending on camera shakes.
Table 10 illustrates the selected thresholds of the sample images with a given set of descriptors {d_k}. The thresholds for the descriptors are selected automatically by the SVM, and the most suitable descriptors {d5, d6, d7} are obtained from among several candidates. This study employed leave-one-out cross-validation [18] with various thresholds to verify the selection threshold of the proposed algorithm. Table 11 shows that samples A, B, C, and D had the smallest MSEs: 0.1017, 0.1269, 0.1429, and 0.1351, respectively. The thresholds T = 0.70, 0.80, 0.80, and 0.50 were the optimal selections for samples A, B, C, and D in the inspection because these values yielded the highest accuracy rates (Table 10). This study used computational complexity [19] to quantify the time required by the proposed algorithm. The time cost function T(n) quantifies the time required by an algorithm, as in binary search tree operations, and is given by T(n) = O(log n), where O(log n) denotes logarithmic time for all inputs of size n in big-O notation, which excludes coefficients and lower-order terms. Classifiers based on artificial neural networks (ANNs) and a Bayes classifier were also investigated for comparison. As in the SVM-based experiment, 40 data sets were randomly selected as training samples, and 100 data sets were used for validation. The tested ANN was a three-layer neural network with three neurons (the {d5, d6, d7} descriptors) in the input layer, five neurons in the hidden layer, and four neurons (the four sample types) in the output layer. The Bayes classifier computes a discriminant g_i(x), where x is the set of {d5, d6, d7} coefficients of the edge descriptor from a sample; the mean vector μ_i and covariance matrix Σ_i of the coefficients for the i-th sample class are derived, and a sample is identified as belonging to the class that minimizes the calculated value of g_i(x). The 100 blurry images of wrenches from the image queue were tested for a general classification. Figure 8 presents the experimental results. The images segmented using the models [3, 20] retain fine details (Figures 8(b) and 8(c)); however, after the EFD-based method is applied, the edge of the wrench becomes clear (Figure 8(d)). Table 14 lists the time cost function T(n) and the accuracy rates of the classification. The accuracy rates were 89% for Xu et al. [3], 90% for Whyte et al. [20], and 93% for the proposed method. The time cost function T(n) for this study is lower than those of the other two methods because the EFD-based method classifies blurry images without image deblurring. Thus, the proposed method outperforms the other methods.
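The Bayes comparison classifier described above can be sketched as a minimum-discriminant rule. This is an illustrative sketch assuming a Gaussian model per class (Mahalanobis distance plus the log-determinant of the covariance); the exact discriminant form in the paper's garbled equation is not recoverable, so this is one standard reconstruction.

```python
import numpy as np

def bayes_classify(x, class_stats):
    """Assign x to the class minimizing a Gaussian discriminant built
    from each class's mean vector and covariance matrix."""
    best_label, best_score = None, float("inf")
    for label, (mean, cov) in class_stats.items():
        diff = np.asarray(x) - mean
        # Mahalanobis term plus log-determinant term of the Gaussian model
        score = diff @ np.linalg.inv(cov) @ diff + np.log(np.linalg.det(cov))
        if score < best_score:
            best_label, best_score = label, score
    return best_label
```

Given per-class means and covariances estimated from the training descriptors, a new descriptor vector is labeled with the nearest class in this discriminant sense.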

Conclusion
This paper proposes a novel, adaptive inspection sensor for the industrial inspection of images. The proposed EFD-based sensor can sense blurred objects and tune the selection of the feature-extraction strategy according to the sensing results. In object classification, the EFD-based algorithm selects and optimally tunes suitable features. Unlike recognition methods that require image deblurring, the EFD-based method directly uses the edge descriptors in the ARG regions to recognize blurred objects. The experimental results demonstrated that the proposed method recognizes diverse blurry samples efficiently at a recognition rate of 93%.
The sensor's procedure (Figure 1): (1) input the sample images from the image queue; (2) determine whether the image satisfies the condition for blurred-image processing; (3) perform EFD-based extraction; (4) perform SVM classification; (5) determine whether any image remains in the image queue.

Figure 1: A block diagram of the inspection sensor.

Figure 3: Flow diagram of the EFD-based algorithm.

Figure 4: An example of blurred-image segmentation using the EFD-based method for a general classification: (a) a camera shake image and (b) the image segmentation of (a).

Figure 5: Blurred-image segmentation using the EFD-based inspection with the optimal edge descriptors {d5, d6, d7}: (a) a camera shake image of Class A, (b) a camera shake image of Class D, (c) the image segmentation of (a), and (d) the image segmentation of (b).

Figure 6: Blurred-image segmentation using the DWT-based inspection with the optimal 5-level DWT: (a) the image segmentation of Figure 5(a) and (b) the image segmentation of Figure 5(b).
Whyte et al. [20] proposed a parametrized geometric model for camera shakes and applied it for deblurring within the framework of existing camera shake removal algorithms. Similar to the earlier experiments, 40 blurry images (with real camera shake blur) were randomly selected as training samples, and 100 blurry images were used for validation. The same block diagram (Figure 1) used in the proposed method was employed: (1) input the 100 blurry images from the image queue; (2) perform preprocessing, image deblurring (using the models [3, 20]), and ARG segmentation; (3) perform DWT-based extraction; (4) perform SVM classification; (5) determine whether any image remains in the image queue.

Figure 8: Comparison of existing methods: (a) a real camera shake image, (b) the image segmentation of (a) using the preprocessing of [3] with five iterations of latent-image and PSF updates, (c) the image segmentation of (a) using the preprocessing of [20] with a nonuniform kernel, and (d) the image segmentation of (a) using the EFD-based method.

Table 2: The class labels of samples used in the experiments.
Table 4 lists the classification results for different sample sizes. Sample sizes of 140, 280, and 400 samples per class apparently have no substantial effect on the classification results.

Table 4: The classification results using different sample sizes of each class.

Table 5: The class labels of wrench samples used for a general classification.

Table 7: Test results using the EFD-based method for a general classification (rows: true values; columns: predicted values).

Table 10: An example of the selected thresholds of samples with a given set of descriptors {d_k}.

Table 12: The running time growth rates using the proposed algorithm.
Table 12 reports the running time growth rates of the proposed algorithm, including the EFD-based extraction and the SVM. Under the given parameters (T_B = 0.78 and D_B = {d5, d6, d7} in the blurred-image scenario; T = 0.82 and a 5-level DWT in the other scenarios) and 200 sample images (40 images of Class B, including 25 blurred images) in the image queue, the sensor demonstrated an accuracy of 100% in inspecting Class B samples, with response times within 13 s and a time cost of T(n) = O(log n).

Table 13 lists the time cost function T(n) and the accuracy rates of the experiments. The accuracy rates for the SVM, ANN, and Bayes classifiers are 93%, 83%, and 79%, respectively, demonstrating that the SVM outperforms the other classifiers.

Table 13: Comparison of the time cost function T(n) and accuracy rates using different classifiers.

Table 14: Comparison of the time cost function T(n) and accuracy rates using distinct methods.

To compare the performances of distinct conventional methods, the nonuniform image deblurring schemes of Xu et al. [3] and Whyte et al. [20] were compared with the proposed method. Xu et al. [3] proposed a nonuniform point spread function on the basis of the optical flow estimation model for removing blurring caused by camera shakes.