Adaptive Fractional Differentiation Harris Corner Detection Algorithm for Vision Measurement of Surface Roughness

A Harris corner detection algorithm based on fractional-order differentiation (the adaptive fractional differentiation Harris corner detection algorithm), which adaptively adjusts the differentiation order through the fractal dimension, is investigated for processing images relevant to vision measurement of surface roughness. Comparative experiments indicate that the algorithm enhances the edge information in high-frequency areas while retaining texture details in low-frequency areas, thus overcoming the shortcomings of the conventional Harris algorithm. The algorithm permits real-time, high-precision measurement of surface roughness, superior to the conventional Harris algorithm.

In the context of surface analysis, fractals have been applied to model the behavior of complex surface structures [18][19][20][21][22][23]. As a powerful mathematical tool, fractional calculus has been suggested as adequate for dealing with phenomena exhibiting fractal complexity (see [24][25][26][27] and the references cited therein).
The rapid development of computer technologies has allowed a variety of algorithms based on fractional calculus to be developed for 2D image processing [28][29][30][31]. Recently, in analyzing the surface roughness of a hard disk by visual measurement and assessing the measurement precision (less than 20 μm), the processing of images captured by a camera has been improved through a Harris algorithm employing fractional calculus [30]. The aim of this paper is to propose an adaptive fractional differentiation Harris corner detection algorithm for vision measurement of surface roughness.
The paper is organized as follows. Section 2 presents the system of visual measurement for surface roughness. Section 3 gives the theoretical basis for image corner detection and edge detection. Section 4 develops an adaptive fractional differentiation Harris algorithm for image corner detection. Section 5 outlines the main results and the conclusions.

The System of Visual Measurement for Surface Roughness
The system for visual measurement of surface roughness, consisting of a projector creating a thin line of light, a camera, a positioning system, and the hard disk under test, is shown in Figure 1.
The procedure used to measure structured light is as follows.

(i) First, a thin luminous straight line is generated and projected onto the surface of the object to be measured using the laser line projector.
(ii) Then, a stripe image modulated by the object height is formed by the camera. The camera coordinate system $O_c$-$X_c Y_c Z_c$ (see Figure 2) is defined relative to the world coordinate system $O_w$-$X_w Y_w Z_w$ of the hard disk measurement system. The point $O_c$ is the center of the perspective projection, while $O_c Z_c$ is the optical axis of the camera. Additionally, $o$-$xy$ is the image coordinate system, where $o$ is the point of intersection between the optical axis $O_c Z_c$ and the image plane. The distance between the points $O_c$ and $o$ is the focal length.
(iii) When the linear structured laser light is projected onto the plane, a point $P(X_w, Y_w, Z_w)$ of the studied surface is projected through the center of the lens to the point $p$ in the image plane.
(iv) The image positions of points such as $p$ depend on the height of the object. Thus, if the object onto which the laser line is projected varies in height, the line is not imaged as a straight line but traces a profile of the object, allowing the height differences of the object to be determined. In order to gather many object profiles, the object is moved by means of a scanning system.
The transformation from the pixel coordinates to the world coordinate system can be expressed by the pinhole camera model

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},$$

where $s$ is a scale factor, $A$ is the camera intrinsic matrix, and $R$ and $t$ describe the rotation and translation between the world and camera coordinate systems. In order to calculate the 3D world coordinates $(X_w, Y_w, Z_w)$ from the 2D image coordinates $(u, v)$, the light plane equation should also be defined:

$$a X_w + b Y_w + c Z_w + d = 0.$$

The precision with which the 2D image coordinates $(u, v)$ are extracted determines the precision of the 3D world coordinates $(X_w, Y_w, Z_w)$, which directly affects the precision of the surface roughness measurement. Therefore, precise extraction of the 2D coordinates $(u, v)$ in the image processing section (see Figure 1) is of primary importance.
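As an illustration of this triangulation step, the following sketch back-projects a pixel through a calibrated pinhole camera and intersects the resulting ray with the light plane. The intrinsic matrix, plane coefficients, and function name are hypothetical placeholders rather than values from the measurement system described here; lens distortion and the camera-to-world rotation are omitted for brevity (the camera frame is taken as the world frame).

```python
import numpy as np

def pixel_to_world(u, v, K, plane):
    """Back-project pixel (u, v) onto the laser light plane.

    K     : 3x3 intrinsic matrix; the camera frame is taken as the world frame.
    plane : (a, b, c, d) with a*X + b*Y + c*Z + d = 0 in camera coordinates.
    """
    # Viewing ray through the pixel: X = t * K^{-1} [u, v, 1]^T, t > 0
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))
    a, b, c, d = plane
    t = -d / (a * ray[0] + b * ray[1] + c * ray[2])  # ray-plane intersection
    return t * ray  # 3D point (X, Y, Z)

# Hypothetical calibration: focal length 800 px, principal point (320, 240),
# light plane Z = 500 mm, i.e. 0*X + 0*Y + 1*Z - 500 = 0.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P = pixel_to_world(320.0, 240.0, K, (0.0, 0.0, 1.0, -500.0))
```

Here the ray through the principal point lands on the plane directly in front of the camera, 500 mm along the optical axis.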
In the hard disk image, the highlighted areas have to be measured, which requires both corner detection and edge detection algorithms to be developed so that the corresponding image coordinates can be determined.

Theoretical Analyses of the Image Corner Detection and Edge Detection
There exist a variety of corner and edge detection algorithms, such as the Moravec corner detection algorithm [28], the Harris corner detection algorithm [29], and the SUSAN corner detection algorithm [30]. In this context, Schmid [31] reported a comparative analysis of various corner detection algorithms and revealed that the Harris algorithm has the best detection performance. The Harris algorithm has been widely applied due to its simplicity in extracting image corners. The detection results are not affected by factors such as image rotation or light intensity. Moreover, the operator introduces a Gaussian smoothing template into the calculations, which makes the algorithm robust to noise.
In accordance with the Harris corner detection algorithm, a window centered at the pixel point $(x, y)$ is shifted by $u$ along the $x$ direction and by $v$ along the $y$ direction. The gray level change is described by

$$E(u, v) = \sum_{x, y} w(x, y)\,[I(x + u, y + v) - I(x, y)]^2,$$

where $I(x, y)$ denotes the image gray level and

$$w(x, y) = \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

is the Gaussian smoothing template, which improves the algorithm's ability to resist noise; usually $\sigma \in (0.8, 1.2)$. For small shifts, $E(u, v)$ takes the quadratic form

$$E(u, v) \approx [u\ v]\, M\, [u\ v]^T, \qquad M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},$$

where $I_x$ and $I_y$ are the first-order partial derivatives of the image; $M$ closely approximates the local autocorrelation function. Writing $A = \sum w I_x^2$, $B = \sum w I_y^2$, and $C = \sum w I_x I_y$, the corner response function can be defined as

$$R(x, y) = \mathrm{Det}(M) - k\,\mathrm{Trace}(M)^2,$$

where $\mathrm{Det}(M) = AB - C^2$ is the value of the determinant of the matrix, $\mathrm{Trace}(M) = A + B$ denotes the matrix trace, and $k$ is a small empirical constant.
Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of $M$. If $\lambda_1$ and $\lambda_2$ are both large and $R(x, y) > 0$, then the local autocorrelation function has a peak, which indicates a corner point. Otherwise, if $\lambda_1 \gg \lambda_2$ (or $\lambda_2 \gg \lambda_1$) and $R(x, y) < 0$, the point belongs to an image edge; if $R(x, y) \to 0$, the point lies in an area of inconspicuous gray level change. Figure 3 shows processing results before and after application of the Harris corner detection algorithm.
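The response computation described above can be sketched as follows. This is a minimal NumPy/SciPy illustration of the standard Harris pipeline rather than the authors' implementation; the Prewitt mask normalization, σ, and k values are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def harris_response(img, sigma=1.0, k=0.04):
    """Corner response R = Det(M) - k * Trace(M)^2 at every pixel."""
    img = img.astype(float)
    # First-order gradients via Prewitt masks (the conventional first step)
    prewitt_x = np.array([[-1.0, 0.0, 1.0]] * 3) / 3.0
    Ix = convolve(img, prewitt_x)
    Iy = convolve(img, prewitt_x.T)
    # Elements of M, smoothed by the Gaussian template w(x, y)
    A = gaussian_filter(Ix * Ix, sigma)
    B = gaussian_filter(Iy * Iy, sigma)
    C = gaussian_filter(Ix * Iy, sigma)
    return (A * B - C * C) - k * (A + B) ** 2

# A bright square on a dark background has four corners and four edges:
# R is strongly positive near the corners and negative along the edges.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
R = harris_response(img)
```

Corner points would then be selected as local maxima of R above a threshold, matching the eigenvalue criterion above.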

The Adaptive Fractional Differentiation Harris Corner Detection Algorithm
To develop the adaptive fractional differentiation Harris corner detection algorithm, we first recall the definition of fractional-order differentiation [30]. Let the signal $f(t)$, $t \in [a, b]$ ($a < b$; $a, b \in \mathbb{R}$), possess an $(n + 1)$-order derivative. For $v > 0$, the fractional differentiation of order $v$ is defined as [30]

$$D^v f(t) = \lim_{h \to 0} h^{-v} \sum_{k=0}^{[(t - a)/h]} (-1)^k \binom{v}{k} f(t - kh).$$

Because the signal is digital, this one-dimensional definition is extended to the 2D image signal $I(x, y)$. Truncated to three terms, the fractional differentiation of $I(x, y)$ along the $x$ direction is

$$\frac{\partial^v I(x, y)}{\partial x^v} \approx I(x, y) + (-v)\, I(x - 1, y) + \frac{v(v - 1)}{2}\, I(x - 2, y),$$

with an analogous expression along the $y$ direction. When detecting corners in images of highly complex texture, the Harris algorithm produces false corner points. Specifically, in the first step of the Harris algorithm (see (5)), the first-order differentiation can miss the details of many low-frequency region textures, so that a great deal of texture information cannot be retrieved; moreover, many false corners appear, as shown in Figure 4. Therefore, the low-frequency extraction precision of corner points by the Harris algorithm needs to be improved. The improvement developed in this paper is based on adaptive fractional differentiation.
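A minimal sketch of the truncated mask applied to an image follows. The zero treatment at the border and the function name are assumptions for illustration; only the three-term coefficients 1, −v, v(v − 1)/2 come from the definition above. For v = 1 the operator reduces to the ordinary backward difference.

```python
import numpy as np

def fractional_gradient(img, v):
    """Grünwald-Letnikov fractional gradient of order v, truncated to
    three terms with coefficients 1, -v, v*(v - 1)/2."""
    f = np.asarray(img, dtype=float)
    c1, c2 = -v, v * (v - 1) / 2.0
    # Backward differences along x (axis 1); pixels outside the image are 0
    fx1 = np.roll(f, 1, axis=1); fx1[:, 0] = 0.0
    fx2 = np.roll(f, 2, axis=1); fx2[:, :2] = 0.0
    Ix = f + c1 * fx1 + c2 * fx2
    # Same construction along y (axis 0)
    fy1 = np.roll(f, 1, axis=0); fy1[0, :] = 0.0
    fy2 = np.roll(f, 2, axis=0); fy2[:2, :] = 0.0
    Iy = f + c1 * fy1 + c2 * fy2
    return Ix, Iy

# On a ramp image increasing along x, order v = 1 gives a constant x-gradient
Ix, Iy = fractional_gradient(np.tile(np.arange(8.0), (8, 1)), 1.0)
```

In the improved algorithm these gradients would replace the Prewitt-derived I_x and I_y inside the matrix M.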
As a first step in the improvement of the conventional Harris algorithm, the Prewitt operator is replaced by a fractional differentiation mask (the fractional differentiation masks in the $x$ and $y$ directions are given in Table 1). The fractal dimension represents the texture complexity and determines the order of the fractional differentiation. As a subsequent step, a score box dimension algorithm [30] is used to calculate the order of the fractional differentiation.
The experimental results reveal a nonuniform distribution of the gray image complexity: when the score box dimension is used, the box dimension lies in the interval [2, 3], and the fractal dimension of complex texture images falls in [2.7, 3]. In order to map the fractional differentiation order onto [0, 1] while accounting for the complexity of the image texture, the interval [2.7, 3] is amplified and mapped to [0.3, 1], while the interval [2, 2.7) is simultaneously compressed and mapped to [0, 0.3).
Because details are easily lost when a differential operation is applied to images with rich textures, the differentiation order must track the texture complexity. Following the mapping above, the order number $v$ is defined piecewise-linearly in terms of the fractal dimension $D$:

$$v = \begin{cases} \dfrac{3(D - 2)}{7}, & D \in [2, 2.7), \\[2mm] 0.3 + \dfrac{7(D - 2.7)}{3}, & D \in [2.7, 3]. \end{cases}$$

The results of corner extraction by the adaptive fractional differentiation Harris algorithm are shown in Figure 5.
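The dimension-to-order mapping can be sketched as a small function. The exact formula below is inferred from the interval mapping described above ([2, 2.7) → [0, 0.3), [2.7, 3] → [0.3, 1]) and should be read as an assumption rather than the authors' exact expression.

```python
def fractional_order(D):
    """Map a box-counting dimension D in [2, 3] to an order v in [0, 1].

    Assumed piecewise-linear mapping: [2, 2.7) is compressed onto
    [0, 0.3), while [2.7, 3] (typical of complex textures) is
    stretched onto [0.3, 1].
    """
    D = min(max(D, 2.0), 3.0)  # clamp to the valid dimension range
    if D < 2.7:
        return (D - 2.0) * 0.3 / 0.7
    return 0.3 + (D - 2.7) * 0.7 / 0.3
```

Simple textures (D near 2) thus receive a near-zero order, while the most complex textures (D near 3) receive an order near 1.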
A comparison between the results of the original Harris algorithm and the adaptive fractional differentiation Harris corner detection algorithm is illustrated in Figure 6. The results reported above reveal that fractional differentiation has advantages over the first-order differentiation used in the Prewitt operator. It is well known that the differential operation promotes high-frequency signals and weakens low-frequency signals.
As the order number increases, high-frequency signals are enhanced and low-frequency signals are weakened ever more intensely. When the order of the fractional differentiation is lower than that of its integer-order counterpart, the fractional differentiation still strengthens the high-frequency signals, while the low-frequency parts of the signals are attenuated nonlinearly rather than suppressed. Compared with integer-order differentiation, fractional differentiation therefore retains more texture information in the low-frequency areas, maintaining the target contour of the image, while still enhancing the edge information of the high-frequency areas.
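This frequency behavior can be checked with the amplitude response of the Grünwald-Letnikov difference operator, |1 − e^{-jω}|^v: for v < 1 the gain at low frequencies stays closer to unity than for the first-order difference, while high frequencies are still amplified. This is an illustrative calculation, not a computation taken from the paper.

```python
import numpy as np

def gl_amplitude(omega, v):
    """Amplitude response |1 - exp(-j*omega)|**v of the order-v G-L operator."""
    return np.abs(1.0 - np.exp(-1j * omega)) ** v

low = 0.1 * np.pi   # a low spatial frequency
high = np.pi        # the highest discrete frequency
# Order 0.5 weakens the low frequency far less than the first-order
# difference, yet still amplifies the high frequency (gain > 1).
gain_low_half = gl_amplitude(low, 0.5)
gain_low_first = gl_amplitude(low, 1.0)
gain_high_half = gl_amplitude(high, 0.5)
```

Numerically, the first-order gain at ω = 0.1π is about 0.31, while the order-0.5 gain is about 0.56, consistent with the texture-preserving behavior described above.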
The data in Table 2 present the heights measured when the original Harris algorithm (V1) and the adaptive fractional differentiation Harris corner detection algorithm (V2) are used to process the stripe image. These data indicate that the maximum measurement error of V2 is about 0.023 mm, which satisfies the measurement requirements of the hard disk plane and is smaller than that of V1.

Conclusion
The large number of false corner points generated by the conventional Harris algorithm when image textures are complex is avoided by application of adaptive fractional differentiation. The Harris corner detection algorithm with adaptive fractional differentiation is suggested in this paper for the first time as a principal contribution improving the data processing. The adaptive fractional differentiation allows the improved Harris corner detection to retain more texture information in low-frequency areas, thus maintaining the target contour of images, and enhances the edge information of high-frequency areas better than the conventional Harris algorithm does. The proposed algorithm thus enhances the Harris algorithm for visual measurement of the surface roughness of hard disks.

Figure 1: Structure of the system of vision measurement.

Figure 3: Processing results before and after application of the Harris corner detection algorithm: (a) original hard disk image; (b) hard disk image processed by the Harris algorithm.

Figure 4: False corners in an image processed by the Harris algorithm.

Table 1: Fractional differentiation masks in the $x$ direction, shown in (a), and the $y$ direction, shown in (b).

Figure 5: Results of corner extraction using the adaptive fractional differentiation Harris corner detection algorithm.
(a) Result of the original Harris algorithm; (b) result of the improved Harris algorithm.

Figure 6: Comparison between the results of the original Harris algorithm and the improved Harris algorithm.

Figure 7: Results of light stripe center extraction.
The result of the original Harris algorithm is shown in Figure 6(a); many texture details (indicated by the yellow ellipse) remain invisible. The result of the adaptive fractional differentiation Harris corner detection algorithm is shown in Figure 6(b). The image can subsequently be processed by common methods, such as the morphology and gray centroid methods; the results of light stripe center extraction are shown in Figure 7. The light stripe center region marked A in Figure 7(a) is obviously inaccurate, whereas the light stripe center marked B in Figure 7(b) is considerably more accurate.

Table 2: Comparison of the measurement precision.