In this paper, we propose a novel junction point detector for remote sensing images based on an azimuth consensus. An azimuth consensus constraint is developed to eliminate the impact of noise and of uncorrelated edges in SAR images. In addition to detecting the locations of junctions at the subpixel level, the operator recognizes their structures as well. A new formula, including a minimization criterion for the total weighted distance, is proposed to compute the locations of junction points accurately. The experimental results indicate that our operator outperforms other well-known detectors, including Förstner, JUDOCA, and CPDA, both in the location accuracy of junction points and in the angle accuracy of branch edges. Moreover, our method is satisfactorily robust to noise and to changes in the SAR images. The operator can potentially be applied to a number of problems in computer vision, such as SAR image registration, wide-baseline matching, and UAV navigation systems.
1. Introduction
Synthetic Aperture Radar (SAR) provides high-resolution ground data and images even under severe weather conditions and is widely used in military reconnaissance, topographical mapping, resource exploration, vegetation analysis, and other areas. Understanding and analyzing SAR images is therefore very important. Nevertheless, because of the image-forming mechanism and the wide dynamic range, SAR images contain massive coherent speckle noise, which makes the accurate extraction of corners very difficult and hinders the development of SAR image application systems. It is therefore crucial to detect and extract identifiable, invariant, and information-rich features for SAR image application systems.
Corner feature detection in images is a fundamental problem in computer vision and has been successfully used in visual tracking, panoramic image stitching, and motion estimation [1–3], among other applications. In these systems, detecting corner features is the first critical step toward many more complicated processes. Over the past decades, many corner and junction detectors have been proposed in the literature. These detectors compete with each other in terms of localization accuracy, speed, and the information they provide. Common corner detection algorithms can be divided into three categories: (1) algorithms based on gray-level statistics, which detect the pattern of the center point by counting the similar (or different) points between the local neighborhood (or boundary) and the center gray level, such as SUSAN [4] and FAST [5]; (2) algorithms based on the second-order structure tensor, which build the autocorrelation matrix of a candidate corner over its local neighborhood and decide whether the point is a corner by eigenvalue analysis of the matrix, such as the classic algorithms of Förstner and Gülch [6], Harris and Stephens [7], and KLT [8], as well as methods based on improved structure tensors [9, 10]; (3) algorithms based on curvature analysis, that is, the detection of points of large curvature along edges, such as CSS [11], ECSS [12], and CPDA [13].
Although corner detectors have been found to perform very well in many areas, a number of inherent weaknesses have been exposed in practical applications. Most often, local structural information is neglected by the criterion on which the detection is based. In particular, because SAR images contain massive coherent noise, traditional corner detectors extract pseudopoints and degrade the efficiency and results of SAR image matching and application. By contrast, junction detectors are concerned with both locations and structures. Owing to the topological stability of their multiple branches, junction detectors can use this richer information during matching to filter out external interference effectively and thereby ensure better matching results.
McDermott [14] observes that junction point detection is very difficult, even for the well-developed human visual system. Nevertheless, because of the above-mentioned advantages, many junction detectors have been proposed in the literature. The approach described in [15] detects junctions using a piecewise constant function that partitions a circular template into wedge-shaped regions and introduces a minimum description length principle and a dynamic programming algorithm to compute the optimal parameters of the model. Chabat et al. [16] introduce a junction detector based on the analysis of local anisotropy, identifying corners as points with a strong gradient that is not oriented in a single dominant direction. Cazorla et al. [17, 18] propose two Bayesian methods for junction classification that evolved from the Kona method: a region-based method and an edge-based method. Bergevin and Bubel [19] propose a junction characterization and validation method in which junction branches of volumetric objects are extracted at points of interest in a 2D image using a topologically constrained grouping process and a binary split tree. Perwass [20] proposes a method that extracts the intersections between conic curves, determines all possible linear support domains, determines the edges from the image gradients, and identifies the type of each extracted junction point by local geometric analysis of its edges.
Recently, Elias and Laganière [21] proposed the JUnction Detection Operator based on Circumferential Anchors (JUDOCA), which represents the latest research on junction point detection algorithms. JUDOCA has been successfully used to solve many problems, such as 3-D reconstruction, camera parameter enhancement, and indoor and obstacle localization [22–24]. However, JUDOCA also has some drawbacks: it computes only integer-valued junction points and cannot achieve subpixel positional precision; it uses path directions instead of the dip angles of the junction branches, which introduces extra error; and it is sensitive to fractured edges.
In this paper, we present a novel branch-point detection algorithm based on azimuth consensus. Compared with previous methods, our proposed algorithm extracts junction points with subpixel accuracy even under low-contrast changes and builds a set of characteristic descriptions for recognition. In addition, experimental results indicate that our proposed algorithm provides improved positioning accuracy for junction points and improved angle accuracy for branch edges, and that it is more robust to noise, especially for SAR images.
The remainder of this paper is organized as follows. Section 2 provides the definition of the azimuth consensus and groups the edge points that satisfy the azimuth consensus constraints. Section 3 describes the accurate calculation of junction point locations and the characteristic description of a junction point and its branch edge structure. Experimental results are presented in Section 4, where the proposed algorithm is compared with previous algorithms with respect to accuracy, contrast change, noise, and SAR images. Section 5 concludes the paper and discusses future work.
2. Junction Point Detection and Branch Edge Grouping
2.1. Subpixel Location of Edge Points
For a given SAR image f(x), where x = [x, y]^T is the coordinate vector corresponding to a pixel in the image, the gradient of f(x) at location x is defined as
\vec{f}(\mathbf{x}) = \left[ f_x(\mathbf{x}), f_y(\mathbf{x}) \right]^{\mathrm{T}},   (1)
where fx(x) and fy(x) are the first-order partial derivatives of f(x) with respect to the x- and y-directions, respectively.
The gradient magnitude and orientation in x are given by
\|\vec{f}(\mathbf{x})\| = \sqrt{f_x^2(\mathbf{x}) + f_y^2(\mathbf{x})}, \qquad
o(\mathbf{x}) = \tan^{-1}\!\left( \frac{f_y(\mathbf{x})}{f_x(\mathbf{x})} \right) \,//\, 180^{\circ},   (2)
where // is the modular operation which limits the range of gradient orientation to [0°,180°), and the vertical direction of gradient orientation is defined as o⊥(x)=(o(x)+90°)//180°.
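As a concrete illustration, the gradient magnitude and orientation of (2) can be computed as below. This is a minimal sketch using finite differences (`np.gradient`) rather than the Gaussian partial derivatives used in the experiments, and the function name is ours:

```python
import numpy as np

def gradient_features(f):
    """Gradient magnitude and orientation (mod 180 degrees) of an image.

    Sketch of formula (2): np.gradient approximates the partial
    derivatives; the paper uses Gaussian partial derivatives instead.
    """
    fy, fx = np.gradient(f.astype(float))        # axis 0 ~ y, axis 1 ~ x
    mag = np.sqrt(fx ** 2 + fy ** 2)             # ||grad f(x)||
    o = np.degrees(np.arctan2(fy, fx)) % 180.0   # orientation in [0, 180)
    o_perp = (o + 90.0) % 180.0                  # direction along the edge
    return mag, o, o_perp
```

On a linear ramp image, for example, the gradient magnitude is constant and the orientation is 0°, so the edge direction o_perp is 90°.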
To find the edge points of an image, our method applies nonmaxima suppression [25] to the gradient magnitude image in the direction of the gradient orientation (see Figure 1(a)). Let p1 and p2 be the two solutions of equation (3) in the local circular neighborhood with center x and radius r:
\vec{f}(\mathbf{x})^{\mathrm{T}} (\mathbf{p} - \mathbf{x}) = 0, \qquad \|\mathbf{p} - \mathbf{x}\| = r.   (3)
Figure 1: Nonmaximal suppression and subpixel location of an edge point. (a) Nonmaximal suppression in the direction of the gradient orientation. (b) Subpixel location of the local maximum by a parabolic curve.
Because the coordinates of p1 and p2 may be nonintegers, their gradient magnitudes v(p1) and v(p2) are calculated by bilinear interpolation. Let ∥f⃑(x)∥_NMS be the gradient magnitude image after nonmaximal suppression: if ∥f⃑(x)∥ < max(v(p1), v(p2)), then ∥f⃑(x)∥_NMS = 0; otherwise ∥f⃑(x)∥_NMS = ∥f⃑(x)∥. Fitting a parabolic curve through the points (x, ∥f⃑(x)∥), (p1, v(p1)), and (p2, v(p2)) in the direction o(x), the real-valued subpixel location x′ of x is the peak of the constructed parabola (as shown in Figure 1(b)). The formula for the subpixel location x′ is
\mathbf{x}' = \mathbf{x} + r \cdot \frac{v(\mathbf{p}_1) - v(\mathbf{p}_2)}{2\left( v(\mathbf{p}_1) + v(\mathbf{p}_2) - 2\|\vec{f}(\mathbf{x})\| \right)}
\begin{bmatrix} \cos(o(\mathbf{x})) \\ -\sin(o(\mathbf{x})) \end{bmatrix}.   (4)
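Along the gradient direction, formula (4) reduces to a one-dimensional parabola-peak interpolation. A hedged sketch of that 1D step (the function name is ours, and we assume p1 is sampled at offset −r and p2 at +r along the gradient direction):

```python
def subpixel_offset(v_center, v1, v2, r=1.0):
    """Offset of the parabola peak fitted through (-r, v1), (0, v_center), (r, v2).

    Sketch of the scalar factor in formula (4); a positive offset moves
    toward p1.  In practice v1 and v2 come from bilinear interpolation.
    """
    denom = 2.0 * (v1 + v2 - 2.0 * v_center)
    if denom == 0.0:
        return 0.0  # flat neighborhood: keep the integer location
    return r * (v1 - v2) / denom
```

For example, sampling the parabola y = 1 − (t − 0.3)² at t = −1, 0, 1 gives v1 = −0.69, v_center = 0.91, v2 = 0.51, and the recovered peak offset is 0.3.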
2.2. Azimuth Consensus
Let Ω be the local circular region with center x0 = [x0, y0]^T and radius λ in the image ∥f⃑(x)∥_NMS, and define N as the set of points in Ω with nonzero gradient magnitude, that is,
N = \left\{ \mathbf{x} \mid \|\vec{f}(\mathbf{x})\|_{\mathrm{NMS}} > 0,\ \mathbf{x} \in \Omega,\ \mathbf{x} \neq \mathbf{x}_0 \right\}.   (5)
The set N′ is defined as the subpixel positions of all points in N. For each point x = [x, y]^T ∈ N, compute the azimuth angle s_{x0}(x) relative to the center point x0, that is,
s_{\mathbf{x}_0}(\mathbf{x}) =
\begin{cases}
\tan^{-1}\left( \dfrac{y' - y_0}{x' - x_0} \right), & x' > x_0,\ y' > y_0, \\[4pt]
\tan^{-1}\left( \dfrac{y' - y_0}{x' - x_0} \right) + 360^{\circ}, & x' > x_0,\ y' < y_0, \\[4pt]
\tan^{-1}\left( \dfrac{y' - y_0}{x' - x_0} \right) + 180^{\circ}, & x' < x_0, \\[4pt]
90^{\circ}, & x' = x_0,\ y' > y_0, \\
270^{\circ}, & x' = x_0,\ y' < y_0,
\end{cases}   (6)
where x′ and y′ are the coordinates of sub-pixel position x′=[x′,y′]T(x′∈N′). Δx0(x) is defined as the angle between sx0(x) and o⊥(x):
\Delta_{\mathbf{x}_0}(\mathbf{x}) = \sin^{-1}\left( \left| \sin\left( s_{\mathbf{x}_0}(\mathbf{x}) - o_{\perp}(\mathbf{x}) \right) \right| \right).   (7)
Formula (7) ensures that the angle Δ_{x0}(x) lies in [0°, 90°]. If a point x = [x, y]^T belonging to the set N defined by (5) satisfies the condition in (8), then x has azimuth consensus with x0 (ε is a chosen angle threshold):
\Delta_{\mathbf{x}_0}(\mathbf{x}) < \varepsilon.   (8)
From the geometrical interpretation of azimuth consensus, we find that if point x lies on one of the straight lines that pass through the center x0, then Δ_{x0}(x) = 0; otherwise Δ_{x0}(x) > 0. Azimuth consensus combines the relative position and the edge orientation information, so it can effectively filter out noise and unrelated edge points.
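The azimuth-consensus test of formulas (6)–(8) can be sketched for a single point as follows. This is our own illustrative implementation, not the authors' code; the function names are assumptions:

```python
import numpy as np

def azimuth_deviation(x0, y0, xp, yp, o_deg):
    """Angle between the ray x0 -> x and the edge direction at x, as in (7).

    o_deg is the gradient orientation at x in degrees; the edge runs
    perpendicular to it.  Returns a value in [0, 90].
    """
    s = np.degrees(np.arctan2(yp - y0, xp - x0))   # azimuth s_{x0}(x)
    o_perp = o_deg + 90.0                          # edge direction
    return np.degrees(np.arcsin(abs(np.sin(np.radians(s - o_perp)))))

def has_azimuth_consensus(x0, y0, xp, yp, o_deg, eps=10.0):
    """True if x supports a branch edge through x0, as in (8)."""
    return azimuth_deviation(x0, y0, xp, yp, o_deg) < eps
```

For a point directly to the right of x0 lying on a horizontal edge (vertical gradient, 90°), the deviation is 0 and the test passes; for a vertical edge through the same point (horizontal gradient, 0°), the deviation is 90° and the test fails.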
2.3. Junction Point Classification and Branch Edge Grouping
The set M = {x | Δ_{x0}(x) < ε, x ∈ N} consists of all points in N that satisfy the azimuth consensus constraints, and the set M′ contains the corresponding subpixel locations. From (6), the angles s_{x0}(x) of the points in M, relative to the center x0, lie in [0°, 360°). The points in M are classified according to the distribution of these angles to determine all the junction branches of x0. Based on this classification, performed by the junction point classification algorithm shown in Algorithm 1, we decide whether or not x0 is a junction point.
Algorithm 1: Junction point classification algorithm.
Input: M={x1,x2,…,xm}, the dip angle set {φ1,φ2,…,φm} corresponding to M, and the threshold angle τ
Output: The classification result set E of branch edges’ points
Initialize: Set the set E = NULL
Main steps of algorithm:
(1) Define an m×m matrix H and initialize H = null; then compute all the elements of the matrix as H(i, j) = cos(φi − φj).
(2) Find the largest non-null element of H, say H(i, j); set υ1 = φi, υ2 = φj, and φ to the mean direction of υ1 and υ2, and mark H(i, j) as null.
(3) Update υ1, υ2, and φ with all angles from the set {φ1, φ2, …, φm} that satisfy the condition cos(φk − φ) ≥ cos(τ) (k = 1, 2, …, m).
(4) Repeat step (3) until the angle set satisfying cos(φk − φ) ≥ cos(τ) no longer changes; place the junction points corresponding to the angle set into E and mark their entries in H as null.
(5) Repeat steps (2), (3), and (4) until all the elements of H are null, and then return E.
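A minimal Python sketch of this grouping loop is given below. The names (`group_branch_angles`, `circular_mean_deg`) and the greedy seeding are our own simplification, not the authors' matrix-based bookkeeping; only the membership test cos(φk − φ) ≥ cos(τ) and the iterate-until-stable refinement come from Algorithm 1:

```python
import numpy as np

def circular_mean_deg(angles_deg):
    """Mean direction of a set of angles in degrees, in [0, 360)."""
    a = np.radians(np.asarray(angles_deg, dtype=float))
    return np.degrees(np.arctan2(np.sin(a).sum(), np.cos(a).sum())) % 360.0

def group_branch_angles(phis_deg, tau_deg=15.0):
    """Cluster dip angles into branch edges in the spirit of Algorithm 1.

    cos(phi_k - phi) >= cos(tau) is the membership test, so angles near
    0 and 360 degrees land in the same cluster.
    """
    phis = np.asarray(phis_deg, dtype=float)
    cos_tau = np.cos(np.radians(tau_deg))
    remaining = list(range(len(phis)))
    clusters = []
    while remaining:
        phi = phis[remaining[0]]                  # seed a new cluster
        members = None
        while True:
            new = [k for k in remaining
                   if np.cos(np.radians(phis[k] - phi)) >= cos_tau]
            if not new:                           # safeguard: keep the seed
                new = [remaining[0]]
            if new == members:                    # step (4): set is stable
                break
            members = new
            phi = circular_mean_deg(phis[members])  # step (3): update phi
        clusters.append(members)
        remaining = [k for k in remaining if k not in members]
    return clusters
```

Angles such as 2° and 358° are grouped together, as intended, while angles about 90° apart form separate clusters.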
In Algorithm 1, to eliminate the wrap-around effect of the angles and ensure that junction points whose angles are close to 0° and 360° are assigned to the same branch edge, we construct the matrix H from the cosine of the difference between each pair of angles. Executing Algorithm 1 yields all the branch edge points E of the current point x0. If x0 is a valid junction point, then E must satisfy the following conditions.
The size of set E, that is, the number of branch edges, is at least two.
If the size of E is equal to two, the intersection angle must be larger than a fixed threshold to avoid collinearity.
The junction point and the edge points in its local neighborhood often violate the azimuth consensus constraints. Therefore, for every branch edge point, the gradient magnitude of all pixels on the Bresenham path [26] between x0 and its closest point must be nonzero.
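The connectivity test in the last condition relies on Bresenham's classic line-rasterization algorithm [26]; a standard implementation is sketched below (the function name is ours):

```python
def bresenham(x0, y0, x1, y1):
    """Integer points on the Bresenham path between two pixels [26].

    Used here to check that every pixel between the candidate junction
    and the nearest point of a branch has nonzero gradient magnitude.
    """
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        points.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return points
```

The connectivity condition then amounts to checking that ∥f⃑(x)∥_NMS > 0 for every point the function returns.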
Using the detection algorithm described above, many junction points are often detected in the neighborhood of an actual junction point. The response function (9) is used to determine the actual location of the junction point:
c(\mathbf{x}_0) = \sum_i \left( \prod_{\mathbf{x} \in X_i} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\left( \Delta_{\mathbf{x}_0}(\mathbf{x}) \right)^2}{2\sigma^2} \right) \cdot \|\vec{f}(\mathbf{x})\|_{\mathrm{NMS}} \right),   (9)
where Χi is a set composed of the ith branch points and σ is the standard deviation of the Gaussian function.
3. Accurate Junction Point Localization and Characterization
3.1. Accurate Junction Point Localization
To improve the accuracy of the location of junction points, we compute the accurate location of each junction point based on the minimal distance criterion [6, 27]. Assume that x0 is the integer-valued location of a junction point, that xe is one of its branch edge points, and that xe′ is the subpixel location corresponding to xe. From (1), the gradient at xe is f⃑(xe), so the line passing through xe′ is represented by
\vec{f}(\mathbf{x}_e)^{\mathrm{T}} (\mathbf{x} - \mathbf{x}_e') = 0.   (10)
The optimal location of the junction point satisfies the condition that the total weighted distance from all line segments is minimal, that is,
\mathbf{x}^* = \arg\min_{\mathbf{x}} \sum_{\mathbf{x}_e \in E} \left( D_{\mathbf{x}_e}(\mathbf{x}) \right)^2
= \arg\min_{\mathbf{x}} \sum_{\mathbf{x}_e \in E} \left| \vec{f}(\mathbf{x}_e)^{\mathrm{T}} (\mathbf{x} - \mathbf{x}_e') \right|^2
= \arg\min_{\mathbf{x}} \sum_{\mathbf{x}_e \in E} (\mathbf{x} - \mathbf{x}_e')^{\mathrm{T}} \vec{f}(\mathbf{x}_e) \vec{f}(\mathbf{x}_e)^{\mathrm{T}} (\mathbf{x} - \mathbf{x}_e')
= \arg\min_{\mathbf{x}} \left( \mathbf{x}^{\mathrm{T}} A \mathbf{x} - 2 \mathbf{x}^{\mathrm{T}} \mathbf{b} + c \right),   (11)
where A, b, and c are given in (12):
A = \sum_{\mathbf{x}_e \in E} \vec{f}(\mathbf{x}_e) \vec{f}(\mathbf{x}_e)^{\mathrm{T}}, \qquad
\mathbf{b} = \sum_{\mathbf{x}_e \in E} \vec{f}(\mathbf{x}_e) \vec{f}(\mathbf{x}_e)^{\mathrm{T}} \mathbf{x}_e', \qquad
c = \sum_{\mathbf{x}_e \in E} \mathbf{x}_e'^{\mathrm{T}} \vec{f}(\mathbf{x}_e) \vec{f}(\mathbf{x}_e)^{\mathrm{T}} \mathbf{x}_e'.   (12)
To minimize formula (11), the optimal junction point location x* is found by taking the derivative of the right-hand side with respect to x and setting it to zero, yielding x* = A⁻¹b.
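Given the gradients and subpixel positions of the branch edge points, the closed form x* = A⁻¹b can be computed directly. The sketch below uses illustrative names of our own:

```python
import numpy as np

def refine_junction(grads, pts):
    """Sub-pixel junction location minimising formula (11).

    grads[i] is the gradient at edge point i, pts[i] its sub-pixel
    position; the minimiser of the weighted squared distances to the
    edge lines is x* = A^{-1} b, with A and b as in formula (12).
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for g, p in zip(np.asarray(grads, float), np.asarray(pts, float)):
        G = np.outer(g, g)            # f f^T for this edge point
        A += G
        b += G @ p
    return np.linalg.solve(A, b)      # x* = A^{-1} b
```

For two perpendicular lines, say the vertical line x = 3 (gradient [1, 0]) and the horizontal line y = 4 (gradient [0, 1]), the refined junction is their intersection (3, 4), regardless of where along each line the edge points lie.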
3.2. Accurate Branch Edge Orientation and Characterization
After calculating the accurate location x* of a junction point, update the current center location x0 to x*, and then use (6) to compute the dip angles of the branch edge points that fulfill the azimuth consensus constraints. With the dip angle set of a given branch labeled {θ1, θ2, …, θu}, the optimal orientation of this branch edge is described as
\theta^* = \arg\max_{\theta} \sum_{i=1}^{u} \cos(\theta - \theta_i).   (13)
In (13), ∑i cos(θ − θi) is used as the objective function for the following reasons: first, the distribution of the edge angles is relatively concentrated in a small region, which reduces the search range for the optimal angle; second, the cosine function is insensitive to the sign of the difference between two angles; third, the cosine function eliminates the wrap-around effect and maps angles close to 0° and 360° to the same value.
Next, calculate ∑i cos(θ − θi) for every candidate value of θ, and set θ* to the value that maximizes it. After executing the above steps for the branch edges of all junction points, the characterization description set for all junction points is created as
J = \left\{ J_i \mid J_i = \left\langle x_i, y_i, \theta_{i1}^*, \theta_{i2}^*, \ldots, \theta_{im_i}^* \right\rangle,\ i = 1, 2, \ldots, n \right\},   (14)
where n is the number of junction points detected, mi is the number of branch edges of the ith junction point, and each element Ji, represented as an (mi + 2)-tuple, describes the location of the ith junction point and the angle information of its branch edges.
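It is worth noting that the maximizer of (13) has a closed form: since ∑i cos(θ − θi) = R cos(θ − θ̄), where θ̄ is the circular mean of the θi and R ≥ 0, the optimum is θ* = tan⁻¹(∑ sin θi / ∑ cos θi), avoiding the grid search over θ. A sketch (function name ours):

```python
import numpy as np

def optimal_branch_angle(thetas_deg):
    """Maximiser of sum_i cos(theta - theta_i) from formula (13).

    The closed form is the circular mean
    theta* = atan2(sum sin theta_i, sum cos theta_i),
    which also handles the wrap-around at 0/360 degrees.
    """
    t = np.radians(np.asarray(thetas_deg, dtype=float))
    return np.degrees(np.arctan2(np.sin(t).sum(), np.cos(t).sum())) % 360.0
```

For two angles the result is their bisector, for example 355° and 11° give 3°, and 88° and 92° give 90°.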
4. Experimental Results Comparison and Analysis
4.1. Experiment Design and Parameter Setting
The experiments comprise four parts: comparisons with the Förstner, CPDA, and JUDOCA algorithms in terms of location accuracy, branch edge orientation accuracy, contrast changes, and the impact of noise. We chose these algorithms for comparison because of their error control and local optimization mechanisms.
Note that in the accuracy experiment, because CPDA is unable to detect all the junction points using its default parameter values, the high and low thresholds of its edge extraction step are modified to 0.2 and 0.05, respectively, and the gap connection length is set to two, while the other three algorithms keep their default values. In addition, in the following experiments, the parameters shown in Table 1 are the same for all algorithms, and the other parameters use their default values.
Table 1: Other experimental parameter settings.

Public parameter | Value
Standard deviation of Gaussian partial derivatives | 1.5
Radius of nonmaxima suppression | 3
Range of branch edges' angle (JUDOCA and our proposed method) | (30°, 150°)
4.2. Accuracy of the Location of the Junction Points and the Orientation of the Branch Edges
To quantify the precision of the chosen methods, ground truth data are required for all the locations of the junction points and the orientations of the branch edges. Therefore, we construct two artificial images for testing (as shown in Figures 2(a) and 2(c), where the digits in the images number the junction points consecutively).
Figure 2: The junction point and branch detection results for two images: a quadrate image and a polygon image. (a) Quadrate image. (b) The junction detection result of our proposed method. (c) Polygon image. (d) The junction detection result of our proposed method.
The first test image is a quadrate image [28] of size 500×500 containing 25 squares of size 51×51, as shown in Figure 2(a). These 25 squares have the following characteristics: (1) the four edges of the 1st square (in the upper left corner) are parallel to the corresponding horizontal and vertical lines; (2) the nth square is rotated 3.6(n−1) degrees (n = 1, 2, …, 25) clockwise relative to the 1st square; (3) the distance between the centers of adjacent squares is 100 pixels; (4) the center coordinates of the 1st square are (51, 51). The second test image is a polygon image [28] of size 256×256 containing nine regular polygons, as shown in Figure 2(c). These nine polygons have the following characteristics: (1) the height of every regular polygon is 50 pixels; (2) adjacent polygons share common edges and vertices; (3) the coordinates of the junction points labeled 1, 14, and 24 are (55, 76), (190, 65), and (130, 190), respectively.
The Förstner, CPDA, JUDOCA, and proposed algorithms are used to detect the junction points in the two images described above, and the results are compared with the known ground truth data; the smaller the error, the higher the detection accuracy. The location error of a junction point is defined as
LE = \sqrt{(x - x_g)^2 + (y - y_g)^2},   (15)
where (x, y) and (xg, yg) are the location of the junction point extracted by the above-mentioned algorithms and its actual location, respectively.
The location error curves for the two test images under the four detection operators are shown in Figure 3, where the horizontal axes represent the labeled junction points and the vertical axes give the location error computed by formula (15). The mean location error over all junction points is listed in Table 2. The Förstner algorithm performs best in location accuracy; like it, our proposed method keeps the error under one pixel and achieves subpixel location accuracy (as shown in Figures 2(b) and 2(d)). The errors of CPDA and JUDOCA are both larger than that of the Förstner algorithm, with mean errors greater than one pixel; for some junction points in Figure 2(c), the error of CPDA even exceeds 3 pixels. The reason is that our proposed algorithm adopts a similarly accurate localization method to extract precise junction point locations, whereas CPDA and JUDOCA extract only integer-valued junction point locations and cannot achieve subpixel accuracy, because they do not use the edge orientation information to optimize the junction point location.
Table 2: Mean error of the junction point positions for the Förstner algorithm, CPDA, JUDOCA, and our proposed method on the quadrate and polygon images (unit: pixel).

         | Forstner | CPDA   | JUDOCA | Proposed method
Quadrate | 0.2437   | 1.3270 | 0.6525 | 0.2876
Polygon  | 0.4046   | 1.4206 | 1.4222 | 0.3475
Figure 3: Location accuracy comparison among the Förstner algorithm, CPDA, JUDOCA, and our proposed method for the detected junction points. (a) Location errors of junction points 1–100 in Figure 2(a). (b) Location errors of junction points 1–30 in Figure 2(c). In both plots, the horizontal axis gives the junction point label and the vertical axis the location error.
4.3. Noise Impact
In this section, we test the impact of noise on our proposed algorithm. Random noise is added to the original image, and all operations are applied to this noisy version. Rows 1 and 2 of Figure 4 show the images combined with 1% and 1.2% noise, respectively; each noisy image contains 26 junction points. When the test image includes noise, the detection result often contains both correct junction points and a certain number of pseudopoints (i.e., false alarms).
Figure 4: Junction point detection results under different noise conditions. Rows 1 and 2 correspond to 1% and 1.2% added random noise, respectively. Columns (a)–(d) show the results of the Förstner, CPDA, JUDOCA, and proposed methods.
Assuming that the number of real junction points in the test image is Ng and that the number of detected junction points is Nt, of which Nc are correct, we can compute the recall rate (RR = Nc/Ng) and precision rate (PR = Nc/Nt) to examine operator performance. In the experiment, we chose a contrast measure criterion combining RR and PR [29]:
\mathrm{ACU} = \frac{\mathrm{RR} + \mathrm{PR}}{2} \times 100\%.   (16)
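The three scores are straightforward to compute; a short helper (name ours) reproduces, for example, the Förstner row of Table 3 for Image 1 (Ng = 26, Nt = 173, Nc = 24):

```python
def acu(n_ground, n_detected, n_correct):
    """Recall rate, precision rate, and the combined ACU score of (16)."""
    rr = n_correct / n_ground        # RR = Nc / Ng
    pr = n_correct / n_detected      # PR = Nc / Nt
    return rr, pr, 100.0 * (rr + pr) / 2.0
```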
The results of the four algorithms on the two noisy test images are shown in columns (a) to (d) of Figure 4. The values of Nt, Nc, RR, PR, and ACU determined from these test images are listed in Table 3. The experimental results indicate that the RR of the Förstner operator is the highest. However, the Förstner operator ignores the more stable edge information in the local area and thereby extracts more pseudojunction points; consequently, its PR is the lowest and its ACU is only approximately 50%. CPDA exhibits strong robustness to noise, with a higher PR and an ACU reaching about 70%. This result is mainly because CPDA extracts junction points using discrete curvature estimates and fractured-edge connection technology.
Table 3: Performance of the four methods under different levels of noise.

        | Forstner                  | CPDA                    | JUDOCA                  | Proposed
        | Nt   Nc  RR    PR    ACU  | Nt  Nc  RR    PR   ACU  | Nt  Nc  RR    PR   ACU  | Nt  Nc  RR    PR   ACU
Image 1 | 173  24  0.92  0.14  53%  | 12  11  0.42  0.92 67%  | 15  14  0.54  0.93 74%  | 21  20  0.77  0.95 86%
Image 2 | 363  21  0.81  0.06  44%  | 16  14  0.54  0.88 71%  | 16  15  0.58  0.94 76%  | 19  19  0.73  1.00 87%
However, noise can break branch edges apart and thereby cause RR to decrease. The robustness of JUDOCA to noise is weaker than that of CPDA, especially in its ACU, which under high-noise conditions is reduced by half compared with its value under low-noise conditions. The PR and ACU of our method are the highest among the four methods; that is, it is the most robust to noise. This performance is due to the azimuth consensus constraints, which filter out the noise, and to the absence of a strict requirement for a connective path along branch edges.
4.4. Experimental Results for SAR Images
This experiment differs from the preceding experiments on artificial test images in that it is usually very difficult to extract ground truth data from natural images. This difficulty arises primarily because junction point detection in SAR images depends not only on the local structural pattern but often also on the observation view and the observer's subjective judgment [14]. Therefore, in this section, we adopt a qualitative evaluation criterion and choose two SAR images with different resolutions (as shown in Figure 5). The results for the two images are shown in Figures 5(a) and 5(b), respectively. The results in Figure 5(a) indicate that, for some obvious junction points, JUDOCA misses more points than our proposed method and loses part of the branch edges; the results for the SAR image of the City of Maoming, shown in Figure 5(b), indicate that the two algorithms achieve comparable detection results.
Figure 5: Comparison of the detection results for the SAR images [29]. (a) Detection results for the first SAR image (size: 256×256). Left: JUDOCA; right: proposed algorithm. (b) Detection results for the SAR image of the City of Maoming (size: 1358×1036). Left: JUDOCA; right: proposed algorithm.
Note that in the presence of massive noise, JUDOCA extracts more pseudojunction points around the circumference region than our method (especially in Figure 5(a)); points on the circle are easily misdetected as junction points and introduce significant location error. This is because, when detecting a junction point, JUDOCA selects branch edges and filters out noise only via connective paths. By contrast, our proposed algorithm effectively filters out the noise and the circumference points by using the azimuth consensus and extracts more useful structural information.
5. Conclusion
This paper presents a novel method for detecting junction points accurately in SAR images. The proposed algorithm uses an azimuth consensus to filter out the impact of noise and pseudojunction points, such as those arising from homogeneous regions and 1D edge points. The experimental results demonstrate that our proposed algorithm exhibits improved detection accuracy and is less susceptible to contrast change and noise than JUDOCA and CPDA. One direction of future work is to apply the junction detector to computer vision tasks; possible applications include content-based image registration, SAR image stitching, and multisensor image matching.
Acknowledgment
This research was partially supported by the National Natural Science Foundation of China under Grants 61170159 and 60902093.
References
[1] Golightly I., Jones D., "Corner detection and matching for visual tracking during power line inspection."
[2] Zoghlami I., Faugeras O., Deriche R., "Using geometric corners to build a 2D mosaic from a set of images," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1997, pp. 420–425.
[3] Liu Y., Zhang X., Huang T. S., "Estimation of 3D structure and motion from image corners."
[4] Smith S. M., Brady J. M., "SUSAN—a new approach to low level image processing."
[5] Rosten E., Porter R., Drummond T., "Faster and better: a machine learning approach to corner detection."
[6] Förstner W., Gülch E., "A fast operator for detection and precise location of distinct points, corners and centres of circular features," Proceedings of the ISPRS Conference on Fast Processing of Photogrammetric Data, 1987, pp. 281–305.
[7] Harris C., Stephens M., "A combined corner and edge detector," Proceedings of the 4th Alvey Vision Conference, 1988, pp. 147–151.
[8] Tomasi C., Kanade T., "Detection and tracking of point features," Technical Report CMU-CS-91-132, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA, USA, 1991.
[9] Köthe U., "Edge and junction detection with an improved structure tensor," Proceedings of the DAGM-Symposium (DAGM '03), Lecture Notes in Computer Science, vol. 2781, 2003, pp. 25–32.
[10] Brox T., Weickert J., Burgeth B., Mrázek P., "Nonlinear structure tensors."
[11] Mokhtarian F., Suomela R., "Robust image corner detection through curvature scale space."
[12] Mokhtarian F., Mohanna F., "Enhancing the curvature scale space corner detector," Proceedings of the 8th Scandinavian Conference on Image Analysis, Bergen, Norway, 2001, pp. 145–152.
[13] Han J. H., Poston T., "Chord-to-point distance accumulation and planar curvature: a new approach to discrete curvature."
[14] McDermott J., "Psychophysics with junctions in real images."
[15] Parida L., Geiger D., Hummel R., "Junctions: detection, classification, and reconstruction."
[16] Chabat F., Yang G. Z., Hansell D. M., "Corner orientation detector."
[17] Cazorla M., Escolano F., Gallardo D., Rizo R., "Junction detection and grouping with probabilistic edge models and Bayesian A*."
[18] Cazorla M. A., Escolano F., "Two Bayesian methods for junction classification."
[19] Bergevin R., Bubel A., "Detection and characterization of junctions in a 2D image."
[20] Perwass C., "Junction and corner detection through the extraction and analysis of line segments," Proceedings of the 10th International Workshop on Combinatorial Image Analysis (IWCIA '04), Lecture Notes in Computer Science, vol. 3322, 2004, pp. 568–582.
[21] Elias R., Laganière R., "JUDOCA: junction detection operator based on circumferential anchors."
[22] Elias R., "Sparse view stereo matching."
[23] Elias R., "Enhancing sensor measurements through wide baseline stereo."
[24] Hajjdiab H., Elias R., Laganière R., "Wide baseline obstacle detection and localization," Proceedings of the International Symposium on Signal Processing and Its Applications (ISSPA '03), vol. 1, 2003, pp. 21–24.
[25] Kovesi P.
[26] Bresenham J., "Algorithm for computer control of a digital plotter."
[27] Lindeberg T., "Junction detection with automatic selection of detection scales and localization scales," Proceedings of the 1st International Conference on Image Processing (ICIP '94), vol. 1, Austin, TX, USA, November 1994, pp. 924–928.
[28] http://www.ipb.uni-bonn.de/softwarefoerstneroperator/
[29] http://www.crisp.nus.edu.sg/~research/tutorial/opt_int.htm#multispectral