A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is computationally less complex than singular value decomposition. A fuzzy inference engine assigns weights to the different overlapping subspaces. Quantitative measures and visual inspection are used to compare the existing and proposed techniques.
1. Introduction
Mapping of scenes behind obstacles (such as building walls, rubble, grass, etc.) using through-wall imaging (TWI) is an open research domain. Various military and commercial applications (including antiterrorism, hostage rescue, and surveillance [1]) can benefit from TWI. Among other challenges, the minimization of unwanted artifacts (clutter/noise) has received special attention over the last few years [2–13], since these artifacts significantly degrade target detection and recognition capabilities.
Existing TWI image enhancement (clutter removal) techniques include background scene subtraction (feasible only if images both with and without the target are available) [2], spatial filtering (assuming wall homogeneity at low frequencies) [3], wall modeling and subtraction (requiring a complex process for inhomogeneous walls) [4, 5], Doppler filtering (applicable to moving targets only) [6], image fusion (requiring multiple data sets of the same scene) [7], and statistical techniques [8–13].
In this paper, a TWI image enhancement (clutter reduction) technique using QR decomposition (QRD) and fuzzy logic is presented (preliminary results appeared in [13]). Weights are assigned to the different QRD subspaces using a fuzzy inference engine. Simulation results, evaluated using mean square error (MSE), peak signal-to-noise ratio (PSNR), improvement factor (IF), and visual inspection (based on miss detections (MD) and false detections (FD)), verify the proposed scheme.
2. Proposed Image Enhancement Using QRD
Let the input image M (of dimensions G×H) be decomposed into subspaces ($M_{cl}$, $M_{tar}$, and $M_{no}$) using singular value decomposition (SVD) as
(1) $M = \underbrace{\sum_{g=1}^{l_1} s_g u_g v_g^T}_{M_{cl}} + \underbrace{\sum_{g=l_1+1}^{l_2} s_g u_g v_g^T}_{M_{tar}} + \underbrace{\sum_{g=l_2+1}^{G} s_g u_g v_g^T}_{M_{no}}$,
where U and V are singular vector matrices and S contains singular values. As discussed in [13], conventional SVD for TWI image enhancement assumes that the target is limited to the second spectral component only; that is,
(2) $M_{SVD} = s_2 u_2 v_2^T$.
Besides the high computational complexity of SVD, which is $4G^2H + 8GH^2 + 9H^3$ [14], the assumption that the target is confined to the second spectral component is not always true. To address these issues, a QRD and fuzzy logic based scheme is proposed. The image M can be decomposed into a matrix Q with orthonormal columns (of dimensions G×H, with column vectors $q_g$) and an upper triangular matrix R (of dimensions H×H, with row vectors $r_g$); that is,
(3) $M = QR$.
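As a concrete illustration (not part of the original paper), the factorization in (3) can be sketched with NumPy; the names M, Q, and R follow the paper's notation, and the reduced (economy) QR is used, so R comes out H×H:

```python
import numpy as np

# Toy "image" matrix M of size G x H (values are arbitrary placeholders).
G, H = 6, 4
rng = np.random.default_rng(0)
M = rng.standard_normal((G, H))

# Reduced (economy) QR: Q is G x H with orthonormal columns, R is H x H
# upper triangular. NumPy uses a Householder-based routine internally.
Q, R = np.linalg.qr(M, mode="reduced")

assert np.allclose(Q @ R, M)            # M = QR, Eq. (3)
assert np.allclose(Q.T @ Q, np.eye(H))  # orthonormal columns
assert np.allclose(R, np.triu(R))       # upper triangular
```

The rank-one terms $q_g r_g$ used later in the paper are simply the outer products of the columns of Q with the rows of R, and they sum back to M.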
Table 1 shows the accuracy, stability, and complexity analysis of different QRD algorithms (classical and modified Gram-Schmidt (CGS, MGS), Givens decomposition, Householder transformation (HT), etc. [14]) for TWI.
Table 1: Comparison analysis of QRD algorithms for TWI. Accuracy is given as the error norms ||M − QR||, ||Q^T Q − I||, ||Q^T M − R||, and ||MR^{-1} − Q||.

| Algorithm | \|\|M − QR\|\| | \|\|Q^T Q − I\|\| | \|\|Q^T M − R\|\| | \|\|MR^{-1} − Q\|\| | Complexity | Stability |
|---|---|---|---|---|---|---|
| CGS QR | 1.091×10^-16 | 1.310×10^-10 | 6.724×10^-14 | 9.913×10^-14 | 2GH^2 | Unstable |
| MGS QR | 1.075×10^-16 | 4.922×10^-13 | 4.842×10^-14 | 1.083×10^-12 | 2GH^2 | Stable |
| HT QR | 1.291×10^-15 | 3.795×10^-15 | 3.333×10^-16 | 1.263×10^-12 | 4G^2H + 2GH^2 + (2/3)H^3 | Stable |
| Givens QR | 7.532×10^-16 | 6.702×10^-15 | 2.711×10^-16 | 1.118×10^-12 | 8G^2H + 2GH^2 + (2/3)H^3 | Stable |
As with SVD, the first subspace $M_1 = q_1 r_1$ represents wall clutter, and the remaining subspaces contain targets and noise. Note that, due to the overlapping boundaries of targets and noise, it is difficult to extract the target subspaces accurately. In view of this, a weighted QRD based scheme is proposed to enhance targets. The enhanced image $M_{tar}$ is
(4) $M_{tar} = \sum_{g=2}^{G} w_g q_g r_g$,
where $w_g$ are the weights applied to the different subspaces. Fuzzy logic is used for automatic weight assignment [15].
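A minimal sketch of Eq. (4) (an illustration, not the paper's code): the first QR subspace is discarded as wall clutter and the remaining rank-one subspaces $q_g r_g$ are recombined with weights. The unit weights used here are placeholders for the fuzzy-assigned $w_g$ derived below:

```python
import numpy as np

def enhance(M, weights):
    """Weighted QRD reconstruction of Eq. (4): subspace 1 (clutter) is skipped."""
    Q, R = np.linalg.qr(M, mode="reduced")
    M_tar = np.zeros_like(M)
    for g in range(1, R.shape[0]):      # g = 2, 3, ... in 1-based notation
        M_tar += weights[g] * np.outer(Q[:, g], R[g, :])
    return M_tar

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 5))
M_tar = enhance(M, np.ones(5))          # unit weights: pure clutter removal

# With unit weights, the result is M minus its first rank-one subspace.
Q, R = np.linalg.qr(M, mode="reduced")
assert np.allclose(M_tar, M - np.outer(Q[:, 0], R[0, :]))
```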
2.1. Input and Output MFs
Let $\xi_g = \|r_g\|$ and $\Delta\xi_g = \|r_g\| - \|r_{g+1}\|$ be the norms and norm differences, respectively. Note that the higher the values of $\xi_g$ and $\Delta\xi_g$, the more likely the corresponding subspace $q_g r_g$ contains target(s); it is therefore enhanced by applying a heavy weight (and vice versa).
Three Gaussian membership functions (MFs) $\zeta_X^x(c_1) = \exp(-((c_1 - \bar{c}_1(x))/\sigma_1(x))^2)$, $x \in \{\text{High}, \text{Medium}, \text{Low}\}$, are defined for $\xi_g$. Similarly, $\zeta_Y^y(c_2) = \exp(-((c_2 - \bar{c}_2(y))/\sigma_2(y))^2)$, $y \in \{\text{High}, \text{Medium}, \text{Low}\}$, are defined for $\Delta\xi_g$, where $c_1, c_2 \in [0,1]$, and $\bar{c}_1(x)$, $\bar{c}_2(y)$ and $\sigma_1(x)$, $\sigma_2(y)$ are the means and spreads of the fuzzy sets, respectively.
The k-means algorithm [16] is used to adjust the fuzzy parameters. $\xi_g$ and $\Delta\xi_g$ are first clustered into three groups based on their respective histograms. The means and standard deviations of each group are used as the centers $\bar{c}_1(x)$, $\bar{c}_2(y)$ and spreads $\sigma_1(x)$, $\sigma_2(y)$ of the MFs. Five equally spaced output MFs $\zeta_Z^z(d) = \exp(-((d - \bar{d}(z))/\varrho(z))^2)$, $z \in \{\text{VeryHigh}, \text{High}, \text{Medium}, \text{Low}, \text{VeryLow}\}$, with mean $\bar{d}(z)$ and spread $\varrho(z)$, are used.
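The k-means-driven parameter choice can be sketched as follows (an illustration on synthetic norms; a plain-NumPy Lloyd iteration stands in for the efficient algorithm of [16], and the three cluster levels are assumptions):

```python
import numpy as np

def kmeans_1d(x, k=3, iters=50):
    """Plain 1-D Lloyd iterations; quantile-based init keeps clusters apart."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

# Synthetic subspace norms xi_g clustered around three levels (Low/Med/High).
rng = np.random.default_rng(2)
xi = np.concatenate([rng.normal(0.1, 0.02, 30),
                     rng.normal(0.5, 0.05, 30),
                     rng.normal(0.9, 0.02, 30)])

centers, labels = kmeans_1d(xi)
spreads = np.array([xi[labels == j].std() for j in range(3)])
# The cluster means become the MF centers c-bar and the cluster standard
# deviations become the MF spreads sigma.
```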
2.2. Product Inference Engine (PIE)
The Gaussian fuzzifier maps the inputs $\xi_g$ and $\Delta\xi_g$ as
(5) $\zeta_{XY}(c_1, c_2) = \exp\{-((c_1 - \xi_g)/v_1)^2\} \exp\{-((c_2 - \Delta\xi_g)/v_2)^2\}$,
where $v_1$ and $v_2$ are parameters used for input noise suppression and are chosen as $v_1 = 2\max_x \sigma_1(x)$ and $v_2 = 2\max_y \sigma_2(y)$ [15].
The fuzzy IF–THEN rules for image enhancement are as follows.
Rule 1: IF $\xi_g$ is XHigh and $\Delta\xi_g$ is YHigh, THEN $w_g^{PIE}$ is ZVeryHigh.
Rule 2: IF $\xi_g$ is XMed and $\Delta\xi_g$ is YHigh, THEN $w_g^{PIE}$ is ZHigh.
Rule 3: IF $\xi_g$ is XHigh and $\Delta\xi_g$ is YMed, THEN $w_g^{PIE}$ is ZHigh.
Rule 4: IF $\xi_g$ is XMed and $\Delta\xi_g$ is YMed, THEN $w_g^{PIE}$ is ZMed.
Rule 5: IF $\xi_g$ is XHigh and $\Delta\xi_g$ is YLow, THEN $w_g^{PIE}$ is ZMed.
Rule 6: IF $\xi_g$ is XLow and $\Delta\xi_g$ is YMed, THEN $w_g^{PIE}$ is ZMed.
Rule 7: IF $\xi_g$ is XMed and $\Delta\xi_g$ is YLow, THEN $w_g^{PIE}$ is ZLow.
Rule 8: IF $\xi_g$ is XLow and $\Delta\xi_g$ is YHigh, THEN $w_g^{PIE}$ is ZLow.
Rule 9: IF $\xi_g$ is XLow and $\Delta\xi_g$ is YLow, THEN $w_g^{PIE}$ is ZVeryLow.
The output of the PIE using individual-rule-based inference, Mamdani implication, the algebraic product as the t-norm, and the max operator as the s-norm [15] is
(6) $\zeta_{Z'}(d_g) = \max_{\{x,y,z\}} \left[\sup_{\{c_1,c_2\}} \zeta_{XY}(c_1,c_2)\, \zeta_X^x(c_1)\, \zeta_Y^y(c_2)\, \zeta_Z^z(d_g)\right]$.
The weights wgPIE are then computed as
(7) $w_g^{PIE} = \frac{\sum_z \bar{d}(z)\, \varpi_g(z)}{\sum_z \varpi_g(z)}$,
where $\varpi_g(z)$ is the height of $\zeta_{Z'}(d_g)$ in the output MFs [15].
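A simplified, self-contained sketch of the PIE weighting in (5)–(7) (an illustration: singleton inputs replace the full sup over the Gaussian fuzzifier, and all MF centers, spreads, and output centers are assumed values, not the paper's):

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership value."""
    return np.exp(-((x - c) / s) ** 2)

C1 = {"Low": 0.1, "Med": 0.5, "High": 0.9}   # assumed means of the xi MFs
C2 = {"Low": 0.1, "Med": 0.5, "High": 0.9}   # assumed means of the delta-xi MFs
S = 0.25                                     # common spread (assumption)
D = {"VL": 0.0, "L": 0.25, "M": 0.5, "H": 0.75, "VH": 1.0}  # output centers

# Rule table from Section 2.2: (xi level, delta-xi level) -> output level.
RULES = {("High", "High"): "VH", ("Med", "High"): "H", ("High", "Med"): "H",
         ("Med", "Med"): "M", ("High", "Low"): "M", ("Low", "Med"): "M",
         ("Med", "Low"): "L", ("Low", "High"): "L", ("Low", "Low"): "VL"}

def pie_weight(xi, dxi):
    """Product t-norm firing strengths with height defuzzification."""
    num = den = 0.0
    for (a, b), z in RULES.items():
        strength = gauss(xi, C1[a], S) * gauss(dxi, C2[b], S)
        num += D[z] * strength
        den += strength
    return num / den

# Large norm and norm difference -> heavy weight; small ones -> light weight.
assert pie_weight(0.9, 0.9) > pie_weight(0.1, 0.1)
```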
2.3. Takagi-Sugeno (TS) Inference
In contrast to the PIE, the TS inference engine adjusts the output MFs using adaptive and/or optimization techniques [17]. The TS rule base (IF–THEN) for computing the weights $w_g^{TS}$ is
(8) IF $\xi_g$ is $X^{j_1}$ AND $\Delta\xi_g$ is $Y^{j_2}$ THEN $p(j_1+j_2-1) = \left(\frac{1}{1+\exp\{-\xi_g\}+\exp\{-\Delta\xi_g\}}\right)^{j_1+j_2-1}$.
Note that the output decreases for large $j_1 + j_2$ (which is desirable). The aggregated weights $w_g^{TS}$ are
(9) $w_g^{TS} = \frac{\sum_{j_1=1}^{3}\sum_{j_2=1}^{3} p(j_1+j_2-1)\, t\{\zeta_X^{j_1}(\xi_g), \zeta_Y^{j_2}(\Delta\xi_g)\}}{\sum_{j_1=1}^{3}\sum_{j_2=1}^{3} t\{\zeta_X^{j_1}(\xi_g), \zeta_Y^{j_2}(\Delta\xi_g)\}}$,
where t represents algebraic product (intersection operator).
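The TS rules (8) and their aggregation (9) can be sketched as follows (an illustration: the antecedent MFs are assumed Gaussians, and the mapping of the indices $j_1, j_2$ to the Low/Med/High centers is an assumption):

```python
import math

def gauss(x, c, s=0.25):
    """Assumed Gaussian antecedent MF (center c, spread s)."""
    return math.exp(-((x - c) / s) ** 2)

CENTERS = [0.1, 0.5, 0.9]   # assumed MF centers for j = 1, 2, 3

def p(xi, dxi, j1, j2):
    """Rule consequent of Eq. (8): shrinks as j1 + j2 grows."""
    base = 1.0 / (1.0 + math.exp(-xi) + math.exp(-dxi))
    return base ** (j1 + j2 - 1)

def ts_weight(xi, dxi):
    """Eq. (9): firing-strength-weighted average with the product t-norm."""
    num = den = 0.0
    for j1 in range(1, 4):
        for j2 in range(1, 4):
            t = gauss(xi, CENTERS[j1 - 1]) * gauss(dxi, CENTERS[j2 - 1])
            num += p(xi, dxi, j1, j2) * t
            den += t
    return num / den

# The consequent base is < 1, so larger rule indices give smaller outputs:
assert p(0.6, 0.4, 3, 3) < p(0.6, 0.4, 1, 1)
```

Since the base in (8) is always below one, the exponent $j_1 + j_2 - 1$ damps the contribution of higher-index rules, which is the "desirable" reduction noted above.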
3. Simulation and Results
The experimental setup for TWI is constructed using an Agilent vector network analyzer (VNA), which generates stepped-frequency waveforms between 2 GHz and 3 GHz (1 GHz bandwidth (BW)) with a step size of Δf = 5 MHz and Nf = 201 frequency steps. The maximum range is Rmax = 30 m and the range resolution is ΔR = 0.15 m.
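These parameters are mutually consistent, as the standard stepped-frequency relations Rmax = c/(2Δf) and ΔR = c/(2·BW) confirm (a quick check, with c = 3×10^8 m/s):

```python
# Stepped-frequency radar sanity check for the parameters quoted above.
c = 3e8                        # propagation speed (m/s)
f_lo, f_hi = 2e9, 3e9          # sweep limits (Hz)
df = 5e6                       # frequency step (Hz)

BW = f_hi - f_lo               # 1 GHz bandwidth
Nf = int(BW / df) + 1          # number of frequency points
Rmax = c / (2 * df)            # maximum unambiguous range
dR = c / (2 * BW)              # range resolution

assert Nf == 201
assert Rmax == 30.0
assert abs(dR - 0.15) < 1e-12
```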
A broadband horn antenna with 12 dB gain, mounted on a two-dimensional scanning frame (2.4 m × 3 m (width × height), able to slide along cross-range and height), operates in monostatic mode. The wall is 5 cm thick, with relative permittivity 2.3 and relative permeability 1. The frame is placed 0.03 m from the wall, and the scanning is controlled by a microcontroller based mechanism. The scattering parameters are recorded at each step and transferred to a local computer for image reconstruction and processing. The received data are converted into the time domain, and a beamforming algorithm is used for image reconstruction. The existing and proposed image enhancement algorithms are simulated in MATLAB, and quantitative analysis is performed using MSE, PSNR, IF, FD, MD, and visual inspection:
(10) $MSE = \frac{1}{G \times H}\sum_{g=1}^{G}\sum_{h=1}^{H}\left(M_{bs}(g,h) - M_{tar}(g,h)\right)^2$, $PSNR\,(\mathrm{dB}) = 10\log_{10}\frac{1}{MSE}$, $IF\,(\mathrm{dB}) = 10\log_{10}\left[\frac{P_{M_{tar},t} \times P_{M,c}}{P_{M,t} \times P_{M_{tar},c}}\right]$,
where $M_{bs}$ is a reference image obtained as the difference of the images with and without the target. $P_{M_{tar},t}$ and $P_{M_{tar},c}$ are the average pixel values of the target and clutter in the enhanced image, respectively, and $P_{M,t}$ and $P_{M,c}$ are the average pixel values of the target and clutter in the original image, respectively.
An MD is defined as "a target present in the original image that is not detected in the enhanced image." An FD is defined as "a target not present in the original image that is detected in the enhanced image." For calculating FD and MD, a threshold is computed using a global thresholding algorithm [18].
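The measures in (10) can be sketched as follows (an illustration for images normalized to [0, 1]; the toy images and the target/clutter masks used for the IF term are assumptions, not data from the paper):

```python
import numpy as np

def mse(ref, img):
    """Mean square error of Eq. (10)."""
    return np.mean((ref - img) ** 2)

def psnr(ref, img):
    """PSNR in dB for images scaled to [0, 1]."""
    return 10 * np.log10(1.0 / mse(ref, img))

def improvement_factor(M, M_tar, target_mask):
    """IF in dB: target-to-clutter ratio after enhancement vs before."""
    clutter_mask = ~target_mask
    num = M_tar[target_mask].mean() * M[clutter_mask].mean()
    den = M[target_mask].mean() * M_tar[clutter_mask].mean()
    return 10 * np.log10(num / den)

# Toy example: one bright target pixel on a uniform clutter background.
mask = np.zeros((4, 4), dtype=bool); mask[1, 1] = True
M = np.where(mask, 1.0, 0.5)        # original: clutter level 0.5
M_tar = np.where(mask, 1.0, 0.1)    # enhanced: clutter suppressed to 0.1
ref = np.where(mask, 1.0, 0.0)      # background-subtracted reference

assert mse(ref, ref) == 0.0
assert improvement_factor(M, M_tar, mask) > 0   # clutter suppression helps
```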
Figure 1 shows the original B-scan containing two targets, the background-subtracted reference image, and the enhanced images using the existing SVD based and proposed QRD based schemes. It can be observed that the proposed schemes detect both targets, whereas the SVD based scheme is unable to locate both targets accurately.
Image with two targets. (a) Original image. (b) SVD [11]. (c) Proposed fuzzy QRD (PIE). (d) Proposed fuzzy QRD (TS). (e) Background subtracted reference image.
Figure 2 shows another example containing three targets. The proposed scheme detects all targets and provides a better target-to-background ratio than the SVD based scheme. It is further noted that the proposed TS inference based scheme provides better results than the PIE.
Image with three targets. (a) Original image. (b) SVD [11]. (c) Proposed fuzzy QRD (PIE). (d) Proposed fuzzy QRD (TS). (e) Background subtracted reference image.
Table 2 shows that the proposed fuzzy QRD schemes outperform the SVD image enhancement scheme in terms of MSE, PSNR, IF, MD, and FD.
Table 2: MSE, PSNR, IF, MD, and FD comparison.

| Scenario | Scheme | MSE | PSNR (dB) | IF (dB) | MD | FD |
|---|---|---|---|---|---|---|
| Two targets | SVD [11] | 0.2726 | 5.6442 | 8.1258 | 1 | 0 |
| | Fuzzy QRD (PIE) | 0.1970 | 7.0553 | 11.2587 | 0 | 0 |
| | Fuzzy QRD (TS) | 0.1726 | 7.6296 | 11.5870 | 0 | 0 |
| Three targets | SVD [11] | 0.2814 | 5.5068 | 7.1265 | 1 | 1 |
| | Fuzzy QRD (PIE) | 0.1933 | 7.1377 | 10.8715 | 0 | 0 |
| | Fuzzy QRD (TS) | 0.1824 | 7.3898 | 11.0127 | 0 | 0 |
4. Conclusion
A QRD and fuzzy logic based image enhancement scheme has been proposed for TWI. Compared with SVD, QRD has lower computational complexity. PIE and TS inference engines are used to assign weights to the different QRD subspaces. Simulation results, compared through visual and quantitative analysis, show the significance of the proposed scheme.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
1. M. G. Amin.
2. J. Moulton, S. Kassam, F. Ahmad, M. Amin, and K. Yemelyanov, "Target and change detection in synthetic aperture radar sensing of urban structures," Proceedings of the IEEE Radar Conference (RADAR '08), May 2008. doi:10.1109/RADAR.2008.4721104.
3. Y. S. Yoon and M. G. Amin, "Spatial filtering for wall-clutter mitigation in through-the-wall radar imaging."
4. M. Dehmollaian and K. Sarabandi, "Refocusing through building walls using synthetic aperture radar."
5. G. E. Smith and B. G. Mobasseri, "Robust through-the-wall radar image classification using a target-model alignment procedure."
6. S. S. Ram, C. Christianson, Y. Kim, and H. Ling, "Simulation and analysis of human micro-Dopplers in through-wall environments."
7. C. Debes.
8. F. H. C. Tivive, M. G. Amin, and A. Bouzerdoum, "Wall clutter mitigation based on eigen-analysis in through-the-wall radar imaging," Proceedings of the 17th International Conference on Digital Signal Processing (DSP '11), July 2011. doi:10.1109/ICDSP.2011.6004992.
9. M. M. Riaz and A. Ghafoor, "Principle component analysis and fuzzy logic based through wall image enhancement."
10. M. M. Riaz and A. Ghafoor, "Through wall image enhancement based on singular value decomposition."
11. P. K. Verma, A. N. Gaikwad, D. Singh, and M. J. Nigam, "Analysis of clutter reduction techniques for through wall imaging in UWB range."
12. A. N. Gaikwad, D. Singh, and M. J. Nigam, "Application of clutter reduction techniques for detection of metallic and low dielectric target behind the brick wall by stepped frequency continuous wave radar in ultra-wideband range."
13. M. M. Riaz and A. Ghafoor, "QR decomposition based image enhancement for through wall imaging," Proceedings of the IEEE Radar Conference, 2012, pp. 978–983.
14. G. H. Golub and C. F. Van Loan.
15. L. X. Wang.
16. T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation."
17. T. Takagi and M. Sugeno, "Fuzzy identification of systems and its applications to modeling and control."
18. R. C. Gonzalez and R. E. Woods.