Revealing Traces of Image Resampling and Resampling Antiforensics

Image resampling is a common manipulation in image processing. The forensics of resampling plays an important role in image tampering detection, steganography, and steganalysis. In this paper, we propose an effective and secure detector, which can simultaneously detect resampling and forged resampling attacked by antiforensic schemes. We find that the interpolation operation used in resampling and forged resampling makes these two kinds of images show statistical behaviors different from those of unaltered images, especially in the high frequency domain. To reveal the traces left by the interpolation, we first apply multidirectional high-pass filters to an image and its residual to create multidirectional differences. Then, each difference is fit into an autoregressive (AR) model. Finally, the AR coefficients and normalized histograms of the differences are extracted as the feature. We assemble the features extracted from each difference image to construct the comprehensive feature and feed it into support vector machines (SVM) to detect resampling and forged resampling. Experiments on a large image database show that the proposed detector is effective and secure. Compared with state-of-the-art works, the proposed detector achieves significant improvements in the detection of downscaling and of resampling under JPEG compression.


Introduction
Resampling is a useful image processing tool: for example, upscaling in consumer electronics and downscaling in online stores, social networks, and picture sharing portals. However, some people intentionally use resampling to create tampered images and upload them to social networks to spread rumors. Owing to this abuse in image tampering, resampling forensics has attracted researchers' attention [1][2][3][4][5][6][7][8][9][10][11][12]. Resampling forensics can also be used to reveal an image's processing history or to help select a secure cover for steganography; for example, Kodovský and Fridrich analyzed how the parameters of downscaling affect the security of steganography [13]. Hou et al. utilized resampling forensics for blind steganalysis [14]. Therefore, resampling forensics is of particular interest in the multimedia security field.
In the sequel, we refer to resampling antiforensics [16] as forged resampling for short.
The appearance of antiforensic technology has drawn researchers' attention to the security of forensics [17,18]. Sencar and Memon [18] formally defined the security and robustness of forensics. They pointed out that security concerns the ability to resist intentionally concealed illegitimate postprocessing, while robustness concerns reliability against legitimate postprocessing. In our previous work [19], we employed partial autocorrelation coefficients to reveal the artifacts caused by forged resampling. Li et al. [15] utilized the steganalytic model SRM [20] to detect forged resampling and obtained excellent performance.
For a test image, we have no knowledge of whether it has been processed by resampling or forged resampling. To avoid missed detections, one approach is to test the image sequentially with a resampling detector and a forged resampling detector; only if both detectors predict the image is innocent is it taken as an innocent image. To simplify the detection procedure, we propose an integrated detector which can simultaneously detect resampling and forged resampling. As both resampled and forged resampled images are generated via interpolation, we employ the histogram and the coefficients of an AR model computed on multidirectional differences to capture the interpolation traces. Experimental results indicate that the proposed integrated detector is effective and secure.
The rest of this paper is organized as follows. Section 2 reviews resampling forensics and the antiforensic scheme [16]. In Section 3, we introduce a new feature set for resampling forensics. The experiments are presented in Section 4. Section 5 concludes the paper.

Background
In this section, we first introduce the resampling and its periodical artifacts and then review the forged resampling scheme proposed by Kirchner and Böhme [16].

2.1. Resampling. The frequently used image resampling operations, including scaling and rotation, consist of two basic steps: (1) resampling, also called spatial transformation of coordinates, and (2) intensity interpolation, which assigns pixel values to the transformed pixels.
Assume that we want to rescale an M × N image B(x, y) to a P × Q image B′(x, y). Generally speaking, 2D image scaling can be separated into two 1D scaling operations along the rows and columns, respectively. Intuitively, image B is first rescaled along the rows to obtain an intermediate image T of size M × Q; then T is rescaled along the columns to obtain the rescaled image B′ of size P × Q. The whole scaling process can be formulated as

B′ = A_c B A_r, (1)

where the matrices A_r (N × Q) and A_c (P × M), determined by the scaling factor s and the interpolation kernel, embody the rescaling process along the rows and columns, respectively. According to (1), we can simplify the discussion of 2D scaling to 1D scaling. As image rotation is similar to image scaling, we concentrate on image scaling in the following.
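As a concrete illustration of this separable formulation, the following sketch builds a 1D scaling matrix from a bilinear kernel and applies formula (1); the function names, 0-indexed grid mapping, and border handling are our own illustrative choices, not the paper's exact construction:

```python
import numpy as np

def scaling_matrix(n, s):
    """n x ceil(n*s) matrix whose j-th column holds the bilinear
    interpolation weights of the j-th rescaled pixel (illustrative)."""
    q = int(np.ceil(n * s))
    A = np.zeros((n, q))
    for j in range(q):
        pos = j / s                    # rescaled pixel mapped onto the original grid
        i = int(np.floor(pos))
        frac = pos - i
        if i + 1 < n:
            A[i, j] = 1.0 - frac       # weight of the left neighbor
            A[i + 1, j] = frac         # weight of the right neighbor
        else:
            A[n - 1, j] = 1.0          # clamp at the image border
    return A

def rescale(B, s):
    """2D scaling as two 1D scalings: B' = A_c B A_r as in formula (1)."""
    A_r = scaling_matrix(B.shape[1], s)      # N x Q (row pass)
    A_c = scaling_matrix(B.shape[0], s).T    # P x M (column pass)
    return A_c @ B @ A_r                     # P x Q rescaled image
```

Because each column of the 1D matrix sums to 1, a constant image stays constant under this rescaling, which is a quick sanity check of the construction.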
In the resampling phase, for a scaling factor s = v/h (where the greatest common divisor of v and h is 1), the rescaled pixels are first mapped into the original pixel grid with equidistant spacing h/v. Then the intensities of the rescaled pixels are calculated as weighted sums of the neighboring original pixel intensities. The weights are determined by the interpolation kernel function, which takes as input the distances between a rescaled grid point and its neighboring original grid points.
Due to the equidistant sampling, the distance sequences are periodic; thus the interpolation weights are periodic, and periodic correlation patterns between neighboring pixels are introduced. Figure 1 shows an example of upscaling (s = 3/2) with bilinear interpolation in the x-th row of image B. It is shown that the interpolation weights emerge with a periodicity equal to 3. In this case, the scaling matrix A_r is as follows:

A_r = [1  1/3  0    0  0    0
       0  2/3  2/3  0  0    0
       0  0    1/3  1  1/3  0
       0  0    0    0  2/3  2/3
       0  0    0    0  0    1/3]

From matrix A_r, we can infer that the 3k-th (k = 1, 2, 3, ...) column is a linear combination of its 4 neighboring columns, which reveals that the correlations among adjacent pixels are periodic.
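The period-3 weight pattern can be checked numerically. The snippet below rebuilds the bilinear weight matrix for s = 3/2 (here called A; 0-indexed, illustrative) and verifies that a 3k-th column lies in the span of its neighboring columns, which is exactly the periodic linear dependence exploited by p-map detectors:

```python
import numpy as np

# Bilinear weights for s = 3/2: rescaled pixel j maps to position j/s
# on the original grid (0-indexed sketch of the scaling matrix).
s, n, q = 1.5, 5, 6          # 5 original pixels generate 6 upscaled pixels
A = np.zeros((n, q))
for j in range(q):
    pos = j / s
    i = int(np.floor(pos))
    frac = pos - i
    A[i, j] = 1 - frac
    if frac > 0:
        A[i + 1, j] = frac

# The weight pattern repeats with period 3 ...
assert np.allclose(A[:2, 1], A[2:4, 4])   # pattern (1/3, 2/3) recurs 3 columns later

# ... and the 3rd column is a linear combination of its neighboring columns.
coeffs, *_ = np.linalg.lstsq(A[:, [0, 1, 3, 4]], A[:, 2], rcond=None)
assert np.allclose(A[:, [0, 1, 3, 4]] @ coeffs, A[:, 2])
```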
Early works [1][2][3][4][5][6][7][8][9][10] utilized these periodic linear correlations to detect resampling. Popescu and Farid [1] revealed the periodic correlation by a probability map (p-map), which is estimated with the expectation maximization algorithm. For an automatic detector, the periodic artifacts are transformed into peaks in the frequency domain, as shown in the middle row of Figure 2.

2.2. The Forged Resampling Scheme.
As equidistant sampling is mainly responsible for the periodicity appearing in the resampled image, Kirchner and Böhme proposed two attacks to remove that periodicity [16].
(1) The first attack is based on geometric distortion with edge modulation (denoted attack 1). To disturb the equidistant sampling, a zero mean Gaussian noise, whose standard deviation σ controls the attack strength, is added to the transformed pixel position (x, y). That is, the transformed position (x, y) turns into a distorted position (x + ε1, y + ε2), where (ε1, ε2) is the Gaussian noise. Geometric distortion alone severely degrades the visual quality, especially at image edges. To improve the visual quality, edge modulation is employed to tune the attack strength; in particular, the attack is weakened at edges. After this unequal sampling, the forged resampled image is obtained by applying the interpolation at the distorted positions (x + ε1, y + ε2).
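A rough sketch of attack 1 follows; the jitter on the sampling grid is attenuated near edges. The attenuation rule, parameter defaults, and scipy helpers are our own stand-ins, not the exact scheme of [16]:

```python
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def attack1(img, s=1.5, sigma=0.4, seed=0):
    """Forged rescaling sketch: perturb each target coordinate by zero
    mean Gaussian noise (std sigma), weakened where the edge strength is
    high, then interpolate at the distorted, non-equidistant positions."""
    rng = np.random.default_rng(seed)
    imgf = np.asarray(img, dtype=float)
    P, Q = int(imgf.shape[0] * s), int(imgf.shape[1] * s)
    ys, xs = np.meshgrid(np.arange(P) / s, np.arange(Q) / s, indexing="ij")
    # Edge modulation: estimate edge strength and attenuate the jitter there.
    edges = np.hypot(sobel(imgf, axis=0), sobel(imgf, axis=1))
    strength = map_coordinates(edges, [ys, xs], order=1, mode="nearest")
    atten = 1.0 / (1.0 + strength)
    ys = ys + rng.normal(0.0, sigma, ys.shape) * atten
    xs = xs + rng.normal(0.0, sigma, xs.shape) * atten
    return map_coordinates(imgf, [ys, xs], order=1, mode="nearest")
```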
(2) The second attack is a dual-path approach (denoted attack 2). This approach applies separate attacks to the low and high frequency components of the resampled image. In the low frequency path, Kirchner and Böhme applied a nonlinear 5 × 5 median filter to destroy the linear correlations among neighboring pixels. In the high frequency path, they first obtained the residual by subtracting a 5 × 5 median filtered version of the source image B(x, y) from the image itself and then applied attack 1 to the residual to get the distorted resampled residual. The final forged image is obtained by adding the filtered resampled image and the distorted resampled residual.
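The dual-path structure can be sketched as follows. For brevity, the residual path here uses a plain bilinear rescale where the original scheme applies attack 1, so this is only a structural illustration of attack 2:

```python
import numpy as np
from scipy.ndimage import median_filter, zoom

def attack2(img, s=1.5):
    """Dual-path sketch: median-filter the rescaled low frequency path
    and add a rescaled high frequency residual."""
    imgf = np.asarray(img, dtype=float)
    # Low frequency path: rescale, then 5x5 median filtering destroys
    # the periodic linear correlations among neighboring pixels.
    low_path = median_filter(zoom(imgf, s, order=1), size=5)
    # High frequency path: residual = image minus its median-filtered
    # version; the original scheme distorts and rescales it with attack 1
    # (stand-in here: plain bilinear rescaling).
    residual = imgf - median_filter(imgf, size=5)
    high_path = zoom(residual, s, order=1)
    return low_path + high_path
```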
Both attacks successfully conceal the periodicity in the resampled image while preserving the image's visual quality. Figure 2 demonstrates that an unaltered image (first row) and its forged resampled image (third row) have nearly the same p-map and corresponding Fourier spectrum, which indicates that the periodicity-based detectors [1][2][3][4][5][6][7][8][9][10] would probably misclassify a forged resampled image as an unaltered image.

The Proposed Method
The proposed method aims at distinguishing the resampled image and the forged resampled image from the unaltered image. Such a forensic problem can be formulated as the following hypothesis test:

H0: the test image is an unaltered image;
H1: the test image is a resampled image or a forged resampled image.

Traces of Interpolation.
Compared with unaltered images, resampled and forged resampled images inevitably carry interpolation artifacts. We mainly focus on blurring artifacts and statistical changes in the relationships among neighboring pixels.
In the interpolation phase, the intensity of a resampled pixel is a weighted sum of its adjacent original pixels, so interpolation acts similarly to a low-pass filter [11]. The blurring artifact is distinct in upscaled and forged upscaled images. Figure 3 empirically shows the normalized histograms of the 1st-order horizontal difference (I(x, y) − I(x, y + 1)) estimated from the BOSSRAW database [21] (please see Section 4 for more details about the database). It can be seen that the upscaled image and the forged upscaled image have higher histogram bins than the unaltered image in the range [−1, 1]. Because antialiasing suppresses some high frequency components of the image, a downscaled image with antialiasing also shows slight blurring artifacts [13]. To capture the blurring artifacts in resampled and forged resampled images, we employ the normalized histograms of the image difference as a feature subset.
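The histogram described above can be sketched as follows; the truncation range [−T, T] matches the choice T = 5 made later in the paper, and the names are illustrative:

```python
import numpy as np

def diff_histogram(img, T=5):
    """Normalized histogram of the 1st-order horizontal difference
    I(x, y) - I(x, y + 1), with values truncated to [-T, T]."""
    img = np.asarray(img, dtype=int)
    d = img[:, :-1] - img[:, 1:]            # horizontal 1st-order difference
    d = np.clip(d, -T, T)
    counts = np.bincount((d + T).ravel(), minlength=2 * T + 1)
    return counts / counts.sum()            # bins for -T, ..., T
```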
The interpolation operation also changes the relationships among neighboring pixels. According to formula (1), an unknown rescaled pixel is a linear combination of its w adjacent known original pixels, where w is the kernel width. For a rescaled image, let us calculate the number of original pixels used to generate p consecutive rescaled pixels. First, the span of these p rescaled pixels on the original grid is (p − 1)/s, so there are ⌊(p − 1)/s⌋ or ⌈(p − 1)/s⌉ original pixels located among these p consecutive pixels, where ⌊⋅⌋ and ⌈⋅⌉ denote the floor and ceiling functions. Second, the left (or right) w/2 original pixels participate in the interpolation of the starting (or ending) rescaled pixel. So we can infer that ⌈(p − 1)/s⌉ + w or ⌊(p − 1)/s⌋ + w original pixels are used to generate p consecutive rescaled pixels. For example, Figure 1 shows that 6 upscaled pixels B′(x, 1), B′(x, 2), ..., B′(x, 6) are generated by 5 original pixels B(x, 1), B(x, 2), ..., B(x, 5). According to the above conclusion, we can infer that the relationship of p consecutive rescaled pixels reflects the relationship of ⌈(p − 1)/s⌉ + w or ⌊(p − 1)/s⌋ + w consecutive original pixels, which of course differs from the relationship of p consecutive original pixels. For a forged rescaled image, due to the irregular sampling, some forged pixels cluster closely in the original grid while others are located sparsely, which also indicates that the forged rescaled image exhibits different relationships among neighboring pixels from those of an unaltered image. To capture the relationships among neighboring pixels, we fit the image difference into an AR model and extract the AR coefficients as a feature subset. The AR feature can characterize high-order correlations of adjacent pixels, but it has a much smaller dimension than SPAM (Subtractive Pixel Adjacency Matrix) [22].

Multidirectional Differences.
As aforementioned, interpolation is somewhat similar to a low-pass filter [11] and thus causes significant statistical changes in the high frequency components of an interpolated image. In general, the high frequency components (such as texture and edges) of a natural image are multidirectional. Accordingly, the changes caused by interpolation are also multidirectional. To capture these changes, we first design multidirectional high-pass filters to create multidirectional differences and then extract the proposed feature from these differences.
We employ the kernels of the 1st-order difference to derive multidirectional kernels. The commonly used 1st-order difference kernels in the horizontal (H), vertical (V), diagonal (D), and antidiagonal (AD) directions are

K_H = [1 −1], K_V = [1 −1]ᵀ, K_D = [1 0; 0 −1], K_AD = [0 1; −1 0].

We utilize combinations of the H, V, D, and AD kernels to construct the multidirectional kernels as shown in the following.

Multidirectional Kernel Groups G(1)-G(3). They are given as follows:
Here "+" means the combination of two directional kernels.
From the 1st-order H and V kernels, the 2nd-order H, V, and H + V kernels are derived to reflect the interpolation traces in both the H and V directions. Similarly, the 2nd-order D, AD, and D + AD kernels are generated. We also consider combinations between {H, V} and {D, AD} and obtain four further kinds of kernel: H + D, V + D, H + AD, and V + AD. Finally, we have 28 2nd-order filter kernels in total. Following the above approach, we could create higher-order kernels; however, the number of higher-order kernels increases sharply, which would increase the computational burden in the feature extraction phase, so we only use the aforementioned 28 2nd-order filter kernels to create multidirectional differences.
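One way to realize the "+" combination is to convolve the two 1st-order kernels, yielding a 2nd-order kernel; this is our assumption for illustration (the paper lists its 28 kernels explicitly), and the sign conventions are likewise illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

# 1st-order directional kernels (sign conventions are illustrative).
K = {
    "H":  np.array([[1, -1]]),
    "V":  np.array([[1], [-1]]),
    "D":  np.array([[1, 0], [0, -1]]),
    "AD": np.array([[0, 1], [-1, 0]]),
}

def combine(a, b):
    """Model "a + b" as convolving the two 1st-order kernels,
    which produces a 2nd-order kernel."""
    return convolve2d(K[a], K[b])

second_order = {f"{a}+{b}": combine(a, b)
                for a, b in [("H", "H"), ("V", "V"), ("H", "V"),
                             ("D", "D"), ("AD", "AD"), ("D", "AD"),
                             ("H", "D"), ("V", "D"), ("H", "AD"), ("V", "AD")]}
```

For instance, combining H with itself gives the familiar 2nd-order difference kernel [1 −2 1].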
Based on a kernel's direction, we divide all kernels into 3 groups (denoted G(1)-G(3) in "Multidirectional Kernel Groups G(1)-G(3)"). Note that any kernel within a group can be obtained by rotating or flipping other kernels within the same group; we therefore say that kernels within a group share the same pattern. Considering that spatial statistics in natural images are symmetric with respect to mirroring and flipping [22], we can average the feature sets extracted from the same group to reduce the feature dimension.
To further enhance the interpolation traces left in the high frequencies, besides the image itself, a high frequency spatial residual (denoted u1(x, y)) is created to construct multidirectional differences. To do this, we first divide the Discrete Cosine Transform (DCT) frequency plane into 3 subbands with equal intervals and then select the high frequency subband, as shown in Figure 4, to create the high frequency spatial residual u1(x, y) by the inverse DCT. The whole process can be formulated as

u1(x, y) = IDCT(H(u, v) ⋅ DCT(u0(x, y))),

where H(u, v) is a high-pass filter. We empirically find that the type of H(u, v) (such as a Gaussian high-pass filter) and the partition of the subband have a trifling impact on the resampling detector. For the sake of conciseness, we employ the above method to generate u1(x, y). For notational convenience, the image is denoted u0(x, y) in the sequel.
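A sketch of the residual construction follows, masking all but a high frequency region of the 2D DCT; since the exact subband shape in Figure 4 is not recoverable here, an L-shaped high band over the top third of the frequency indices is assumed:

```python
import numpy as np
from scipy.fft import dctn, idctn

def high_freq_residual(img):
    """u1(x, y) sketch: zero out the lowest two thirds of the DCT
    frequency indices and invert the transform."""
    c = dctn(np.asarray(img, dtype=float), norm="ortho")
    h, w = c.shape
    mask = np.zeros_like(c)
    mask[2 * h // 3:, :] = 1.0   # high vertical frequencies
    mask[:, 2 * w // 3:] = 1.0   # high horizontal frequencies
    return idctn(c * mask, norm="ortho")
```

A constant image has only a DC coefficient, which the mask removes, so its residual is identically zero.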
Each kernel shown in "Multidirectional Kernel Groups G(1)-G(3)" is convolved with the image u0(x, y) and its high frequency residual u1(x, y) to generate a 2nd-order difference (denoted D(x, y)). Finally, we get 56 kinds of differences. Inspired by the rich model for steganalysis [20], assembling the feature from the multidirectional differences is expected to be beneficial for challenging forensic problems, such as detecting resampling in a JPEG compressed image.

The Feature Construction.
In this subsection, we first extract the AR feature (FAR) and the histogram feature (FH) from each image difference and then assemble the FAR and FH extracted from the 56 differences to construct the final feature set.
FAR is extracted based on the direction of D(x, y). (1) For the differences derived from the H kernel, which mainly reflects variations in the horizontal direction, FAR is extracted in the horizontal direction. (2) Similarly, for the differences created by the V kernel, FAR is extracted in the vertical direction. (3) For the differences created by the other kernels shown in "Multidirectional Kernel Groups G(1)-G(3)," the AR coefficients are first extracted in the horizontal and vertical directions, respectively, and FAR is then obtained by averaging them.

Figure 4: The high frequency DCT subband (red shaded region) is used to create u1(x, y). The coordinate (0, 0) is the DC coefficient.
Extracting FAR in the horizontal direction proceeds as follows. First, concatenate all rows of D(x, y) to generate a 1D sequence z = [D(1, :), D⁽ᴸᴿ⁾(2, :), D(3, :), D⁽ᴸᴿ⁾(4, :), ...], where D⁽ᴸᴿ⁾(x, :) (x is the row index) is a left-right flipped version of the x-th row. Then, fit z with an AR model to calculate the AR coefficients [23]. Transposing D(x, y), we can extract FAR in the vertical direction in the same way. The AR model can be formulated as

z(i) = ∑_{k=1}^{p} α(k) z(i − k) + ε(i),

where p, α(k), and ε(i) represent the order, the AR coefficients, and the prediction error, respectively. Exploiting the symmetric distribution of the difference shown in Figure 3, FH is calculated as

FH(0) = h_0, FH(t) = h_t + h_{−t} (t = 1, ..., T),

where h_t (or h_{−t}) is the normalized frequency of the difference elements equal to t (or −t).
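The two feature subsets can be sketched with a least-squares AR fit; reference [23] may specify a different estimator (e.g., Burg's method), and the zig-zag row flipping follows the description above:

```python
import numpy as np

def ar_feature(D, p=12):
    """FAR sketch: concatenate the rows of the difference image D,
    flipping every second row, and fit an AR(p) model by least squares."""
    rows = [r if i % 2 == 0 else r[::-1]
            for i, r in enumerate(np.asarray(D, dtype=float))]
    z = np.concatenate(rows)
    # Regress z(i) on z(i-1), ..., z(i-p).
    X = np.column_stack([z[p - k - 1:len(z) - k - 1] for k in range(p)])
    alpha, *_ = np.linalg.lstsq(X, z[p:], rcond=None)
    return alpha

def fh_feature(D, T=5):
    """FH sketch exploiting symmetry: FH(0) = h_0, FH(t) = h_t + h_{-t}."""
    d = np.clip(np.asarray(D, dtype=int), -T, T)
    h = np.bincount((d + T).ravel(), minlength=2 * T + 1) / d.size
    return np.concatenate([[h[T]], h[T + 1:] + h[T - 1::-1]])
```

With p = 12 and T = 5, each difference contributes 12 + 6 = 18 values; averaging over 3 groups for each of the 2 inputs gives the 6 × 18 = 108 dimensions stated below.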
To reduce the dimensionality, under the assumption that the kernels within a group share the same pattern, we average FAR and FH within the same difference group and denote the results by FAR_g^r and FH_g^r (group index g = 1, 2, 3; residual index r = 0, 1). The proposed feature constructed from the multidirectional differences (denoted FD) is obtained as in (8) by concatenating the feature subsets FD_r extracted from u_r(x, y) (r = 0, 1). The dimensions of FAR_g^r and FH_g^r are p and T + 1, respectively. Thus, the total dimension of FD is 6(p + T + 1).
We set the parameters p and T based on the distributions of the AR coefficients and histograms for unaltered images and resampled images. Figure 5 shows the distributions of AR coefficients estimated from the BOSSRAW database [21] (please see Section 4 for more details about the database).
For the sake of brevity, we only show the plots for FAR_1^0 and FAR_1^1. Recall that FAR_1^0 and FAR_1^1 are, respectively, extracted from the differences of u0(x, y) and the differences of u1(x, y); the subscript "1" indicates that the differences are generated by the G(1) kernel group. Both plots show that a 12th-order AR feature is able to distinguish the scaled or forged scaled image from the unaltered image, so we set p = 12. FAR_1^0 and FAR_1^1 present different plot shapes, which indicates that they are complementary for resampling forensics. The parameter T is empirically set to 5, because we observed that most of the difference elements fall within [−5, 5], for example, for the images in the BOSSRAW database. With p = 12 and T = 5, the dimension of FD is 108.
The proposed detector is summarized as follows:
(1) Select the high frequency band of the DCT as shown in Figure 4 to create the spatial residual u1(x, y).
(2) Convolve u0(x, y) and u1(x, y) with the 28 multidirectional kernels to generate the 56 difference images.
(3) Extract FAR and FH from each difference, average them within each kernel group, and concatenate them into the feature set FD.
(4) Feed the feature set extracted from the training images into SVM to train the proposed detector.

Experimental Results
We test the proposed detector on a composite image database comprised of 3000 never resampled images. The BOSSBase [21] and the Dresden Image Database (DID) [24] are widely used in image forensics; their raw image source databases are denoted BOSSRAW and DIDRAW, respectively. We randomly select 1500 raw images from each of the BOSSRAW and DIDRAW databases to create the composite database. Before further processing, all images are converted to 8-bit grayscale images.
The unaltered composite database serves as the source for creating the resampled image databases. We created three kinds of resampled database: upscaling, downscaling, and rotation. We also used the antiforensic method proposed by Kirchner and Böhme [16] to create three kinds of forged resampled database: forged upscaling, forged downscaling, and forged rotation. The commonly used parameters (Table 1) are used to generate the various types of resampled images. We use the same number of images for each type in the resampled and forged databases. For example, for the 12 types of upscaling (four scaling factors, three kinds of kernel), we allot each one 3000/12 = 250 images. To preclude the influence of image resolution, unaltered, resampled, and forged resampled images are center-cropped to 512 × 512.
SVM with a Gaussian kernel is employed as the classifier [25]. To avoid overfitting, we conducted a grid search for the best SVM parameters by fivefold cross validation on the training set. For training and testing purposes, we created several training-testing pairs. Each pair contains 6000 images, comprised of the unaltered composite database and its altered version. Training is performed on a random 50% subset of the pair, and testing on the remaining 50%. Hereafter, the same SVM setup is adopted unless otherwise specified. The receiver operating characteristic (ROC) curves and the detection error

P_e = min (FPR + (1 − TPR))/2, (9)

are used to evaluate the SVM-based detector's performance. In formula (9), FPR and TPR denote the false positive rate and the true positive rate, respectively, and the minimum is taken over the decision thresholds.
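The detection error of formula (9) can be computed from detector scores by a standard threshold sweep; the function and variable names are illustrative:

```python
import numpy as np

def detection_error(scores_unaltered, scores_altered):
    """P_e = min over thresholds of (FPR + 1 - TPR) / 2, where a score
    at or above the threshold means "predicted altered"."""
    thresholds = np.concatenate([scores_unaltered, scores_altered, [np.inf]])
    pe = 1.0
    for t in thresholds:
        fpr = np.mean(scores_unaltered >= t)   # unaltered flagged as altered
        tpr = np.mean(scores_altered >= t)     # altered correctly flagged
        pe = min(pe, 0.5 * (fpr + 1.0 - tpr))
    return pe
```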
To the best of our knowledge, there are no related works which simultaneously detect resampled and forged resampled images from unaltered images. We compare the proposed FD-based detector with the state of the art in resampling forensics: the FE-based detector [11] and the FM-based detector [12]. As the FE-based detector [11] and the FM-based detector [12] capture some artifacts of interpolation, we suppose they may be effective in forged resampling detection. Additionally, the FE and FM detectors are SVM-based, so it is convenient to compare them with the proposed detector under the same experimental settings. We also note that the steganalysis-based detectors [15, 26] have achieved excellent performance in resampling detection. However, because of the huge dimension (34761-D) of the steganalysis feature, extracting it and training the model with SVM are very time-consuming, so we do not directly compare our method with the steganalysis-based detectors [15, 26]. In the following, we first evaluate the effectiveness of the proposed composite feature. Then, we show that the FD-based detector can not only detect resampled or forged resampled images from unaltered images, as traditional methods do [11, 12, 26], but also simultaneously detect both resampled and forged resampled images from unaltered images. Finally, we give an example of splicing detection using the proposed detector.

Evaluating Effectiveness of the Composite Feature.
The proposed feature FD is a composite of the subsets FD_r (r = 0, 1) as shown in (8). To verify that no subset is redundant, we compared FD with FD_0 and FD_1 by detecting upscaled images from unaltered images. As aforementioned, the composite of FD_0 and FD_1 is expected to be beneficial for detecting resampling in a JPEG compressed image. To test FD's robustness against lossy JPEG compression, both unaltered and upscaled images are postcompressed with JPEG quality 80. With SVM testing, the P_e of FD, FD_0, and FD_1 is 7.23%, 8.30%, and 10.57%, respectively. FD yields the lowest P_e, which indicates that the feature subset FD_0, extracted from the differences of the image, and FD_1, extracted from the differences of the high frequency residual, are complementary in resampling classification. In the following, we only report the results of FD.

Detecting Unaltered Images from Resampled Images.
In this subsection, the proposed detector is tested by distinguishing unaltered images from resampled images. To this end, we create 3 uncompressed training-testing pairs: upscaled versus unaltered, downscaled versus unaltered, and rotated versus unaltered, together with their corresponding JPEG 95 and JPEG 80 versions.
Table 2 shows the results for the three kinds of feature. Under the uncompressed scenario, the proposed FD-based detector achieves nearly perfect performance (P_e < 1%) for the detection of upscaling, downscaling, and rotation. The FD-based detector performs much better than the two other detectors, especially in the detection of downscaling and rotation. For example, in the detection of downscaling without JPEG compression, the P_e of the FD-based detector is, respectively, 14.10

Detecting Unaltered Images from Forged Resampled Images.

In this subsection, we test whether the proposed FD-based detector can resist the malicious attack [16]. The test distinguishes unaltered images from forged resampled images. The FE-based detector [11] and the FM-based detector [12] were not originally aimed at detecting the antiforensic scheme [16], but they capture some interpolation artifacts in the resampled image, such as energy density; hence, we also test whether they can detect the interpolation artifacts hidden in the forged resampled image. Table 3 shows the detailed results. Without JPEG compression, the FD-based detector achieves nearly perfect performance (P_e < 0.2%), which indicates that it can effectively resist the attacks of the antiforensic scheme [16]. Figure 7 shows the corresponding ROC curves under the uncompressed scenario. The ROC curve of the proposed detector is always above those of the two other detectors, and its advantage is prominent when the FPR is low. For example, in the downscaling detection, the TPR of the FD-based detector is 99.93% at FPR = 1%, which is about 59.53 and 87.63 percentage points higher than that of the FM-based detector and the FE-based detector, respectively. The FE-based and FM-based detectors obtain good performance in the detection of forged upscaling and forged rotation; however, their performance deteriorates in the forged downscaling detection. Under the JPEG compression scenario, the results in Table 3 indicate that the proposed FD-based detector also outperforms the two other detectors.

Detecting Unaltered Images from Resampled Images and Forged Resampled Images.

In applications, we may have no prior knowledge about the test image. For a more practical detector, we train the SVM detector with unaltered images and "ALL" images, which include resampled and forged resampled images. Such a detector requires that the forensic features be distinguishable across heterogeneous images. To visually demonstrate this ability of FD, we map FD into a 2D space by linear discriminant analysis (LDA); clear distinctions among the three types can be seen in Figure 8. We create 3 training-testing pairs in this subsection, as shown in Table 4. The "ALL" class is comprised of 1500 resampled images and 1500 forged resampled images: we randomly select 500 upscaled, 500 downscaled, and 500 rotated images to compose the resampling part of the "ALL" database, and the forged resampling part is formed in the same manner.

Advances in Multimedia
Table 4 gives the detailed results. Under the uncompressed scenario, the FD-based detector can effectively distinguish the altered image ("ALL" class) from the unaltered image. Figure 9 demonstrates that, at FPR = 1%, the FD-based detector achieves TPR = 98.3%, which indicates that the proposed detector is practical in real applications.

An Example of Splicing Detection.
In this subsection, we use the proposed detector to detect spliced tampering. Since the location of the pasted object is unknown, the questioned image is first divided into nonoverlapping blocks, and each block is then predicted by the proposed detector. The block size is set to 64 × 64; accordingly, the SVM detector is trained on 64 × 64 blocks. The training set is composed of 3000 unaltered images and 3000 "ALL" images, as used in Section 4.3. The tampered image is created with the forged resampling settings of [16] (attack 2, with parameter values 0.4 and 0.8, bilinear). Figures 10(b) and 10(c) show the tampering detection results for the uncompressed and JPEG 95 compressed tampering, respectively. A 64 × 64 block predicted as tampered is marked in red. Although the proposed detector is only trained on image blocks with a single scaling operation, it can locate most of the spliced region, including the region which underwent multiple scaling operations. As the edge of an inserted object is a composite of unaltered and altered blocks, some missed detections emerge at the edges of the two pasted birds. Note that this example is a simple tampering; in real life, forgers will adopt various means to escape detection by forensic tools. Generalized forensic tools, which can identify common image operations and their combinations, may be useful for detecting complicated tampered images.
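The block-wise localization loop can be sketched as below, with a generic `predict` callable standing in for the trained SVM on the FD feature:

```python
import numpy as np

def localize_tampering(img, predict, block=64):
    """Divide the image into nonoverlapping block x block tiles and mark
    each tile the detector flags as tampered (True in the returned mask)."""
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            tile = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = bool(predict(tile))   # e.g., SVM decision on FD(tile)
    return mask
```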

Conclusion
In this paper, we have proposed a novel integrated detector for image resampling and forged resampling, which simultaneously addresses effectiveness and security concerns. We design multidirectional differences from which to extract the feature; to capture the traces of resampling and forged resampling, the feature is built from the coefficients of an autoregressive model and from histograms. Experiments on a large composite image database show that the proposed detector is effective and secure and yields great improvements in the detection of downscaling and of resampling under JPEG compression. The tampering detection results illustrate that the proposed detector is promising in practical applications. We have found that lossy JPEG compression affects the performance of the proposed detector: the performance degrades with an increasing JPEG compression ratio. Improving the detector's robustness against heavy JPEG compression is future work.

Figure 1 :
Figure 1: Example of the upscaling (s = 3/2, bilinear) process for the x-th row of image B (the first line). The corresponding interpolation weights are shown in brackets.

Figure 8 :
Figure 8: The 2D feature of FD (after LDA) estimated from 1500 uncompressed images of the BOSSRAW database.

Figure 10 :
Figure 10: An example showing (a) an unaltered image and the tampering detection results for (b) uncompressed and (c) JPEG compressed (QF = 95) tampering. A red box of size 64 × 64 indicates that the box is predicted as tampered by the proposed detector.

Table 1 :
Parameters used to create resampled image database.

Table 2 :
Detection error P_e (%) of each detector on detecting resampled images from unaltered images. Here "without" means no JPEG compression was applied to the test images. The best result is displayed in bold.

Table 3 :
Detection error P_e (%) of each detector on detecting forged resampled images from unaltered images. Here "without" means no JPEG compression was applied to the test images. The best result is displayed in bold.


Table 4 :
Detection error P_e (%) of each detector on detecting "ALL" images from unaltered images. Here "without" means no JPEG compression was applied to the test images. The best result is displayed in bold.