
RRU-Net: The Ringed Residual U-Net for Image Splicing Forgery Detection



Abstract:

Detecting a splicing forgery image and then locating the forged regions is a challenging task. Traditional feature extraction methods and convolutional neural network (CNN)-based detection methods have been proposed to accomplish this task by exploring the differences in image attributes between the un-tampered and tampered regions in an image. However, the performance of the existing detection methods is unsatisfactory. In this paper, we propose a ringed residual U-Net (RRU-Net) for image splicing forgery detection. The proposed RRU-Net is an end-to-end image essence attribute segmentation network that is independent of the human visual system; it can accomplish forgery detection without any pre-processing or post-processing. The core idea of the proposed RRU-Net is to strengthen the learning of the CNN, inspired by the recall and consolidation mechanisms of the human brain and implemented by the propagation and feedback of residuals in the CNN. Residual propagation recalls the input feature information to solve the gradient degradation problem in deeper networks; residual feedback consolidates the input feature information to make the differences in image attributes between the un-tampered and tampered regions more obvious. Experimental results show that the proposed detection method achieves promising results compared with state-of-the-art splicing forgery detection methods.
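The recall (residual propagation) and consolidation (residual feedback) ideas described above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the `conv_stub` linear maps are hypothetical stand-ins for the network's convolutions, and the exact gating form `x * (sigmoid(...) + 1)` is an assumption based on the description of how the feedback amplifies attribute differences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_stub(x, w):
    # Hypothetical stand-in for a convolutional layer:
    # a simple linear map over the feature dimension.
    return x @ w

def ringed_residual_block(x, w1, w2, w_fb):
    """Sketch of one ringed residual unit.

    Residual propagation (recall): the input is added back to the
    block output, easing gradient degradation in deeper networks.
    Residual feedback (consolidation): a gated response derived from
    the output re-weights the input, making attribute differences
    between tampered and un-tampered regions more pronounced.
    """
    # forward transform F(x): two "conv" layers with a ReLU in between
    h = np.maximum(conv_stub(x, w1), 0.0)
    y = conv_stub(h, w2)
    # residual propagation: recall the input by adding it to the output
    y = y + x
    # residual feedback: gate the original input with the output response;
    # the gate lies in (1, 2), so input responses are amplified, not damped
    gate = sigmoid(conv_stub(y, w_fb))
    x_enhanced = x * (gate + 1.0)
    return y, x_enhanced
```

Because the gate is `sigmoid(...) + 1`, the enhanced input always has at least the magnitude of the original input, which matches the paper's intent of consolidating rather than suppressing the input feature information.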
Date of Conference: 16-17 June 2019
Date Added to IEEE Xplore: 09 April 2020
Conference Location: Long Beach, CA, USA

1. Introduction

Recently, the widespread availability of image-editing software has made it extremely easy to edit or even change digital image content, which is becoming a serious problem. To help preserve public trust in photographs, our research in this paper focuses specifically on image splicing forgery detection. A splicing forgery copies parts of one image and pastes them into another image to produce a new composite image, as shown in Fig. 1(a). Because the tampered regions come from other images, differences in image attributes, such as lighting, shadows, sensor noise, and camera reflection, exist between the un-tampered and tampered regions; these differences can be used to identify an image suspected of being tampered with and to locate the tampered regions in the forgery image. Existing splicing forgery detection methods have tried to exploit these attribute differences through various feature extraction methods. According to the feature extraction method used, they can be broadly classified into two classes: traditional feature extraction-based detection methods and convolutional neural network (CNN)-based detection methods.

Fig. 1. Enhanced input features produced by the residual feedback in the proposed RRU-Net. (a) The splicing forgery image; (b) the ground-truth image; (c) the global response of the enhanced input of the first building block in the proposed RRU-Net.

