Detecting Video Inter-Frame Forgeries Based on Convolutional Neural Network Model


Author(s)

Xuan Hau Nguyen 1,2,*, Yongjian Hu 1, Muhmmad Ahmad Amin 1, Khan Gohar Hayat 1, Van Thinh Le 2, Dinh Tu Truong 3

1. School of Electronics and Information Engineering, South China University of Technology, Guangzhou 510640, P.R.China

2. Faculty of Electronics and Informatics Engineering, Mien Trung Industrial and Trade College, Phu Yen 620000, Vietnam

3. Natural Language Processing and Knowledge Discovery Laboratory, Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, 700000, Vietnam

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2020.03.01

Received: 3 Dec. 2019 / Revised: 8 Jan. 2020 / Accepted: 5 Feb. 2020 / Published: 8 Jun. 2020

Index Terms

Video forensics, video forgery detection, video inter-frame forgery detection, convolutional neural network, video authenticity, passive forensics

Abstract

In today's era of rapid information sharing, videos are easily captured and can go viral in a short time, and video tampering has become easier owing to editing software, so the authenticity of videos has become increasingly important. Video inter-frame forgeries are the most common type of video forgery and are difficult to detect with the naked eye. To date, several algorithms based on handcrafted features have been proposed for detecting inter-frame forgeries, but their accuracy and processing speed remain challenging. In this paper, we put forward a method for detecting video inter-frame forgeries based on convolutional neural network (CNN) models, by retraining CNN models that were pre-trained on the ImageNet dataset. The proposed method relies on state-of-the-art CNN models, which are retrained to exploit spatial-temporal relationships in a video so that inter-frame forgeries can be detected robustly, and we also propose a confidence score, derived from the raw output scores of these networks, to further increase detection accuracy. In our experiments, the detection accuracy of the proposed method reaches 99.17%. This result shows that the proposed method is significantly more efficient and accurate than other recent methods.
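To make the retraining (transfer-learning) step and the idea of replacing raw network outputs with a smoothed confidence score concrete, the following is a minimal sketch in PyTorch. The backbone choice (ResNet-50), the frame-residual input representation, and the temporal averaging window are illustrative assumptions only; the abstract does not specify the paper's exact architecture or confidence-score formula.

```python
# Minimal sketch (PyTorch assumed): fine-tune an ImageNet-pretrained CNN as a
# binary frame-level classifier (authentic vs. forged transition), then smooth
# the raw per-frame scores into a confidence score. Backbone, input
# representation, and the averaging window are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# 1) Start from a CNN pre-trained on ImageNet and replace its classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: authentic / forged

# 2) Retrain (fine-tune) on frame-level inputs derived from the video,
#    e.g. adjacent-frame residuals that expose temporal inconsistencies.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(batch_frames, batch_labels):
    """One fine-tuning step on a batch of (N, 3, H, W) frame tensors."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(batch_frames), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# 3) At test time, turn raw softmax outputs into a smoothed per-position
#    confidence score by averaging over a small temporal window; this is one
#    simple way to use a confidence score instead of the raw output score.
@torch.no_grad()
def confidence_scores(frame_tensors, window=5):
    model.eval()
    probs = torch.softmax(model(frame_tensors), dim=1)[:, 1]   # P(forged)
    kernel = torch.ones(1, 1, window) / window                  # moving average
    smoothed = torch.nn.functional.conv1d(
        probs.view(1, 1, -1), kernel, padding=window // 2)
    return smoothed.view(-1)[: probs.numel()]
```

In this sketch, only the final classification layer is replaced and the whole network is fine-tuned end to end; freezing early layers is an equally common transfer-learning choice and would reduce training cost.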

Cite This Paper

Xuan Hau Nguyen, Yongjian Hu, Muhmmad Ahmad Amin, Khan Gohar Hayat, Van Thinh Le, Dinh Tu Truong, "Detecting Video Inter-Frame Forgeries Based on Convolutional Neural Network Model", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.12, No.3, pp. 1-12, 2020. DOI: 10.5815/ijigsp.2020.03.01
