Towards Query Efficient and Derivative Free Black Box Adversarial Machine Learning Attack

Full Text (PDF, 668KB), PP.16-24


Author(s)

Amir F. Mukeri 1,*, Dwarkoba P. Gaikwad 1

1. AISSMS College of Engineering, Pune, 411001, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2022.02.02

Received: 20 Nov. 2021 / Revised: 15 Dec. 2021 / Accepted: 8 Jan. 2022 / Published: 8 Apr. 2022

Index Terms

Adversarial Machine Learning, Robust Deep Learning, Nature Inspired Optimization, Computer Vision, Security & Privacy.

Abstract

While deep learning has shown phenomenal success in many critical applications, such as autonomous driving and medical diagnosis, it is vulnerable to black box adversarial machine learning attacks. The objective of these attacks is to mislead a classifier into making mistakes. Hard label attacks are those in which an adversary has access only to the top-1 prediction label and has no knowledge of model parameters or loss gradients. Furthermore, for security reasons, the number of model queries an attacker can perform for evaluation is restricted. In this paper, we propose a novel nature-inspired optimization algorithm for generating adversarial examples. The proposed algorithm is a derivative-free, meta-heuristic algorithm. It searches for optimal adversarial examples in high-dimensional image space using simple arithmetic operations inspired by the Brownian motion of molecules in fluids and gases. Experiments with the CIFAR-10 image dataset yielded encouraging results with a query budget of less than 1000 and minimal distortion to the original image. Its performance was comparable to, and in some cases exceeded, that of previous state-of-the-art attacks.
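The abstract describes a derivative-free, hard-label attack driven by Brownian-motion-like random steps under a strict query budget. The following is a minimal, hypothetical sketch of that general idea, not the authors' actual algorithm: Gaussian increments perturb the image, each candidate costs one hard-label query, and a candidate is retained only if it flips the top-1 label while reducing L2 distortion. The `toy_classifier` is a stand-in for a real black-box model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x):
    # Stand-in for a hard-label black-box model: returns only the top-1 label.
    # Here: label 1 if the mean pixel intensity exceeds 0.5, else 0.
    return int(x.mean() > 0.5)

def brownian_hard_label_attack(x, label, query_budget=1000, step=0.02):
    """Illustrative derivative-free hard-label attack (assumed sketch, not the
    paper's exact method): random Gaussian steps, analogous to Brownian motion,
    are kept only when they flip the predicted label with lower distortion."""
    best = None          # best adversarial example found so far
    best_dist = np.inf   # its L2 distortion from the original image
    x_adv = x.copy()
    for _ in range(query_budget):
        # One Brownian-motion-like increment, clipped to valid pixel range.
        candidate = np.clip(x_adv + step * rng.standard_normal(x.shape), 0.0, 1.0)
        if toy_classifier(candidate) != label:          # one model query
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate.copy(), dist
                x_adv = candidate                       # continue from success
    return best, best_dist
```

Because only the top-1 label is observed, no gradient information is used at any point; the query budget is enforced simply by bounding the number of classifier calls.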

Cite This Paper

Amir F. Mukeri, Dwarkoba P. Gaikwad, "Towards Query Efficient and Derivative Free Black Box Adversarial Machine Learning Attack", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.14, No.2, pp. 16-24, 2022. DOI: 10.5815/ijigsp.2022.02.02
