Journal of Xidian University ›› 2021, Vol. 48 ›› Issue (1): 168-175. doi: 10.19665/j.issn1001-2400.2021.01.019


Adaptive fast and targeted adversarial attack for speech recognition

ZHANG Shudong, GAO Haichang, CAO Xiwen, KANG Shuai

  1. School of Computer Science and Technology,Xidian University,Xi’an 710071,China
  • Received: 2020-08-14; Online: 2021-02-20; Published: 2021-02-03
  • Contact: Haichang GAO


Adversarial examples are malicious inputs designed to induce deep learning models to produce erroneous outputs; they are crafted by adding small perturbations to the input that remain imperceptible to humans. Most research on adversarial examples has been conducted in the image domain. Recently, studies on adversarial examples have expanded into the automatic speech recognition (ASR) domain. The state-of-the-art attack on ASR systems is the C&W (Carlini & Wagner) attack, which aims to find the minimum perturbation that causes the model to produce an incorrect transcription. However, this method is inefficient, since it must optimize two loss terms simultaneously and usually requires thousands of iterations. In this paper, we propose an efficient approach based on strategies that maximize the likelihood of adversarial examples and target categories. Extensive experiments show that our attack achieves better results with fewer iterations.
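The core idea described above, maximizing the likelihood of a target category rather than jointly optimizing a distortion term and a misclassification term, can be illustrated with a toy sketch. The linear "classifier", the step size, and the perturbation bound below are all illustrative assumptions standing in for a real ASR model, not the paper's actual method:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_attack(x, W, target, steps=100, lr=0.5, eps=1.5):
    """Craft a perturbation delta so that the toy linear model W
    classifies x + delta as `target`.

    Maximizing the likelihood of the target class is equivalent to
    minimizing the cross-entropy loss toward it, so we take gradient
    steps of that loss with respect to the perturbation. The clip to
    [-eps, eps] is a stand-in for keeping the perturbation small.
    """
    delta = np.zeros_like(x)
    onehot = np.eye(W.shape[0])[target]
    for _ in range(steps):
        p = softmax(W @ (x + delta))
        # Gradient of cross-entropy w.r.t. the input: W^T (p - onehot)
        grad = W.T @ (p - onehot)
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # bound the perturbation
    return delta
```

For example, with `W = np.eye(2)` and `x = np.array([2.0, 0.0])` (originally class 0), `targeted_attack(x, W, target=1)` returns a bounded perturbation under which the model predicts class 1. In a real ASR attack the cross-entropy would be replaced by a sequence loss (e.g. CTC) toward the target transcription, and gradients would come from backpropagation through the network.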

Key words: deep neural networks (DNN), adversarial examples, speech recognition, machine learning, adversarial attacks

CLC Number: TN912.34