Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition

Kavli Affiliate: Zeeshan Ahmed

| First 5 Authors: Chao-Han Huck Yang, Zeeshan Ahmed, Yile Gu, Joseph Szurley, Roger Ren

| Summary:

In this work, we aim to enhance the robustness of end-to-end automatic speech
recognition (ASR) systems against adversarially noisy speech examples. We focus
on a rigorous and empirical "closed-model adversarial robustness" setting
(e.g., on-device or cloud applications), in which adversarial noise is
generated only by closed-model optimization (e.g., evolutionary and zeroth-order
estimation) without direct access to the gradient information of the targeted
ASR model. We propose an advanced Bayesian neural network (BNN) based
adversarial detector that models latent distributions against adaptive
adversarial perturbations using divergence measurement. We further simulate
deployment scenarios of RNN Transducer, Conformer, and wav2vec-2.0 based ASR
systems equipped with the proposed adversarial detection system. Leveraging the
proposed BNN-based detection system, we improve the detection rate by +2.77 to
+5.42% (relative +3.03 to +6.26%) and reduce the word error rate by 5.02 to
7.47% on LibriSpeech datasets compared with current model enhancement methods
against adversarial speech examples.

| Search Query: ArXiv Query: search_query=au:"Zeeshan Ahmed"&id_list=&start=0&max_results=10
