Model X-ray: Detecting Backdoored Models via Decision Boundary

Kavli Affiliate: Ting Xu

| First 5 Authors: Yanghao Su, Jie Zhang, Ting Xu, Tianwei Zhang, Weiming Zhang

| Summary:

Deep neural networks (DNNs) have revolutionized various industries, leading
to the rise of Machine Learning as a Service (MLaaS). In this paradigm,
well-trained models are typically deployed through APIs. However, DNNs are
susceptible to backdoor attacks, which pose significant risks to their
applications, so users need a way to ascertain whether an API is compromised
before relying on it. Although many backdoor detection methods have been
developed, they often assume the defender has access to specific information,
such as details of the attack, soft predictions from the model API, or even
the model parameters, which limits their practicality in MLaaS scenarios. To
address this, we begin by presenting an intriguing observation: the decision
boundary of a backdoored model exhibits a greater degree of closeness than
that of a clean model. Moreover, if only a single label is infected, a larger
portion of the decision regions is dominated by the attacked label. Building
upon this observation, we propose Model X-ray, a novel backdoor detection
approach for MLaaS based on the analysis of decision boundaries. Model X-ray
can not only identify whether the target API is infected by a backdoor attack
but also determine the attacked target label under the all-to-one attack
strategy. Importantly, it accomplishes this solely from the hard predictions
of clean inputs, without any assumptions about the attack or prior knowledge
of the model's training details. Extensive experiments demonstrate that Model
X-ray is effective for MLaaS across diverse backdoor attacks, datasets, and
architectures.
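
To make the hard-label probing idea concrete, below is a minimal sketch,
assuming a black-box API that returns only hard labels for clean inputs and
their linear interpolations. The helper name query_hard_label, the pair and
step counts, and the dominance statistic are illustrative assumptions for
exposition, not the authors' implementation.

    # Hypothetical sketch: probe decision regions through a hard-label API.
    # query_hard_label, num_pairs, and steps are assumed names/parameters,
    # not the paper's actual algorithm.
    from collections import Counter
    import numpy as np

    def probe_decision_regions(query_hard_label, clean_inputs,
                               num_pairs=100, steps=32, seed=0):
        """Collect hard labels along linear paths between pairs of clean inputs."""
        rng = np.random.default_rng(seed)
        labels = []
        for _ in range(num_pairs):
            i, j = rng.choice(len(clean_inputs), size=2, replace=False)
            for t in np.linspace(0.0, 1.0, steps):
                x = (1.0 - t) * clean_inputs[i] + t * clean_inputs[j]
                labels.append(query_hard_label(x))  # hard prediction only
        return labels

    def dominance(labels):
        """Return the most frequent label and the fraction of probes it claims."""
        counts = Counter(labels)
        top_label, top_count = counts.most_common(1)[0]
        return top_label, top_count / len(labels)

Under this sketch, a dominance fraction far above the uniform share 1/C over
C classes would be consistent with the paper's observation that a single
infected label dominates the decision regions under an all-to-one attack; the
precise statistics Model X-ray computes are defined in the paper itself.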

| Search Query: ArXiv Query: search_query=au:"Ting Xu"&id_list=&start=0&max_results=3
