June 23, 2021

SpywareNews.com


MixDefense: A Defense-in-Depth Framework for Adversarial Example Detection Based on Statistical and Semantic Analysis. (arXiv:2104.10076v1 [cs.CR] CROSS LISTED)

Machine learning with deep neural networks (DNNs) has become one of the foundational
techniques in many safety-critical systems, such as autonomous vehicles and medical
diagnosis systems. DNN-based systems, however, are known to be vulnerable to
adversarial examples (AEs), maliciously perturbed variants of legitimate inputs.
Although a vast body of research in the literature has been devoted to defending
against AE attacks, the performance of existing defense techniques remains far from
satisfactory, especially against adaptive attacks, in which attackers are knowledgeable
about the defense mechanisms and craft AEs accordingly. In this work, we propose a
multilayer defense-in-depth
framework for AE detection, namely MixDefense. For the first layer, we focus on
those AEs with large perturbations. We propose to leverage the 'noise' features
extracted from the inputs to discover the statistical difference between natural
images and tampered ones for AE detection. For AEs with small perturbations, the
inference results of such inputs deviate significantly from their semantic
information. Consequently, we propose a novel learning-based solution that models
this contradiction for AE detection. Both layers are resilient to adaptive attacks
because no gradient propagation path exists for AE generation. Experimental results
with various AE attack methods on image classification datasets show that the
proposed MixDefense solution outperforms existing AE detection techniques by a
considerable margin.
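To make the first layer's statistical idea concrete, here is a minimal sketch of one plausible realization; the residual extractor (a median filter), the chosen statistics, and the z-score threshold are illustrative assumptions on my part, not the features or detector used in MixDefense. It estimates a 'noise' residual by subtracting a denoised copy of the image and flags inputs whose residual statistics fall far outside the range observed on natural images.

import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image, size=3):
    # High-frequency 'noise' component: the image minus a median-filtered copy.
    # Assumes a single-channel (grayscale) array; color inputs would be filtered per channel.
    image = image.astype(np.float64)
    return image - median_filter(image, size=size)

def noise_statistics(image):
    # Summarize the residual with a few scalar statistics: mean, std, skewness, kurtosis.
    r = noise_residual(image).ravel()
    mean, std = r.mean(), r.std() + 1e-12
    z = (r - mean) / std
    return np.array([mean, std, (z ** 3).mean(), (z ** 4).mean()])

def fit_reference(natural_images):
    # Estimate the per-statistic mean and std on a held-out set of clean, natural images.
    stats = np.stack([noise_statistics(img) for img in natural_images])
    return stats.mean(axis=0), stats.std(axis=0) + 1e-12

def is_suspicious(image, ref_mean, ref_std, threshold=3.0):
    # Flag the input if any residual statistic deviates from the natural-image
    # reference by more than `threshold` standard deviations.
    z_scores = np.abs((noise_statistics(image) - ref_mean) / ref_std)
    return bool(z_scores.max() > threshold)

In use, fit_reference would be run once on clean data and is_suspicious applied to each incoming input; a learned classifier over richer noise features would replace the simple z-score rule in a full implementation.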
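The second layer can be pictured in a similarly simplified way. The sketch below is again an assumption rather than the paper's learning-based model: it flags an input when the classifier's predicted label contradicts the label suggested by the input's nearest neighbours in a feature-embedding space, which stands in here for the input's 'semantic information'.

import numpy as np

def semantic_contradiction(pred_label, embedding, ref_embeddings, ref_labels, k=5):
    # ref_embeddings: (N, D) array of reference feature vectors;
    # ref_labels: (N,) array of non-negative integer class labels.
    dists = np.linalg.norm(ref_embeddings - embedding, axis=1)
    nearest_labels = ref_labels[np.argsort(dists)[:k]]
    # Majority vote over the neighbours approximates the input's 'semantic' label.
    semantic_label = np.bincount(nearest_labels).argmax()
    # A disagreement between the prediction and the semantic label marks a candidate AE.
    return int(semantic_label) != int(pred_label)

Because neither this neighbour lookup nor the residual statistics above are differentiable end-to-end with respect to the input, an adaptive attacker cannot simply backpropagate through the detector, which is the intuition behind the resilience claim in the abstract.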