June 21, 2021

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware? (arXiv:2005.01452v2 [cs.LG] UPDATED)

While machine-learning algorithms have demonstrated a strong ability to
detect Android malware, they can be evaded by sparse evasion attacks crafted
by injecting a small set of fake components, e.g., permissions and system
calls, without compromising the malware's intrusive functionality. Previous
work has shown that, to improve robustness against such attacks, learning
algorithms should avoid overemphasizing a few discriminant features and should
instead provide decisions that rely upon a large subset of components.
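
To make the attack model concrete, here is a minimal sketch of sparse feature injection against a detector over binary presence features. The greedy strategy and the `clf.decision_function` interface are illustrative assumptions, not the paper's attack implementation:

```python
import numpy as np

def sparse_injection_attack(clf, x, max_injections=5):
    """Greedy sparse evasion sketch: flip at most `max_injections`
    absent features (0 -> 1), each time picking the injection that
    most lowers the malware score. Components are only added, never
    removed, so the malware's intrusive functionality is preserved.

    Hypothetical interface: `clf.decision_function(X)` returns scores
    that are positive for malware, as in scikit-learn classifiers.
    """
    x_adv = x.copy()
    for _ in range(max_injections):
        candidates = np.flatnonzero(x_adv == 0)  # injectable features
        if candidates.size == 0:
            break
        # Score every single-feature injection and keep the best one.
        scores = []
        for j in candidates:
            x_try = x_adv.copy()
            x_try[j] = 1
            scores.append(clf.decision_function(x_try[None, :])[0])
        x_adv[candidates[int(np.argmin(scores))]] = 1
        if clf.decision_function(x_adv[None, :])[0] < 0:
            break  # now classified as benign: evasion succeeded
    return x_adv
```

A detector whose score is spread over many components forces this loop to make many injections before the score turns negative, which is exactly the robustness property discussed above.
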
In this work, we investigate whether gradient-based attribution methods, used
to explain classifiers' decisions by identifying the most relevant features,
can help identify and select more robust algorithms.
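
For concreteness, below is a hedged sketch of the three attribution methods the abstract goes on to evaluate (simple Gradient, Gradient*Input, Integrated Gradients). The `model` interface is an assumption: any differentiable PyTorch module mapping a batch of float feature vectors to one malware score per row.

```python
import torch

def simple_gradient(model, x):
    """Simple Gradient: attribution_j = d f(x) / d x_j, the local
    sensitivity of the malware score f to feature j."""
    x = x.clone().detach().requires_grad_(True)
    model(x).sum().backward()  # sum() lets backward() run batched
    return x.grad.detach()

def gradient_times_input(model, x):
    """Gradient*Input: attribution_j = x_j * d f(x) / d x_j. On binary
    presence features this zeroes out components absent from the app."""
    return x * simple_gradient(model, x)

def integrated_gradients(model, x, steps=64):
    """Integrated Gradients with an all-zeros baseline (an 'empty'
    app): average the gradient along the straight-line path from the
    baseline to x, then rescale by (x - baseline)."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        total += simple_gradient(model, baseline + alpha * (x - baseline))
    return (x - baseline) * total / steps
```
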
To this end, we propose to exploit two different metrics that capture the
evenness of explanations, together with a new compact security measure called
the Adversarial Robustness Metric.
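
The paper's two evenness metrics are not reproduced here; as an illustrative stand-in, the normalized Shannon entropy of the absolute attributions is one standard way to quantify how evenly relevance is spread across features:

```python
import numpy as np

def evenness(attributions, eps=1e-12):
    """Illustrative evenness score in [0, 1]: normalized Shannon
    entropy of the absolute attribution mass. Values near 1 mean
    relevance is spread uniformly over all features; values near 0
    mean a few features dominate (the fragile regime for sparse
    evasion). A stand-in for, not a reproduction of, the paper's
    metrics.
    """
    a = np.abs(np.asarray(attributions, dtype=float))
    p = a / (a.sum() + eps)
    h = -(p * np.log(p + eps)).sum()
    return h / np.log(len(p))
```
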
Our experiments, conducted on two different datasets and five classification
algorithms for Android malware detection, show that a strong connection exists
between the evenness of explanations and adversarial robustness. In
particular, we found that explanations produced by popular techniques such as
Gradient*Input and Integrated Gradients are strongly correlated with security
when applied to both linear and nonlinear detectors, whereas more elementary
techniques such as the simple Gradient do not provide reliable information
about the robustness of such classifiers.
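
Putting the pieces together, the kind of analysis the abstract reports could be sketched as follows. The classifier list, attribution function, and robustness score are all caller-supplied placeholders, and `evenness` is the illustrative metric sketched above:

```python
import numpy as np
from scipy.stats import pearsonr

def evenness_robustness_correlation(classifiers, X, attribution_fn,
                                    robustness_fn):
    """For each classifier, pair the mean evenness of its explanations
    over the samples in X with an empirical robustness score (e.g. the
    detection rate under the sparse evasion attack sketched earlier,
    as supplied by `robustness_fn`). A strong positive Pearson
    correlation across classifiers is the kind of connection the
    abstract describes.
    """
    mean_evenness, robustness = [], []
    for clf in classifiers:
        scores = [evenness(attribution_fn(clf, x)) for x in X]
        mean_evenness.append(float(np.mean(scores)))
        robustness.append(robustness_fn(clf, X))
    return pearsonr(mean_evenness, robustness)  # (r, p-value)
```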