February 26, 2021

SpywareNews.com


Blind Backdoors in Deep Learning Models. (arXiv:2005.03823v4 [cs.CR] UPDATED)

We investigate a new method for injecting backdoors into machine learning
models, based on compromising the loss-value computation in the model-training
code. We use it to demonstrate new classes of backdoors strictly more powerful
than those in the prior literature: single-pixel and physical backdoors in
ImageNet models, backdoors that switch the model to a covert, privacy-violating
task, and backdoors that do not require inference-time input modifications.

Our attack is blind: the attacker cannot modify the training data, nor
observe the execution of his code, nor access the resulting model. The attack
code creates poisoned training inputs “on the fly,” as the model is training,
and uses multi-objective optimization to achieve high accuracy on both the main
and backdoor tasks. We show how a blind attack can evade any known defense, and
we propose new defenses.
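
For intuition, here is a minimal sketch, assuming a standard PyTorch training loop, of what a compromised loss-value computation could look like: the training loop calls what it believes is an ordinary loss function, which silently stamps a single-pixel trigger onto a copy of the batch, relabels it to an attacker-chosen class, and blends that backdoor objective into the returned loss. All identifiers (add_single_pixel_trigger, TARGET_LABEL, BACKDOOR_WEIGHT) and the fixed blending weight are illustrative assumptions, not the paper's code; the actual attack balances the two objectives with multi-objective optimization rather than a constant weight.

    import torch
    import torch.nn.functional as F

    # Illustrative constants (assumptions, not from the paper's code).
    TRIGGER_VALUE = 1.0    # pixel intensity used as the trigger
    TARGET_LABEL = 0       # class the backdoor should force
    BACKDOOR_WEIGHT = 0.5  # fixed blend; the paper balances objectives adaptively

    def add_single_pixel_trigger(x):
        # Return a copy of an NCHW image batch with a one-pixel trigger stamped in.
        x_bd = x.clone()
        x_bd[:, :, 0, 0] = TRIGGER_VALUE  # top-left pixel on every channel
        return x_bd

    def compromised_loss(model, x, y):
        # Looks like an ordinary loss function to the training loop, but
        # silently synthesizes poisoned inputs "on the fly" and adds a
        # backdoor objective to the returned value.

        # Main task: ordinary cross-entropy on the clean batch.
        main_loss = F.cross_entropy(model(x), y)

        # Backdoor task: the same batch with a trigger, relabeled to the target class.
        x_bd = add_single_pixel_trigger(x)
        y_bd = torch.full_like(y, TARGET_LABEL)
        backdoor_loss = F.cross_entropy(model(x_bd), y_bd)

        # Blend the two objectives; a fixed weight stands in here for the
        # multi-objective optimization described in the abstract.
        return (1 - BACKDOOR_WEIGHT) * main_loss + BACKDOOR_WEIGHT * backdoor_loss

Under this reading of the abstract, the poisoned inputs exist only inside the loss computation during training, which is consistent with the claim that the attacker never touches the training data and never sees the resulting model.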