June 14, 2021

SpywareNews.com


Anonymizing Machine Learning Models. (arXiv:2007.13086v2 [cs.CR] UPDATED)

There is a known tension between the need to analyze personal data to drive
business and privacy concerns. Many data protection regulations, including the
EU General Data Protection Regulation (GDPR) and the California Consumer
Privacy Act (CCPA), set out strict restrictions and obligations on companies
that collect or process personal data. Moreover, machine learning models
themselves can be used to derive personal information, as demonstrated by
recent membership and attribute inference attacks. Anonymized data, however, is
exempt from data protection principles and obligations. Thus, models built on
anonymized data are also exempt from any privacy obligations, in addition to
providing better protection against such attacks on the training data.
However, learning on anonymized data typically comes at the cost of a
significant degradation in accuracy.
We address this challenge by guiding our anonymization using the knowledge
encoded within the model, and targeting it to minimize the impact on the
model’s accuracy, a process we call accuracy-guided anonymization. We
demonstrate that by focusing on the model’s accuracy rather than information
loss, our method outperforms state-of-the-art k-anonymity methods in terms of
the achieved utility, in particular with high values of k and large numbers of
quasi-identifiers. We also demonstrate that our approach achieves similar
results in its ability to prevent membership inference attacks as alternative
approaches based on differential privacy. This shows that model-guided
anonymization can, in some cases, be a legitimate substitute for such methods,
while averting some of their inherent drawbacks such as complexity, performance
overhead and being fitted to specific model types. Unlike methods that add
noise during training, our approach requires no modifications to the training
algorithm itself.
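The k-anonymity property the abstract refers to can be illustrated with a minimal check: a dataset is k-anonymous over a set of quasi-identifiers if every combination of quasi-identifier values (e.g. generalized age range and truncated zip code) appears in at least k records. The function name and sample records below are illustrative, not taken from the paper:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every quasi-identifier value combination
    occurs in at least k records."""
    counts = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(c >= k for c in counts.values())

# Toy records with generalized quasi-identifiers (illustrative only)
records = [
    {"age": "30-39", "zip": "130**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "130**", "diagnosis": "cold"},
    {"age": "30-39", "zip": "130**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "148**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "148**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "148**", "diagnosis": "flu"},
]

print(is_k_anonymous(records, ["age", "zip"], 3))  # True: each group has 3 records
print(is_k_anonymous(records, ["age", "zip"], 4))  # False: no group reaches 4
```

The paper's contribution is in how the generalization (e.g. the choice of age ranges) is selected: guided by the target model's accuracy rather than by generic information-loss metrics.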