June 14, 2021


Differentially Private Deep Learning under the Fairness Lens. (arXiv:2106.02674v1 [cs.LG])

Differential Privacy (DP) is an important privacy-enhancing technology for
private machine learning systems. It allows one to measure and bound the risk
associated with an individual's participation in a computation. However, it was
recently observed that DP learning systems may exacerbate bias and unfairness
for different groups of individuals. This paper builds on these important
observations and sheds light on the causes of the disparate impacts arising in
the problem of differentially private empirical risk minimization. It focuses
on the accuracy disparity arising among groups of individuals in two
well-studied DP learning methods: output perturbation and differentially
private stochastic gradient descent. The paper analyzes which data and model
properties are responsible for the disproportionate impacts and why these
aspects affect different groups disproportionately, and it proposes guidelines
to mitigate these effects. The proposed approach is evaluated on several
datasets and settings.
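
For readers unfamiliar with the two mechanisms the abstract names, the sketch
below illustrates both in a simple logistic-regression setting. It is an
illustrative assumption, not the paper's implementation: the function names,
hyperparameters, and the per-coordinate Laplace variant of output perturbation
are all choices made here for brevity. DP-SGD clips each per-example gradient
and adds Gaussian noise to the summed update; output perturbation trains
without privacy and then noises the released parameters.

import numpy as np

def dp_sgd(X, y, epochs=5, lr=0.1, clip_norm=1.0, noise_multiplier=1.1,
           batch_size=32, seed=0):
    # Minimal DP-SGD sketch: clip each per-example gradient, sum,
    # add Gaussian noise calibrated to the clipping norm, then step.
    # Hyperparameter values are illustrative assumptions.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]
            clipped = []
            for i in batch:
                # Per-example logistic-loss gradient for labels in {0, 1}.
                z = X[i] @ w
                g = (1.0 / (1.0 + np.exp(-z)) - y[i]) * X[i]
                # Clip to L2 norm <= clip_norm to bound each
                # individual's influence (the sensitivity).
                g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                clipped.append(g)
            # Noise scale is tied to the clipping norm, so the guarantee
            # holds regardless of the true gradient magnitudes.
            noisy_sum = np.sum(clipped, axis=0) + rng.normal(
                scale=noise_multiplier * clip_norm, size=d)
            w -= lr * noisy_sum / len(batch)
    return w

def output_perturbation(w_nonprivate, sensitivity, epsilon, rng):
    # Simplified output perturbation: train non-privately, then add
    # noise to the released parameters. Per-coordinate Laplace noise is
    # used here for brevity; the classical mechanism scales noise to the
    # L2 sensitivity of the ERM solution.
    return w_nonprivate + rng.laplace(scale=sensitivity / epsilon,
                                      size=w_nonprivate.shape)

In both mechanisms the noise scale is fixed in advance by a sensitivity bound
(the clipping norm in DP-SGD, the solution sensitivity in output
perturbation), which is the ingredient whose interaction with group-level data
properties the paper studies.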