Privacy auditing techniques for differentially private (DP) algorithms are
useful for estimating the privacy loss to compare against analytical bounds, or
for empirically measuring privacy in settings where known analytical bounds on
the DP loss are not tight. However, existing privacy auditing techniques usually make
strong assumptions on the adversary (e.g., knowledge of intermediate model
iterates or the training data distribution), are tailored to specific tasks and
model architectures, and require retraining the model many times (typically on
the order of thousands). These shortcomings make deploying such techniques at
scale difficult in practice, especially in federated settings where model
training can take days or weeks. In this work, we present a novel “one-shot”
approach that can systematically address these challenges, allowing efficient
auditing or estimation of the privacy loss of a model during the same single
training run used to fit the model parameters. Our privacy auditing method for
federated learning does not require a priori knowledge about the model
architecture or task. We show that our method provides provably correct
estimates for privacy loss under the Gaussian mechanism, and we demonstrate its
performance on a well-established FL benchmark dataset under several
adversarial models.
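
To make the connection to the Gaussian mechanism concrete, below is a minimal, illustrative sketch, not the method proposed in this work, of how an (ε, δ) guarantee for the Gaussian mechanism can be computed numerically from a noise multiplier, and how a hypothetical scalar audit statistic could be mapped onto the same bound. The function name gaussian_dp_epsilon and the example statistics mu0, mu1, s are assumptions introduced only for illustration.

```python
import numpy as np
from scipy.stats import norm


def gaussian_dp_epsilon(sigma, delta, sensitivity=1.0, tol=1e-6):
    """Numerically invert the analytic Gaussian-mechanism bound
    (Balle & Wang, 2018) to find the smallest epsilon achieving the
    target delta for a given noise multiplier sigma."""

    def delta_for_eps(eps):
        # delta(eps) = Phi(s/(2*sigma) - eps*sigma/s)
        #              - exp(eps) * Phi(-s/(2*sigma) - eps*sigma/s)
        a = sensitivity / (2.0 * sigma)
        b = eps * sigma / sensitivity
        return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b)

    # delta_for_eps is decreasing in eps, so bisect for the crossing point.
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta_for_eps(mid) > delta:
            lo = mid
        else:
            hi = mid
    return hi


# Hypothetical empirical estimate (assumed setup, not the paper's protocol):
# if an auditor observes a scalar test statistic whose distributions with and
# without an inserted canary are roughly Gaussian with means mu1, mu0 and a
# common standard deviation s, the effective noise multiplier is
# sigma_hat = s / |mu1 - mu0|, which can be plugged into the same bound.
mu0, mu1, s = 0.0, 1.0, 4.0
sigma_hat = s / abs(mu1 - mu0)
print(gaussian_dp_epsilon(sigma_hat, delta=1e-5))
```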