Privacy-preserving machine learning (PPML) can help us train and deploy
models that utilize private information. In particular, on-device machine
learning allows us to avoid sharing information with a third-party server
entirely during inference. However, on-device models are typically less
accurate than their server-side counterparts because (1) they usually rely
only on a small set of on-device features and (2) they must be small enough
to run efficiently on end-user devices. Split Learning (SL) is
a promising approach that can overcome these limitations. In SL, a large
machine learning model is divided into two parts, with the bigger part residing
on the server-side and a smaller part executing on-device, aiming to
incorporate the private features. However, end-to-end training of such models
requires exchanging gradients at the cut layer, which might encode private
features or labels. In this paper, we provide insights into potential privacy
risks associated with SL and introduce a novel attack method, EXACT, to
reconstruct private information. Furthermore, we investigate the
effectiveness of various mitigation strategies. Our results indicate that the
exchanged gradients significantly improve the attacker's effectiveness across
all three datasets, reaching almost 100% reconstruction accuracy for some features.
However, a small amount of differential privacy (DP) is quite effective in
mitigating this risk without causing significant training degradation.
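For intuition, the cut-layer exchange described above can be sketched as follows. This is a toy linear example with hypothetical shapes and a single sample; real SL deployments use deep networks and framework autograd. The key point is the gradient `g_a` that the server returns to the device, which is exactly the quantity an attack like the one studied here could exploit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny setup: the client holds private features x on-device;
# in this sketch the server finishes the forward pass and computes the loss.
W_c = rng.normal(size=(4, 3)) * 0.1   # client-side model, up to the cut layer
W_s = rng.normal(size=(3, 1)) * 0.1   # server-side model

x = rng.normal(size=(1, 4))           # private on-device features
y = np.array([[1.0]])                 # target (its location depends on the SL variant)

lr = 0.1
losses = []
for _ in range(50):
    # --- client: forward to the cut layer; the activation leaves the device ---
    a = x @ W_c

    # --- server: finish the forward pass and compute a squared loss ---
    pred = a @ W_s
    losses.append(float(((pred - y) ** 2).mean()))

    # --- server: backprop; the cut-layer gradient g_a is sent back down ---
    g_pred = 2 * (pred - y)
    g_a = g_pred @ W_s.T              # gradient w.r.t. the cut-layer activation
    W_s -= lr * (a.T @ g_pred)

    # --- client: finish backprop locally using the received gradient ---
    W_c -= lr * (x.T @ g_a)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that `x` never leaves the device, yet `a` and `g_a` both cross the network each step, which is why the cut-layer gradients can leak information about private features or labels.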