Analyzing Information Leakage of Updates to Natural Language Models
2020, pp. 363–375
Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, Marc Brockschmidt
Abstract
To continuously improve quality and reflect changes in data, machine learning applications have to regularly retrain and update their core models. We show that a differential analysis of language model snapshots before and after an update can reveal a surprising amount of detailed information about changes in the training data. We propose two new metrics, differential score and differential rank, for analyzing the leakage due to updates of natural language models. We perform leakage analysis using these metrics across models trained on several different datasets using different methods and configurations. We discuss the privacy implications of our findings, propose mitigation strategies, and evaluate their effect.