Privacy-Preservation in Machine Learning: Threats and Solutions

Date:

Abstract: In recent years, privacy threats against user data have become more diverse. Attacks are no longer directed solely at the databases where sensitive data is stored; they can also target data analysis methods or their results, enabling an adversary to learn potentially sensitive attributes of the data used in the analyses. This lecture presents common privacy threat spaces in data analysis methods, with a special focus on machine learning. Alongside a general view of privacy preservation and threat models, it introduces specific attacks against machine learning privacy (e.g., model inversion and membership inference). It also presents a range of privacy-preservation methods for machine learning, such as differential privacy and homomorphic encryption, and discusses their appropriate application with respect to common threat spaces.
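To give a flavor of one of the defenses mentioned above, here is a minimal sketch of the Laplace mechanism, a basic building block of differential privacy. This is an illustrative example of the general technique, not code from the lecture; the function name and parameters are my own.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a numeric query result with epsilon-differential privacy
    by adding Laplace noise with scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the scaled difference of two
    # independent Exp(1) samples.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_value + noise

# Example: privately release a count query. Its sensitivity is 1,
# since adding or removing one individual changes the count by at most 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0)
```

Smaller values of epsilon give stronger privacy but noisier answers; the noise scale grows as sensitivity / epsilon, which is why low-sensitivity queries (like counts) are the easiest to privatize.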

The video of my lecture can be found on YouTube.