Talks and presentations

A Survey on Model Watermarking for Neural Networks

August 21, 2021

Conference Workshop Talk, IJCAI 2021 Workshop: Toward Intellectual Property Protection on Deep Learning as a Service, Virtual

I gave this presentation at the IJCAI 2021 conference workshop “Toward Intellectual Property Protection on Deep Learning as a Service”.

Privacy-Preservation in Machine Learning: Threats and Solutions

March 02, 2021

Meetup Talk, Advanced Machine Learning Study Group (Berlin & Remote), Berlin, Germany

Abstract: Neural networks are increasingly being applied in sensitive domains and trained on private data. For a long time, little thought was given to what this means for the privacy of the training data. Only in recent years has an awareness emerged that the process of converting training data into a model is not irreversible, as was previously assumed. Since then, several concrete attacks against privacy in neural networks have been developed. Of these, we will discuss two in detail: membership inference and model inversion attacks. First, we will focus on how they retrieve potentially sensitive information from trained models. Then, we will look into several factors that influence the success of both attacks. Finally, we will discuss Differential Privacy as a possible protection measure.

Privacy-Preservation in Machine Learning: Threats and Solutions

February 01, 2021

Guest Lecture, Course *Human-Centered Data Science*, Freie Universität Berlin, Berlin, Germany

Abstract: In recent years, privacy threats against user data have become more diverse. Attacks are no longer directed solely against the databases where sensitive data is stored but can also target data analysis methods or their results, enabling an adversary to learn potentially sensitive attributes of the data used in the analyses. This lecture presents common privacy threat spaces in data analysis methods with a special focus on machine learning. In addition to a general view on privacy preservation and threat models, some specific attacks against machine learning privacy are introduced (e.g. model inversion, membership inference). Furthermore, a range of privacy-preservation methods for machine learning, such as differential privacy and homomorphic encryption, is presented. Finally, their adequate application is discussed with respect to common threat spaces.

Bringing Privacy-Preserving Machine Learning Methods into Real-World Use

January 12, 2021

Guest Lecture, Course *Usable Privacy and Security*, Freie Universität Berlin, Berlin, Germany

Abstract: Nowadays, several privacy-preserving machine learning methods exist. Most of them are made available to potential users through tools or programming libraries. However, in order to protect privacy thoroughly, these tools need to be applied in the correct scenarios with the correct settings. This lecture covers the identification of concrete threat spaces concerning privacy in machine learning, the choice of adequate protection measures, and their practical application. The latter point, in particular, is discussed in class with respect to general usability and design patterns.

Privacy-preserving Machine Learning with Differential Privacy

November 02, 2020

Meetup Invited Talk, “ML * Privacy * 2” meetup of the Berlin Machine Learning Meetup Group, Berlin, Germany

Abstract: With the growing amount of data being collected about individuals, ever more complex machine learning models can be trained on those individuals’ characteristics and behaviors. At the same time, methods for extracting private information from trained models are becoming increasingly sophisticated, threatening individual privacy. In this talk, I will introduce some powerful methods for training neural networks with privacy guarantees. I will also show how to apply those methods effectively in order to achieve a good trade-off between utility and privacy.

The Long and Winding Road of Secure and Private Machine Learning

January 18, 2020

Meetup Invited Talk, 16th Machine Learning in Healthcare Berlin, Berlin, Germany

Abstract: Nowadays, machine learning (ML) is used everywhere, including in sectors that deal with extremely sensitive data, like health or finance. And while most companies no longer deploy a single line of code without testing it in some way, ML models are often let out into the wild without being checked or secured. In my talk, I will guide you through the long road of possible threats and attacks that your ML models might encounter out there, and give an overview of which countermeasures might be worth considering.

50 Shades of Privacy

December 07, 2019

Science Slam, TWIN UNI Slam Tomsk, Tomsk, Russia

A science slam is a different way to present your research, with only three rules: 1) you need to present something you are personally researching, 2) the talk must not exceed 10 minutes, and 3) the talk should be entertaining and understandable. I presented my research on Differential Privacy. Find my talk here.