Home

I am Franziska Boenisch, tenure-track faculty at the CISPA Helmholtz Center for Information Security. At CISPA, I co-lead the SprintML lab for Secure, Private, Robust, INterpretable, and Trustworthy Machine Learning. Before that, I was a Postdoctoral Fellow at the Vector Institute for Artificial Intelligence, supervised by Prof. Dr. Nicolas Papernot. Prior to joining Vector, I was a PhD candidate at Freie Universität Berlin and a research associate at the Fraunhofer Institute for Applied and Integrated Security (AISEC).

I am Hiring!

Currently, I am looking for PhD students, Postdocs, and Research Interns. If you are excited about working on trustworthy ML, please drop me an email with your CV, your current transcript, and a short statement of why you want to join my group.

Research

My research lies at the intersection of Trustworthy Machine Learning (ML) and Privacy, viewed from the perspective of individual users and data owners.

Research has shown that trained ML models do not necessarily protect the privacy of their underlying training data: some attacks reconstruct (aspects of) the training data from the model parameters (e.g., model inversion attacks), while others reveal whether an individual data point was part of the training dataset (membership inference attacks). Both can harm the privacy of the individuals whose data is represented in the training dataset, which makes protecting privacy in ML models a crucial task.

My current research centers on Differential Privacy, a mathematical framework that provides formal privacy guarantees. I also study the practical evaluation of privacy loss and the identification of potential sources of privacy leakage in privacy-preserving technologies. Identifying such pain points allows us to better understand why practical privacy lags behind the strong theoretical guarantees, and it helps in adapting and extending theoretical frameworks, their implementations, and their integration into real-world systems for better privacy in practice.
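For readers less familiar with the framework, here is the standard (ε, δ)-differential privacy guarantee (the textbook definition, not a result specific to my own work): a randomized training mechanism M is (ε, δ)-differentially private if, for all datasets D and D′ that differ in a single individual's data and for all sets of possible outputs S,

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.

Intuitively, the distribution over trained models changes only slightly when any one person's data is added or removed, which limits what an attacker can infer about that person from the model.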

Furthermore, I am investigating the impact that ML privacy has on other aspects of trustworthy ML, such as robustness, fairness, and bias. So far, research suggests that training with privacy guarantees has a negative impact on these other desirable properties of ML models. Therefore, I consider it highly important to study the different aspects together in order to understand the reasons behind such negative interactions. By developing methods that jointly optimize for several aspects at once, I believe we will be able to deploy more trustworthy and private ML systems.

News