A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.





Feature engineering and probabilistic tracking on honey bee trajectories

Published in Bachelor Thesis, Freie Universität Berlin, 2017

Feature engineering and model tuning to perform visual tracking of marked honey bees on a hive.

Recommended citation: Boenisch, Franziska. (2017). "Feature engineering and probabilistic tracking on honey bee trajectories." Bachelor Thesis. Freie Universität Berlin.

Tracking all members of a honey bee colony over their lifetime using learned models of correspondence

Published in Frontiers in Robotics and AI, 2018

Probabilistic object tracking framework to perform large-scale tracking of several thousand honey bees.

Recommended citation: Boenisch, Franziska, et al. (2018). "Tracking all members of a honey bee colony over their lifetime using learned models of correspondence." Frontiers in Robotics and AI. 5(35).

Differential Privacy: General Survey and Analysis of Practicability in the Context of Machine Learning

Published in Master Thesis, Freie Universität Berlin, 2019

Introduction and literature review on Differential Privacy. Implementation and performance evaluation of several differentially private linear regression models.

Recommended citation: Boenisch, Franziska. (2019). "Differential Privacy: General Survey and Analysis of Practicability in the Context of Machine Learning." Master Thesis. Freie Universität Berlin.


50 Shades of Privacy


A science slam is a different way of presenting your research, with only three rules: 1) You need to present something you are personally researching, 2) The talk must not exceed 10 minutes, 3) The talk should be entertaining and understandable. I presented my research on Differential Privacy. Find my talk here.

The Long and Winding Road of Secure and Private Machine Learning


Abstract: Nowadays, machine learning (ML) is used everywhere, including in sectors that deal with extremely sensitive data, like health or finance. And while most companies no longer deploy a single line of code without testing it in some way, ML models are often let out into the wild without being checked or secured. In my talk, I will guide you through the long road of possible threats and attacks that your ML models might experience out there, and give an overview of which countermeasures might be worth considering.

Privacy-preserving Machine Learning with Differential Privacy


Abstract: With the growing amount of data being collected about individuals, ever more complex machine learning models can be trained on those individuals' characteristics and behaviors. Methods for extracting private information from the trained models are becoming more and more sophisticated, threatening individual privacy. In this talk, I will introduce some powerful methods for training neural networks with privacy guarantees. I will also show how to apply those methods effectively in order to achieve a good trade-off between utility and privacy.
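To give a flavor of the kind of guarantee the talk discusses, here is a minimal sketch (not taken from the talk itself) of the Laplace mechanism, the classic building block behind many differentially private training procedures; the function name `laplace_mechanism` and its parameters are illustrative choices, not an established API:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Adds Laplace noise with scale sensitivity / epsilon: the smaller
    epsilon (stronger privacy), the larger the noise.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release the mean of 1000 values in [0, 1].
# The sensitivity of the mean is 1/n, since changing one record
# can shift the mean by at most 1/n.
data = np.random.default_rng(0).uniform(0, 1, size=1000)
private_mean = laplace_mechanism(
    data.mean(), sensitivity=1 / len(data), epsilon=0.5
)
```

The utility/privacy trade-off mentioned in the abstract is visible directly in the `scale` term: halving epsilon doubles the expected noise added to the released value.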

Bringing Privacy-Preserving Machine Learning Methods into Real-World Use


Abstract: Nowadays, there exist several privacy-preserving machine learning methods. Most of them are made available to potential users through tools or programming libraries. However, in order to thoroughly protect privacy, these tools need to be applied in the correct scenarios with the correct settings. This lecture covers the identification of concrete threat spaces concerning privacy in machine learning, the choice of adequate protection measures, and their practical application. Especially the latter point is discussed in class with respect to general usability and design patterns.

Privacy-Preservation in Machine Learning: Threats and Solutions


Abstract: In recent years, privacy threats against user data have become more diverse. Attacks are no longer solely directed against databases where sensitive data is stored but can also be applied to data analysis methods or their results. Thereby, they enable an adversary to learn potentially sensitive attributes of the data used for the analyses. This lecture aims at presenting common privacy threat spaces in data analysis methods with a special focus on machine learning. Next to a general view on privacy preservation and threat models, some very specific attacks against machine learning privacy are introduced (e.g. model inversion, membership inference). Additionally, a range of privacy-preservation methods for machine learning, such as differential privacy, homomorphic encryption, etc., are presented. Finally, their adequate application is discussed with respect to common threat spaces.


Security Protocols and Infrastructures

Lecture, Freie Universität Berlin, Department of Computer Science, 2019

Worked as a teaching assistant for the Master-level course Security Protocols and Infrastructures. The course covered security protocols (e.g. TLS, PACE, EAC), ASN.1, certificates and related standards such as X.509/RFC 5280, and public key infrastructures (PKI).

Machine Learning and IT Security

Seminar, Freie Universität Berlin, Department of Computer Science, 2020

Taught a Master-level seminar on Machine Learning and IT Security. The seminar covered securing digital infrastructure with ML assistance, as well as protecting ML models against security and privacy violations.

Hello (brand new data) world

Seminar, Universität Bayreuth, Department of Philosophy, 2020

Taught an invited Bachelor-level seminar on the ethical implications of ML on society. The seminar consisted of a technical/computer science part and a philosophical part. In the technical part, the theoretical background and implementation details of ML algorithms were presented. The philosophy part treated subjects such as the Turing test, the Chinese room argument, and discussions of dataism, surveillance, autonomous driving, and autonomous weapon systems.