Privacy-preserving machine learning is now at the forefront of both academic research and the tech industry. Google, for instance, has recently released several tools aimed at protecting the privacy of user data used to train machine learning models. These tools implement concepts such as federated learning, differential privacy, and secure multiparty computation, which ensure that only the user has access to their data and that it cannot be leaked from trained models. However, privacy comes at a cost in accuracy. One of the biggest challenges in making these tools practical is therefore optimizing the trade-off between privacy and accuracy.
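To make the privacy-accuracy trade-off concrete, the following minimal sketch shows one standard differential-privacy technique, the Laplace mechanism, applied to a mean query. The function name `dp_mean` and its parameters are illustrative choices, not taken from any particular tool; the noise scale is derived from the query's sensitivity, so a smaller privacy budget `epsilon` (stronger privacy) yields a noisier, less accurate answer.

```python
import math
import random

def dp_mean(values, epsilon, lower=0.0, upper=1.0):
    """Illustrative epsilon-differentially-private mean (Laplace mechanism).

    Each value is clipped to [lower, upper], so the sensitivity of the
    mean over n values is (upper - lower) / n. Adding Laplace noise with
    scale sensitivity / epsilon gives epsilon-differential privacy.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    n = len(clipped)
    true_mean = sum(clipped) / n
    scale = (upper - lower) / (n * epsilon)
    # Sample Laplace(0, scale) noise via inverse-transform sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise

random.seed(0)
data = [random.random() for _ in range(1000)]
print(dp_mean(data, epsilon=0.1))   # strong privacy: noisier estimate
print(dp_mean(data, epsilon=10.0))  # weak privacy: close to the true mean
```

Running the query repeatedly with a small `epsilon` shows large fluctuations around the true mean, while a large `epsilon` reproduces it almost exactly; this is precisely the trade-off the seminar's topics aim to optimize.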

In this seminar, students will investigate new concepts and implementations of privacy-preserving machine learning through comprehensive literature reviews. Findings are to be summarized and presented to the class. The seminar will have a kick-off meeting at the beginning of the semester, a mid-term meeting to evaluate progress, and a presentation event at the end of the semester.