Overview:
Continual learning (CL), also known as lifelong learning, aims to enable machine learning models to learn continuously from a stream of data without forgetting previously acquired knowledge. Federated learning (FL) is a distributed approach in which multiple clients collaboratively train a model while keeping their data decentralized and private. Combining these two paradigms presents unique challenges and opportunities, such as handling non-IID data (data that is not independent and identically distributed across clients), mitigating catastrophic forgetting, and ensuring privacy and security.
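As a concrete starting point, the server-side aggregation step at the heart of FedAvg (McMahan et al., first reference below) can be sketched in a few lines of Python. This is a minimal, illustrative sketch assuming NumPy; the function name fedavg and the toy client data are our own choices for illustration, not part of any specific library.

import numpy as np

def fedavg(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    client_weights: list of per-client parameter lists (np.ndarray),
                    where all clients share the same parameter shapes.
    client_sizes:   number of local training examples per client.
    """
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (size / total) * w
    return avg

# Toy example: three clients with unequal data sizes each contribute
# one weight matrix and one bias vector.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[100, 50, 10])
print([p.shape for p in global_model])  # [(4, 2), (2,)]

In the full algorithm, each client first runs several local training epochs on its private data and then sends only the updated weights to the server; the raw data never leaves the client.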
Research Questions:
Potential research questions for theses lie at the intersection of distributed systems and machine learning.
Prerequisites:
Work in this research area requires good knowledge of, or a strong interest in, machine learning, as well as solid knowledge of distributed systems. In addition, the information on how to conduct theses at our department must be read and taken into account.
Start: Immediately
Contact: Patrick Wilhelm (patrick.wilhelm ∂ tu-berlin.de)
References:
H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in AISTATS, 2017.
https://imerit.net/blog/a-complete-introduction-to-continual-learning/
https://www.amazon.science/blog/continual-learning-in-the-federated-learning-context
https://www.visual-intelligence.no/publications/federated-learning-under-covariate-shifts-with-generalization-guarantees
F. Lai, X. Zhu, H. V. Madhyastha, and M. Chowdhury, “Oort: Efficient federated learning via guided participant selection,” in USENIX OSDI, 2021.
Y. Jee Cho, J. Wang, and G. Joshi, “Towards understanding biased client selection in federated learning,” in AISTATS, 2022.
C. Zhu, Z. Xu, M. Chen, J. Konečný, A. Hard, and T. Goldstein, “Diurnal or nocturnal? Federated learning of multi-branch networks from periodically shifting distributions,” in ICLR, 2022.