Continual Learning in Federated Learning

Overview:

Continual learning (CL), also known as lifelong learning, aims to enable machine learning models to learn continuously from a stream of data without forgetting previously acquired knowledge. Federated learning (FL) is a distributed approach in which multiple clients collaboratively train a model while keeping their data decentralized, local, and private. Combining these two paradigms presents unique challenges and opportunities, such as handling non-IID data (data that is not independent and identically distributed across clients), mitigating catastrophic forgetting, and ensuring privacy and security.

Research Questions:

Potential thesis research questions lie at the intersection of distributed systems and machine learning:

  1. Investigate current methodologies and challenges in continual learning.
  2. Explore the integration of continual learning mechanisms in federated learning settings.
  3. Develop and evaluate algorithms that address continual learning in federated environments.
  4. Analyze the performance and robustness of these algorithms under various distribution shifts and degrees of data heterogeneity.
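
To make the mechanics behind questions 2 and 3 concrete, the sketch below combines server-side federated averaging (reference 1) with an EWC-style quadratic penalty that pulls each client's weights toward parameters learned on an earlier task, one common way to mitigate catastrophic forgetting. This is an illustrative sketch under simplifying assumptions, not a method from the references; the function names, the learning rate, and the penalty strength `ewc_lambda` are hypothetical choices.

```python
import numpy as np

def local_update(weights, grads, lr=0.1, anchor=None, ewc_lambda=0.0):
    """One hypothetical local client step: plain SGD, plus an optional
    quadratic penalty pulling the weights toward 'anchor' (parameters
    from a previous task), an EWC-style forgetting mitigation."""
    new = weights - lr * grads
    if anchor is not None:
        # Gradient of (ewc_lambda / 2) * ||weights - anchor||^2
        new = new - lr * ewc_lambda * (weights - anchor)
    return new

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg: average the client models, weighted by
    each client's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Toy round: two clients with different data volumes (non-IID in the
# small), each taking one regularized local step before aggregation.
anchor = np.zeros(2)  # parameters learned on a hypothetical earlier task
w1 = local_update(np.array([1.0, 1.0]), np.array([0.5, 0.5]),
                  anchor=anchor, ewc_lambda=1.0)
w2 = local_update(np.array([3.0, 3.0]), np.array([0.5, 0.5]),
                  anchor=anchor, ewc_lambda=1.0)
global_weights = fedavg([w1, w2], client_sizes=[10, 30])
```

A thesis prototype would replace the weight vectors with full model state (e.g. per-layer tensors) and the fixed anchor with Fisher-weighted penalties, but the round structure — local regularized training followed by size-weighted aggregation — stays the same.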

Prerequisites:

Work in this research area requires good knowledge of (or strong interest in) machine learning and solid knowledge of distributed systems. In addition, the information on how to conduct theses at our department must be read and taken into account.

Start: Immediately

Contact: Patrick Wilhelm (patrick.wilhelm ∂ tu-berlin.de)

References:

  1. H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in AISTATS, 2017.

  2. https://imerit.net/blog/a-complete-introduction-to-continual-learning/

  3. https://www.amazon.science/blog/continual-learning-in-the-federated-learning-context

  4. https://www.visual-intelligence.no/publications/federated-learning-under-covariate-shifts-with-generalization-guarantees

  5. F. Lai, X. Zhu, H. V. Madhyastha, and M. Chowdhury, “Oort: Efficient federated learning via guided participant selection,” in USENIX OSDI, 2021.

  6. Y. Jee Cho, J. Wang, and G. Joshi, “Towards understanding biased client selection in federated learning,” in AISTATS, 2022.

  7. C. Zhu, Z. Xu, M. Chen, J. Konečný, A. Hard, and T. Goldstein, “Diurnal or nocturnal? federated learning of multi-branch networks from periodically shifting distributions,” in ICLR, 2022.