Ensuring Fairness and Addressing Sources of Bias in Federated Learning

Description:

Federated Learning (FL) is a decentralized machine learning approach in which local models are trained on distributed clients, enabling privacy-preserving collaboration: clients share model updates instead of raw data.
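As a minimal sketch of this idea, assuming a FedAvg-style weighted average on the server side (the function name and signature are illustrative, not taken from the source):

    import numpy as np

    def fedavg_aggregate(client_updates, num_samples):
        """Combine local model updates into a new global model (FedAvg-style).

        client_updates: one parameter vector (np.ndarray) per client.
        num_samples:    samples each client trained on, used as weights.
        Only these updates travel to the server; raw data stays on-device.
        """
        total = float(sum(num_samples))
        return sum((n / total) * u for u, n in zip(client_updates, num_samples))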

In FL, choosing which clients participate in a given training round is called client selection. It is performed by the FL server, which sends the current global model to the selected clients for local training in each round. Client selection is critical in FL with respect to time-to-accuracy, energy consumption, the final model's accuracy, and fairness. From the perspective of the global model owner, the selection decision in each round can have a profound impact on training time, convergence speed, training stability, and the final achieved accuracy.
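For concreteness, a minimal sketch of one round's selection step, assuming a uniform-random baseline (all names are hypothetical):

    import random

    def select_clients(all_clients, fraction=0.1, seed=None):
        """Uniformly sample a fraction of the clients for the next round.

        Uniform sampling is the unbiased baseline; criteria-based policies
        (compute, energy, speed) replace exactly this sampling step.
        """
        rng = random.Random(seed)
        k = max(1, int(fraction * len(all_clients)))
        return rng.sample(all_clients, k)

The server would then send the current global model to the returned clients and aggregate their updates at the end of the round.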

Client selection approaches that pick clients based on criteria such as compute capacity, energy budget, or connection speed introduce a bias into the training procedure against clients with limited resources. This bias can hurt model performance, because clients with limited resources rarely participate in training and their data ends up underrepresented in the global model. The problem is especially severe when data is not independent and identically distributed (non-IID) across clients.
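To make this bias concrete, and to hint at one possible mitigation, consider a selection score that discounts frequent participants. The scoring rule, the alpha knob, and all names below are illustrative assumptions rather than an established method:

    def fairness_aware_selection(clients, resource_score, times_selected, k, alpha=1.0):
        """Pick the k clients with the best resource/participation trade-off.

        With alpha = 0 this reduces to pure resource-based selection, which
        always favors well-resourced clients; alpha > 0 boosts clients that
        have rarely participated, countering the bias described above.
        """
        def score(c):
            return resource_score[c] / (1.0 + alpha * times_selected[c])
        return sorted(clients, key=score, reverse=True)[:k]

    clients = ["c1", "c2", "c3", "c4"]
    resource_score = {"c1": 4.0, "c2": 3.0, "c3": 1.0, "c4": 0.5}
    times_selected = {"c1": 9, "c2": 8, "c3": 1, "c4": 0}
    print(fairness_aware_selection(clients, resource_score, times_selected, k=2))
    # -> ['c3', 'c4']; with alpha = 0, the well-resourced c1 and c2 win instead.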

Scope of the Thesis:

Required Skills:

References:

Contact Person:

Agrawal, Pratik (pratikkumar.vijaykumar.agrawal ∂ tu-berlin.de)