Heterogeneous Hardware Testbed for Federated Learning


Federated Learning (FL) is a decentralized machine learning approach in which local models are trained on distributed clients, enabling privacy-preserving collaboration by sharing model updates instead of raw data. However, the added communication overhead and the longer training times caused by heterogeneous data distributions result in higher energy consumption and carbon emissions than traditional machine learning for comparable model performance. At the same time, efficient use of the available energy is a key requirement for battery-constrained devices.
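For context, the update-sharing step described above is commonly realized by sample-weighted averaging of client parameters, as in FedAvg. The following is a minimal sketch with hypothetical parameter vectors and client sizes, not a reference implementation of any specific FL framework:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameters by sample-weighted averaging (FedAvg-style).

    client_weights: list of flat parameter vectors, one per client.
    client_sizes: number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()          # weight clients by data volume
    stacked = np.stack(client_weights)    # shape: (n_clients, n_params)
    return coeffs @ stacked               # weighted average of parameters

# Toy example: two clients, three parameters each (values are illustrative)
w = [np.array([1.0, 2.0, 3.0]), np.array([3.0, 4.0, 5.0])]
global_w = fedavg(w, client_sizes=[100, 300])
# global_w is [2.5, 3.5, 4.5]: the larger client pulls the average toward it
```

Only the averaged parameters leave the aggregator; the clients' raw training data is never transmitted, which is the privacy argument made above.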

Because of this, many approaches to energy- and carbon-efficient FL scheduling and client selection have been published in recent years. However, most of this research oversimplifies the power-performance characteristics of clients by assuming that a client always requires the same amount of energy per processed sample throughout training. This overlooks real-world effects that arise from operating devices under different power modes or from running other workloads in parallel.
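The following sketch illustrates why the constant energy-per-sample assumption can break. The device profiles are hypothetical numbers chosen for illustration: if a low-power mode reduces throughput faster than it reduces draw, the energy per sample actually rises.

```python
def training_energy_joules(n_samples, power_w, throughput_sps):
    """Energy = power * time, with time = samples / throughput."""
    return power_w * (n_samples / throughput_sps)

# Hypothetical profiles for the same device under two power modes.
# Throughput drops 4x in power-save mode while draw only drops ~2.4x,
# so energy per sample differs between modes.
profiles = {
    "performance": {"power_w": 6.0, "throughput_sps": 120.0},
    "power_save":  {"power_w": 2.5, "throughput_sps": 30.0},
}

n = 1000
for mode, p in profiles.items():
    e = training_energy_joules(n, **p)
    print(f"{mode}: {e:.1f} J total, {1000 * e / n:.1f} mJ per sample")
```

With these illustrative numbers, performance mode needs 50 mJ per sample but power-save mode needs about 83 mJ per sample, so a scheduler using one fixed per-sample estimate would misjudge both modes.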


The primary objectives of this thesis would be to:

  1. Build a hardware testbed of Android and iOS devices for executing federated learning training runs in realistic resource-constrained scenarios.
  2. Analyze the impact of factors such as power modes and concurrent workloads, and discuss how better power-performance estimates can improve energy-efficient FL scheduling.
  3. Compare the effects of IID and non-IID data distributions on runtime, energy, memory, and other hardware resource consumption.
  4. Quantify the trade-offs between model accuracy, training speed, and energy/monetary cost across device configurations.
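For the IID vs. non-IID comparison, client data splits with controllable label skew are commonly generated with a Dirichlet partition. A minimal sketch, assuming synthetic labels and a tunable concentration parameter alpha (small alpha gives highly non-IID splits, large alpha approaches IID):

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_partition(labels, n_clients, alpha):
    """Split sample indices across clients with label skew controlled by alpha."""
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw per-client shares of this class from a Dirichlet distribution
        shares = rng.dirichlet([alpha] * n_clients)
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Synthetic dataset: 10 classes, 100 samples each
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, n_clients=5, alpha=0.1)
```

Running the same training workload on the testbed with alpha varied (e.g. 0.1 vs. 100) would then expose how data heterogeneity affects runtime and resource consumption per client.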

Required Skills:

Note: Hardware devices would be provided.

Contact Person: Agrawal, Pratik (pratikkumar.vijaykumar.agrawal ∂ tu-berlin.de)