Resource Allocation for Distributed Processing

Choosing and configuring cluster resources for distributed data processing jobs is a challenging task. Even expert users often do not fully understand system and workload dynamics, not least because complete information on all the factors that influence the performance of processing jobs is usually unavailable. At the same time, it is important to configure cluster resources so that jobs execute without significant bottlenecks while meeting constraints on execution time and resource usage.
We, therefore, work on resource allocation methods and tools that take such requirements into account and utilize profiling, monitoring, and performance modeling to select adequate sets of resources.
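
As a rough illustration of this idea, the sketch below fits a simple runtime model to a handful of profiling runs and then picks the cheapest node count whose predicted runtime stays within a given target. The model form, the profiling numbers, and the pricing are purely hypothetical assumptions for demonstration; they are not taken from our systems or tools.

```python
# Illustrative sketch only: fit a simple runtime model to a few assumed
# profiling runs, then select the cheapest node count that meets a runtime
# target. All feature choices and numbers are hypothetical.
import numpy as np

def features(nodes, data_gb):
    # Hypothetical feature vector: serial part, parallel part, communication terms.
    return np.array([1.0, data_gb / nodes, np.log2(nodes), nodes])

# Assumed profiling runs on small data samples: (nodes, data_gb, runtime_s)
profile_runs = [
    (2, 10, 332.0),
    (4, 10, 194.0),
    (8, 10, 133.0),
    (4, 20, 344.0),
]

X = np.array([features(n, d) for n, d, _ in profile_runs])
y = np.array([t for _, _, t in profile_runs])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares model fit

def predict_runtime(nodes, data_gb):
    return float(features(nodes, data_gb) @ coeffs)

def cheapest_config(data_gb, max_runtime_s, price_per_node_hour=0.5):
    # Scan candidate node counts and return the cheapest configuration whose
    # predicted runtime satisfies the constraint.
    best = None
    for nodes in range(1, 65):
        runtime = predict_runtime(nodes, data_gb)
        if runtime <= max_runtime_s:
            cost = nodes * price_per_node_hour * runtime / 3600
            if best is None or cost < best[2]:
                best = (nodes, runtime, cost)
    return best

print(cheapest_config(data_gb=100, max_runtime_s=1800))
```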

Moreover, co-locating processing tasks with complementary resource demands in shared infrastructures can further increase resource utilization and job throughput. Our research therefore aims to answer the following questions for different data processing workloads: What kinds of resources should be allocated to a job and its tasks? Which job should be run next when resources become available? Where should a specific task be placed in a particular infrastructure? Should certain tasks be co-located on shared resources? To answer these questions, we use monitoring data, profiling runs, and different performance models, as well as scoring and optimization methods.
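
As one simple way to think about the co-location question, the following sketch scores candidate task pairs by how much their combined resource demands would exceed a node's capacity; pairs with complementary demands score low and are preferred. The task names, demand vectors, and scoring function are hypothetical and only illustrate the general idea, not our actual scheduling methods.

```python
# Illustrative sketch only: score candidate task pairs for co-location based
# on how complementary their resource demands are. Lower scores mean less
# expected contention. All tasks and demand values are assumptions.
from itertools import combinations

# Hypothetical normalized peak demands per task: (cpu, memory, disk I/O, network)
demands = {
    "etl_job":      (0.8, 0.3, 0.6, 0.2),
    "ml_training":  (0.9, 0.7, 0.2, 0.3),
    "log_indexing": (0.3, 0.4, 0.8, 0.5),
    "aggregation":  (0.4, 0.6, 0.3, 0.7),
}

def colocation_score(a, b):
    # Penalize every dimension in which the combined demand would exceed the
    # node's capacity (normalized to 1.0 per dimension).
    return sum(max(0.0, x + y - 1.0) for x, y in zip(demands[a], demands[b]))

# Rank all pairs from most to least suitable for co-location.
pairs = sorted(combinations(demands, 2), key=lambda p: colocation_score(*p))
for a, b in pairs:
    print(f"{a} + {b}: contention score {colocation_score(a, b):.2f}")
```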

Ongoing Research

We currently work on multiple topics in this area:

People

Publications