Toward Efficient Online Scheduling for Distributed Machine Learning Systems

Kavli Affiliate: Jia Liu

| First 5 Authors: Menglu Yu, Jia Liu, Chuan Wu, Bo Ji, Elizabeth S. Bentley

| Summary:

Recent years have witnessed a rapid growth of distributed machine learning
(ML) frameworks, which exploit the massive parallelism of computing clusters to
expedite ML training. However, the proliferation of distributed ML frameworks
also introduces many unique technical challenges in computing system design and
optimization. In a networked computing cluster that supports a large number of
training jobs, a key question is how to design efficient scheduling algorithms
to allocate workers and parameter servers across different machines to minimize
the overall training time. Toward this end, in this paper, we develop an online
scheduling algorithm that jointly optimizes resource allocation and locality
decisions. Our main contributions are threefold: i) we develop a new
analytical model that captures both resource allocation and locality; ii)
based on an equivalent reformulation and observations on worker-parameter-server
locality configurations, we transform the problem into a mixed packing
and covering integer program, which enables approximation algorithm design;
iii) we propose an approximation algorithm based on randomized rounding and
rigorously analyze its performance. Collectively, our results advance the
state of the art in distributed ML system optimization and algorithm design.
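The paper's rounding scheme and its guarantees are tied to its specific scheduling formulation, which the summary does not spell out. As a generic, hedged illustration of the technique named in contribution iii), the sketch below solves the LP relaxation of a small mixed packing and covering program with SciPy and then rounds each variable independently with probability equal to its fractional LP value. The toy instance, the retry loop, and all numbers are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical toy instance (illustration only, not from the paper):
# minimize  c @ x   s.t.  A @ x <= b (packing),  C @ x >= d (covering),  x in {0,1}
c = np.array([3.0, 2.0, 4.0, 1.0])
A = np.array([[2.0, 1.0, 3.0, 1.0]])   # one packing (capacity) constraint
b = np.array([4.0])
C = np.array([[1.0, 1.0, 1.0, 1.0]])   # one covering (demand) constraint
d = np.array([2.5])

# Step 1: solve the LP relaxation (x in [0, 1] instead of {0, 1}).
res = linprog(
    c,
    A_ub=np.vstack([A, -C]),           # fold "C @ x >= d" into "<=" form
    b_ub=np.concatenate([b, -d]),
    bounds=[(0.0, 1.0)] * len(c),
    method="highs",
)
assert res.success, res.message
x_frac = res.x

# Step 2: randomized rounding -- set x_i = 1 with probability x_i^LP.
# A simple retry loop is used here; formal analyses instead bound the
# constraint violation probabilistically (e.g., via Chernoff bounds).
for attempt in range(100):
    x_int = (rng.random(len(c)) < x_frac).astype(int)
    if np.all(A @ x_int <= b) and np.all(C @ x_int >= d):
        break

print("LP relaxation: ", np.round(x_frac, 3))
print("Rounded solution:", x_int, "objective:", c @ x_int)
```

Because the LP optimum here is fractional, the rounded solution is not simply the relaxation read off as integers; the expected cost of the rounded vector equals the LP objective, which is the standard starting point for approximation guarantees of this kind.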

| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=10
