
Job-aware Communication Scheduling for DML Training in Shared Cluster



Abstract:

Distributed machine learning (DML) systems equipped with multiple computing nodes have been widely adopted in industry to accelerate large model training. To maximize resource utilization, a critical problem is how to schedule the communication of DML jobs efficiently. However, previous approaches work well only when a job can use the network resources exclusively; training multiple jobs in a shared cluster without scheduling causes significant performance degradation due to network contention. In this paper, we propose JCS, a job-aware communication scheduler that overcomes these problems. JCS profiles job priorities with a novel metric and schedules the communication of jobs according to both computation and communication information. To demonstrate the effectiveness of our algorithm, we perform extensive simulations with DML job traces. The simulation results show that our algorithm reduces the average job completion time by 19%, 39%, and 46% compared with RRSP, SCF, and LCoF, respectively.
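
The abstract describes serializing the communication phases of concurrent jobs on shared network resources according to a per-job priority metric. The toy simulator below is a minimal sketch of that idea under stated assumptions: the Job fields, the schedule function, and the communication-to-computation priority used in the example are hypothetical illustrations, not the paper's actual JCS metric, simulator, or baselines (RRSP, SCF, LCoF).

    # Minimal sketch (Python) of priority-based communication scheduling on one
    # shared link. The priority metric passed in below is an illustrative
    # assumption; the paper's actual JCS metric is not reproduced here.
    from dataclasses import dataclass
    import heapq

    @dataclass
    class Job:
        name: str
        comp_time: float   # per-iteration computation time (seconds)
        comm_time: float   # per-iteration communication time on the shared link
        iters_left: int    # remaining training iterations

    def schedule(jobs, priority):
        """Serialize the communication phases of concurrent jobs on one shared link.

        Each job alternates local computation (done in parallel with other jobs)
        and communication over the shared link. The link serves one job at a
        time, always picking the ready job with the smallest priority value.
        Returns each job's completion time.
        """
        pending = [(j.comp_time, i, j) for i, j in enumerate(jobs)]  # still computing
        heapq.heapify(pending)
        ready = []          # finished computing, waiting for the link
        link_free = 0.0     # time at which the shared link becomes free
        completion = {}
        while pending or ready:
            # Jobs whose computation has finished by link_free join the ready queue.
            while pending and pending[0][0] <= link_free:
                _, i, j = heapq.heappop(pending)
                heapq.heappush(ready, (priority(j), i, j))
            if not ready:   # link is idle: jump ahead to the next computation finish
                link_free = pending[0][0]
                continue
            _, i, j = heapq.heappop(ready)       # serve the highest-priority ready job
            link_free += j.comm_time
            j.iters_left -= 1
            if j.iters_left == 0:
                completion[j.name] = link_free
            else:                                # start the next iteration's computation
                heapq.heappush(pending, (link_free + j.comp_time, i, j))
        return completion

    # Example: three hypothetical jobs sharing one link. Swapping the priority
    # function changes the policy; a job-aware scheduler chooses it using both
    # computation and communication information.
    jobs = [Job("A", comp_time=2.0, comm_time=1.0, iters_left=3),
            Job("B", comp_time=1.0, comm_time=3.0, iters_left=3),
            Job("C", comp_time=1.5, comm_time=2.0, iters_left=3)]
    print(schedule(jobs, priority=lambda j: j.comm_time / j.comp_time))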
Date of Conference: 14-16 December 2020
Date Added to IEEE Xplore: 26 April 2021
Conference Location: Yanuca Island, Cuvu, Fiji


I. Introduction

Machine learning (ML) technology has drawn enormous attention due to its great potential in various application areas. Training large ML models is compute-intensive and may involve a large amount of training data. Reducing the training time is therefore important for ML applications and directly affects a company's profit [1], [2]. To this end, distributed machine learning (DML) was proposed. Typically, DML partitions the training data and uses a set of workers to perform the training process in parallel. Each worker instance exclusively occupies a GPU, and workers are placed on different servers. The parameters trained by each worker are aggregated and synchronized periodically. In this way, DML accelerates training by utilizing compute resources efficiently. With the development of data centers and cloud computing, DML is now widely used in industry.
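
As a concrete illustration of the data-parallel training loop described above, the short sketch below partitions a toy least-squares problem across workers, lets each worker compute a gradient on its own shard, and aggregates the gradients to keep the shared parameters synchronized. It uses plain NumPy in a single process purely for illustration; in a real DML system each worker would occupy its own GPU in a separate server and the aggregation would happen over the network.

    # Minimal data-parallel training sketch (single-process NumPy stand-in).
    import numpy as np

    def worker_gradient(w, X_shard, y_shard):
        # Least-squares gradient computed locally on one worker's data shard.
        return 2.0 * X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=1000)

    num_workers, lr = 4, 0.1
    w = np.zeros(10)
    # Partition the training data across the workers.
    shards = list(zip(np.array_split(X, num_workers), np.array_split(y, num_workers)))

    for step in range(100):
        # Each worker computes its local gradient (in parallel in a real system).
        grads = [worker_gradient(w, Xs, ys) for Xs, ys in shards]
        # Gradients are aggregated and the shared parameters are synchronized.
        w -= lr * np.mean(grads, axis=0)

The aggregation line is the communication phase of each iteration; in a shared cluster, that is the traffic whose scheduling the paper targets.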

