
Sync-Switch: Hybrid Parameter Synchronization for Distributed Deep Learning


Abstract:

Stochastic Gradient Descent (SGD) has become the de facto way to train deep neural networks in distributed clusters. A critical factor in determining the training throughput and model accuracy is the choice of the parameter synchronization protocol. For example, while Bulk Synchronous Parallel (BSP) often achieves better converged accuracy, its training throughput can be negatively impacted by stragglers. In contrast, Asynchronous Parallel (ASP) can have higher throughput, but its convergence and accuracy can be degraded by stale gradients. To improve the performance of synchronization protocols, recent work often focuses on designing new protocols that rely heavily on hard-to-tune hyper-parameters. In this paper, we design a hybrid synchronization approach that exploits the benefits of both BSP and ASP, i.e., reducing training time while simultaneously maintaining the converged accuracy. Based on extensive empirical profiling, we devise a collection of adaptive policies that determine how and when to switch between synchronization protocols. Our policies include both offline ones that target recurring jobs and online ones for handling transient stragglers. We implement the proposed policies in a prototype system, called Sync-Switch, on top of TensorFlow, and evaluate the training performance with popular deep learning models and datasets. Our experiments show that Sync-Switch can achieve ASP-level training speedup while maintaining converged accuracy similar to BSP's. Moreover, Sync-Switch's elastic-based policy can adequately mitigate the impact of transient stragglers.
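To make the hybrid idea in the abstract concrete, the following toy Python sketch simulates switching between the two protocols partway through training on a simple quadratic objective. This is only an illustration under assumptions of our own: the BSP-first ordering, the 40% switch point, and the noise model are placeholders, not the profiling-driven policies of the paper, and the sketch is not the Sync-Switch implementation (which runs on TensorFlow in a real cluster).

    # Toy simulation of a hybrid BSP->ASP schedule (illustrative sketch only;
    # the switch point and ordering are assumptions, not the paper's policy).
    import numpy as np

    rng = np.random.default_rng(0)
    DIM, WORKERS, EPOCHS, LR = 10, 4, 50, 0.1
    SWITCH_FRACTION = 0.4          # assumed offline policy: BSP for the first 40% of epochs

    target = rng.normal(size=DIM)  # optimum of the toy loss ||w - target||^2

    def grad(w):
        """Noisy gradient of the toy quadratic loss, mimicking a minibatch gradient."""
        return 2 * (w - target) + rng.normal(scale=0.5, size=DIM)

    w = np.zeros(DIM)
    for epoch in range(EPOCHS):
        protocol = "BSP" if epoch < SWITCH_FRACTION * EPOCHS else "ASP"
        if protocol == "BSP":
            # Bulk Synchronous Parallel: wait for all workers, average their
            # gradients, apply one update (throughput bound by the slowest worker).
            g = np.mean([grad(w) for _ in range(WORKERS)], axis=0)
            w -= LR * g
        else:
            # Asynchronous Parallel: each worker applies its update independently;
            # staleness is mimicked by computing gradients against an old snapshot.
            snapshot = w.copy()
            for _ in range(WORKERS):
                w -= LR * grad(snapshot) / WORKERS
        if epoch % 10 == 0:
            print(f"epoch {epoch:3d} [{protocol}] loss = {np.sum((w - target) ** 2):.4f}")

An online variant of such a policy could additionally monitor per-step worker times and trigger the switch when a transient straggler is detected, which is the role the abstract assigns to Sync-Switch's elastic-based policy.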
Date of Conference: 07-10 July 2021
Date Added to IEEE Xplore: 04 October 2021
Conference Location: DC, USA


I. Introduction

We are witnessing the increasingly widespread adoption of deep learning in a plethora of application domains. The unprecedented success of deep learning is, in large part, powered by rapid model innovations, which in turn critically depend on algorithms and systems support for training. One of these innovations, distributed deep learning (training deep neural networks on a cluster of GPU servers), is increasingly leveraged to train complex models on larger datasets. In particular, SGD-based optimization has emerged as the de facto way to perform distributed training and provides the basis for parallelizing training jobs, allowing deep learning practitioners to evaluate different model variants quickly.
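For reference, the synchronous data-parallel form of this SGD-based training, which is the textbook baseline that BSP realizes, can be written as follows; this is standard background, not a formula taken from the paper:

    w_{t+1} = w_t - \eta \cdot \frac{1}{N} \sum_{k=1}^{N} \nabla \mathcal{L}\bigl(w_t;\, B_k^{(t)}\bigr)

Each of the N workers computes a gradient on its own minibatch B_k^{(t)}, and the averaged gradient is applied as a single update; ASP departs from this by letting workers apply updates without waiting for one another.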
