Sync-Switch: Hybrid Parameter Synchronization for Distributed Deep Learning


Abstract:

Stochastic Gradient Descent (SGD) has become the de facto way to train deep neural networks in distributed clusters. A critical factor in determining the training throughput and model accuracy is the choice of the parameter synchronization protocol. For example, while Bulk Synchronous Parallel (BSP) often achieves better converged accuracy, the corresponding training throughput can be negatively impacted by stragglers. In contrast, Asynchronous Parallel (ASP) can have higher throughput, but its convergence and accuracy can be impacted by stale gradients. To improve the performance of synchronization protocols, recent work often focuses on designing new protocols with a heavy reliance on hard-to-tune hyper-parameters. In this paper, we design a hybrid synchronization approach that exploits the benefits of both BSP and ASP, i.e., reducing training time while simultaneously maintaining the converged accuracy. Based on extensive empirical profiling, we devise a collection of adaptive policies that determine how and when to switch between synchronization protocols. Our policies include both offline ones that target recurring jobs and online ones for handling transient stragglers. We implement the proposed policies in a prototype system, called Sync-Switch, on top of TensorFlow, and evaluate the training performance with popular deep learning models and datasets. Our experiments show that Sync-Switch can achieve ASP-level training speedup while maintaining converged accuracy comparable to BSP. Moreover, Sync-Switch's elastic-based policy can adequately mitigate the impact of transient stragglers.
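To make the hybrid idea concrete, below is a minimal sketch of switching from BSP-style to ASP-style updates partway through training, run on a toy problem in NumPy. The worker count, switch epoch, toy quadratic loss, and simulated staleness are hypothetical placeholders chosen for illustration; Sync-Switch's actual policies derive how and when to switch from empirical profiling, and its implementation is built on TensorFlow rather than this simulation.

# Minimal sketch (assumptions, not the paper's implementation): run BSP for the
# first few epochs, then switch to barrier-free ASP-style updates.
import numpy as np

NUM_WORKERS = 4
TOTAL_EPOCHS = 10
SWITCH_EPOCH = 3            # hypothetical switch point; the paper derives this from profiling
LR = 0.1

rng = np.random.default_rng(0)
params = np.zeros(8)        # stand-in for model parameters

def worker_gradient(snapshot):
    """Stand-in for one worker's mini-batch gradient (toy quadratic loss at 1)."""
    return (snapshot - 1.0) + rng.normal(scale=0.01, size=snapshot.shape)

def bsp_epoch(params):
    """BSP: barrier at each step; average all workers' gradients, then update once."""
    grads = [worker_gradient(params) for _ in range(NUM_WORKERS)]
    return params - LR * np.mean(grads, axis=0)

def asp_epoch(params):
    """ASP: no barrier; gradients computed from a stale snapshot are applied
    one by one as they 'arrive', so later updates use stale information."""
    snapshot = params.copy()
    for _ in range(NUM_WORKERS):
        params = params - LR * worker_gradient(snapshot)
    return params

for epoch in range(TOTAL_EPOCHS):
    protocol = "BSP" if epoch < SWITCH_EPOCH else "ASP"
    params = bsp_epoch(params) if protocol == "BSP" else asp_epoch(params)
    print(f"epoch {epoch}: {protocol}, toy loss = {np.mean((params - 1.0) ** 2):.4f}")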
Date of Conference: 07-10 July 2021
Date Added to IEEE Xplore: 04 October 2021
Conference Location: Washington, DC, USA


I. Introduction

We are witnessing the increasingly widespread adoption of deep learning in a plethora of application domains. The unprecedented success of deep learning is, in large part, powered by rapid model innovations, which in turn critically depend on algorithms and systems support for training. One of these innovations, distributed deep learning (training deep neural networks on a cluster of GPU servers), is increasingly leveraged to train complex models on larger datasets. In particular, SGD-based optimization has emerged as the de facto way to perform distributed training and provides the basis for parallelizing training jobs, allowing deep learning practitioners to evaluate different model variants quickly.
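As a brief illustration of the data-parallel SGD idea referenced above, the sketch below splits one mini-batch across simulated workers, averages the per-worker gradients, and applies a single update; with equal shards the averaged gradient matches the single-machine mini-batch gradient. The linear model, loss, and data are toy stand-ins for illustration, not the paper's setup.

# Minimal sketch of data-parallel SGD with gradient averaging (toy linear model).
import numpy as np

rng = np.random.default_rng(42)
w = np.zeros(3)                        # model weights (toy stand-in)
X = rng.normal(size=(32, 3))           # one mini-batch of inputs
y = X @ np.array([1.0, -2.0, 0.5])     # targets from a known linear function

def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Split the mini-batch across 4 simulated workers and average their gradients.
shards = np.array_split(np.arange(len(y)), 4)
worker_grads = [grad(w, X[idx], y[idx]) for idx in shards]
avg_grad = np.mean(worker_grads, axis=0)

# With equal shard sizes, the average equals the single-machine mini-batch gradient.
assert np.allclose(avg_grad, grad(w, X, y))
w = w - 0.1 * avg_grad                 # one SGD step with the aggregated gradient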
