
Dynamic Stale Synchronous Parallel Distributed Training for Deep Learning



Abstract:

Deep learning is a popular machine learning technique and has been applied to many real-world problems, ranging from computer vision to natural language processing. However, training a deep neural network is very time-consuming, especially on big data. It has become difficult for a single machine to train a large model over large datasets. A popular solution is to distribute and parallelize the training process across multiple machines using the parameter server framework. In this paper, we present a distributed paradigm on the parameter server framework called Dynamic Stale Synchronous Parallel (DSSP), which improves the state-of-the-art Stale Synchronous Parallel (SSP) paradigm by dynamically determining the staleness threshold at run time. Conventionally, to run distributed training in SSP, the user needs to specify a particular staleness threshold as a hyper-parameter. However, a user does not usually know how to set the threshold and thus often finds a threshold value through trial and error, which is time-consuming. Based on workers' recent processing times, our approach DSSP adaptively adjusts the threshold per iteration at run time to reduce the time faster workers spend waiting for synchronization of the globally shared parameters (the weights of the model), and consequently increases the frequency of parameter updates (i.e., the iteration throughput), which speeds up convergence. We compare DSSP with other paradigms such as Bulk Synchronous Parallel (BSP), Asynchronous Parallel (ASP), and SSP by running deep neural network (DNN) models over GPU clusters in both homogeneous and heterogeneous environments. The results show that in a heterogeneous environment where the cluster consists of mixed models of GPUs, DSSP converges to a higher accuracy much earlier than SSP and BSP and performs similarly to ASP. In a homogeneous distributed cluster, DSSP has more stable and slightly better performance than SSP and ASP, and converges much faster than BSP.
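To make the contrast with SSP concrete, the following is a minimal Python sketch of an SSP-style synchronization gate whose staleness threshold is recomputed from workers' recent per-iteration times. The coordinator class, its method names, and the particular threshold heuristic are illustrative assumptions, not the exact algorithm described in the paper.

import time
from collections import defaultdict, deque

class DSSPCoordinator:
    """Sketch of a dynamic stale synchronous parallel (DSSP) gate.

    Illustrative only: the names and the threshold heuristic below are
    assumptions, not the paper's algorithm.
    """

    def __init__(self, num_workers, s_min=0, s_max=15, history=5):
        self.num_workers = num_workers
        self.s_min = s_min      # lower bound on the staleness threshold
        self.s_max = s_max      # upper bound on the staleness threshold
        self.iteration = defaultdict(int)   # worker id -> current iteration
        self.iter_times = defaultdict(lambda: deque(maxlen=history))  # recent per-iteration times
        self.last_seen = {}

    def report_iteration(self, worker_id):
        """A worker calls this after pushing its update for one iteration."""
        now = time.monotonic()
        if worker_id in self.last_seen:
            self.iter_times[worker_id].append(now - self.last_seen[worker_id])
        self.last_seen[worker_id] = now
        self.iteration[worker_id] += 1

    def current_threshold(self):
        """Derive a staleness threshold from recent worker speeds (one possible heuristic)."""
        avgs = [sum(t) / len(t) for t in self.iter_times.values() if t]
        if len(avgs) < self.num_workers:
            return self.s_min   # not enough history yet: behave like SSP with s_min
        # Let fast workers run ahead roughly in proportion to the observed speed gap.
        ratio = max(avgs) / max(min(avgs), 1e-9)
        return int(min(self.s_max, max(self.s_min, round(ratio) - 1)))

    def may_proceed(self, worker_id):
        """SSP-style condition: a worker may start its next iteration only if it is
        within the (dynamic) threshold of the slowest worker."""
        slowest = min(self.iteration.values()) if self.iteration else 0
        return self.iteration[worker_id] - slowest <= self.current_threshold()

Under such a gate, a fast worker is blocked only when it runs more than current_threshold() iterations ahead of the slowest worker; fixing s_min == s_max recovers ordinary SSP with a static threshold, while a very large s_max approaches ASP.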
Date of Conference: 07-10 July 2019
Date Added to IEEE Xplore: 31 October 2019
Conference Location: Dallas, TX, USA

I. Introduction

The parameter server framework [1], [2] has been developed to support distributed training of large-scale machine learning (ML) models (such as deep neural networks [3]–[5]) over very large datasets, such as Microsoft COCO [6], ImageNet 1K [3] and ImageNet 22K [7]. Training a deep model using a large-scale cluster with an efficient distributed paradigm reduces the training time from weeks on a single server to days or hours.
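As a point of reference for the paradigms compared in this paper, here is a minimal, self-contained Python sketch of the pull/compute/push cycle a worker performs against a parameter server. The ParameterServer class and the least-squares gradient are toy stand-ins chosen for illustration, not any framework's actual API.

import numpy as np

class ParameterServer:
    """Toy in-process stand-in for a parameter server shard (illustrative only)."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def pull(self):
        return self.w.copy()    # workers read the globally shared weights
    def push(self, update):
        self.w += update        # the server applies a worker's update

def local_gradient(w, x, y):
    """Least-squares gradient on one mini-batch (stand-in for a DNN backward pass)."""
    return x.T @ (x @ w - y) / len(y)

def worker_loop(server, shard, lr=0.01):
    """One worker: pull the weights, compute a gradient on its data shard, push the update."""
    for x, y in shard:
        w = server.pull()
        server.push(-lr * local_gradient(w, x, y))

BSP, ASP, SSP, and DSSP differ chiefly in when a worker's pull and push are allowed to proceed relative to the progress of other workers, which is exactly the synchronization policy this paper studies.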


References

References are not available for this document.