
Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization


Abstract:

An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and data parallelization on multiple graphics processing units. The baseline training performance without sequence bucketing is compared with the proposed solution for different numbers of buckets. An example is given for the online handwriting recognition task using an LSTM recurrent neural network. The evaluation is performed in terms of wall clock time, number of epochs, and validation loss value.
Date of Conference: 23-27 August 2016
Date Added to IEEE Xplore: 06 October 2016
Conference Location: Lviv, Ukraine

I. Introduction

Deep neural networks have recently proven to be successful in pattern recognition tasks. The Recurrent Neural Network (RNN) is a subclass of neural networks defined by the presence of feedback connections.
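The bucketing scheme summarized in the abstract can be illustrated with a short sketch: sequences are sorted by length and split into a fixed number of buckets, and each mini-batch is padded only to the longest sequence within its bucket. The following Python/NumPy code is a minimal illustration of that idea, not the authors' implementation; the function names, the number of buckets, and the toy data are assumptions made for the example.

import numpy as np

def bucket_by_length(sequences, num_buckets):
    # Sort indices by sequence length so that sequences of similar length
    # end up in the same bucket (an assumed, simple equal-size split).
    order = sorted(range(len(sequences)), key=lambda i: len(sequences[i]))
    bucket_size = int(np.ceil(len(order) / num_buckets))
    return [order[i:i + bucket_size] for i in range(0, len(order), bucket_size)]

def pad_bucket(sequences, bucket, pad_value=0.0):
    # Pad every sequence in the bucket only up to the bucket's own maximum
    # length, instead of the global maximum over the whole training set.
    max_len = max(len(sequences[i]) for i in bucket)
    feat_dim = sequences[bucket[0]].shape[1]
    batch = np.full((len(bucket), max_len, feat_dim), pad_value, dtype=np.float32)
    lengths = np.zeros(len(bucket), dtype=np.int32)
    for row, i in enumerate(bucket):
        seq = sequences[i]
        batch[row, :len(seq)] = seq
        lengths[row] = len(seq)
    return batch, lengths

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 1000 sequences with lengths varying from 20 to 500 time steps.
    data = [rng.normal(size=(rng.integers(20, 500), 8)) for _ in range(1000)]
    for bucket in bucket_by_length(data, num_buckets=8):
        batch, lengths = pad_bucket(data, bucket)
        # Each padded batch would be split across GPUs for data-parallel training.
        print(batch.shape, lengths.min(), lengths.max())

In this sketch the padding overhead of each batch is bounded by the length spread inside a single bucket rather than by the longest sequence in the whole data set, which is the source of the training speed-up the paper reports; how the buckets are distributed across GPUs is left abstract here.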
