Optimizing Pipelined Computation and Communication for Latency-Constrained Edge Learning


Abstract:

Consider a device that is connected to an edge processor via a communication channel. The device holds local data that is to be offloaded to the edge processor so as to train a machine learning model, e.g., for regression or classification. Transmission of the data to the learning processor, as well as training based on stochastic gradient descent (SGD), must both be completed within a time limit. Assuming that communication and computation can be pipelined, this letter investigates the optimal choice for the packet payload size, given the overhead of each data packet transmission and the ratio between the computation and the communication rates. This amounts to a tradeoff between bias and variance, since communicating the entire data set first reduces the bias of the training process but may not leave sufficient time for learning. Analytical bounds on the expected optimality gap are derived so as to enable an effective optimization, which is validated in numerical results.
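The tradeoff described in the abstract can be illustrated with a toy timing model. The sketch below is our own construction, not the letter's exact formulation: it assumes each packet of payload s samples takes t_oh + s / r_comm seconds to transmit, that SGD runs at r_comp iterations per second, and that training is pipelined with reception starting once the first packet arrives. All parameter names are hypothetical.

```python
import math

def pipeline_schedule(T, t_oh, r_comm, r_comp, s, total_samples):
    """Toy model of pipelined offloading and SGD training under deadline T.

    T             -- total time budget (seconds)
    t_oh          -- per-packet transmission overhead (seconds)
    r_comm        -- communication rate (samples/second)
    r_comp        -- computation rate (SGD iterations/second)
    s             -- payload size (samples per packet), the design variable
    total_samples -- size of the local data set on the device

    Returns (samples delivered, SGD iterations completed) by the deadline.
    """
    t_pkt = t_oh + s / r_comm            # time to transmit one packet
    n_max = math.ceil(total_samples / s) # packets needed for the full data set
    n_pkts = min(n_max, int(T // t_pkt)) # packets deliverable within T
    samples = min(total_samples, n_pkts * s)
    # Training overlaps with reception: it can start once the first
    # packet has arrived and run until the deadline.
    train_time = max(0.0, T - t_pkt)
    sgd_steps = int(train_time * r_comp)
    return samples, sgd_steps
```

Sweeping the payload size s in this model exposes the bias-variance tension: a small s delivers the first packet early (more SGD iterations, but more total overhead and fewer samples), while a large s delivers more of the data set per packet at the cost of delaying the start of training.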
Published in: IEEE Communications Letters ( Volume: 23, Issue: 9, September 2019)
Page(s): 1542 - 1546
Date of Publication: 13 June 2019


I. Introduction

Edge learning refers to the training of machine learning models on devices that are close to the end users [1]. The proximity to the user is instrumental in facilitating a low-latency response, in enhancing privacy, and in reducing backhaul congestion. Edge learning processors include smart phones and other user-owned devices, as well as edge nodes of a wireless network that provide wireless access and computational resources [1]. As illustrated in Fig. 1, the latter case hinges on the offloading of data from the data-bearing device to the edge processor, and can be seen as an instance of mobile edge computing [2].

Fig. 1. An edge computing system, in which training of a model parametrized by a vector takes place at an edge processor, based on data received from a device using a protocol whose timeline is illustrated in Fig. 2 (OH = overhead).
