
Data-Free Learning of Student Networks


Abstract:

Learning portable neural networks is essential in computer vision, so that pre-trained heavy deep models can be deployed on edge devices such as mobile phones and micro sensors. Most existing deep neural network compression and speed-up methods are very effective for training compact deep models when the training dataset can be accessed directly. However, the training data for a given deep network are often unavailable due to practical problems (e.g. privacy, legal issues, and transmission), and the architecture of the given network is also unknown except for some interfaces. To this end, we propose a novel framework for training efficient deep neural networks by exploiting generative adversarial networks (GANs). To be specific, the pre-trained teacher network is regarded as a fixed discriminator, and a generator is utilized to derive training samples that obtain the maximum response from the discriminator. Then, an efficient network with smaller model size and computational complexity is trained using the generated data and the teacher network simultaneously. Efficient student networks learned with the proposed Data-Free Learning (DFL) method achieve 92.22% and 74.47% accuracies without any training data on the CIFAR-10 and CIFAR-100 datasets, respectively. Meanwhile, our student network obtains an 80.56% accuracy on the CelebA benchmark.
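The pipeline described in the abstract (a frozen teacher used as a fixed discriminator, a generator optimized so its samples elicit confident teacher responses, and a student distilled on the resulting synthetic data only) can be sketched with linear toy models. Everything below is illustrative: the linear teacher, generator, and student, the dimensions, and the learning rate are stand-in assumptions, not the paper's CNN architectures or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Frozen "teacher": a toy linear classifier standing in for the
# pre-trained network (a hypothetical stand-in for the paper's CNN).
D_X, D_Z, K = 8, 4, 3
W_t = rng.normal(size=(D_X, K))               # fixed, never updated

def teacher_logits(x):
    return x @ W_t

# Generator: linear map from noise z to synthetic samples x.
W_g = rng.normal(size=(D_Z, D_X)) * 0.1

lr = 0.1
for _ in range(500):
    z = rng.normal(size=(64, D_Z))
    x = z @ W_g                               # generated "training data"
    p = softmax(teacher_logits(x))
    y = np.eye(K)[p.argmax(axis=1)]           # teacher's own predictions as pseudo-labels
    # One-hot cross-entropy: reward samples the teacher classifies confidently.
    # dCE/dlogits = p - y; backpropagate through the frozen teacher into W_g.
    g_logits = (p - y) / len(x)
    W_g -= lr * z.T @ (g_logits @ W_t.T)

# Distill a student on generated data only: fit it to the teacher's
# logits (closed-form least squares, since this toy student is linear).
z = rng.normal(size=(2000, D_Z))
x = z @ W_g
W_s = np.linalg.lstsq(x, teacher_logits(x), rcond=None)[0]

# Student/teacher agreement on fresh generated samples; no real data used.
z = rng.normal(size=(500, D_Z))
x = z @ W_g
agree = ((x @ W_s).argmax(1) == teacher_logits(x).argmax(1)).mean()
print(f"student-teacher agreement on generated data: {agree:.2f}")
```

Even in this toy setting the two stages mirror the framework: the generator is trained against the teacher's responses rather than real data, and the student never sees anything but generated samples.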
Date of Conference: 27 October 2019 - 02 November 2019
Date Added to IEEE Xplore: 27 February 2020
Conference Location: Seoul, Korea (South)

1. Introduction

Deep convolutional neural networks (CNNs) have been successfully used in various computer vision applications such as image classification [24,11], object detection [21] and semantic segmentation [15]. However, launching most of the widely used CNNs requires heavy computation and storage, so they can only be used on PCs with modern GPU cards. For example, over 500MB of memory and over 10^10 multiplications are demanded for processing one image using VGGNet [24], which makes it almost impossible to apply on edge devices such as autonomous cars and micro robots. Although these pre-trained CNNs have a large number of parameters, Han et al. [6] showed that discarding over 85% of the weights in a given neural network does not obviously damage its performance, which demonstrates that there is significant redundancy in these CNNs.
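The redundancy observation above can be illustrated with a one-shot magnitude-thresholding sketch: zero out the 85% of weights with the smallest absolute value. This is only the thresholding step (the actual pruning method of Han et al. prunes iteratively with retraining), and the weight matrix here is random for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))          # stand-in weight matrix of one layer

prune_frac = 0.85
k = int(prune_frac * W.size)             # number of weights to discard
# Threshold = magnitude of the k-th smallest absolute weight.
thresh = np.partition(np.abs(W).ravel(), k)[k]
mask = np.abs(W) >= thresh               # keep only the largest 15% by magnitude
W_pruned = W * mask

sparsity = 1.0 - mask.mean()
print(f"fraction of weights zeroed: {sparsity:.2f}")
```

In a trained network, the kept weights would then be fine-tuned; the point here is only that such a mask removes the vast majority of parameters while preserving the layer's largest-magnitude connections.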
