I. Introduction
Compared to 2D photometric images, 3D data in the form of mesh surfaces provides richer information and is invariant to illumination, out-of-plane rotations, and color variations. Furthermore, it provides geometric cues that enable better separation of the object of interest from its background. Despite the promise of this information-rich data, previous research on representing 3D data has focused on designing hand-crafted feature descriptors. While feature representations learned automatically, as the activations of a trained deep neural network, have shown their superiority on several tasks involving 2D RGB images [1], learning generic shape representations from 3D data is still in its infancy.