I. Introduction
A point cloud (PC) is a collection of discrete geometric samples of a physical object's surface in 3D space, useful for a range of imaging applications such as immersive communication and virtual/augmented reality (VR/AR) [1], [2], [3], [4]. With the ubiquity of inexpensive active sensors like Microsoft Kinect and Intel RealSense, one common method to generate a PC is to deploy one or more sensors at multiple viewpoints to capture depth measurements (in the form of images) of an object, then project these measurements to 3D space to synthesize a PC [5], [6]. However, limitations in the depth acquisition process mean that the acquired measurements suffer from both imprecision (due to quantization) and additive noise, resulting in a noisy synthesized PC. Previous works denoise PCs using a variety of priors and methods: low-rank priors, the low-dimensional manifold model (LDMM), surface smoothness priors expressed as graph total variation (GTV), the graph Laplacian regularizer (GLR) and feature graph Laplacian regularizer (FGLR), Moving Robust Principal Components Analysis (MRPCA), data-driven learning, etc. [7], [8], [9], [10], [11], [12].
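To make the synthesis step concrete, the projection of a depth image into 3D points can be sketched under a standard pinhole camera model. The function name and the intrinsic parameters (focal lengths fx, fy and principal point cx, cy) below are illustrative assumptions, not the acquisition pipeline of any cited work; a minimal sketch:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into an N x 3 point
    cloud, assuming a pinhole camera with intrinsics (fx, fy, cx, cy)."""
    h, w = depth.shape
    # Pixel coordinate grids: u along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Inverse pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid depth measurement (z <= 0)
    return pts[pts[:, 2] > 0]

# Example: a 2x2 depth image with every pixel 1 m from the camera
pc = depth_to_point_cloud(np.ones((2, 2)), fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

In practice each sensor's point set is also transformed by that sensor's extrinsic pose (rotation and translation) into a common world frame before the views are merged; quantized depth values and sensor noise in `depth` propagate directly into the 3D coordinates, which is the source of the PC noise discussed above.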