1 Introduction
A point cloud uses a large number of unconstrained 3D points to represent 3D objects and scenes realistically, where each point consists of a coordinate (x, y, z) and associated attributes such as color (RGB or YCoCg), normal, and reflectance. Point clouds are now widely used in networked applications including Augmented and Virtual Reality, Autonomous Machinery, etc., making efficient Point Cloud Compression (PCC) increasingly indispensable. In addition to rules-based PCC solutions, such as Geometry-based PCC (G-PCC) and Video-based PCC (V-PCC) standardized by the ISO/IEC MPEG committee [1], learning-based PCC approaches have attracted worldwide attention and demonstrated noticeable compression gains [2] in Point Cloud Geometry Compression (PCGC). Among them, our earlier multiscale sparse representation-based PCGC reported state-of-the-art performance [3], [4] on a variety of point clouds (e.g., dense object and sparse LiDAR data). This work extends the multiscale approach to lossless Point Cloud Attribute Compression (PCAC). Following convention [5], losslessly compressed geometry is assumed in the study of PCAC.
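For concreteness, the sketch below illustrates this representation with a toy NumPy point cloud (per-point coordinates plus RGB attributes) and the integer reversible YCoCg-R transform, one standard way to realize the lossless RGB-to-YCoCg conversion mentioned above; the array shapes and values are illustrative assumptions, not part of the paper's pipeline.

```python
import numpy as np

# Hypothetical toy point cloud: N points with integer (x, y, z)
# coordinates and 8-bit RGB color attributes.
coords = np.array([[0, 0, 0], [1, 0, 2], [3, 1, 1], [2, 2, 0]], dtype=np.int32)
rgb = np.array([[255, 0, 0], [0, 128, 64], [10, 200, 30], [90, 90, 90]],
               dtype=np.int32)

def rgb_to_ycocg_r(rgb):
    """Forward YCoCg-R transform: integer, exactly reversible."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    co = r - b
    t = b + (co >> 1)   # arithmetic shift = floor division by 2
    cg = g - t
    y = t + (cg >> 1)
    return np.stack([y, co, cg], axis=1)

def ycocg_r_to_rgb(ycocg):
    """Inverse transform: recovers the original RGB values bit-exactly."""
    y, co, cg = ycocg[:, 0], ycocg[:, 1], ycocg[:, 2]
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return np.stack([r, g, b], axis=1)

ycocg = rgb_to_ycocg_r(rgb)
assert np.array_equal(ycocg_r_to_rgb(ycocg), rgb)  # lossless round trip
```

The exact invertibility of such integer transforms is what makes a color-space change admissible in the lossless attribute-coding setting considered here.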