1. Introduction
Machine learning (ML) algorithms are often trained on private or sensitive data, such as face images, medical records, and financial information. Unfortunately, because ML models tend to memorize information about their training data, private information can still be exposed through access to the models, even when the data itself is stored and processed securely [20]. Indeed, prior studies of privacy attacks have demonstrated the possibility of exposing training data at different granularities, ranging from "coarse-grained" information, such as determining whether a certain point participated in training [10], [14], [16], [21] or whether a training dataset satisfies certain properties [9], [15], to more "fine-grained" information, such as reconstructing the raw data [2], [3], [7], [24].
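To make the "coarse-grained" end of this spectrum concrete, the following is a minimal sketch of the simplest form of membership inference: predicting that a point was a training member when the model's loss on it falls below a threshold, exploiting the tendency of models to fit training data more tightly than unseen data. The loss distributions below are simulated stand-ins (assumed for illustration), not outputs of a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-example losses (hypothetical values): members of the
# training set tend to incur lower loss than held-out non-members.
member_losses = rng.gamma(shape=2.0, scale=0.05, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.50, size=1000)

def infer_membership(losses, threshold):
    """Predict membership (True = member) by simple loss thresholding."""
    return losses < threshold

threshold = 0.3
tpr = infer_membership(member_losses, threshold).mean()    # true positive rate
fpr = infer_membership(nonmember_losses, threshold).mean() # false positive rate
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A gap between the true positive rate and the false positive rate is exactly the leakage such attacks measure; stronger attacks refine the same idea with shadow models or per-example calibration.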