1. Introduction
The artificial intelligence (AI) paradigm is rapidly gaining momentum, and machine learning (ML) now plays a core role in modern IT services (e.g., product recommendation [1]–[3], personalized medicine [4]–[6], destination prediction [7]–[9]). In many of these services, ML models make predictions using the personal data of users as input. Since such personal data often include sensitive information that users may wish to keep confidential (e.g., purchase logs of sensitive items, genetic markers, home addresses), the handling of personal data by ML models raises serious privacy concerns. Consequently, analyzing the vulnerability (or security) of ML systems has attracted considerable research attention.