I. Introduction
As machine learning becomes widespread, personal data is increasingly used as training data. However, constructing a model from personal data gives rise to two privacy threats. The first is data leakage by the model constructor: since the constructor must access the training dataset, which contains personal data, in order to train the model, the constructor can easily obtain that personal data. The second is data leakage to model users. For example, training data can be inferred through a model inversion attack [1], so providing or publishing the constructed model may leak information about the training data that remains in the model.