I. Introduction
With the development of the Internet of Things (IoT), smart IoT devices (e.g., smart cameras) have been integrated into all aspects of our lives. To make better use of the collected data and enable more intelligent decision making, AI technology has been applied to IoT applications to achieve real-time edge intelligence. However, growing awareness of data privacy (e.g., faces, identities, and behavioral habits) motivates users to keep data locally on their own devices. Privacy concerns over large-scale data aggregation have also led to administrative policies, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. Together with the greatly enhanced computation capability of end devices, collaborative learning (also known as federated learning) has emerged as a widely adopted learning paradigm in real-world IoT applications. Collaborative learning requires only gradients rather than raw data from participants for model training; hence, data privacy protection comes naturally at little cost. Unfortunately, researchers [1]–[3] have found that an attacker can still infer participants' private information in collaborative learning merely from shared knowledge, such as gradients, empirical loss, or model parameters.
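To make the threat concrete, the sketch below illustrates a gradient-matching reconstruction in the spirit of the gradient-leakage attacks referenced above: an attacker who observes a participant's shared gradients optimizes a dummy input until its gradients on the same model match the observed ones. The toy architecture, data shapes, known label, and the use of PyTorch with L-BFGS are illustrative assumptions, not the specific setting of the cited works.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the shared global model; the real architecture is an assumption here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# The victim's private sample and the gradient it shares during one training step.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
shared_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())
shared_grads = [g.detach() for g in shared_grads]

# The attacker starts from random noise and optimizes it so that its gradient
# on the same model matches the shared gradient (label assumed known for simplicity).
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy])

def closure():
    optimizer.zero_grad()
    dummy_loss = criterion(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Squared distance between the attacker's gradients and the observed gradients.
    grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)

print("mean absolute reconstruction error:", (x_dummy - x_true).abs().mean().item())
```

Even this minimal setup shows that the shared gradients alone constrain the private input tightly enough to drive a reconstruction, which is precisely the leakage the cited attacks exploit.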