I. Introduction
The Consumer Internet of Things (CIoT) and Machine Learning (ML) have advanced rapidly [1]. This progress allows numerous smart CIoT devices to connect and build ML models that deliver personalized AI services [2]. However, such deployment raises significant security and privacy concerns, stemming mainly from the fact that ML models typically consume sensitive user data during training [3], [4]. To address these concerns, Federated Learning (FL) [5] has emerged as a promising solution. FL enables multiple users to train ML models collaboratively in a decentralized manner, eliminating the need to share personal data: through FL, the data generated by consumer electronics remains confined to users’ local environments [6]. Although FL provides basic privacy protection [7], the frequent transmission of model weights by CIoT devices [8] amplifies the vulnerability of these models, potentially leading to the leakage of private information. Xiong et al. [9] and Barroso et al. [10] showed that FL is exposed to various privacy inference attacks that compromise sensitive user data. This finding highlights the need for continued vigilance and improvement of privacy protection measures within CIoT.
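To make the FL workflow above concrete, the following minimal sketch shows one round of federated averaging (the FedAvg-style aggregation commonly used in FL): each client updates the model on its own data, and the server averages only the resulting weights, so raw data never leaves the device. The quadratic objective, client data, and function names here are illustrative assumptions, not the setup of any cited work.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Hypothetical local training step: one gradient-descent step
    # on a least-squares objective (illustrative only).
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    # Server-side aggregation: average client models weighted by
    # local dataset size; only weights are communicated.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate one round with 4 clients holding synthetic local data.
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
updated = [local_update(global_w, d) for d in clients]
global_w = fed_avg(updated, [len(d[1]) for d in clients])
```

Note that the privacy risk discussed above arises precisely at the aggregation step: the transmitted `updated` weights, though not raw data, can still be targeted by inference attacks.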