I. Introduction
The proliferation of off-the-shelf edge devices and small-scale local servers has generated an astonishing trove of data, with applications in several areas, including smart homes and e-health. In many of these scenarios, the data being generated is highly sensitive. While the deployment of data-driven machine learning (ML) algorithms that train models over such data is becoming prevalent, special care must be taken to prevent privacy leaks: it has been shown that, without proper mitigation mechanisms, the sensitive data used during training can be reconstructed. To overcome this problem, an increasingly popular approach is federated learning (FL) [1], [2].

FL is a decentralized machine learning paradigm in which clients share with a trusted server only their individual local model updates, rather than the data used to produce them, hence protecting the privacy of user data by design. The trusted FL server is known to all nodes; its role is to build a global model by aggregating the updates sent by the nodes. Once the updates are aggregated, the server broadcasts the updated global model back to all clients. The nodes then update their local models accordingly and apply them to fresh batches of local data, both to compute the next round of updates and for inference. This approach prevents user data from leaving the user devices, as only the local model updates are sent outside the device.
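To make this round structure concrete, the following sketch (in Python with NumPy) implements a plain-averaging variant of the FL loop on a toy least-squares task. The function names, the local objective, and all hyperparameters are illustrative assumptions of ours, not details of the protocols in [1], [2]; in particular, FedAvg weights each client's update by its local dataset size, whereas this sketch uses an unweighted mean.

    import numpy as np

    # Hypothetical local training step: one gradient step on a
    # least-squares objective over the client's private batch (X, y).
    def local_update(weights, X, y, lr=0.1):
        grad = 2 * X.T @ (X @ weights - y) / len(y)  # gradient of MSE loss
        return weights - lr * grad                   # updated local weights

    def federated_round_loop(global_w, client_data, rounds=20):
        for _ in range(rounds):
            # Each client trains locally; only the resulting weights
            # leave the device -- the raw (X, y) data never does.
            local_ws = [local_update(global_w.copy(), X, y)
                        for X, y in client_data]
            # The trusted server aggregates the updates (plain average
            # here, for simplicity) ...
            global_w = np.mean(local_ws, axis=0)
            # ... and broadcasts the new global model back to every
            # client for the next round (and for local inference).
        return global_w

    # Toy run: three clients, each holding a private linear-regression batch.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    client_data = []
    for _ in range(3):
        X = rng.normal(size=(32, 2))
        client_data.append((X, X @ true_w + 0.01 * rng.normal(size=32)))
    w = federated_round_loop(np.zeros(2), client_data)
    print(w)  # approaches true_w without the server ever seeing the data

Note that only the weight vectors cross the network in this loop; as the section above stresses, this keeps raw training data on-device by construction, though the updates themselves may still leak information without additional mitigation.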