
SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service



Abstract:

Graphs are widely used to model the complex relationships among entities. As a powerful tool for graph analytics, graph neural networks (GNNs) have recently gained wide attention due to their end-to-end processing capabilities. With the proliferation of cloud computing, it is increasingly popular to deploy complex and resource-intensive model training and inference services in the cloud, owing to its prominent benefits. However, GNN training and inference services, if deployed in the cloud, raise critical privacy concerns about the information-rich and proprietary graph data (and the resulting model). While there has been some work on secure neural network training and inference, existing efforts all focus on convolutional neural networks handling images and text rather than complex graph data with rich structural information. In this article, we design, implement, and evaluate SecGNN, the first system supporting privacy-preserving GNN training and inference services in the cloud. SecGNN is built from a synergy of insights on lightweight cryptography and machine learning techniques. We closely examine the procedure of GNN training and inference, and devise a series of corresponding customized secure protocols to support the holistic computation. Extensive experiments demonstrate that SecGNN achieves accuracy comparable to plaintext training and inference, with promising performance.
Published in: IEEE Transactions on Services Computing ( Volume: 16, Issue: 4, 01 July-Aug. 2023)
Page(s): 2923 - 2938
Date of Publication: 02 February 2023



I. Introduction

Graphs have been widely used to model and manage data in various real-world applications, including recommendation systems [1], social networks [2], and webpage networks [3]. Graph data, however, is highly complex and inherently sparse, making graph analytics challenging [4]. With the rapid advancements in deep learning, Graph Neural Networks (GNNs) [5] have recently gained a lot of traction as a powerful tool for graph analytics due to their end-to-end processing capabilities. GNNs can empower a variety of graph-centric applications such as node classification [6], edge classification [7], and link prediction [8]. With the widespread adoption of cloud computing, it is increasingly popular to deploy machine learning training and inference services in the cloud [9], [10], due to the well-understood benefits [11], [12]. However, GNN training and inference, if deployed in the public cloud, raise severe privacy concerns. Graph data is information-rich and can reveal a considerable amount of sensitive information. For example, in a social network graph, the connections between nodes represent users' circles of friends, and each node's features represent the corresponding user's preferences. Meanwhile, both the graph data and the trained GNN model are proprietary to the data owner, so revealing them may easily harm the business model. Therefore, security must be embedded in outsourcing GNN training and inference to the cloud.
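To make the sensitivity of GNN computation concrete: a GNN layer repeatedly mixes each node's features with those of its neighbors, so both the graph structure (who connects to whom) and the node features flow through every step of training and inference. The sketch below shows a generic plaintext GCN-style mean-aggregation layer; it is an illustrative example only, not SecGNN's protocol, and all function and variable names here are our own.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One generic graph-convolution step: add self-loops, average each
    node's own and neighbors' features, then apply a linear transform
    followed by ReLU. Both `adj` (structure) and `features` (node data)
    are the sensitive inputs a secure protocol would need to protect."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                 # adjacency with self-loops
    deg = a_hat.sum(axis=1, keepdims=True)  # per-node degree (incl. self)
    h = (a_hat @ features) / deg            # mean aggregation over neighbors
    return np.maximum(h @ weight, 0.0)      # linear transform + ReLU

# Toy path graph 0 - 1 - 2, one scalar feature per node.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
features = np.array([[1.0], [2.0], [3.0]])
weight = np.array([[1.0]])                  # identity transform for clarity

out = gcn_layer(adj, features, weight)
# Each output row is the mean of a node's own and neighbors' features:
# node 0 -> (1+2)/2 = 1.5, node 1 -> (1+2+3)/3 = 2.0, node 2 -> (2+3)/2 = 2.5
```

Because the aggregation `a_hat @ features` directly encodes the edge set, an untrusted cloud that executes this computation in plaintext learns the full graph topology as well as the features, which is precisely the leakage a privacy-preserving design must avoid.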
