
TVFL: Tunable Vertical Federated Learning towards Communication-Efficient Model Serving



Abstract:

Vertical federated learning (VFL) enables multiple participants with different data features and the same sample ID space to collaboratively train a model in a privacy-preserving way. However, high computational and communication overheads hinder the adoption of VFL in many resource-limited or delay-sensitive applications. In this work, we focus on reducing the communication cost and delay incurred by the transmission of intermediate results in VFL model serving. We investigate the inference results and find that a large portion of test samples can be predicted correctly by the active party alone, so the corresponding communication for federated inference is dispensable. Based on this insight, we theoretically analyze this "dispensable communication" and propose a novel tunable vertical federated learning framework, named TVFL, to avoid dispensable communication in model serving as much as possible. TVFL smartly switches between independent inference and federated inference based on the features of the input sample. We further reveal that this tunability is closely related to the importance of participants’ features. Our evaluations on seven datasets and three typical VFL models show that TVFL saves 57.6% of the communication cost and reduces prediction latency by 57.1% with little performance degradation.
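In outline, the tunable inference the abstract describes amounts to a confidence-gated switch at the active party. The Python sketch below is a minimal illustration of that idea, not the authors' implementation: `local_model`, `federated_model`, `query_passive_parties`, and the threshold `tau` are hypothetical stand-ins.

```python
import numpy as np

def tvfl_inference(sample_id, x_active, local_model, federated_model,
                   query_passive_parties, tau=0.9):
    """Confidence-gated switch between independent and federated inference.

    sample_id: identifier shared across parties (VFL's common ID space).
    x_active:  the active party's own features for this sample.
    tau:       confidence threshold trading accuracy for communication.
    Returns (predicted_label, used_federated_inference).
    """
    # Local forward pass at the active party only: no communication.
    probs = local_model.predict_proba(x_active)
    if np.max(probs) >= tau:
        # Confident local prediction: the round trip to the passive
        # parties is "dispensable communication" and is skipped.
        return int(np.argmax(probs)), False
    # Otherwise fetch the passive parties' intermediate results for this
    # sample ID (one network round trip) and run federated inference.
    passive_outputs = query_passive_parties(sample_id)
    return int(federated_model.predict(x_active, passive_outputs)), True
```

In this reading, `tau` is the tuning knob: raising it pushes more samples through federated inference (more communication, accuracy closer to full VFL), while lowering it answers more samples locally, which is where communication and latency savings of the kind reported above would come from.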
Date of Conference: 17-20 May 2023
Date Added to IEEE Xplore: 29 August 2023
Conference Location: New York City, NY, USA


I. Introduction

Federated learning frameworks fall into two main categories, horizontal federated learning (HFL) and vertical federated learning (VFL), according to how participants’ data are distributed across the feature space and the sample ID space. In HFL, participants share the same feature space but hold different sample IDs [1]–[7]; in VFL, participants share the same sample ID space but hold different data features [1], [8]–[10]. Although VFL is already used in businesses such as insurance assessment and financial risk control, its high computational and communication overheads hinder its adoption in many resource-limited or delay-sensitive applications, e.g., mobile computing and online advertising.
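For concreteness, the row/column distinction between the two schemes can be shown with a toy NumPy snippet; the data and the party split below are hypothetical, purely to illustrate the two partitioning schemes.

```python
import numpy as np

# Toy dataset (hypothetical): 6 samples with shared IDs, 4 features each.
ids = np.arange(6)
X = np.random.rand(6, 4)

# HFL: parties hold different samples over the same feature space (row split).
hfl_party_a = X[:3, :]   # samples 0-2, all 4 features
hfl_party_b = X[3:, :]   # samples 3-5, all 4 features

# VFL: parties hold the same samples with different features (column split).
vfl_active  = X[:, :2]   # all 6 samples, features 0-1 (e.g., the active party)
vfl_passive = X[:, 2:]   # all 6 samples, features 2-3 (a passive party)
```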

References
[1] Q. Yang, Y. Liu, T. Chen and Y. Tong, "Federated machine learning: Concept and applications", TIST, 2019.
[2] H. B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data", AISTATS, 2017.
[3] A. Li, L. Zhang, J. Tan, Y. Qin, J. Wang and X.-Y. Li, "Sample-level data selection for federated learning", IEEE INFOCOM 2021 - IEEE Conference on Computer Communications, pp. 1-10, 2021.
[4] Z. Shi, L. Zhang, Z. Yao, L. Lyu, C. Chen, L. Wang, et al., "FedFAIM: A model performance-based fair incentive mechanism for federated learning", IEEE Transactions on Big Data, 2022.
[5] A. Li, L. Zhang, J. Wang, J. Tan, F. Han, Y. Qin, et al., "Efficient federated-learning model debugging", 2021 IEEE 37th International Conference on Data Engineering (ICDE), pp. 372-383, 2021.
[6] J. Wang, L. Zhang, A. Li, X. You and H. Cheng, "Efficient participant contribution evaluation for horizontal and vertical federated learning", 2022 IEEE 38th International Conference on Data Engineering (ICDE), pp. 911-923, 2022.
[7] A. Li, L. Zhang, J. Wang, F. Han and X.-Y. Li, "Privacy-preserving efficient federated-learning model debugging", IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 10, pp. 2291-2303, 2021.
[8] Y. Hu, D. Niu, J. Yang and S. Zhou, "FDML: A collaborative machine learning framework for distributed features", SIGKDD, 2019.
[9] Q. Zhang, C. Wang, H. Wu, C. Xin and T. V. X. Phuong, "GELU-Net: A globally encrypted locally unencrypted deep neural network for privacy-preserved learning", IJCAI, 2018.
[10] Y. Zhang and H. Zhu, "Additively homomorphical encryption based deep neural network for asymmetrically collaborative machine learning", 2020.
[11] W. Li, Q. Xia, J. Deng, H. Cheng, J. Liu, K. Xue, et al., "Semi-supervised cross-silo advertising with partial knowledge transfer", 2022.
[12] J. Shen, B. Orten, S. C. Geyik, D. Liu, S. Shariat, F. Bian, et al., "From 0.5 million to 2.5 million: Efficiently scaling up real-time bidding", IEEE International Conference on Data Mining, 2015.
[13] S. Yuan, J. Wang and X. Zhao, "Real-time bidding for online advertising: Measurement and analysis", 2013.
[14] T. Nishio and R. Yonetani, "Client selection for federated learning with heterogeneous resources in mobile edge", ICC 2019 - 2019 IEEE International Conference on Communications (ICC), pp. 1-7, 2019.
[15] J. Xu and H. Wang, "Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective", IEEE Transactions on Wireless Communications, 2021.
[16] Y. J. Cho, J. Wang and G. Joshi, "Client selection in federated learning: Convergence analysis and power-of-choice selection strategies", 2020.
[17] T. T. Anh, N. C. Luong, D. T. Niyato, D. I. Kim and L.-C. Wang, "Efficient training management for mobile crowd-machine learning: A deep reinforcement learning approach", IEEE Wireless Communications Letters, vol. 8, pp. 1345-1348, 2019.
[18] A. Reisizadeh, A. Mokhtari, H. Hassani, A. Jadbabaie and R. Pedarsani, "FedPAQ: A communication-efficient federated learning method with periodic averaging and quantization", 2020.
[19] M. M. Amiri, D. Gündüz, S. R. Kulkarni and H. V. Poor, "Federated learning with quantized global model updates", 2020.
[20] N. Shlezinger, M. Chen, Y. C. Eldar, H. V. Poor and S. Cui, "Federated learning with quantization constraints", IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
[21] D. Rothchild, A. Panda, E. Ullah, N. Ivkin, I. Stoica, V. Braverman, et al., "FetchSGD: Communication-efficient federated learning with sketching", ICML, 2020.
[22] S. Li, Q. Qi, J. Wang, H. Sun, Y. Li and F. R. Yu, "GGS: General gradient sparsification for federated learning in edge computing", IEEE International Conference on Communications (ICC), 2020.
[23] T. Castiglia, A. Das, S. Wang and S. Patterson, "Compressed-VFL: Communication-efficient learning with vertically partitioned data", ICML, 2022.
[24] S. M. Lundberg and S.-I. Lee, "A unified approach to interpreting model predictions", 2017.
[25] M. Kamp, L. Adilova, J. Sicking, F. Hüger, P. Schlicht, T. Wirtz, et al., "Efficient decentralized deep learning by dynamic model averaging", ECML/PKDD, 2018.
[26] H. T. Nguyen, V. Sehwag, S. Hosseinalipour, C. G. Brinton, M. Chiang and H. V. Poor, "Fast-convergent federated learning", IEEE Journal on Selected Areas in Communications, vol. 39, pp. 201-218, 2021.
[27] A. Reisizadeh, H. Taheri, A. Mokhtari, H. Hassani and R. Pedarsani, "Robust and communication-efficient collaborative learning", NeurIPS, 2019.
[28] H. Tang, S. Gan, C. Zhang, T. Zhang and J. Liu, "Communication compression for decentralized training", NeurIPS, 2018.
[29] G. E. Hinton, O. Vinyals and J. Dean, "Distilling the knowledge in a neural network", 2015.
[30] S. Han, H. Mao and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding", 2016.
