
Differentially Private Distributed Online Optimization in Time-varying Networks via Dual Averaging Push


Abstract:

Distributed online optimization allows several nodes in a network to collaboratively minimize the sum of time-varying local cost functions at each moment through information exchange. However, frequent interactions are prone to leaking sensitive information. Meanwhile, practical communication networks can be time-varying and unbalanced. To address these problems, we propose the differentially private online distributed dual averaging push algorithm (DP-ODDAP), in which the push-sum protocol and differential privacy are employed on the basis of the dual averaging method. By introducing differential privacy, the data privacy of individuals is protected, which means that malicious nodes gain little sensitive information. Moreover, the impact of the imbalance caused by time-varying directed networks is eliminated by applying the push-sum protocol; thus, DP-ODDAP achieves a sub-linear expected individual regret bound of order O(√T). In addition, the tradeoff between privacy level and accuracy is revealed. Finally, simulation experiments are provided to verify the effectiveness of the above results.
Date of Conference: 22-24 May 2021
Date Added to IEEE Xplore: 30 November 2021
Conference Location: Kunming, China


1 Introduction

Distributed optimization has received extensive attention in recent years [1]–[3], where the nodes in a network collaboratively minimize a global cost function through local information exchange. This technology has been researched and developed in various fields, such as machine learning [4], game theory [5], and resource allocation [6]. However, real environments can be dynamic and full of uncertainty, so the setting of fixed local cost functions has limitations. To overcome this bottleneck, online optimization is widely studied, where the cost function is revealed only after the individual chooses its action. The delayed exposure of the cost function produces a cost, so the concept of regret is introduced to evaluate the performance of this class of online optimization algorithms. Regret measures the gap between the accumulated cost and the cost incurred by the optimal solution chosen with all cost functions known in advance. If an online algorithm achieves a regret bound that grows sub-linearly with the number of iterations T, it is called effective. Initially, Zinkevich proposed a centralized online optimization method [7] and obtained a sublinear regret bound of O(√T). Benefiting from the advantages of distributed algorithms in processing big data, this line of work was later developed into distributed online optimization [8]–[10].
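To make the regret notion above concrete, the following is a minimal sketch of Zinkevich-style online gradient descent on simple quadratic losses f_t(x) = (x − a_t)², where a_t is revealed only after the action is chosen. This is a generic illustration of regret against the best fixed decision in hindsight, not the paper's DP-ODDAP algorithm; the loss family, step size, and feasible set are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
targets = rng.uniform(-1.0, 1.0, size=T)  # a_t, revealed after acting

x = 0.0
actions = []
for t in range(T):
    actions.append(x)                       # commit to action x_t first
    a_t = targets[t]                        # then the cost f_t is revealed
    grad = 2.0 * (x - a_t)                  # gradient of (x - a_t)^2
    eta = 1.0 / np.sqrt(t + 1)              # diminishing step size ~ 1/sqrt(t)
    x = float(np.clip(x - eta * grad, -1.0, 1.0))  # project onto [-1, 1]

actions = np.array(actions)
online_cost = np.sum((actions - targets) ** 2)
best_fixed = np.mean(targets)               # minimizer of sum_t (x - a_t)^2
optimal_cost = np.sum((best_fixed - targets) ** 2)
regret = online_cost - optimal_cost
print(f"regret after T={T}: {regret:.2f}")
```

For this setting the classical analysis gives regret growing as O(√T), so regret/T vanishes as T grows, which is exactly the "effective" criterion described above.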

