1. INTRODUCTION
In this paper we solve convex optimization problems of the form \begin{equation*}\min_{\bar{\mathbf{x}} \in \mathbb{R}^{p}} \sum_{i=1}^{n} f_i(\bar{\mathbf{x}}) \tag{1}\end{equation*}
in a decentralized manner within a network of n agents, where agent i has exclusive knowledge of its private objective function $f_i$. Problems of the form (1) arise naturally in applications such as distributed estimation and control [7], [8], [9], [10], decentralized source localization [11], power grids [12], [13], and distributed learning [14], [15], [16].

Methods for solving problems of the form (1) in a decentralized manner over static networks have received considerable attention in recent years. When the individual objective functions $f_i$ are strongly convex and have Lipschitz continuous gradients, linearly convergent methods such as the decentralized exact first-order algorithm (EXTRA) [17] have been proposed. In some applications, however, the agents communicate via wireless links, which can be unreliable due to the random nature of the wireless channel. This motivates procedures by which the agents cooperate to solve the optimization problem in (1) over time-varying networks. DIGing [2] was the first method shown to converge linearly over time-varying networks. However, DIGing requires the exchange of both the primal variables and the gradient trackers at each iteration.
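To make the setting concrete, the following sketch runs a DIGing-style gradient-tracking iteration on an instance of problem (1). Everything here is illustrative and not from the paper: the private objectives are hypothetical quadratics $f_i(\mathbf{x}) = \tfrac{1}{2}\|A_i\mathbf{x}-\mathbf{b}_i\|^2$ with random data, the network is a fixed 4-agent ring with uniform mixing weights, and the step size is chosen by hand; the standard DIGing update mixes local iterates, descends along a gradient tracker, and updates the tracker with the change in local gradients.

```python
import numpy as np

# Hypothetical problem data: agent i privately holds (A_i, b_i) and
# minimizes f_i(x) = 0.5 * ||A_i x - b_i||^2. The sum over agents is
# an instance of problem (1).
rng = np.random.default_rng(0)
n, p = 4, 3                       # number of agents, dimension of x
A = [rng.standard_normal((6, p)) for _ in range(n)]
b = [rng.standard_normal(6) for _ in range(n)]

def grad(i, x):
    """Gradient of agent i's private objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring: each agent averages
# itself with its two neighbors (assumed static here for simplicity,
# although DIGing also handles time-varying mixing matrices).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

alpha = 0.02                      # step size (assumed small enough)
x = np.zeros((n, p))              # one local iterate per row
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for _ in range(2000):
    # Each agent mixes neighbors' iterates and steps along its tracker;
    # note that both x and y must be communicated each iteration.
    x_new = W @ x - alpha * y
    y = (W @ y
         + np.array([grad(i, x_new[i]) for i in range(n)])
         - np.array([grad(i, x[i]) for i in range(n)]))
    x = x_new

# For quadratics the global minimizer is available in closed form,
# so we can check that all agents reach consensus on it.
x_star = np.linalg.solve(sum(a.T @ a for a in A),
                         sum(a.T @ bb for a, bb in zip(A, b)))
print(np.max(np.abs(x - x_star)))
```

The tracker update is what distinguishes this scheme from plain decentralized gradient descent: y converges to the average gradient of the summed objective, which is what allows exact (rather than neighborhood) convergence with a constant step size.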