Sparsity-Aware Distributed Learning for Gaussian Processes With Linear Multiple Kernel | IEEE Journals & Magazine | IEEE Xplore

Sparsity-Aware Distributed Learning for Gaussian Processes With Linear Multiple Kernel



Abstract:

Gaussian processes (GPs) stand as crucial tools in machine learning and signal processing, with their effectiveness hinging on kernel design and hyperparameter optimization. This article presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework to optimize the hyperparameters. The newly proposed grid spectral mixture product (GSMP) kernel is tailored for multidimensional data, effectively reducing the number of hyperparameters while maintaining good approximation capability. We further demonstrate that the associated hyperparameter optimization of this kernel yields sparse solutions. To exploit the inherent sparsity of the solutions, we introduce the sparse linear multiple kernel learning (SLIM-KL) framework. The framework incorporates a quantized alternating direction method of multipliers (ADMM) scheme for collaborative learning among multiple agents, where the local optimization problem is solved using a distributed successive convex approximation (DSCA) algorithm. SLIM-KL effectively manages large-scale hyperparameter optimization for the proposed kernel, simultaneously ensuring data privacy and minimizing communication costs. The theoretical analysis establishes convergence guarantees for the learning framework, while experiments on diverse datasets demonstrate the superior prediction performance and efficiency of our proposed methods.
Page(s): 1 - 15
Date of Publication: 28 January 2025


PubMed ID: 40031302


