
Modulated Periodic Activations for Generalizable Local Functional Representations


Abstract:

Multi-Layer Perceptrons (MLPs) make powerful functional representations for sampling and reconstruction problems involving low-dimensional signals like images, shapes and light fields. Recent works have significantly improved their ability to represent high-frequency content by using periodic activations or positional encodings. This often came at the expense of generalization: modern methods are typically optimized for a single signal. We present a new representation that generalizes to multiple instances and achieves state-of-the-art fidelity. We use a dual-MLP architecture to encode the signals. A synthesis network creates a functional mapping from a low-dimensional input (e.g. pixel position) to the output domain (e.g. RGB color). A modulation network maps a latent code corresponding to the target signal to parameters that modulate the periodic activations of the synthesis network. We also propose a local-functional representation which enables generalization. The signal’s domain is partitioned into a regular grid, with each tile represented by a latent code. At test time, the signal is encoded with high fidelity by inferring (or directly optimizing) the latent code-book. Our approach produces generalizable functional representations of images, videos and shapes, and achieves higher reconstruction quality than prior works that are optimized for a single signal.
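
To make the dual-MLP idea concrete, here is a minimal PyTorch sketch, not the authors' released code: a synthesis network with sine activations whose layer-wise amplitudes are multiplied by features from a latent-conditioned modulation network. The layer widths, depth, and the exact multiplicative modulation rule shown here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ModulatedSiren(nn.Module):
    """Sketch of a dual-MLP representation: a sine-activated synthesis
    network modulated layer-by-layer by a latent-driven modulation network."""

    def __init__(self, in_dim=2, out_dim=3, hidden=256, depth=4, latent=128):
        super().__init__()
        # Synthesis network: functional mapping from coordinates
        # (e.g. pixel position) to the output domain (e.g. RGB color).
        self.synth = nn.ModuleList(
            nn.Linear(in_dim if i == 0 else hidden, hidden) for i in range(depth)
        )
        self.out = nn.Linear(hidden, out_dim)
        # Modulation network: maps the latent code of the target signal
        # to per-layer modulation features (a ReLU MLP here, by assumption).
        self.mod = nn.ModuleList(
            nn.Linear(latent if i == 0 else hidden, hidden) for i in range(depth)
        )

    def forward(self, coords, z):
        # coords: (N, in_dim) query positions; z: (latent,) or (N, latent) code.
        h, m = coords, z
        for synth_layer, mod_layer in zip(self.synth, self.mod):
            m = torch.relu(mod_layer(m))        # modulation features from the code
            h = m * torch.sin(synth_layer(h))   # modulated periodic activation
        return self.out(h)                      # signal value at each coordinate
```

With a trained model, `model(coords, z)` reconstructs the signal associated with code `z`; the sketch after the introduction illustrates how a code-book of such codes can be fit to a new signal at test time.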
Date of Conference: 10-17 October 2021
Date Added to IEEE Xplore: 28 February 2022
Conference Location: Montreal, QC, Canada

1. Introduction

Functional neural representations using Multi-Layer Perceptrons (MLPs) have garnered renewed interest for their conceptual simplicity and ability to approximate complex signals like images, videos, audio recordings [38], light-fields [26] and implicitly-defined 3D shapes [8], [32], [2]. They have been shown to be more compact and efficient than their discrete counterparts [21], [39]. While recent contributions have focused on improving the accuracy of these representations, in particular to model complex signals with high-frequency details [38], [44], [26], generalizing them to unseen signals remains challenging. Recent approaches typically require training a separate MLP for each signal [26], [9]. Previous efforts sought to improve generalization by imposing priors on the functional space spanned by the MLP parameterization [32], [35], using hypernetworks [14], [38], or via meta-learning [37]. But multi-instance generalization still incurs a significant degradation in quality.
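
Building on the architecture sketch above, the following is a hedged illustration of the test-time encoding step described in the abstract: the shared network weights stay frozen, and only a per-tile latent code-book is directly optimized to reproduce a new image. The tile count, optimizer, learning rate and step budget are illustrative assumptions, not the paper's settings.

```python
import torch


def encode_image(model, image, tiles=8, latent=128, steps=500):
    """Fit a (tiles x tiles) latent code-book to one image with the model frozen.

    image: (H, W, 3) tensor in [0, 1]. All hyperparameters are illustrative.
    """
    H, W, _ = image.shape
    for p in model.parameters():          # only the code-book is optimized
        p.requires_grad_(False)
    codebook = torch.zeros(tiles, tiles, latent, requires_grad=True)
    opt = torch.optim.Adam([codebook], lr=1e-2)

    # Pixel coordinates in [-1, 1] and the grid tile each pixel falls into.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    ty = (((ys + 1) / 2) * tiles).long().clamp(max=tiles - 1).reshape(-1)
    tx = (((xs + 1) / 2) * tiles).long().clamp(max=tiles - 1).reshape(-1)
    target = image.reshape(-1, 3)

    for _ in range(steps):
        z = codebook[ty, tx]              # per-pixel latent via tile lookup
        loss = ((model(coords, z) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return codebook.detach()
```

Because each tile only has to explain its local patch of the signal, the per-tile codes can stay low-dimensional while the shared networks carry the structure common across instances.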
