1. Introduction
Functional neural representations using Multi-Layer Perceptrons (MLPs) have garnered renewed interest for their conceptual simplicity and their ability to approximate complex signals such as images, videos, audio recordings [38], light fields [26], and implicitly-defined 3D shapes [8], [32], [2]. They have been shown to be more compact and efficient than their discrete counterparts [21], [39]. While recent contributions have focused on improving the accuracy of these representations, in particular to model complex signals with high-frequency details [38], [44], [26], generalizing them to unseen signals remains challenging. Recent approaches typically require training a separate MLP for each signal [26], [9]. Previous efforts sought to improve generalization by imposing priors on the functional space spanned by the MLP parameterization [32], [35], by using hypernetworks [14], [38], or via meta-learning [37], but multi-instance generalization still incurs a significant degradation in quality.
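To make the setting concrete, the sketch below shows what such a per-signal representation looks like in practice: a small MLP that maps 2D pixel coordinates to RGB values and is fit independently to each image. This is a minimal illustrative example, not the method of any cited work; the class name, the sine activation (one common choice for capturing high-frequency details), and all hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Minimal coordinate-based MLP: maps (x, y) coordinates to RGB values.
    All architectural choices here are illustrative, not from the paper."""

    def __init__(self, in_dim=2, hidden=256, out_dim=3, n_layers=4, omega_0=30.0):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.hidden = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )
        self.out = nn.Linear(hidden, out_dim)
        self.omega_0 = omega_0  # frequency scale of the periodic activation

    def forward(self, coords):
        # coords: (N, 2) tensor of pixel coordinates, normalized to [-1, 1]
        h = coords
        for layer in self.hidden:
            h = torch.sin(self.omega_0 * layer(h))  # periodic activation
        return self.out(h)  # (N, 3) predicted RGB at each coordinate
```

Fitting one signal then amounts to overfitting one such network to it, e.g. minimizing `||model(coords) - pixels||^2` over the network weights; representing a new image requires training a fresh network from scratch, which is the generalization bottleneck the paragraph above describes.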