Deciphering ‘What’ and ‘Where’ Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations


Abstract:

We present an approach for analyzing grouping information contained within a neural network's activations, permitting extraction of spatial layout and semantic segmentation from the behavior of large pre-trained vision models. Unlike prior work, our method conducts a wholistic analysis of a network's activation state, leveraging features from all layers and obviating the need to guess which part of the model contains relevant information. Motivated by classic spectral clustering, we formulate this analysis in terms of an optimization objective involving a set of affinity matrices, each formed by comparing features within a different layer. Solving this optimization problem using gradient descent allows our technique to scale from single images to dataset-level analysis, including, in the latter, both intra- and inter-image relationships. Analyzing a pre-trained generative transformer provides insight into the computational strategy learned by such models. Equating affinity with key-query similarity across attention layers yields eigenvectors encoding scene spatial layout, whereas defining affinity by value vector similarity yields eigenvectors encoding object identity. This result suggests that key and query vectors coordinate attentional information flow according to spatial proximity (a ‘where’ pathway), while value vectors refine a semantic category representation (a ‘what’ pathway).
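To make the objective concrete, below is a minimal PyTorch sketch of a layer-distributed spectral objective minimized by gradient descent, in the spirit of the abstract's description. The Gaussian affinity, the normalized-Laplacian form, the QR re-orthonormalization step, and all names (`layer_affinity`, `spectral_vectors`, `num_vectors`) are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch (assumed, not the authors' exact method): find shared soft
# eigenvectors Y for a set of per-layer affinity matrices by projected
# gradient descent, rather than by an explicit eigensolver.
import torch

def layer_affinity(feats, sigma=1.0):
    # feats: (n_tokens, dim) features from one layer.
    d2 = torch.cdist(feats, feats).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def normalized_laplacian(A):
    d_inv_sqrt = A.sum(dim=1).clamp(min=1e-8).rsqrt()
    return torch.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_vectors(features_per_layer, num_vectors=8, steps=500, lr=1e-2):
    laplacians = [normalized_laplacian(layer_affinity(f)) for f in features_per_layer]
    n = laplacians[0].shape[0]
    Y = torch.randn(n, num_vectors, requires_grad=True)
    opt = torch.optim.Adam([Y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Sum the Rayleigh quotients tr(Y^T L_l Y) over all layers, so every
        # layer's affinity structure contributes to one set of eigenvectors.
        loss = sum(torch.trace(Y.T @ L @ Y) for L in laplacians)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Re-orthonormalize Y, mimicking the constraint Y^T Y = I.
            Q, _ = torch.linalg.qr(Y)
            Y.copy_(Q)
    return Y.detach()
```

As in classic spectral clustering, segment labels can then be read off by running k-means on the rows of the returned Y.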
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

An explosion in self-supervised learning techniques, including adversarial [23], [31], [32], contrastive [11], [12], [26], [72], reconstructive [34], [66], and denoising [29], [60] approaches, combined with a focus on training large-scale foundation models [4] on vast collections of image data, has produced deep neural networks exhibiting dramatic new capabilities. Recent examples of such models include CLIP [51], DINO [8], MAE [27], and Stable Diffusion [53]. As training is no longer primarily driven by annotated data, there is a critical need to understand what these models have learned, provide interpretable insight into how they work, and develop techniques for porting their learned representations to additional tasks.

Our novel optimization procedure, resembling spectral clustering, leverages features throughout layers of a pre-trained model to extract dense structural representations of images. Shown are results of applying our method to Stable Diffusion [53]. Left: Analyzing internal feature affinity for a single input image yields region grouping. Right: Extending the affinity graph across images yields coherent dataset-level segmentation and reveals ‘what’ (object identity) and ‘where’ (spatial location) pathways, depending on the feature source.
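As a small illustration of the two feature sources, here is a sketch assuming `q`, `k`, `v` tensors of shape (n_tokens, dim) captured from a single attention layer (e.g., via a forward hook). The function names and the cosine-similarity choice for value vectors are assumptions for illustration, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def where_affinity(q, k):
    # Key-query similarity: the same quantity attention softmaxes over.
    # Per the paper, eigenvectors of this affinity encode spatial layout.
    return torch.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)

def what_affinity(v):
    # Cosine similarity between value vectors (an assumed choice);
    # eigenvectors of this affinity encode object identity.
    v = F.normalize(v, dim=-1)
    return v @ v.T
```

Note that the softmax-normalized 'where' affinity is asymmetric; a spectral analysis would typically symmetrize it, e.g., by averaging it with its transpose.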

