I. Introduction
Data/information fusion is an enabling theory for numerous fields, e.g., machine learning, signal/image processing, big data, the Internet of Things, bioinformatics, and cyber security, to name a few. In this paper, we focus specifically on aggregation, as the term fusion has eluded precise definition (existing definitions are either too vague or overly specific). In general, the idea is to combine different inputs in such a way that the overall result (typically a reduction from many inputs to one result) is somehow better than the outcome obtained from any individual input by itself.

First, it is up to the user to define "better." For example, maybe the idea is to combine a set of inputs to create a single result that can be more easily visualized. The idea could also be to reduce (summarize) data so it is more manageable. In machine learning, better may mean achieving more generalizable decision boundaries for classifiers. The point is, "better" is a concept that must be specified relative to the task at hand.

Next, focus shifts to how to combine these inputs. To date, most mathematical approaches have combined inputs under an assumption of independence between them (which is advantageous with respect to tractability). However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. But for $N$ inputs, there are $2^N$ possible subsets to consider. As $N$ grows, tractability is of utmost concern. The focus of this paper is a new tractable way to identify, model, and exploit nonredundant, data-supported interactions. The ideas are presented at an abstract level so as not to muddle the theory with any one particular application.
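To make the combinatorial concern concrete, the following Python sketch (purely illustrative, not part of the paper's method) enumerates all subsets of a small input set and shows how the count $2^N$ doubles with each added input; the function name `powerset` and the input labels are our own.

```python
from itertools import chain, combinations

def powerset(items):
    """Return all subsets of a collection of inputs, including the empty set."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# An interaction-aware aggregator must, in principle, account for every
# subset of its inputs; the number of subsets doubles with each new input.
for n in (3, 5, 10, 20):
    print(n, 2 ** n)  # subset count for an n-element input set

# Explicit enumeration is only feasible for small N:
subsets = list(powerset(["x1", "x2", "x3"]))
print(len(subsets))  # 2^3 = 8 subsets
```

This exponential growth is why naive interaction modeling breaks down quickly and why the tractable approach the paper develops is needed.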