1. Introduction
Action segmentation is the task of predicting which action occurs at each frame of untrimmed videos of complex and semantically structured human activities [18], [32]. While conventional methods for human action understanding focus on classifying short video clips [6], [27], [34], action segmentation models must learn the semantics of all action classes together with their temporal boundaries and contextual relations. This is challenging and requires efficient strategies to capture long-range temporal information and inter-action correlations.
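To make the task concrete, the following is a minimal sketch of frame-wise action segmentation, assuming a generic dilated temporal convolutional backbone; the module name, dimensions, and class count are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class FrameWiseSegmenter(nn.Module):
    """Hypothetical sketch: map per-frame features of an untrimmed video
    to one action label per frame."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # Dilated 1D convolutions are one common way to grow the temporal
        # receptive field and capture long-range context across frames.
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim, T) frame features -> (batch, num_classes, T)
        return self.net(x)

features = torch.randn(1, 2048, 6000)            # e.g. a ~4-minute video at 25 fps
logits = FrameWiseSegmenter(2048, 19)(features)  # per-frame class scores
labels = logits.argmax(dim=1)                    # one action label per frame
```

Unlike clip classification, the model emits a full label sequence, so errors include not only misclassified segments but also imprecise temporal boundaries.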
Figure 1: Different paradigms for multi-source data fusion via (a) early fusion, (b) disentanglement of modality-shared and modality-specific representations (our model), and (c) late fusion; (d) an example from 50salads highlighting the shared and private information that can be extracted from video and accelerometer data. While both modalities can detect the activation of relevant tools and common motion cues, RGB videos additionally capture fundamental details about objects without acceleration sensors and their state (e.g., chopped tomatoes), the overall spatial configuration, and the localization of motion in the scene. Accelerometer signals, on the other hand, contain explicit and complementary information about the 3D fine motion patterns of activated objects and their co-occurrence. In the presence of noise (e.g., video occlusions) or other variability factors, some shared attributes could become part of the private space of the uncorrupted modality.
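The three paradigms in the figure can be contrasted with a short sketch; the module and function names below are hypothetical, and the shared/private projections are a minimal stand-in for the disentanglement the figure describes, under the assumption of per-frame RGB and accelerometer features.

```python
import torch
import torch.nn as nn

def early_fusion(rgb, acc, head):
    # (a) concatenate modality features up front, then run one shared model
    return head(torch.cat([rgb, acc], dim=1))

def late_fusion(rgb, acc, rgb_head, acc_head):
    # (c) run one model per modality, then merge the per-frame predictions
    return rgb_head(rgb) + acc_head(acc)

class SharedPrivateFusion(nn.Module):
    """(b) Disentangle a modality-shared space from modality-specific
    (private) spaces, then classify from all three representations."""

    def __init__(self, dv: int, da: int, d: int, num_classes: int):
        super().__init__()
        self.shared_v = nn.Conv1d(dv, d, 1)   # RGB -> shared space
        self.shared_a = nn.Conv1d(da, d, 1)   # accel -> shared space
        self.private_v = nn.Conv1d(dv, d, 1)  # RGB-only cues (object states, layout)
        self.private_a = nn.Conv1d(da, d, 1)  # accel-only cues (3D fine motion)
        self.classifier = nn.Conv1d(3 * d, num_classes, 1)

    def forward(self, rgb, acc):
        # Average the two projections as a simple proxy for a shared code.
        shared = 0.5 * (self.shared_v(rgb) + self.shared_a(acc))
        fused = torch.cat([shared, self.private_v(rgb), self.private_a(acc)], dim=1)
        return self.classifier(fused)

rgb, acc = torch.randn(1, 2048, 6000), torch.randn(1, 30, 6000)
model = SharedPrivateFusion(dv=2048, da=30, d=64, num_classes=19)
per_frame_logits = model(rgb, acc)  # (1, 19, 6000)
```

The point of (b) over (a) and (c) is that the private branches can retain modality-specific evidence (e.g., object state from RGB, fine motion from acceleration) that a single fused representation or independently trained heads would dilute.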