I. Introduction
Assuring safety in robot applications, especially in systems that involve humans, is of central importance. The development of such systems often hinges on finding a quantifiable measure that reflects the criticality of a situation at system run-time. For instance, a robot operating in an environment with humans must reduce its speed once the detection performance for humans or objects falls below a certain threshold. For industrial robot applications in particular, compliance with safety standards (e.g., ISO 13849) requires knowledge of the probability of dangerous failures per hour (PFDH). However, an appropriate metric that indicates the potential occurrence of such failures online and can be used directly to initiate the adaptation of robot parameters (e.g., speed reduction) is missing to date.
In this work, we present a novel system uncertainty quantification technique that combines measurement uncertainties with the probabilistic robustness of neural networks (NNs). At its core, we argue that undesired distortions in data, caused by technical limits, environmental disturbances, or a lack of system knowledge, are a dominant source of unexpected system behavior.
Overview: To derive a system uncertainty metric, we draw analogies between measurement uncertainties and the probabilistic robustness of neural networks. Building upon state-of-the-art literature, we assign practical meanings to all parameters.
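To make the run-time parameter adaptation mentioned above concrete, the following minimal sketch shows a threshold-based speed reduction driven by a scalar uncertainty metric. All names, thresholds, and speed values are illustrative assumptions and are not taken from this work or from ISO 13849.

# Hypothetical sketch of threshold-based speed adaptation at run-time.
# The metric is assumed to grow as perception (e.g., human/object detection)
# becomes less reliable; exceeding the threshold triggers a speed reduction.
# All constants below are illustrative, not normative values.

NOMINAL_SPEED = 1.0          # m/s, assumed nominal Cartesian speed
REDUCED_SPEED = 0.25         # m/s, assumed safe speed near humans
UNCERTAINTY_THRESHOLD = 0.2  # assumed limit on the uncertainty metric


def adapt_speed(uncertainty_metric: float) -> float:
    """Return the allowed robot speed for the current uncertainty metric."""
    if uncertainty_metric > UNCERTAINTY_THRESHOLD:
        return REDUCED_SPEED
    return NOMINAL_SPEED


if __name__ == "__main__":
    for m in (0.05, 0.18, 0.35):
        print(f"metric={m:.2f} -> allowed speed {adapt_speed(m):.2f} m/s")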