I. Introduction
As Artificial Intelligence (AI)-based systems permeate various aspects of modern life, from healthcare to autonomous vehicles, ensuring their safe operation in deployment has become a paramount challenge. Safety in AI-based systems implies endowing them with the capability to identify and withstand hazards of diverse natures, ranging from adversarial attacks, long-tail events, and distribution shifts to systemic risks and the inherent pitfalls of these systems, including machine ethics and their alignment with human goals and values [1]. Making AI-based systems robust against such hazards has attracted significant interest within the community in recent years [2], [3].