I. Introduction
Deep neural networks (DNNs) are known to be vulnerable to adversarial example attacks: by feeding a DNN a slightly perturbed input, an attacker can alter the prediction output. Such attacks can be fatal in safety-critical systems such as autonomous vehicles. A classifier is robust when it resists such attacks, i.e., as long as the perturbation is not too large (usually imperceptible to humans), the classifier produces the expected output regardless of the specific perturbation. A certifiably robust classifier is one whose prediction at any point $x$ is verifiably constant within some set around $x$.
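To make this notion concrete, one common formalization (a sketch; the choice of an $\ell_p$ norm and radius $\epsilon$ here is illustrative rather than specific to any particular certification method) certifies a classifier $f$ at a point $x$ if
$$\forall \delta \in \mathbb{R}^d,\quad \|\delta\|_p \le \epsilon \;\Longrightarrow\; f(x + \delta) = f(x),$$
that is, every input in the $\ell_p$ ball of radius $\epsilon$ around $x$ provably receives the same predicted label as $x$ itself.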