1. Introduction
Evasion attacks producing adversarial examples, i.e., slightly but strategically manipulated variants of benign samples that induce misclassifications, have emerged as a technically deep challenge posing risks to safety- and security-critical deployments of machine learning (ML) [6]. For example, adversaries may inconspicuously manipulate their appearance to circumvent face-recognition systems [15], [41], [42]. As another example, attackers may place seemingly innocuous stickers on traffic signs, leading traffic-sign recognition models to err [20]. Such adversarial examples have also become the de facto means for assessing ML models’ robustness (i.e., their ability to withstand inference-time attacks) in adversarial settings [6], [38]. Nowadays, numerous critical applications employ ML models on tabular data, including medical diagnosis, malware detection, fraud detection, and credit scoring [7]. Still, adversarial examples against such models remain underexplored.