I. Introduction
Explainability of machine learning classifiers is essential for using AI in high-stakes decision-making. In situations where explainability is critical, white-box models are needed to provide explanations that humans can interpret. One approach is to develop expert systems based on expert-generated logical rules [1]. For example, a white-box model concludes that an object is a car if it has four wheels and a handle. The elements of such logical rules are predicates: Boolean functions that return true if a particular condition holds for a given instance x (e.g., the predicate “x has four wheels” returns true if a taxi is given for x, but it returns false if a bicycle is given for x).
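As an illustration, the car rule above can be sketched as a conjunction of predicates. This is a minimal sketch, not from the paper; the attribute names (`wheels`, `handle`) and helper functions are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    wheels: int   # hypothetical attribute: number of wheels
    handle: bool  # hypothetical attribute: whether the object has a handle

def has_four_wheels(x: Instance) -> bool:
    # Predicate: true iff the condition holds for instance x
    return x.wheels == 4

def has_handle(x: Instance) -> bool:
    return x.handle

def is_car(x: Instance) -> bool:
    # Expert-generated rule: car(x) <- has_four_wheels(x) AND has_handle(x)
    return has_four_wheels(x) and has_handle(x)

taxi = Instance(wheels=4, handle=True)
bicycle = Instance(wheels=2, handle=True)

print(has_four_wheels(taxi))     # True
print(has_four_wheels(bicycle))  # False
print(is_car(taxi))              # True
```

Because each predicate is a human-readable condition, the rule's output can be traced back to exactly which conditions held, which is what makes such models white-box.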