I. Introduction
In our daily lives, Artificial Intelligence plays a significant role in our choices and actions. Intelligent systems can perceive, represent, and decide almost instantly and with very high accuracy. However, these models are often so complex that humans have difficulty understanding their internal behavior, which raises a problem of trust in the system from a human point of view, especially when human control can only be exercised a posteriori. Indeed, when Artificial Intelligence is used in high-risk environments where human lives may be at stake, it must be able to explain and justify its decisions, assumptions, and reasoning. Ideally, a form of symmetrical communication would arise between intelligent systems and humans, whether experts or not. The interaction between the system and end users can take advantage of progress in natural language processing: intelligent systems should offer a dialog in which they describe their reasoning precisely and explain each concept they use in a human-understandable way. The Fuzzy Inference System (FIS) seems to be a good candidate for the field of explainable Artificial Intelligence [1], as fuzzy logic can handle the vagueness of natural language and reason under uncertainty. Moreover, building on fuzzy logic, Zadeh introduced the "Computing with Words" (CWW) methodology [2].