I. Introduction
Numerical applications are ever more complex, and the amount of floating-point computation they perform keeps increasing. To design such applications, developers must choose among different formats. Indeed, the IEEE 754-2019 standard defines four binary floating-point formats, namely binary16, binary32, binary64, and binary128 [1]. Various non-IEEE formats also exist, such as bfloat16 or Posit. Yet most of these applications use the binary64 floating-point format, which is the most accurate format directly available in hardware on most modern architectures, and the other formats remain underused. This choice, very often made independently of the accuracy expected of the results, leads to an underuse of architectural features and, consequently, to a possible loss of performance.
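As an illustration of how the format choice affects result accuracy, the C sketch below (an example of ours, not drawn from any particular application) evaluates the same naive summation in binary32 (float) and binary64 (double); the binary32 result loses several significant digits, whereas binary64 may provide more accuracy than the application actually requires.

```c
#include <stdio.h>

/* Naive summation of the first n terms of the harmonic series,
 * evaluated in IEEE 754 binary32 (float) and binary64 (double).
 * Illustrative example only; not taken from a specific application. */
int main(void) {
    const int n = 10000000;
    float  sum32 = 0.0f;   /* binary32 accumulator */
    double sum64 = 0.0;    /* binary64 accumulator */

    for (int i = 1; i <= n; i++) {
        sum32 += 1.0f / (float)i;
        sum64 += 1.0  / (double)i;
    }

    /* binary32 carries roughly 7 significant decimal digits,
     * binary64 roughly 16; the printed values differ accordingly. */
    printf("binary32: %.7f\n",  sum32);
    printf("binary64: %.15f\n", sum64);
    return 0;
}
```

Whether the extra digits of binary64 are actually needed depends on the expected result accuracy, which is precisely the consideration that the default choice of binary64 tends to ignore.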