I. Introduction
Multiplying a floating-point number by a real constant (such as $\pi$ or $\ln(2)$), multiplying or dividing it by a correctly rounded function of one or more variables (such as $\sqrt{x}$, $x+y$, or $xy$), or dividing a constant or a correctly rounded function by a floating-point number are very frequent operations in numerical computing. When last-bit accuracy is desired, one can use specifically designed solutions (see for instance [1] for multiplication by a constant). Here, however, we are interested in the error of the straightforward approach. For instance, when a programmer writes the statement
\begin{equation*}
\mathtt{s = x * pi;}
\end{equation*}
he or she most probably wants to compute $x \cdot \pi$ as accurately as possible. However, the variable $\mathtt{pi}$ is already a floating-point approximation of the real number $\pi$, and, as a result, the error of this computation is larger than the rounding error of the floating-point multiplication alone. We are also interested in computations such as
\begin{equation*}
\mathtt{s = x / sqrt(y);}
\end{equation*}
or
\begin{equation*}
\mathtt{s = (x+y) * (z+t);}
\end{equation*}