
FPGA implementation of vedic floating point multiplier



Abstract:

Most scientific operations involve floating point computations, so it is necessary to implement faster multipliers that occupy less area and consume less power. Multipliers play a critical role in any digital design. Although various multiplication algorithms are in use, the performance of Vedic multipliers has not drawn wide attention. Vedic mathematics comprises the application of 16 sutras, or algorithms; of these, the Urdhva Tiryakbhyam sutra for multiplication has been considered in this work. An IEEE-754 based Vedic multiplier has been developed to carry out floating point operations in both single precision and double precision formats, and its performance has been compared with Booth and Karatsuba based floating point multipliers. A Xilinx FPGA has been used to implement these algorithms, and a comparison based on resource utilization and timing performance has also been made.
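The Urdhva Tiryakbhyam ("vertically and crosswise") pattern named in the abstract generates all partial products of equal-weight digit pairs in parallel columns, which is what makes it attractive for hardware. The sketch below is an illustrative software rendering of that column-sum-then-carry pattern, not the authors' HDL design; the function name and digit-list convention are assumptions for illustration.

```python
def urdhva_multiply(a, b, base=10):
    """Multiply two equal-length digit lists (least significant digit first)
    using the Urdhva Tiryakbhyam (vertical and crosswise) pattern."""
    n = len(a)
    # Column k collects every cross product a[i] * b[j] with i + j == k.
    # In hardware all columns are formed concurrently.
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a[i] * b[j]
    # A single carry-propagation pass turns column sums into result digits.
    digits, carry = [], 0
    for c in cols:
        total = c + carry
        digits.append(total % base)
        carry = total // base
    while carry:
        digits.append(carry % base)
        carry //= base
    return digits  # least significant digit first

# 12 * 34 = 408  ->  [8, 0, 4]
print(urdhva_multiply([2, 1], [4, 3]))
```

With base=2 the same column structure describes the binary mantissa multiplier; the column sums become the partial-product rows a hardware adder tree would compress.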
Date of Conference: 19-21 February 2015
Date Added to IEEE Xplore: 23 April 2015
Conference Location: Kozhikode, India

I. Introduction

Fixed point and floating point number representations are widely used in various applications, such as the design of Digital Signal Processors (DSPs). High speed computation with a high degree of accuracy is essential in a broad range of applications, from basic consumer electronics to sophisticated industrial instrumentation. Compared to a fixed point representation, floating point can represent both very small and very large numbers, thereby increasing the range of representation. Dynamic range and precision considerations determine whether a fixed point or floating point representation is to be used for a specific application; matrix inversion is an example where dynamic range requirements demand the use of floating point. Floating point arithmetic operations are extensively supported by microprocessors and computer systems, and among these operations, multiplication is the most frequently used in many applications. Efficient FPGA implementation of complex floating point functions requires efficient multiplication algorithms: an algorithm that facilitates optimized utilization of resources and minimum time delay must be used for effective implementation of floating point processors. The floating point multiplier is the most commonly used component in many digital applications such as digital filters, data processors and DSPs; floating point multiplications constitute about 37% of the floating point instructions in benchmark applications [1].
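An IEEE-754 floating point multiplication, whatever integer multiplier sits at its core, decomposes into three field-level steps: XOR the sign bits, add the biased exponents and subtract the bias, and multiply the significands with their implicit leading 1 restored, followed by normalization. The sketch below illustrates this decomposition for single precision; it is a simplified software model, not the paper's design, and it truncates guard bits rather than performing the full round-to-nearest-even, and ignores special cases (zero, infinity, NaN, subnormals).

```python
import struct

def fp32_multiply_fields(x, y):
    """Field-level model of IEEE-754 single precision multiplication:
    sign = XOR of signs, exponent = sum of biased exponents - 127,
    significand = product of the 24-bit significands (implicit 1 restored)."""
    bx = struct.unpack('>I', struct.pack('>f', x))[0]
    by = struct.unpack('>I', struct.pack('>f', y))[0]
    sign = (bx >> 31) ^ (by >> 31)
    exp = ((bx >> 23) & 0xFF) + ((by >> 23) & 0xFF) - 127
    man = ((bx & 0x7FFFFF) | 0x800000) * ((by & 0x7FFFFF) | 0x800000)
    # The 48-bit product lies in [1.0, 4.0) x 2^46; shift once when it
    # reaches [2.0, 4.0) and bump the exponent (normalization).
    if man & (1 << 47):
        man >>= 1
        exp += 1
    # Drop the low 23 guard bits (real hardware would round them).
    bits = (sign << 31) | ((exp & 0xFF) << 23) | ((man >> 23) & 0x7FFFFF)
    return struct.unpack('>f', struct.pack('>I', bits))[0]

print(fp32_multiply_fields(1.5, 2.0))   # 3.0
print(fp32_multiply_fields(0.25, 8.0))  # 2.0
```

The 24x24-bit significand product is the costly step, and it is exactly this block that the paper replaces with a Vedic (Urdhva Tiryakbhyam) multiplier and compares against Booth and Karatsuba alternatives.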

References

References are not available for this document.