
Hardware-Efficient Logarithmic Floating-Point Multipliers for Error-Tolerant Applications

  • Author / Creator
    Niu, Zijing
The deployment of computation-intensive applications, such as digital signal processing (DSP) and machine learning (ML), on resource-constrained devices is spurring research into increasing the power efficiency of the arithmetic circuits that perform the bulk of the computation.
    Approximate computing is one of the emerging strategies for reaching this objective.
    Neural networks (NNs) involve extensive multiply-and-accumulate computations, especially during training, where iterative computation incurs significant energy consumption. DSP applications, such as image processing, also demand intensive arithmetic computation. In this research project, we focus on the design of hardware-efficient floating-point (FP) multipliers for the energy-efficient implementation of error-tolerant, computation-intensive applications using approximate computing techniques.

As an alternative to the conventional FP representation, the logarithmic representation of FP numbers has been considered for accelerating NNs. We show that the FP representation is naturally suited to the binary logarithm of numbers and, thus, to logarithmic arithmetic. In this thesis, two logarithmic approximation methods are proposed to generate double-sided error distributions that mitigate the accumulation of the errors introduced in the logarithm and anti-logarithm conversions. Hardware-efficient logarithmic FP multipliers are then built from simple operators, such as adders and multiplexers, in place of complex conventional FP multipliers (a generic sketch of the underlying idea follows the abstract). The radix-4 logarithm is considered to further reduce hardware complexity.
    The proposed multipliers provide superior trade-offs between accuracy and hardware cost, with up to 30.8% higher accuracy than a recent logarithmic FP design or up to 68x lower energy consumption than a conventional FP multiplier.
    Using the proposed logarithmic FP multipliers in JPEG image compression achieves higher image quality than the recent multiplier design, with a peak signal-to-noise ratio (PSNR) up to 4.7 dB larger. For training benchmark NN applications, including a 922-neuron model for the MNIST dataset, the proposed FP multipliers slightly improve the classification accuracy while using 4.2x less energy and 2.2x less area than a state-of-the-art design.
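
    For context, the sketch below illustrates the classic Mitchell approximation (log2(1+m) ≈ m for a mantissa fraction m in [0, 1)) that logarithmic multipliers of this kind build on: the multiplication reduces to an exponent addition plus a mantissa addition, with no partial-product array. This is a minimal Python illustration of that generic technique, not the design proposed in the thesis; the helper name mitchell_multiply and the decomposition via frexp are ours.

        import math

        def mitchell_multiply(a: float, b: float) -> float:
            """Approximate a * b with Mitchell's logarithmic method:
            for x = 2**e * (1 + m) with m in [0, 1), log2(x) ~= e + m,
            so a multiplication reduces to two fixed-point additions."""
            if a == 0.0 or b == 0.0:
                return 0.0
            sign = math.copysign(1.0, a) * math.copysign(1.0, b)
            # Decompose |x| = 2**e * (1 + m); frexp gives |x| = f * 2**k
            # with f in [0.5, 1), which we renormalize to the 1+m form.
            fa, ka = math.frexp(abs(a))
            fb, kb = math.frexp(abs(b))
            ma, ea = 2.0 * fa - 1.0, ka - 1
            mb, eb = 2.0 * fb - 1.0, kb - 1
            # "Multiply" in the log domain: add exponents and mantissa fractions.
            e, s = ea + eb, ma + mb
            # Approximate antilog 2**s ~= 1 + s; a mantissa sum >= 1 simply
            # carries into the exponent instead of needing a normalization shift.
            if s >= 1.0:
                e, s = e + 1, s - 1.0
            return sign * math.ldexp(1.0 + s, e)

        print(mitchell_multiply(3.5, 2.75))  # 9.0, vs. the exact product 9.625

    Note that this classic approximation never overestimates a product, since (1 + ma)(1 + mb) >= 1 + ma + mb. Such single-sided errors accumulate across repeated multiply-and-accumulate steps, which is precisely the effect the thesis's double-sided error distributions are designed to mitigate.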

  • Subjects / Keywords
  • Graduation date
    Spring 2023
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-0zt1-zw50
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.