ASYNCHRONOUS ACCUMULATOR USING LOGARITHMIC-BASED ARITHMETIC - simplified abstract of US patent application 20240311626

Organization Name
Nvidia Corporation

Inventor(s)
William James Dally of Incline Village NV (US)
Rangharajan Venkatesan of San Jose CA (US)
Brucek Kurdo Khailany of Austin TX (US)
Stephen G. Tell of Chapel Hill NC (US)
ASYNCHRONOUS ACCUMULATOR USING LOGARITHMIC-BASED ARITHMETIC - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240311626, titled 'ASYNCHRONOUS ACCUMULATOR USING LOGARITHMIC-BASED ARITHMETIC'.
The abstract of the patent application describes a method for performing addition directly on logarithmic format values in neural networks, a format that is more energy efficient for multiplication than integer, fixed-point, or floating-point formats.
- Multiplication of logarithmic format values is straightforward and energy efficient: the exponents are simply added.
- Addition of logarithmic format values is traditionally complex, requiring conversion to integers, integer addition, and conversion back to logarithmic format.
- The innovation adds logarithmic format values by decomposing each exponent into quotient and remainder components, sorting the quotient components by remainder, summing the sorted quotient components with an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder factors to produce the final sum (see the sketch below).
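To make the flow concrete, here is a minimal Python sketch of the decomposition-based addition path. The fractional-bit width F, the function names, and the dictionary standing in for per-remainder asynchronous accumulators are all illustrative assumptions, not the patent's implementation; the sketch also assumes positive values with non-negative fixed-point exponents.

```python
from collections import defaultdict

F = 2  # assumed: fractional bits in the fixed-point log2 exponent


def log_mul(e1, e2):
    """Multiply two log-format values: just add their exponents."""
    return e1 + e2


def log_add(exponents):
    """Sum log-format values, each encoding 2**(e / 2**F).

    Decompose e = q * 2**F + r, bucket by remainder r, accumulate the
    integer contributions 2**q per bucket (a shift-and-add, standing in
    for the patent's asynchronous accumulator), then scale each partial
    sum by its constant remainder factor 2**(r / 2**F) and combine.
    """
    partials = defaultdict(int)
    for e in exponents:
        q, r = divmod(e, 2 ** F)   # quotient / remainder decomposition
        partials[r] += 1 << q      # integer accumulation of 2**q
    return sum(p * 2 ** (r / 2 ** F) for r, p in partials.items())


# Example: exponents 5 and 6 encode 2**1.25 and 2**1.5 when F = 2.
print(log_add([5, 6]))        # ~5.2068
print(2 ** 1.25 + 2 ** 1.5)   # same value, computed directly
```

Because the remainder r can take only 2**F distinct values, the per-bucket work reduces to integer shift-and-add, and only the final combining step needs the 2**F constant multipliers.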
Potential Applications
- This technology can be applied in neural networks to improve energy efficiency and performance in convolution layers.
- It can be used in machine learning applications that require efficient multiplication and addition operations.
Problems Solved
- Simplifies addition operations on logarithmic format values in neural networks.
- Improves energy efficiency in convolution layers by optimizing multiplication and addition.
Benefits
- Energy efficiency in neural networks.
- Improved performance in convolution layers.
- Simplified addition operations for logarithmic format values.
Commercial Applications
Energy-efficient logarithmic format addition can be commercialized in AI hardware development, data centers, and machine learning applications to enhance performance and reduce energy consumption.
Prior Art
Further research can be conducted on logarithmic format operations in neural networks to explore prior art related to this technology.
Frequently Updated Research
Stay updated on advancements in logarithmic format operations in neural networks to leverage the latest innovations in energy-efficient computing.
Questions about Logarithmic Format Addition for Neural Networks
1. How does the use of logarithmic format values impact the overall energy efficiency of neural networks?
2. What are the potential challenges in implementing this innovation in existing neural network architectures?
Original Abstract Submitted
Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components using an asynchronous accumulator to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
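The abstract's final step, converting the sum back into the logarithmic format, amounts to re-encoding a linear value as a fixed-point log2 exponent. The helpers below are a hypothetical sketch of that round trip, reusing the assumed fractional-bit width F from the earlier example; the rounding in to_log_format is where the conversion becomes lossy.

```python
import math

F = 2  # assumed fractional bits, matching the earlier sketch


def to_log_format(x):
    """Encode a positive linear value as a fixed-point log2 exponent.

    Rounds log2(x) to the nearest multiple of 1 / 2**F; this rounding
    is the lossy step when a computed sum re-enters the log format.
    """
    return round(math.log2(x) * 2 ** F)


def from_log_format(e):
    """Decode a fixed-point log2 exponent back to a linear value."""
    return 2 ** (e / 2 ** F)


# Round trip: the sum ~5.2068 from the earlier example encodes to
# exponent round(log2(5.2068) * 4) = 10, which decodes to 2**2.5
# ~ 5.657, showing the quantization error introduced by re-encoding.
e = to_log_format(5.2068)
print(e, from_log_format(e))
```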