18537570. NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC simplified abstract (NVIDIA Corporation)
Contents
- 1 NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC
Organization Name
NVIDIA Corporation
Inventor(s)
William James Dally of Incline Village, CA (US)
Rangharajan Venkatesan of San Jose, CA (US)
Brucek Kurdo Khailany of Austin, TX (US)
NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC - A simplified explanation of the abstract
This abstract first appeared for US patent application 18537570, titled 'NEURAL NETWORK ACCELERATOR USING LOGARITHMIC-BASED ARITHMETIC'.
Simplified Explanation
The abstract describes a method for efficiently adding logarithmic-format values, which is particularly useful in neural networks with convolution layers. Instead of converting the values to integers for addition, the method decomposes each exponent into quotient and remainder components, sorts the quotient components by their remainders, and then performs the addition by multiplying partial sums by the remainder components.
- Efficient addition of logarithmic format values in neural networks:
- Decompose exponents into quotient and remainder components
- Sort quotient components based on remainder components
- Sum sorted quotient components to produce partial sums
- Multiply partial sums by remainder components to obtain the final sum
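The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the patented hardware design: the field width `frac_bits` and the encoding `value = 2**(e / 2**frac_bits)` are assumptions chosen to make the quotient/remainder decomposition concrete, and grouping by remainder stands in for the sorting step.

```python
def log_sum(exponents, frac_bits=2):
    """Sum values encoded in log format as 2**(e / 2**frac_bits),
    using quotient/remainder decomposition (illustrative sketch)."""
    # 1. Decompose each exponent e into quotient q and remainder r,
    #    so that 2**(e/2**f) == 2**q * 2**(r/2**f).
    buckets = {}  # remainder -> list of quotients
    for e in exponents:
        q = e >> frac_bits
        r = e & ((1 << frac_bits) - 1)
        buckets.setdefault(r, []).append(q)
    # 2./3. Group (sort) quotients by remainder and sum the 2**q terms;
    #    each partial sum is pure integer shift-and-add.
    # 4. Multiply each partial sum by its remainder factor 2**(r/2**f)
    #    (only 2**frac_bits distinct factors exist) and accumulate.
    total = 0.0
    for r, qs in buckets.items():
        partial = sum(1 << q for q in qs)
        total += partial * 2.0 ** (r / (1 << frac_bits))
    return total
```

Because all values sharing a remainder differ only by integer powers of two, their partial sum needs no multiplier at all, which is where the energy saving comes from.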
Potential Applications
The technology can be applied in various fields, such as:
- Signal processing
- Image recognition
- Speech recognition
Problems Solved
The technology addresses the following issues:
- Energy efficiency in neural networks
- Complex addition operations in logarithmic format
Benefits
The benefits of this technology include:
- Improved energy efficiency
- Faster computation in neural networks
Potential Commercial Applications
Optimized addition of logarithmic format values can be utilized in:
- AI hardware development
- Edge computing devices
Possible Prior Art
Potential prior art includes the use of logarithmic number systems in signal processing applications to improve efficiency.
Unanswered Questions
How does this method compare to traditional integer addition in terms of speed and accuracy?
The method of adding logarithmic format values is more energy-efficient, but its speed and accuracy relative to traditional integer-addition approaches still need to be evaluated.
Are there any limitations to implementing this method in real-world applications?
It is crucial to understand any potential challenges or constraints that may arise when implementing this method in practical neural network systems.
Original Abstract Submitted
Neural networks, in many cases, include convolution layers that are configured to perform many convolution operations that require multiplication and addition operations. Compared with performing multiplication on integer, fixed-point, or floating-point format values, performing multiplication on logarithmic format values is straightforward and energy efficient as the exponents are simply added. However, performing addition on logarithmic format values is more complex. Conventionally, addition is performed by converting the logarithmic format values to integers, computing the sum, and then converting the sum back into the logarithmic format. Instead, logarithmic format values may be added by decomposing the exponents into separate quotient and remainder components, sorting the quotient components based on the remainder components, summing the sorted quotient components to produce partial sums, and multiplying the partial sums by the remainder components to produce a sum. The sum may then be converted back into the logarithmic format.
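The conventional baseline the abstract contrasts against, convert to linear, add, convert back, can be sketched as follows. The encoding `value = 2**(e / 2**frac_bits)` is the same illustrative assumption used above, not the format specified in the application.

```python
import math

def log_add_conventional(exponents, frac_bits=2):
    """Baseline described in the abstract: convert log-format values
    to linear, compute the sum, then re-encode it in log format."""
    scale = 1 << frac_bits
    # Convert each exponent to its linear value and add.
    linear_sum = sum(2.0 ** (e / scale) for e in exponents)
    # Convert back: nearest exponent in the same fixed-point log format.
    return round(math.log2(linear_sum) * scale)
```

Each call pays for a full log/antilog conversion per operand, which is the cost the quotient/remainder method avoids.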