Apple Inc. (20240329929). PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR simplified abstract
Contents
- 1 PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Commercial Applications
- 1.9 Questions about the Technology
- 1.10 Original Abstract Submitted
PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR
Organization Name
Apple Inc.
Inventor(s)
Lei Wang of San Carlos, CA (US)
Kenneth W. Waters of San Jose, CA (US)
Michael L. Liu of Palo Alto, CA (US)
Ji Liang Song of Cupertino, CA (US)
Youchang Kim of Cupertino, CA (US)
PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240329929 titled 'PROCESSING OF ASYMMETRICALLY QUANTIZED INPUT AND KERNEL COEFFICIENTS IN NEURAL NETWORK PROCESSOR'.
Simplified Explanation
The patent application describes a method for performing multiply-accumulator operations on asymmetrically quantized input data and kernel data in a neural processor. Instead of adjusting each input value at the multiply-accumulator to account for its asymmetric quantization, an adjusted bias is computed beforehand and stored in the multiply-accumulator. Kernel coefficients derived from the kernel data are adjusted at the multiply-accumulator to account for their asymmetric quantization. This reduces computational complexity and increases the efficiency of convolution operations (a code sketch follows the list below).
- Adjusted bias, accounting for the input data's asymmetric quantization, is computed beforehand and stored in the multiply-accumulator.
- Kernel coefficients are adjusted at the multiply-accumulator to account for their asymmetric quantization.
- Computational complexity is reduced and convolution operations run more efficiently.
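Below is a minimal NumPy sketch of how such bias folding could work. The function names, tensor shapes, and quantization parameters are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def precompute_adjusted_bias(q_kernel, z_kernel, z_input, q_bias):
    """Fold the input zero-point term into the bias ahead of time.

    sum_k (q_w[k] - z_w) * (q_x[k] - z_x)
      = sum_k (q_w[k] - z_w) * q_x[k]  -  z_x * sum_k (q_w[k] - z_w)
    The second term depends only on the (known) kernel and the input
    zero-point, so it can be computed once per output channel.
    """
    kernel_sum = (q_kernel.astype(np.int32) - z_kernel).sum(axis=1)
    return q_bias - z_input * kernel_sum

def mac_asymmetric(q_input, q_kernel, z_kernel, adjusted_bias):
    """Multiply-accumulate where only the kernel is adjusted (q_w - z_w);
    the input is used as-is because its zero-point is already in the bias."""
    adj_kernel = q_kernel.astype(np.int32) - z_kernel
    return adj_kernel @ q_input.astype(np.int32) + adjusted_bias

# Toy example with made-up quantization parameters.
rng = np.random.default_rng(0)
q_x = rng.integers(0, 256, size=16, dtype=np.uint8)          # asymmetric uint8 input
q_w = rng.integers(-128, 128, size=(4, 16), dtype=np.int8)   # int8 kernel, 4 output channels
z_x, z_w = 128, 3                                            # assumed zero-points
bias = np.zeros(4, dtype=np.int32)

adj_bias = precompute_adjusted_bias(q_w, z_w, z_x, bias)
acc = mac_asymmetric(q_x, q_w, z_w, adj_bias)

# Reference: adjust both operands at MAC time; results must match.
ref = (q_w.astype(np.int32) - z_w) @ (q_x.astype(np.int32) - z_x) + bias
assert np.array_equal(acc, ref)
```

Because the adjusted bias is computed once ahead of time and stored in the accumulator, the per-element subtraction of the input zero-point disappears from the inner loop, which is the efficiency gain the abstract describes.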
Potential Applications
This technology can be applied in various fields such as artificial intelligence, machine learning, image and signal processing, and neural network applications.
Problems Solved
1. Reduces computational complexity associated with asymmetric quantization.
2. Increases efficiency of convolution operations in neural processors.
Benefits
1. Improved performance in neural processors.
2. Enhanced accuracy in multiply-accumulator operations.
3. Reduction in computational complexity.
Commercial Applications
This technology can be utilized in industries such as computer vision, autonomous vehicles, robotics, and IoT devices to enhance the efficiency and accuracy of neural processing tasks.
Questions about the Technology
How does this technology improve the efficiency of convolution operations in neural processors?
The technology precomputes an adjusted bias that accounts for the asymmetric quantization of the input data and adjusts only the kernel coefficients at the multiply-accumulator, reducing the work done per operation and increasing the efficiency of multiply-accumulator operations (see the decomposition below).
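In standard asymmetric-quantization notation (an illustration, not notation taken from the filing), the accumulation for one output decomposes so that the input zero-point term becomes a constant that can be folded into the bias ahead of time:

$$\sum_{k}\bigl(q_{w,k}-z_{w}\bigr)\bigl(q_{x,k}-z_{x}\bigr)=\sum_{k}\bigl(q_{w,k}-z_{w}\bigr)\,q_{x,k}-z_{x}\sum_{k}\bigl(q_{w,k}-z_{w}\bigr)$$

Only the kernel term (q_w − z_w) is adjusted per multiply; the input operand q_x is used unmodified, which is where the runtime savings come from.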
What are the potential applications of this technology beyond neural processors?
Beyond dedicated neural processors, the same bias-folding approach applies wherever asymmetrically quantized convolutions are computed, for example in computer vision, image and signal processing, autonomous vehicles, robotics, and IoT devices.
Original Abstract Submitted
Embodiments relate to performing multiply-accumulator operation on asymmetrically quantized input data and kernel data in a neural processor. Instead of adjusting to the input data at a multiply-accumulator to account for the asymmetric quantization of the input data, an adjusted bias for the multiply-accumulator operation is computed beforehand and stored in the multiply-accumulator. On the other hand, kernel coefficients derived from the kernel data are adjusted at the multiply-accumulator to account for the asymmetric quantization. In this way, computational complexity associated with asymmetric quantization may be reduced while increasing the efficiency of the convolution operations at the neural processor.