17986303. METHOD AND APPARATUS WITH QUANTIZATION SCHEME IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)
METHOD AND APPARATUS WITH QUANTIZATION SCHEME IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK - A simplified explanation of the abstract
This abstract first appeared for US patent application 17986303, titled 'METHOD AND APPARATUS WITH QUANTIZATION SCHEME IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK'.
Simplified Explanation
The patent application describes a method and apparatus for implementing a neural network quantization scheme using a processor. Here are the key points:
- The method receives input data corresponding to a first M-dimensional vector and a weight parameter corresponding to a second M-dimensional vector.
- The input data is encoded into first bit streams, each having "N" layers, using a predetermined quantization scheme.
- The weight parameter is encoded into second bit streams, each having "N" layers, using the same quantization scheme.
- The corresponding first and second bit streams are applied to a binary neural network (BNN) operator.
- For each combination of a layer of the first bit streams and a layer of the second bit streams, the BNN operation result is shifted by the corresponding number of bits and accumulated to produce a dot product result.
- Finally, the dot product result is quantized using the same quantization scheme.
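The steps above can be sketched in code. This is a minimal illustration, not the patent's actual scheme: it assumes a simple unsigned bit-plane encoding, uses an elementwise AND plus popcount as a stand-in for the BNN operation, and the function names (`encode_bit_planes`, `bnn_dot`) are our own.

```python
import numpy as np

def encode_bit_planes(q, n_bits):
    """Decompose an unsigned integer vector into n_bits binary planes.

    Plane i holds bit i of every element, so q == sum_i 2**i * planes[i].
    Each plane is one "layer" of the bit stream.
    """
    return [((q >> i) & 1) for i in range(n_bits)]

def bnn_dot(x_planes, w_planes):
    """Dot product via binary operations, shift, and accumulate.

    For each (i, j) combination of layers, the elementwise AND of the two
    0/1 planes plays the role of the binary multiply; its popcount is
    shifted left by i + j bits and accumulated into the result.
    """
    acc = 0
    for i, xp in enumerate(x_planes):
        for j, wp in enumerate(w_planes):
            binary_result = int(np.sum(xp & wp))  # popcount of the AND
            acc += binary_result << (i + j)       # shift and accumulate
    return acc

# Toy example with M = 4 elements and N = 3 layers per bit stream.
N = 3
x = np.array([5, 3, 7, 1], dtype=np.uint8)
w = np.array([2, 6, 1, 4], dtype=np.uint8)
result = bnn_dot(encode_bit_planes(x, N), encode_bit_planes(w, N))
# matches the ordinary integer dot product: 5*2 + 3*6 + 7*1 + 1*4 = 39
```

Because each plane is binary, every inner step needs only single-bit logic, which is what makes a BNN operator applicable to multi-bit data.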
Potential applications of this technology:
- Artificial intelligence and machine learning systems
- Image and speech recognition
- Natural language processing
- Robotics and automation
Problems solved by this technology:
- Efficient representation and processing of neural network data
- Reduction in memory and computational requirements
- Improved performance and speed of neural network operations
Benefits of this technology:
- Improved efficiency and speed in neural network processing
- Reduced memory and computational requirements
- Enhanced performance and accuracy in AI and machine learning applications
Original Abstract Submitted
A processor-implemented artificial neural network quantization scheme implementation method and apparatus are provided. The method includes receiving input data corresponding to a first M-dimensional vector, receiving a weight parameter corresponding to a second M-dimensional vector, encoding the input data into first bit streams, each having “N” layers, with a predetermined quantization scheme, encoding the weight parameter into second bit streams, each having “N” layers, with the quantization scheme, applying corresponding first and second bit streams to a binary neural network operator, for each of possible combinations between layers of the first bit streams and layers of the second bit streams, receiving a dot product result output based on a result obtained by shifting a BNN operation result corresponding to each of the combinations by a number of corresponding bits and accumulating the shifted BNN operation result, from the BNN operator, and quantizing the dot product result using the quantization scheme.
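The shift-and-accumulate step in the abstract can be written out as follows. This reading assumes an unsigned bit-plane encoding; the symbols b_i and c_j (the i-th and j-th binary layers of the input and weight vectors) are our notation, not the patent's:

```latex
\mathbf{x} = \sum_{i=0}^{N-1} 2^{i}\,\mathbf{b}_i, \qquad
\mathbf{w} = \sum_{j=0}^{N-1} 2^{j}\,\mathbf{c}_j
\;\;\Longrightarrow\;\;
\mathbf{x}\cdot\mathbf{w}
  = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} 2^{\,i+j}\,(\mathbf{b}_i \cdot \mathbf{c}_j)
```

Each term b_i · c_j is a dot product of binary vectors that a BNN operator can compute directly, and the factor 2^(i+j) is realized as a left shift by i + j bits before accumulation.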