17883844. METHOD AND DEVICE WITH NEURAL NETWORK MODEL simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)
METHOD AND DEVICE WITH NEURAL NETWORK MODEL - A simplified explanation of the abstract
This abstract first appeared for US patent application 17883844, titled 'METHOD AND DEVICE WITH NEURAL NETWORK MODEL'.
Simplified Explanation
The abstract describes a device for driving a neural network. The device includes several hardware components: a comparator, an exclusive-NOR (XNOR) gate, an accumulator, and a multiplication and accumulation (MAC) operator. These components drive a basic block of the neural network, which performs a sequence of operations: a first batch normalization, quantization (a sign function), bitwise convolution, activation, a second batch normalization, and a residual connection.
- The device includes a comparator, XNOR gate, accumulator, and MAC operator.
- The neural network system consists of basic blocks.
- Each basic block performs a sequence of operations: a first batch normalization, quantization (a sign function), bitwise convolution, activation, a second batch normalization, and a residual connection.
- The device drives the basic block by performing these operations in this order.
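As a rough illustration, the sequence of operations above might look like the following NumPy sketch. Everything concrete here is an assumption, not a detail from the application: the layer shapes, the ReLU activation, and the use of a ±1 matrix multiply standing in for a true bitwise convolution.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Per-channel batch normalization (batch statistics used here for brevity).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def sign_quantize(x):
    # Sign function: quantize values to {-1, +1} (the comparator's role).
    return np.where(x >= 0, 1.0, -1.0)

def basic_block(x, w_bin, gamma1, beta1, gamma2, beta2):
    # Hypothetical forward pass of the basic block described in the abstract:
    # BN -> sign -> bitwise (binary) convolution -> activation -> BN -> residual.
    h = batch_norm(x, gamma1, beta1)      # first batch normalization
    h = sign_quantize(h)                  # quantization layer
    h = h @ w_bin                         # binary "convolution" of +/-1 values
    h = np.maximum(h, 0.0)                # activation (ReLU assumed)
    h = batch_norm(h, gamma2, beta2)      # second batch normalization
    return h + x                          # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w_bin = sign_quantize(rng.standard_normal((8, 8)))
y = basic_block(x, w_bin, np.ones(8), np.zeros(8), np.ones(8), np.zeros(8))
```

In hardware, the ±1 matrix product would be carried out with the XNOR gate and accumulator rather than full-precision multipliers, which is what makes this block cheap to drive.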
Potential applications of this technology:
- Artificial intelligence and machine learning systems
- Image and speech recognition
- Natural language processing
- Autonomous vehicles
- Robotics
Problems solved by this technology:
- Efficient execution of neural network operations
- Improved accuracy and performance of AI systems
- Reduced computational resources required for neural network operations
Benefits of this technology:
- Faster and more efficient neural network processing
- Reduced power consumption and computational resource requirements
- Potential for real-time processing in applications such as autonomous vehicles
Original Abstract Submitted
A device includes: a comparator; an exclusive-NOR (XNOR) gate; an accumulator; and a multiplication and accumulation (MAC) operator, wherein a basic block of a neural network comprises a first batch normalization layer, a quantization layer, a convolution layer, an active layer, and a second batch normalization layer, and wherein the basic block is driven by the device by a combination of a first batch normalization operation, a sign function operation, a bitwise convolution operation, an activation function operation, a second batch normalization operation, and a residual connection operation.
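The XNOR gate and accumulator named in the abstract typically replace multiply-accumulate arithmetic for sign-quantized values: XNOR marks the positions where two ±1 operands agree, and a popcount (accumulation) turns the agreement count into a dot product. A minimal sketch of that identity, encoding +1 as bit 1 and -1 as bit 0 (the function name and encoding are hypothetical, not taken from the application):

```python
def xnor_popcount_dot(a_bits, b_bits, n):
    # a_bits, b_bits: integers whose low n bits encode +/-1 vectors (1 -> +1, 0 -> -1).
    # XNOR (complemented XOR) sets a bit wherever the two signs agree.
    matches = (~(a_bits ^ b_bits)) & ((1 << n) - 1)
    agree = bin(matches).count("1")   # popcount of agreeing positions
    # Dot product of the +/-1 vectors: agreements minus disagreements.
    return 2 * agree - n

# Example: a = [+1, -1, +1], b = [+1, +1, -1] -> dot product = -1
result = xnor_popcount_dot(0b101, 0b110, 3)
```

Because one XNOR-plus-popcount step processes an entire word of weights at once, this is far cheaper than the equivalent sequence of full-precision multiplications.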