17883817. METHOD AND DEVICE WITH CALCULATION FOR DRIVING NEURAL NETWORK MODEL simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

From WikiPatents

METHOD AND DEVICE WITH CALCULATION FOR DRIVING NEURAL NETWORK MODEL

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Sungjoo Yoo of Seoul (KR)

Seungyeop Kang of Seoul (KR)

METHOD AND DEVICE WITH CALCULATION FOR DRIVING NEURAL NETWORK MODEL - A simplified explanation of the abstract

This abstract first appeared for US patent application 17883817, titled 'METHOD AND DEVICE WITH CALCULATION FOR DRIVING NEURAL NETWORK MODEL'.

Simplified Explanation

The patent application describes a device that drives a neural network model by performing operations on basic blocks and transition blocks of the model. The device includes processors that perform batch normalization, quantization, convolution, activation function application, and batch normalization again to generate output data.

  • The device includes one or more processors that drive a neural network model.
  • The processors perform a first operation on basic blocks and a second operation on transition blocks of the model.
  • The first operation applies first batch normalization to the input data, quantizes the normalized data, and performs a convolution on the quantized data.
  • The output data is determined by applying an activation function to the result of the convolution, after which second batch normalization is applied to that output.
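The pipeline described above can be sketched in a few lines of NumPy. This is only an illustrative toy model, not the patented implementation: the kernel, the symmetric int8-style quantization, the 1-D convolution, and the ReLU activation are all assumptions chosen for simplicity, and the function names (`batch_norm`, `quantize`, `conv1d`, `first_operation`) are hypothetical.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature across the batch dimension (axis 0).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

def quantize(x, scale=127.0):
    # Assumed symmetric int8-style quantization: round to integer levels.
    return np.clip(np.round(x * scale), -128, 127)

def conv1d(x, kernel):
    # Valid-mode 1-D convolution applied to each row of the batch.
    return np.stack([np.convolve(row, kernel, mode="valid") for row in x])

def relu(x):
    # Assumed activation function; the abstract does not name one.
    return np.maximum(x, 0.0)

def first_operation(x, kernel):
    # Pipeline from the abstract: first batch normalization, quantization,
    # convolution, activation, then second batch normalization.
    x = batch_norm(x)
    x = quantize(x)
    x = conv1d(x, kernel)
    out = relu(x)
    return batch_norm(out)
```

For a batch of four length-10 inputs and a length-3 kernel, `first_operation` returns a (4, 8) array, since a valid-mode convolution shortens each row by the kernel length minus one.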

Potential Applications

  • Artificial intelligence and machine learning systems
  • Image and speech recognition
  • Natural language processing
  • Autonomous vehicles
  • Robotics

Problems Solved

  • Efficiently driving a neural network model
  • Improving the performance and accuracy of neural networks
  • Reducing computational complexity and memory requirements

Benefits

  • Faster and more accurate neural network processing
  • Improved efficiency and resource utilization
  • Reduced memory usage and computational complexity
  • Enhanced performance in various applications


Original Abstract Submitted

A device includes: one or more processors configured to perform a first operation for driving one or more basic blocks of a neural network model and a second operation for driving one or more transition blocks of the neural network model to drive the neural network model, wherein, for the performing of the first operation, the one or more processors are configured to: perform first batch normalization on input data; quantize the first batch normalized input data; perform a convolution operation based on the quantized input data; determine output data by applying an activation function to a result of the convolution operation; and perform the first operation by performing second batch normalization on the output data.