18489209. METHOD AND DEVICE WITH NEURAL NETWORK IMPLEMENTATION simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

METHOD AND DEVICE WITH NEURAL NETWORK IMPLEMENTATION

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Hyeongseok Yu of Seoul (KR)

Hyeonuk Sim of Iksan-si (KR)

Jongeun Lee of Ulsan (KR)

METHOD AND DEVICE WITH NEURAL NETWORK IMPLEMENTATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18489209 titled 'METHOD AND DEVICE WITH NEURAL NETWORK IMPLEMENTATION'.

Simplified Explanation

The abstract describes a neural network device made up of an on-chip buffer memory, a computational circuit, and a controller. The on-chip buffer memory stores the input feature map of a first layer of a neural network. The computational circuit reads that input feature map through a single port of the buffer, performs a neural network operation on it, and produces the first layer's output feature map. The controller then writes the output feature map back through the same single port, so that the input and output feature maps of the first layer reside together in the on-chip buffer memory (a minimal sketch of this data flow follows the list below).

  • The device includes an on-chip buffer memory that stores both the input and output feature maps of a neural network layer.
  • A computational circuit reads the input feature map through a single port of the buffer and performs a neural network operation to generate an output feature map.
  • The controller transmits the output feature map back to the on-chip buffer memory through the same single port.
  • The input and output feature maps of the first layer are therefore stored together in the on-chip buffer memory.
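Below is a minimal, illustrative Python sketch of this single-port data flow. It is not the patented implementation; the SinglePortBuffer class, the compute_layer function, and the region names are hypothetical stand-ins used only to show how a layer's input and output feature maps can share one buffer that is accessed through a single port.

# Illustrative sketch only; class, function, and region names are hypothetical.
import numpy as np


class SinglePortBuffer:
    """Toy model of an on-chip buffer where every read or write goes through one port."""

    def __init__(self):
        self._storage = {}       # region name -> feature map
        self.port_transfers = 0  # counts traffic through the single port

    def read(self, region):
        self.port_transfers += 1            # one transfer through the single port
        return self._storage[region]

    def write(self, region, data):
        self.port_transfers += 1            # written back through the same port
        self._storage[region] = data

    def regions(self):
        return list(self._storage.keys())


def compute_layer(input_fmap, weights):
    """Stand-in for the computational circuit: matrix multiply plus ReLU."""
    return np.maximum(input_fmap @ weights, 0.0)


# Controller-style flow: read the first layer's input through the port,
# compute, then store the output alongside the input in the same buffer.
fmap_buffer = SinglePortBuffer()
fmap_buffer.write("layer1_input", np.random.rand(8, 16).astype(np.float32))

weights = np.random.rand(16, 16).astype(np.float32)
input_fmap = fmap_buffer.read("layer1_input")
output_fmap = compute_layer(input_fmap, weights)
fmap_buffer.write("layer1_output", output_fmap)

print(fmap_buffer.regions())        # ['layer1_input', 'layer1_output'] co-resident
print(fmap_buffer.port_transfers)   # 3 transfers, all through the single port

In a real accelerator the buffer would be physical on-chip memory and the port a hardware read/write channel; the point of the sketch is only that every transfer, including writing the output back, goes through the same single port while both feature maps stay resident on-chip.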

Potential applications of this technology:

  • Accelerating neural network computations by storing feature maps on-chip, reducing the need for off-chip memory access.
  • Enabling real-time processing of large neural networks by minimizing data transfer between memory and computational circuits.

Problems solved by this technology:

  • Reducing memory access latency by storing feature maps on-chip, improving overall neural network performance.
  • Minimizing data transfer between memory and computational circuits, reducing energy consumption and improving efficiency.

Benefits of this technology:

  • Faster neural network computations due to reduced memory access latency.
  • Improved energy efficiency by minimizing data transfer between memory and computational circuits.
  • Real-time processing of large neural networks, enabled by reducing the need for off-chip memory access.


Original Abstract Submitted

A neural network device including an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.