17668345. NEURAL NETWORK TRAINING WITH ACCELERATION simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

From WikiPatents

NEURAL NETWORK TRAINING WITH ACCELERATION

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Shiyu Li of Durham NC (US)

Krishna T. Malladi of San Jose CA (US)

Andrew Chang of Los Altos CA (US)

Yang Seok Ki of Palo Alto CA (US)

NEURAL NETWORK TRAINING WITH ACCELERATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 17668345 titled 'NEURAL NETWORK TRAINING WITH ACCELERATION'.

Simplified Explanation

The abstract describes a system and method for training a neural network using a computational storage device. The device includes a backing store that holds an embedding table for a neural network embedding operation. On receiving an index vector, the device retrieves the corresponding rows from the embedding table and calculates an embedded vector from them.

  • The system uses a computational storage device with a backing store to train a neural network.
  • The device stores an embedding table used for a neural network embedding operation.
  • It can receive an index vector including a first index and a second index.
  • The device retrieves the rows of the embedding table corresponding to those indices.
  • It calculates a first embedded vector from the retrieved rows.
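The lookup described above can be sketched in a few lines. This is a hypothetical illustration only: the table shape, the index values, and the use of sum pooling to combine rows are all assumptions, since the abstract says only that the embedded vector is calculated "based on" the retrieved rows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Embedding table held in the backing store (assumed shape: 8 rows x 4 dims).
embedding_table = rng.standard_normal((8, 4))

def embed(index_vector):
    """Retrieve the rows named by index_vector and pool them into
    one embedded vector (sum pooling is an assumed choice)."""
    rows = embedding_table[index_vector]  # gather rows by index
    return rows.sum(axis=0)               # combine into a single vector

# An index vector with a first and a second index, as in the abstract.
first_embedded_vector = embed([2, 5])
```

In a computational storage device, the gather and pooling steps would run next to the backing store, so only the small pooled vector, not the retrieved rows, crosses the bus to the host.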

Potential Applications

  • This technology can be applied in various fields where neural networks are used, such as machine learning, artificial intelligence, and data analysis.
  • It can enhance the efficiency and speed of training neural networks by utilizing the computational capabilities of the storage device.

Problems Solved

  • Traditional neural network training methods often require transferring large amounts of data between storage and processing units, leading to increased latency and reduced performance.
  • This technology solves the problem of data transfer bottleneck by performing computations directly within the computational storage device, reducing the need for data movement.

Benefits

  • The use of a computational storage device with a backing store allows for faster and more efficient training of neural networks.
  • Storing the embedding table within the device eliminates the need for frequent data transfers, reducing latency and improving overall performance.
  • The system can handle large-scale neural network training tasks effectively, enabling more complex and accurate models to be trained.


Original Abstract Submitted

A system and method for training a neural network. In some embodiments, the system includes a computational storage device including a backing store. The computational storage device may be configured to: store, in the backing store, an embedding table for a neural network embedding operation; receive a first index vector including a first index and a second index; retrieve, from the backing store: a first row of the embedding table, corresponding to the first index, and a second row of the embedding table, corresponding to the second index; and calculate a first embedded vector based on the first row and the second row.