18108292. ACCELERATOR FOR DEEP NEURAL NETWORKS simplified abstract (Samsung Electronics Co., Ltd.)


ACCELERATOR FOR DEEP NEURAL NETWORKS

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Patrick Judd of Toronto (CA)

Jorge Albericio of San Jose, CA (US)

Alberto Delmas Lascorz of Toronto (CA)

Andreas Moshovos of Toronto (CA)

Sayeh Sharify of Toronto (CA)

ACCELERATOR FOR DEEP NEURAL NETWORKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18108292, titled 'ACCELERATOR FOR DEEP NEURAL NETWORKS'.

Simplified Explanation

The abstract describes a system for bit-serial computation in a neural network. The system includes bit-serial tiles that perform bit-serial computations, an activation memory for storing neurons, and a dispatcher that communicates neurons and synapses to the bit-serial tiles.

  • The system is designed for bit-serial computation in a neural network.
  • It can be implemented on an integrated circuit.
  • The system includes bit-serial tiles that receive input neurons and synapses and communicate output neurons.
  • An activation memory is used to store the neurons.
  • A dispatcher reads neurons and synapses from memory and communicates them to the bit-serial tiles.
  • Either the neurons or the synapses are communicated bit-serially to the bit-serial tiles, while the other operand is communicated in bit-parallel form (or, in a further embodiment, also bit-serially); a minimal sketch of the bit-serial idea follows this list.
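
As a rough illustration of the bit-serial idea (not the patented hardware itself), the following Python sketch feeds each activation to the compute logic one bit per cycle, with every bit gating a shifted copy of the corresponding synapse into an accumulator. The function and variable names here are hypothetical and chosen for readability.

```python
def bit_serial_mac(activations, synapses, num_bits=8):
    """Multiply-accumulate where activations are consumed bit-serially.

    activations, synapses: lists of non-negative ints of equal length.
    num_bits: activation precision; fewer bits means fewer cycles.
    """
    accumulator = 0
    for bit in range(num_bits):             # one activation bit per cycle
        for a, w in zip(activations, synapses):
            if (a >> bit) & 1:              # the activation bit gates...
                accumulator += w << bit     # ...a shifted copy of the synapse
    return accumulator


# The bit-serial result matches a conventional bit-parallel dot product.
acts, wts = [3, 5, 7], [2, 4, 6]
assert bit_serial_mac(acts, wts) == sum(a * w for a, w in zip(acts, wts))
```

Because each extra bit of activation precision costs one extra cycle, execution time in such a scheme scales with the precision actually needed rather than with a fixed word width.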

Potential Applications

  • Neural network computation
  • Artificial intelligence
  • Machine learning
  • Pattern recognition

Problems Solved

  • Efficient computation in neural networks
  • Handling large amounts of data in a bit-serial manner
  • Communication of neurons and synapses in a neural network

Benefits

  • Improved performance and efficiency in neural network computation
  • Reduced memory requirements
  • Faster communication of neurons and synapses


Original Abstract Submitted

A system for bit-serial computation in a neural network is described. The system may be embodied on an integrated circuit and include one or more bit-serial tiles for performing bit-serial computations in which each bit-serial tile receives input neurons and synapses, and communicates output neurons. Also included is an activation memory for storing the neurons and a dispatcher. The dispatcher reads neurons and synapses from memory and communicates either the neurons or the synapses bit-serially to the one or more bit-serial tiles. The other of the neurons or the synapses are communicated bit-parallelly to the one or more bit-serial tiles, or according to a further embodiment, may also be communicated bit-serially to the one or more bit-serial tiles.
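
To make the dispatcher's role in the abstract more concrete, here is a hypothetical Python sketch, under the assumption that neurons are the bit-serial operand and synapses are sent bit-parallelly each cycle; the names serialize and dispatch are illustrative, not taken from the patent.

```python
def serialize(value, num_bits):
    """Yield one bit of `value` per cycle, least-significant bit first."""
    for bit in range(num_bits):
        yield (value >> bit) & 1


def dispatch(neurons, synapses, num_bits=8):
    """Produce per-cycle work items for a bit-serial tile.

    Each cycle carries one bit from every neuron (bit-serial) together
    with the full-precision synapses (bit-parallel).
    """
    streams = [list(serialize(n, num_bits)) for n in neurons]
    for cycle in range(num_bits):
        neuron_bits = [s[cycle] for s in streams]
        yield cycle, neuron_bits, synapses   # synapses sent whole each cycle


# A tile consuming the dispatched stream reproduces the dot product.
neurons, synapses = [3, 5], [2, 4]
total = 0
for cycle, bits, wts in dispatch(neurons, synapses):
    total += sum(b * (w << cycle) for b, w in zip(bits, wts))
assert total == sum(n * w for n, w in zip(neurons, synapses))
```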