18505743. NEURAL NETWORK COMPUTE TILE simplified abstract (GOOGLE LLC)


NEURAL NETWORK COMPUTE TILE

Organization Name

GOOGLE LLC

Inventor(s)

Olivier Temam of Antony (FR)

Ravi Narayanaswami of San Jose CA (US)

Harshit Khaitan of San Jose CA (US)

Dong Hyuk Woo of San Jose CA (US)

NEURAL NETWORK COMPUTE TILE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18505743 titled 'NEURAL NETWORK COMPUTE TILE'.

The abstract describes a computing unit with two memory banks, one for input activations and the other for parameters used in computations. The unit includes cells with multiply-accumulate operators that perform computations using parameters from the second memory bank.

  • The computing unit has a first memory bank for input activations and a second memory bank for parameters.
  • It includes cells with multiply-accumulate operators for computations.
  • A first traversal unit provides a control signal to the first memory bank, causing an input activation to be placed on a data bus accessible by the multiply-accumulate operators.
  • Computations on elements of a data array are performed by the multiply-accumulate operator, as illustrated in the sketch after this list.
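The sketch below (Python, purely illustrative) models how these pieces relate: the first memory bank holds input activations, the second holds parameters, and a cell's multiply-accumulate operator combines one value from each. All class and variable names are assumptions made for the example, not terminology from the application.

 # Illustrative sketch only; names and structure are assumptions,
 # not the hardware design claimed in the application.
 
 class ComputeTile:
     def __init__(self, input_activations, parameters):
         self.activation_bank = list(input_activations)  # first memory bank
         self.parameter_bank = list(parameters)          # second memory bank
         self.accumulator = 0                            # state of one MAC cell
 
     def mac(self, activation_index, parameter_index):
         """One multiply-accumulate step: multiply an input activation by a
         parameter and add the product to the running accumulator."""
         activation = self.activation_bank[activation_index]
         parameter = self.parameter_bank[parameter_index]
         self.accumulator += activation * parameter
         return self.accumulator
 
 # Example: accumulate a 3-element dot product.
 tile = ComputeTile(input_activations=[1.0, 2.0, 3.0], parameters=[0.5, 0.25, 0.1])
 for i in range(3):
     tile.mac(i, i)
 print(tile.accumulator)  # 1.0*0.5 + 2.0*0.25 + 3.0*0.1 ≈ 1.3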

Potential Applications:

  • This technology can be used in artificial intelligence systems for deep learning algorithms.
  • It can be applied in image and speech recognition systems.
  • The computing unit can enhance the performance of neural networks and machine learning models.

Problems Solved:

  • Efficient storage and retrieval of input activations and parameters for computations.
  • Streamlined processing of data arrays using multiply-accumulate operators.

Benefits:

  • Improved speed and accuracy in performing computations.
  • Enhanced efficiency in handling complex mathematical operations.
  • Increased performance of artificial intelligence systems.

Commercial Applications:

  • This technology can be utilized in data centers for high-performance computing tasks.
  • It can be integrated into autonomous vehicles for real-time decision-making processes.
  • The computing unit can find applications in medical imaging for faster analysis of diagnostic images.

Prior Art: Researchers can explore prior patents related to computing units with multiply-accumulate operators and memory banks for storing parameters and input activations.

Frequently Updated Research: Stay updated on advancements in deep learning algorithms and neural network architectures that could benefit from this computing unit.

Questions about the Technology:

  1. How does this computing unit improve the efficiency of neural networks?
  2. What are the key advantages of using multiply-accumulate operators in computational tasks?


Original Abstract Submitted

A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
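To trace the flow the abstract describes, here is a minimal Python model of a traversal unit issuing a control signal that places an input activation on a data bus, from which the MAC operator multiplies it with a parameter read from the second memory bank. The classes, method names, and addressing scheme are illustrative assumptions, not structures defined in the patent.

 # Illustrative model of the data flow described in the abstract;
 # all names are hypothetical, not the claimed design.
 
 class DataBus:
     def __init__(self):
         self.value = None
 
 class ActivationBank:
     """First memory bank: holds input activations and, on a control
     signal, drives the addressed activation onto the data bus."""
     def __init__(self, activations, bus):
         self.activations = activations
         self.bus = bus
 
     def control_signal(self, address):
         self.bus.value = self.activations[address]
 
 class TraversalUnit:
     """Steps through elements of a data array, issuing a control
     signal to the activation bank for each element."""
     def __init__(self, activation_bank):
         self.activation_bank = activation_bank
 
     def traverse(self, addresses):
         for address in addresses:
             self.activation_bank.control_signal(address)
             yield address
 
 def run_tile(activations, parameters):
     bus = DataBus()
     bank = ActivationBank(activations, bus)
     traversal = TraversalUnit(bank)
     accumulator = 0
     # MAC operator: multiply the activation currently on the bus by the
     # parameter read from the second memory bank, then accumulate.
     for address in traversal.traverse(range(len(parameters))):
         accumulator += bus.value * parameters[address]
     return accumulator
 
 print(run_tile([1, 2, 3], [4, 5, 6]))  # 1*4 + 2*5 + 3*6 = 32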