18423203. SHARED SCRATCHPAD MEMORY WITH PARALLEL LOAD-STORE simplified abstract (Google LLC)


SHARED SCRATCHPAD MEMORY WITH PARALLEL LOAD-STORE

Organization Name

Google LLC

Inventor(s)

Thomas Norrie of Mountain View, CA (US)

Andrew Everett Phelps of Middleton, WI (US)

Norman Paul Jouppi of Palo Alto, CA (US)

Matthew Leever Hedlund of Sun Prairie, WI (US)

SHARED SCRATCHPAD MEMORY WITH PARALLEL LOAD-STORE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18423203, titled 'SHARED SCRATCHPAD MEMORY WITH PARALLEL LOAD-STORE'.

Simplified Explanation

The abstract describes a hardware circuit for implementing a neural network, built around multiple processor cores, a shared memory, and parallel direct memory access (DMA) and load-store data paths.

  • The hardware circuit includes a first memory, first and second processor cores, and a shared memory.
  • The first memory provides the data used to compute an output for a neural network layer.
  • Each core has a vector memory that stores vector values derived from the first memory's data.
  • The shared memory sits between the first memory and the cores and exposes two data paths: a direct memory access (DMA) path to each core's vector memory and a load-store path to each core's vector registers (a minimal software sketch follows this list).
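The patent abstract gives no implementation details, but a small software model can make the two data paths concrete. The following is a minimal, hypothetical Python sketch: the class and method names (Core, SharedScratchpad, dma_to_vector_memory, load, store) and all sizes are invented for illustration and are not taken from the patent.

```python
"""Hypothetical software model of the circuit described in the abstract.

A shared memory sits between a first (main) memory and two processor
cores, with two independent paths per core: a DMA path to the core's
vector memory and a load-store path to the core's vector registers.
All names and sizes are illustrative, not taken from the patent.
"""


class Core:
    def __init__(self, core_id, vmem_words=1024, num_vregs=32):
        self.core_id = core_id
        self.vector_memory = [0] * vmem_words     # bulk storage for vector values
        self.vector_registers = [0] * num_vregs  # operands for the vector units


class SharedScratchpad:
    def __init__(self, words=4096):
        self.data = [0] * words

    # DMA data path: bulk block moves between the shared memory and a
    # core's vector memory, bypassing the core's registers.
    def dma_to_vector_memory(self, core, src, dst, length):
        core.vector_memory[dst:dst + length] = self.data[src:src + length]

    def dma_from_vector_memory(self, core, src, dst, length):
        self.data[dst:dst + length] = core.vector_memory[src:src + length]

    # Load-store data path: fine-grained word accesses between the shared
    # memory and a core's vector registers.
    def load(self, core, reg, addr):
        core.vector_registers[reg] = self.data[addr]

    def store(self, core, reg, addr):
        self.data[addr] = core.vector_registers[reg]


core0, core1 = Core(0), Core(1)
smem = SharedScratchpad()
smem.data[0:4] = [1, 2, 3, 4]
smem.dma_to_vector_memory(core0, src=0, dst=0, length=4)  # bulk move to core 0
smem.load(core1, reg=0, addr=2)                           # one word into a core 1 register
```

The design point the abstract emphasizes is that the DMA path targets each core's vector memory while the load-store path targets its vector registers, so the two kinds of traffic travel over separate routes.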

Potential Applications

This technology could be applied in fields such as artificial intelligence, machine learning, robotics, and autonomous systems.

Problems Solved

This technology improves the efficiency and speed of neural network computations by giving each core two routes into the shared memory: a DMA path for bulk transfers to vector memory and a load-store path for register accesses.

Benefits

The benefits of this technology include faster processing, lower latency, more efficient data transfer, and better overall performance for neural network operations.

Potential Commercial Applications

One potential commercial application of this technology is advanced AI systems for industries such as healthcare, finance, autonomous vehicles, and cybersecurity.

Possible Prior Art

Possible prior art includes earlier hardware circuits that use shared memory and direct memory access paths for parallel processing and data transfer.

Unanswered Questions

How does this technology compare to existing neural network hardware implementations in terms of performance and efficiency?

This article does not provide a direct comparison with existing neural network hardware implementations.

What are the potential limitations or challenges in implementing this hardware circuit in practical applications?

The article does not address potential limitations or challenges in implementing this hardware circuit in practical applications.


Original Abstract Submitted

Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores include a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
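One plausible reading of the "parallel load-store" in the title (an assumption on our part, not something the abstract states) is that the DMA and load-store paths can carry traffic at the same time. The following sketch reuses the hypothetical Core and SharedScratchpad classes from the earlier model to show both paths active concurrently.

```python
import threading

# Hypothetical illustration (an assumption, not stated in the abstract):
# the DMA path streams a bulk block into core 0's vector memory while the
# load-store path concurrently serves core 1's vector registers.
core0, core1 = Core(0), Core(1)
smem = SharedScratchpad()
smem.data[0:8] = [10, 20, 30, 40, 50, 60, 70, 80]

dma = threading.Thread(
    target=smem.dma_to_vector_memory,
    args=(core0,),
    kwargs={"src": 0, "dst": 0, "length": 8},
)
dma.start()                      # bulk transfer on the DMA path
smem.load(core1, reg=0, addr=4)  # concurrent access on the load-store path
dma.join()

assert core0.vector_memory[0:8] == [10, 20, 30, 40, 50, 60, 70, 80]
assert core1.vector_registers[0] == 50
```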