Micron Technology, Inc. (20240428853). Caching Techniques for Deep Learning Accelerator

From WikiPatents

Caching Techniques for Deep Learning Accelerator

Organization Name

Micron Technology, Inc.

Inventor(s)

Aliasger Tayeb Zaidy of Seattle WA (US)

Patrick Alan Estep of Rowlett TX (US)

David Andrew Roberts of Wellesley MA (US)

Caching Techniques for Deep Learning Accelerator

This abstract first appeared for US patent application 20240428853, titled 'Caching Techniques for Deep Learning Accelerator'.



Original Abstract Submitted

Systems, devices, and methods related to a deep learning accelerator and memory are described. For example, the accelerator can have processing units to perform at least matrix computations of an artificial neural network via execution of instructions. The processing units have a local memory to store operands of the instructions. The accelerator can access a random access memory via a system buffer, or without going through the system buffer. A fetch instruction can request an item, available at a memory address in the random access memory, to be loaded into the local memory at a local address. The fetch instruction can include a hint for the caching of the item in the system buffer. During execution of the instruction, the hint can be used to determine whether to load the item through the system buffer or to bypass the system buffer in loading the item.
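The hint-based fetch described in the abstract can be illustrated with a minimal sketch. All names here (`SystemBuffer`, `fetch`, `CACHE_IN_BUFFER`, `BYPASS_BUFFER`) are illustrative assumptions, not terminology from the patent itself:

```python
# Hypothetical model of a fetch instruction whose hint decides whether an
# item loads through a shared system buffer or bypasses it. Names and
# structure are assumptions for illustration, not the patent's design.

CACHE_IN_BUFFER = 0   # hint: item likely reused; stage it in the system buffer
BYPASS_BUFFER = 1     # hint: item used once; read it directly from RAM


class SystemBuffer:
    """A small cache sitting between random access memory and the accelerator."""

    def __init__(self, ram):
        self.ram = ram
        self.cache = {}

    def load(self, addr):
        # On a miss, fill the buffer from RAM; hits are served from the buffer.
        if addr not in self.cache:
            self.cache[addr] = self.ram[addr]
        return self.cache[addr]


def fetch(local_memory, local_addr, ram, mem_addr, system_buffer, hint):
    """Load the item at mem_addr in RAM into local_memory at local_addr.

    The hint carried by the fetch instruction selects the data path:
    through the system buffer (caching the item) or directly from RAM.
    """
    if hint == CACHE_IN_BUFFER:
        item = system_buffer.load(mem_addr)   # load through the system buffer
    else:
        item = ram[mem_addr]                  # bypass the system buffer
    local_memory[local_addr] = item
```

A one-shot operand (for example, a weight tile read exactly once) would carry the bypass hint so it does not evict reusable data from the buffer, while a frequently reused operand would carry the caching hint.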
