US Patent Application 18310015: GRAPHICS ARCHITECTURE INCLUDING A NEURAL NETWORK PIPELINE (simplified abstract)
GRAPHICS ARCHITECTURE INCLUDING A NEURAL NETWORK PIPELINE
Organization Name
Inventor(s)
Hugues Labbe of Granite Bay, CA (US)
Darrel Palke of Portland, OR (US)
Sherine Abdelhak of Beaverton, OR (US)
Jill Boyce of Portland, OR (US)
Varghese George of Folsom, CA (US)
Zhijun Lei of Hillsboro, OR (US)
Zhengmin Li of Hillsboro, OR (US)
Mike Macpherson of Portland, OR (US)
Carl Marshall of Portland, OR (US)
Selvakumar Panneer of Portland, OR (US)
Prasoonkumar Surti of Folsom, CA (US)
Karthik Veeramani of Hillsboro, OR (US)
Deepak Vembar of Portland, OR (US)
Vallabhajosyula Srinivasa Somayazulu of Portland, OR (US)
GRAPHICS ARCHITECTURE INCLUDING A NEURAL NETWORK PIPELINE - A simplified explanation of the abstract
This abstract first appeared for US patent application 18310015, titled 'GRAPHICS ARCHITECTURE INCLUDING A NEURAL NETWORK PIPELINE'.
Simplified Explanation
The patent application describes a graphics processor with a programmable neural network unit.
- The graphics processor includes execution resources, cache memory, and a cache memory prefetcher.
- The programmable neural network unit has circuitry to perform neural network operations and activation operations for a layer of a neural network.
- The programmable neural network unit can be accessed by cores within the graphics processor.
- The neural network hardware block within the programmable neural network unit runs a neural network that determines a prefetch pattern for the cache memory prefetcher.
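The idea of a neural network driving a cache prefetcher can be illustrated with a minimal sketch. The following Python snippet is hypothetical and not from the patent: it assumes a tiny fixed-size network whose inputs are recent cache-line access deltas and whose output selects a prefetch stride. The layer shapes, weight values, and the `predict_stride` helper are all illustrative assumptions; a real hardware block would implement the equivalent matrix and activation operations in circuitry.

```python
import numpy as np

# Hypothetical illustration: a tiny network mapping the last few
# cache-line access deltas to a predicted prefetch stride.
# Shapes, weights, and names are assumptions, not the patent's design.

HISTORY = 4             # number of recent address deltas fed to the network
STRIDES = [1, 2, 4, 8]  # candidate prefetch strides, in cache lines

rng = np.random.default_rng(0)
W1 = rng.standard_normal((HISTORY, 8)) * 0.1     # hidden-layer weights
b1 = np.zeros(8)
W2 = rng.standard_normal((8, len(STRIDES))) * 0.1  # output-layer weights
b2 = np.zeros(len(STRIDES))

def predict_stride(deltas):
    """One forward pass: matmul plus ReLU activation, then argmax."""
    x = np.asarray(deltas, dtype=float) / max(STRIDES)  # normalize inputs
    h = np.maximum(W1.T @ x + b1, 0.0)   # neural network op + activation op
    logits = W2.T @ h + b2
    return STRIDES[int(np.argmax(logits))]

# A prefetcher using this would issue:
#   next_addr = last_addr + predict_stride(recent_deltas) * CACHE_LINE_SIZE
stride = predict_stride([1, 1, 1, 1])
print(stride)
```

In this sketch the "prefetch pattern" is just a stride class; the claimed design is more general, in that the programmable unit performs the neural network and activation operations for a layer and exposes them to the graphics cores.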
Original Abstract Submitted
One embodiment provides a graphics processor comprising a block of execution resources, a cache memory, a cache memory prefetcher, and circuitry including a programmable neural network unit, the programmable neural network unit comprising a network hardware block including circuitry to perform neural network operations and activation operations for a layer of a neural network, the programmable neural network unit addressable by cores within the block of graphics cores and the neural network hardware block configured to perform operations associated with a neural network configured to determine a prefetch pattern for the cache memory prefetcher.