18511074. MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT simplified abstract (Intel Corporation)


MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT

Organization Name

Intel Corporation

Inventor(s)

Joydeep Ray of Folsom, CA (US)

Aravindh Anantaraman of Folsom, CA (US)

Valentin Andrei of San Jose, CA (US)

Abhishek R. Appu of El Dorado Hills, CA (US)

Nicolas Galoppo Von Borries of Portland, OR (US)

Varghese George of Folsom, CA (US)

Altug Koker of El Dorado Hills, CA (US)

Elmoustapha Ould-ahmed-vall of Chandler, AZ (US)

Mike Macpherson of Portland, OR (US)

Subramaniam Maiyuran of Gold River, CA (US)

MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 18511074, titled 'MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT'.

Simplified Explanation

The application describes memory prefetching in a multiple-GPU environment. An apparatus includes a host processor and multiple graphics processing units (GPUs) that process data, along with a memory made up of a plurality of memory elements. Each GPU contains its own prefetcher and cache, and the prefetcher pulls data from the shared memory into that GPU's cache. A GPU's prefetcher is prohibited from prefetching from any page that is not owned by that GPU or by the host processor.

  • Multiple processors, including a host processor and multiple GPUs, are used to process data.
  • Each GPU has its own prefetcher and cache to improve data processing efficiency.
  • The prefetcher of each GPU fetches data from the shared memory into that GPU's cache ahead of demand.
  • The prefetcher of a GPU is restricted from prefetching data from a page that is not owned by that GPU or by the host processor (a sketch of this check follows the list).
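
To make the ownership rule concrete, here is a minimal C++ sketch of how a per-GPU prefetcher could gate its candidate addresses. It is illustrative only and not taken from the patent: the types Owner and OwnershipTable, the 4 KB page size, and the functions may_prefetch and issue_prefetches are assumptions made for this example.

    // Illustrative sketch: gating a GPU prefetcher on page ownership.
    #include <cstdint>
    #include <unordered_map>

    enum class Owner { Host, Gpu0, Gpu1, Gpu2, Gpu3, Unowned };

    struct OwnershipTable {
        std::unordered_map<std::uint64_t, Owner> page_owner;  // page number -> current owner

        Owner owner_of(std::uint64_t addr, std::uint64_t page_size = 4096) const {
            auto it = page_owner.find(addr / page_size);
            return it == page_owner.end() ? Owner::Unowned : it->second;
        }
    };

    // A prefetch is allowed only if the page is owned by this GPU or by the host processor.
    bool may_prefetch(Owner self, std::uint64_t addr, const OwnershipTable& table) {
        Owner owner = table.owner_of(addr);
        return owner == self || owner == Owner::Host;
    }

    // Candidate addresses that fail the check are dropped instead of being prefetched.
    void issue_prefetches(Owner self, const std::uint64_t* candidates, int n,
                          const OwnershipTable& table) {
        for (int i = 0; i < n; ++i) {
            if (may_prefetch(self, candidates[i], table)) {
                // issue the prefetch into this GPU's cache (hardware-specific, omitted)
            }
        }
    }

    int main() {
        OwnershipTable table;
        table.page_owner[0] = Owner::Gpu0;  // page 0 owned by GPU 0
        table.page_owner[1] = Owner::Host;  // page 1 owned by the host processor
        table.page_owner[2] = Owner::Gpu1;  // page 2 owned by a peer GPU

        std::uint64_t candidates[] = {0x0000, 0x1000, 0x2000};  // one address per page
        issue_prefetches(Owner::Gpu0, candidates, 3, table);    // pages 0 and 1 pass, page 2 is blocked
        return 0;
    }

The only essential step is the check in may_prefetch: a candidate address is prefetched only when its page is owned by the requesting GPU or by the host processor; otherwise the request is dropped.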

Potential Applications

This technology can be applied in:

  • High-performance computing
  • Graphics rendering
  • Machine learning and AI applications

Problems Solved

  • Inefficient data processing in multi-GPU systems
  • High latency when GPUs access data from shared memory
  • Underutilization of GPU resources

Benefits

  • Faster data processing
  • Enhanced performance in GPU-intensive tasks
  • Efficient utilization of memory resources

Potential Commercial Applications

Optimized memory prefetching in multiple-GPU environments for enhanced performance

Possible Prior Art

One possible example of prior art is the use of prefetching techniques in single-GPU systems to improve data processing efficiency.

What is the impact of this technology on GPU performance in real-world applications?

This article does not specifically address the impact of this technology on GPU performance in real-world applications.

How does this technology compare to existing memory prefetching techniques in terms of efficiency and effectiveness?

This article does not provide a direct comparison between this technology and existing memory prefetching techniques in terms of efficiency and effectiveness.


Original Abstract Submitted

Embodiments are generally directed to memory prefetching in multiple GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.
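
For readers who prefer a structural view, the following minimal C++ sketch mirrors the apparatus named in the abstract, with plain structs standing in for hardware blocks. Every type and field name here is illustrative rather than taken from the patent.

    #include <vector>

    // Illustrative stand-ins for the hardware blocks named in the abstract.
    struct Cache {};                                 // per-GPU cache that receives prefetched data
    struct Prefetcher { Cache* target = nullptr; };  // per-GPU prefetcher that fills that cache
    struct Gpu { Prefetcher prefetcher; Cache cache; };
    struct HostProcessor {};
    struct MemoryElement {};                         // one of the "plurality of memory elements"

    struct Apparatus {
        HostProcessor host;                  // host processor
        std::vector<Gpu> gpus;               // multiple GPUs, each with its own prefetcher and cache
        std::vector<MemoryElement> memory;   // shared memory the prefetchers read from
    };

    int main() {
        Apparatus a;
        a.gpus.resize(4);    // e.g., a four-GPU configuration
        a.memory.resize(8);  // e.g., eight memory elements
        for (auto& g : a.gpus) g.prefetcher.target = &g.cache;  // each prefetcher fills its own GPU's cache
        return 0;
    }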