Intel Corporation (20240161226). MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT simplified abstract

From WikiPatents

MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT

Organization Name

Intel Corporation

Inventor(s)

Joydeep Ray of Folsom CA (US)

Aravindh Anantaraman of Folsom CA (US)

Valentin Andrei of San Jose CA (US)

Abhishek R. Appu of El Dorado Hills CA (US)

Nicolas Galoppo Von Borries of Portland OR (US)

Varghese George of Folsom CA (US)

Altug Koker of El Dorado Hills CA (US)

Elmoustapha Ould-Ahmed-Vall of Chandler AZ (US)

Mike Macpherson of Portland OR (US)

Subramaniam Maiyuran of Gold River CA (US)

MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240161226 titled 'MEMORY PREFETCHING IN MULTIPLE GPU ENVIRONMENT'.

Simplified Explanation

The patent application is about memory prefetching in a multiple-GPU environment. It describes an apparatus with multiple processors, including a host processor and multiple GPUs, each GPU having a prefetcher and a cache for processing data from memory. Each GPU's prefetcher fetches data from memory into that GPU's cache, but it is prohibited from prefetching from a page that is not owned by that GPU or by the host processor.

  • Explanation of the patent/innovation:

- Apparatus with multiple processors, including a host processor and GPUs
- Each GPU has a prefetcher and a cache for processing data
- The prefetcher fetches data from memory into the GPU's cache
- Prefetching is restricted from pages not owned by the GPU or the host processor
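The ownership restriction described above can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: the class and method names (`PageTable`, `Prefetcher`, `prefetch`) and the ownership encoding (`"host"` or a `("gpu", id)` tuple) are assumptions chosen for clarity.

```python
class PageTable:
    """Maps a page number to its owner: "host" or ("gpu", id)."""

    def __init__(self, ownership=None):
        self.ownership = dict(ownership or {})

    def owner_of(self, page):
        return self.ownership.get(page)


class Prefetcher:
    """Per-GPU prefetcher sketch. A page may be prefetched into this
    GPU's cache only if it is owned by this GPU or by the host
    processor; pages owned by another GPU (or unowned) are skipped."""

    def __init__(self, gpu_id, page_table):
        self.gpu_id = gpu_id
        self.page_table = page_table
        self.cache = set()  # stand-in for the GPU's cache contents

    def prefetch(self, page):
        owner = self.page_table.owner_of(page)
        allowed = owner == "host" or owner == ("gpu", self.gpu_id)
        if allowed:
            self.cache.add(page)  # fetch the page from memory into the cache
        return allowed
```

For example, with a table where page 0 belongs to the host, page 1 to GPU 0, and page 2 to GPU 1, a `Prefetcher` for GPU 0 would prefetch pages 0 and 1 but decline page 2.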

  • Potential applications of this technology:

- High-performance computing
- Graphics rendering
- Machine learning applications

  • Problems solved by this technology:

- Efficient data processing in multiple-GPU environments
- Minimizing data transfer delays
- Optimizing memory usage

  • Benefits of this technology:

- Improved system performance
- Reduced latency in data processing
- Enhanced overall efficiency

  • Potential commercial applications of this technology:

- Data centers
- Gaming industry
- AI and deep learning applications

  • Possible prior art:

- Previous patents related to memory prefetching in GPU environments
- Research papers on data processing in parallel computing systems

Questions:

1. How does this technology compare to existing memory prefetching techniques in GPU environments?
2. What specific algorithms or methods do the prefetchers in this patent application use to optimize data fetching and caching?


Original Abstract Submitted

Embodiments are generally directed to memory prefetching in multiple GPU environment. An embodiment of an apparatus includes multiple processors including a host processor and multiple graphics processing units (GPUs) to process data, each of the GPUs including a prefetcher and a cache; and a memory for storage of data, the memory including a plurality of memory elements, wherein the prefetcher of each of the GPUs is to prefetch data from the memory to the cache of the GPU; and wherein the prefetcher of a GPU is prohibited from prefetching from a page that is not owned by the GPU or by the host processor.