17958120. PUSHED PREFETCHING IN A MEMORY HIERARCHY simplified abstract (ADVANCED MICRO DEVICES, INC.)

PUSHED PREFETCHING IN A MEMORY HIERARCHY

Organization Name

ADVANCED MICRO DEVICES, INC.

Inventor(s)

Jagadish B. Kotra of Austin, TX (US)

John Kalamatianos of Boxborough, MA (US)

Paul Moyer of Fort Collins, CO (US)

Gabriel H. Loh of Bellevue, WA (US)

PUSHED PREFETCHING IN A MEMORY HIERARCHY - A simplified explanation of the abstract

This abstract first appeared for US patent application 17958120 titled 'PUSHED PREFETCHING IN A MEMORY HIERARCHY'.

Simplified Explanation

The patent application describes systems and methods for pushed prefetching in a system of multiple core complexes with multi-level caches and shared memory. A push-based prefetcher monitors memory traffic between the caches and shared memory and, based on that traffic, initiates prefetches of data into the caches to improve performance. The key components are listed below, followed by a minimal sketch of the monitoring logic.

  • Multiple core complexes with multiple cores and caches
  • Memory hierarchy with multiple levels
  • Interconnect device connecting core complexes and shared memory
  • Push-based prefetcher monitoring memory traffic and initiating data prefetching
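To make the mechanism concrete, here is a minimal sketch of push-based prefetch logic attached to the interconnect, assuming a simple per-cache stride heuristic. The class, method, and field names (PushPrefetcher, MemRequest, observe, and so on) and the stride policy are illustrative assumptions, not taken from the application, which only states that the prefetcher monitors traffic between caches of a first level of the hierarchy and shared memory and initiates prefetches into a cache of that level.

```cpp
// Minimal sketch of push-based prefetch logic attached to the interconnect.
// All names (PushPrefetcher, MemRequest, observe, ...) are illustrative
// assumptions; the abstract only states that the prefetcher monitors traffic
// between caches of a first level of the hierarchy and shared memory, and
// initiates prefetches of data into a cache of that level.
#include <cstdint>
#include <cstdlib>
#include <unordered_map>
#include <vector>

struct MemRequest {
    int      cache_id;  // which first-level cache issued the demand access
    uint64_t address;   // physical address observed on the interconnect
};

struct Prefetch {
    int      cache_id;  // destination cache the data will be pushed into
    uint64_t address;   // address to fetch from shared memory
};

class PushPrefetcher {
public:
    explicit PushPrefetcher(uint64_t line_size = 64) : line_size_(line_size) {}

    // Called for every demand request observed between the monitored cache
    // level and shared memory. Returns prefetches for the memory controller
    // to service and push into the target cache.
    std::vector<Prefetch> observe(const MemRequest& req) {
        std::vector<Prefetch> pushes;
        auto it = last_addr_.find(req.cache_id);
        if (it != last_addr_.end()) {
            const int64_t stride = static_cast<int64_t>(req.address) -
                                   static_cast<int64_t>(it->second);
            // Assumed policy: two consecutive accesses from the same cache
            // that form a small stride trigger pushes of the next two lines
            // along that stride.
            if (stride != 0 &&
                static_cast<uint64_t>(std::llabs(stride)) <= 4 * line_size_) {
                for (int d = 1; d <= 2; ++d) {
                    pushes.push_back(
                        {req.cache_id, req.address + static_cast<uint64_t>(d * stride)});
                }
            }
        }
        last_addr_[req.cache_id] = req.address;
        return pushes;
    }

private:
    uint64_t line_size_;
    std::unordered_map<int, uint64_t> last_addr_;  // last observed address per cache
};
```

The point of the "push" is that the destination cache never issues these requests itself; the interconnect-side logic observes the traffic and injects the data into the cache.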

Potential Applications

This technology can be applied in high-performance computing systems, data centers, and cloud computing environments where efficient data prefetching can improve overall system performance.

Problems Solved

1. Improving memory access latency by proactively fetching data before it is needed.
2. Optimizing cache utilization by prefetching data based on memory traffic patterns.

Benefits

1. Enhanced system performance and responsiveness.
2. Reduced memory access latency.
3. Efficient utilization of cache memory.

Potential Commercial Applications

Optimizing data access in large-scale databases, accelerating machine learning algorithms, and improving the performance of real-time analytics systems.

Possible Prior Art

One possible example of prior art is the use of hardware prefetchers in processors that anticipate memory access patterns and prefetch data into cache memory to reduce latency.
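For contrast, here is a minimal sketch of such a conventional pull-based prefetcher, assuming a simple next-N-line policy; the function name and parameters are illustrative and not taken from any specific product or the patent application.

```cpp
// Sketch of a conventional pull-based hardware prefetcher of the kind
// referenced as possible prior art: it sits alongside a core's cache and
// generates prefetch addresses on the cache's own behalf after each demand
// miss (the cache "pulls" the data). Names and policy are assumptions.
#include <cstdint>
#include <vector>

std::vector<uint64_t> pull_prefetch_candidates(uint64_t miss_addr,
                                               uint64_t line_size = 64,
                                               int degree = 2) {
    std::vector<uint64_t> candidates;
    for (int i = 1; i <= degree; ++i) {
        candidates.push_back(miss_addr + i * line_size);  // next few lines
    }
    return candidates;  // the cache itself issues requests for these addresses
}
```

The difference from the claimed approach is where prefetching is initiated: here the cache-side prefetcher requests data for itself, whereas the abstract places the prefetcher at the interconnect, where it observes traffic to shared memory and pushes data into a first-level cache without that cache asking for it.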

Unanswered Questions

How does the pushed prefetching mechanism handle cache coherence in a multi-core complex system?

The patent application does not provide detailed information on how cache coherence is maintained when prefetching data to multiple caches in a multi-core complex system.

What impact does the pushed prefetching mechanism have on power consumption in the system?

The patent application does not address the potential impact of pushed prefetching on power consumption, especially in scenarios where data is prefetched frequently.


Original Abstract Submitted

Systems and methods for pushed prefetching include: multiple core complexes, each core complex having multiple cores and multiple caches, the multiple caches configured in a memory hierarchy with multiple levels; an interconnect device coupling the core complexes to each other and coupling the core complexes to shared memory, the shared memory at a lower level of the memory hierarchy than the multiple caches; and a push-based prefetcher having logic to: monitor memory traffic between caches of a first level of the memory hierarchy and the shared memory; and based on the monitoring, initiate a prefetch of data to a cache of the first level of the memory hierarchy.
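The topology named in the abstract can be pictured as follows; this is a hypothetical wiring sketch, with all type and member names chosen for illustration.

```cpp
// Hypothetical wiring of the components named in the abstract: core
// complexes (multiple cores and multi-level caches), an interconnect
// coupling the complexes to each other and to shared memory, and a
// push-based prefetcher monitoring traffic between a first cache level
// and shared memory. All names here are illustrative assumptions.
#include <vector>

struct Core {};
struct Cache { int level; };                 // one cache per level of the hierarchy

struct CoreComplex {
    std::vector<Core>  cores;                // multiple cores per complex
    std::vector<Cache> caches;               // multiple caches, multiple levels
};

struct SharedMemory {};                      // lower in the hierarchy than the caches

struct PushPrefetcherLogic {
    int monitored_level;                     // the "first level" whose traffic is observed
    // monitor(...) / initiate_prefetch(...) would behave as in the earlier sketch
};

struct Interconnect {
    std::vector<CoreComplex*> complexes;     // couples core complexes to each other
    SharedMemory*             memory;        // and couples them to shared memory
    PushPrefetcherLogic*      prefetcher;    // observes cache-to-memory traffic here
};
```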