18516716. SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION simplified abstract (Intel Corporation)


SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION

Organization Name

Intel Corporation

Inventor(s)

Altug Koker of El Dorado Hills CA (US)

Joydeep Ray of Folsom CA (US)

Aravindh Anantaraman of Folsom CA (US)

Valentin Andrei of San Jose CA (US)

Abhishek Appu of El Dorado Hills CA (US)

Sean Coleman of Folsom CA (US)

Nicolas Galoppo Von Borries of Portland OR (US)

Varghese George of Folsom CA (US)

Pattabhiraman K of Bangalore (IN)

SungYe Kim of Folsom CA (US)

Mike Macpherson of Portland OR (US)

Subramaniam Maiyuran of Gold River CA (US)

Elmoustapha Ould-ahmed-vall of Chandler AZ (US)

Vasanth Ranganathan of El Dorado Hills CA (US)

James Valerio of North Plains OR (US)

SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18516716, titled 'SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION'.

Simplified Explanation

The abstract describes systems and methods for updating remote memory side caches in a multi-GPU configuration. In one embodiment, a graphics processor for a multi-tile architecture includes multiple GPUs, each with its own memory, memory side cache, communication fabric, and memory management unit (MMU). Each MMU controls memory requests for its local memory, updates that memory's content, updates the local memory side cache, and determines whether to update the content held in the other GPUs' memory side caches (see the sketch after the list below).

  • Graphics processor for a multi-tile architecture
  • Multiple GPUs, each with its own memory, memory side cache, communication fabric, and memory management unit (MMU)
  • Each MMU controls memory requests, updates memory and memory side cache content, and determines whether to update content in the other GPUs' memory side caches
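
The following is a minimal Python sketch of that update flow, assuming a simple software model. The class and method names (MemorySideCache, MemoryManagementUnit, handle_write) and the update-only-if-remotely-cached policy are illustrative assumptions, not details taken from the patent.

    # Hypothetical sketch of the update flow summarized above.
    # Names and policies are illustrative only; the patent abstract
    # does not specify any software API.

    class MemorySideCache:
        """Toy stand-in for a GPU's memory side cache."""
        def __init__(self):
            self.lines = {}                      # address -> cached data

        def contains(self, address):
            return address in self.lines

        def update(self, address, data):
            self.lines[address] = data


    class MemoryManagementUnit:
        """Toy stand-in for the per-GPU MMU described in the abstract."""
        def __init__(self, local_memory, local_cache, remote_caches):
            self.local_memory = local_memory     # dict: address -> data
            self.local_cache = local_cache       # this GPU's memory side cache
            self.remote_caches = remote_caches   # memory side caches of peer GPUs

        def handle_write(self, address, data):
            # 1. Update content in the local memory.
            self.local_memory[address] = data
            # 2. Update content in the local memory side cache.
            self.local_cache.update(address, data)
            # 3. Determine whether to update the other memory side caches;
            #    the assumed policy here is "update only if already cached".
            for cache in self.remote_caches:
                if cache.contains(address):
                    cache.update(address, data)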

Potential Applications

The technology described in this patent application could be applied in:

  • High-performance computing
  • Data centers
  • Virtual reality systems

Problems Solved

This technology helps in:

  • Improving memory access speed
  • Enhancing data processing efficiency
  • Optimizing GPU performance in multi-GPU configurations

Benefits

The benefits of this technology include:

  • Faster data processing
  • Reduced latency
  • Improved overall system performance

Potential Commercial Applications

  • Optimizing memory side cache updates in multi-GPU configurations for enhanced performance

Unanswered Questions

How does this technology impact power consumption in multi-GPU configurations?

The article does not address the potential impact of this technology on power consumption in multi-GPU configurations.

Are there any limitations to the scalability of this technology in large-scale GPU setups?

The article does not discuss any limitations to the scalability of this technology in large-scale GPU setups.


Original Abstract Submitted

Systems and methods for updating remote memory side caches in a multi-GPU configuration are disclosed herein. In one embodiment, a graphics processor for a multi-tile architecture includes a first graphics processing unit (GPU) having a first memory, a first memory side cache memory, a first communication fabric, and a first memory management unit (MMU). The graphics processor includes a second graphics processing unit (GPU) having a second memory, a second memory side cache memory, a second memory management unit (MMU), and a second communication fabric that is communicatively coupled to the first communication fabric. The first MMU is configured to control memory requests for the first memory, to update content in the first memory, to update content in the first memory side cache memory, and to determine whether to update the content in the second memory side cache memory.
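
As a rough illustration of the two-GPU arrangement in the abstract, the snippet below wires two hypothetical MMUs so that each can reach the other's memory side cache; the shared references stand in for the communicatively coupled communication fabrics. It reuses the illustrative classes from the earlier sketch, and all names are assumptions rather than the patent's implementation.

    # Hypothetical wiring of the two-GPU configuration from the abstract,
    # reusing the illustrative classes sketched earlier.

    gpu0_memory, gpu1_memory = {}, {}
    gpu0_cache, gpu1_cache = MemorySideCache(), MemorySideCache()

    gpu0_mmu = MemoryManagementUnit(gpu0_memory, gpu0_cache, [gpu1_cache])
    gpu1_mmu = MemoryManagementUnit(gpu1_memory, gpu1_cache, [gpu0_cache])

    # GPU 1 has cached an address owned by GPU 0; GPU 0 then writes to it.
    gpu1_cache.update(0x1000, "stale")
    gpu0_mmu.handle_write(0x1000, "fresh")
    assert gpu1_cache.lines[0x1000] == "fresh"   # remote memory side cache updated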