18597568. DISTRIBUTED USER MODE PROCESSING simplified abstract (ADVANCED MICRO DEVICES, INC.)

From WikiPatents

DISTRIBUTED USER MODE PROCESSING

Organization Name

ADVANCED MICRO DEVICES, INC.

Inventor(s)

Rex Eldon Mccrary of Orlando FL (US)

DISTRIBUTED USER MODE PROCESSING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18597568, titled 'DISTRIBUTED USER MODE PROCESSING'.

The abstract describes a patent application for a system in which a graphics processing unit (GPU) contains pipelines that execute commands and a scheduler that schedules those commands for execution. The first commands are received from a user mode driver running on a central processing unit (CPU). After the first commands complete, the scheduler schedules second commands for execution without notifying the CPU. A direct memory access (DMA) engine in the GPU then writes blocks of information, including results produced by the first commands, to memory (see the sketch after the list below).

  • Graphics processing unit (GPU) pipelines execute commands and a scheduler schedules these commands.
  • Commands are received from a user mode driver in a central processing unit (CPU).
  • Scheduler schedules second commands for execution without notifying the CPU.
  • Direct memory access (DMA) engine in the GPU writes blocks of information to memory based on the commands received.
  • Second commands program the DMA engine to write blocks of information including results generated by executing the first commands.
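
A minimal sketch of the scheduling idea described above. All names here (Command, Pipeline, GpuScheduler, submit_first, submit_second) are illustrative assumptions, not terms from the patent or any real AMD driver API; the point is only that completion of the first commands triggers the second commands on the GPU side without any notification back to the CPU.

// Hypothetical model: names are illustrative, not from the patent or a real driver.
#include <functional>
#include <iostream>
#include <queue>
#include <string>
#include <vector>

// A command as submitted by the user mode driver on the CPU.
struct Command {
    std::string name;
    std::function<void()> work;  // work performed by a GPU pipeline
};

// A GPU pipeline simply executes whatever command it is handed.
struct Pipeline {
    void execute(const Command& cmd) {
        std::cout << "pipeline executing: " << cmd.name << '\n';
        cmd.work();
    }
};

// GPU-resident scheduler: runs the first commands, then dispatches the
// dependent second commands itself, without signalling the CPU in between.
class GpuScheduler {
public:
    void submit_first(Command cmd)  { first_.push(std::move(cmd)); }
    void submit_second(Command cmd) { second_.push(std::move(cmd)); }

    void run(Pipeline& pipe) {
        while (!first_.empty()) {            // execute the first commands
            pipe.execute(first_.front());
            first_.pop();
        }
        // Completion of the first commands triggers the second commands
        // directly on the GPU; no interrupt or callback reaches the CPU here.
        while (!second_.empty()) {
            pipe.execute(second_.front());
            second_.pop();
        }
    }

private:
    std::queue<Command> first_;
    std::queue<Command> second_;
};

int main() {
    std::vector<int> results;
    GpuScheduler scheduler;
    Pipeline pipeline;

    // First commands arrive from the user mode driver on the CPU side.
    scheduler.submit_first({"compute", [&] { results.assign(4, 42); }});
    // Second command: write out the results block (stands in for programming
    // the DMA engine, which is sketched separately after the original abstract).
    scheduler.submit_second({"dma_write", [&] {
        std::cout << "writing " << results.size() << " words to memory\n";
    }});

    scheduler.run(pipeline);
}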

Potential Applications:
  • High-performance computing
  • Real-time rendering
  • Video processing

Problems Solved:
  • Efficient command execution and scheduling
  • Seamless communication between GPU and CPU
  • Optimized memory access

Benefits:
  • Improved system performance
  • Reduced latency
  • Enhanced data transfer speeds

Commercial Applications: Described as an "Advanced GPU-CPU Communication System for High-Performance Computing," this technology can be used in industries such as gaming, artificial intelligence, and scientific research for faster and more efficient data processing.

Questions about the technology:
  1. How does the scheduler in the GPU manage the execution of commands from the CPU?
  2. What are the advantages of using a DMA engine for memory access in this system?


Original Abstract Submitted

A first processing unit such as a graphics processing unit (GPU) includes pipelines that execute commands and a scheduler to schedule one or more first commands for execution by one or more of the pipelines. The one or more first commands are received from a user mode driver in a second processing unit such as a central processing unit (CPU). The scheduler schedules one or more second commands for execution in response to completing execution of the one or more first commands and without notifying the second processing unit. In some cases, the first processing unit includes a direct memory access (DMA) engine that writes blocks of information from the first processing unit to a memory. The one or more second commands program the DMA engine to write a block of information including results generated by executing the one or more first commands.
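
To complement the scheduler sketch above, here is a minimal sketch of what "programming the DMA engine to write a block of information" could look like. DmaDescriptor and dma_engine_run are hypothetical names introduced only for illustration; the actual hardware interface is not described in the abstract.

// Hypothetical DMA programming sketch; names are not from the patent.
#include <cstdint>
#include <cstring>
#include <vector>

// Descriptor that a second command might fill in to program the DMA engine.
struct DmaDescriptor {
    const void* src;      // results block produced by the first commands
    void*       dst;      // destination buffer in memory
    std::size_t bytes;    // size of the block to write
};

// Stand-in for the DMA engine: copies the block described by the descriptor.
void dma_engine_run(const DmaDescriptor& d) {
    std::memcpy(d.dst, d.src, d.bytes);
}

int main() {
    std::vector<std::uint32_t> gpu_results{1, 2, 3, 4};      // from first commands
    std::vector<std::uint32_t> system_memory(gpu_results.size());

    // The second command fills in the descriptor and starts the transfer,
    // so the results reach memory without involving the CPU.
    DmaDescriptor desc{gpu_results.data(), system_memory.data(),
                       gpu_results.size() * sizeof(std::uint32_t)};
    dma_engine_run(desc);
}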