18307728. SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS simplified abstract (HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP)


SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS

Organization Name

HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP

Inventor(s)

Diman Zad Tootaghaj of Milpitas, CA (US)

Junguk Cho of Milpitas, CA (US)

Puneet Sharma of Milpitas, CA (US)

SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18307728 titled 'SCHEDULING JOBS ON GRAPHICAL PROCESSING UNITS'.

Simplified Explanation

The patent application relates to scheduling jobs across multiple GPUs so that they can be processed concurrently by virtual GPUs (vGPUs). The computing system receives requests to schedule new jobs, allocates each new job to one or more vGPUs, updates the allocations of existing jobs, and minimizes the combined operational and migration costs; a brief illustrative sketch follows the key points below.

  • Efficient scheduling of jobs for multiple GPUs
  • Concurrent processing by virtual GPUs
  • Minimization of operational and migration costs
  • Updating existing job allocations on virtual GPUs
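To make the allocation idea concrete, the following is a minimal sketch of a greedy vGPU scheduler in Python. The cost model (a flat power-on charge for an idle GPU plus a fixed migration charge), the Job and GPU classes, and the schedule and placement_cost helpers are illustrative assumptions made for this article, not the method claimed in the patent application.

from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    demand: float  # fraction of one physical GPU's capacity (0 < demand <= 1)

@dataclass
class GPU:
    name: str
    capacity: float = 1.0
    jobs: list = field(default_factory=list)

    def used(self) -> float:
        return sum(j.demand for j in self.jobs)

    def fits(self, job: Job) -> bool:
        return self.used() + job.demand <= self.capacity

def placement_cost(gpu: GPU, job: Job, power_on_cost: float = 1.0) -> float:
    # Toy operational cost: powering on an idle GPU is charged once;
    # packing onto an already-busy GPU is free in this model.
    return power_on_cost if not gpu.jobs else 0.0

def schedule(new_job: Job, gpus: list, migration_cost: float = 0.5):
    # Greedy allocation: place the new job directly if possible; otherwise
    # try migrating one existing job to make room, charging a migration cost.
    feasible = [g for g in gpus if g.fits(new_job)]
    if feasible:
        best = min(feasible, key=lambda g: placement_cost(g, new_job))
        cost = placement_cost(best, new_job)
        best.jobs.append(new_job)
        return cost, []
    for src in gpus:
        for victim in list(src.jobs):
            for dst in gpus:
                if dst is src or not dst.fits(victim):
                    continue
                if src.used() - victim.demand + new_job.demand <= src.capacity:
                    cost = migration_cost + placement_cost(dst, victim)
                    src.jobs.remove(victim)
                    dst.jobs.append(victim)
                    src.jobs.append(new_job)
                    return cost, [(victim.name, src.name, dst.name)]
    raise RuntimeError("no feasible allocation for " + new_job.name)

gpus = [GPU("gpu0"), GPU("gpu1")]
for job in [Job("a", 0.6), Job("b", 0.3), Job("c", 0.7), Job("d", 0.4)]:
    cost, migrations = schedule(job, gpus)
    print(job.name, "cost:", cost, "migrations:", migrations)

In this toy run, jobs a, b, and c are placed directly, while job d only fits after an existing job is migrated between GPUs, so its cost includes the migration charge: the same trade-off between operational and migration cost that the application describes.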

Potential Applications

The technology can be applied in various fields such as:

  • High-performance computing
  • Data processing and analytics
  • Machine learning and AI training

Problems Solved

The technology addresses the following issues:

  • Efficient utilization of multiple GPUs
  • Minimization of operational and migration costs
  • Optimal scheduling of jobs for improved performance

Benefits

The technology offers the following benefits:

  • Increased processing power and speed
  • Cost-effective operation of multiple GPUs
  • Enhanced job scheduling and allocation efficiency

Potential Commercial Applications

The technology can be utilized in industries such as:

  • Cloud computing services
  • Video rendering and editing
  • Scientific research and simulations

Possible Prior Art

One possible piece of prior art is the use of job-scheduling algorithms in parallel computing systems to optimize resource utilization and performance.

Unanswered Questions

How does the technology handle job prioritization among multiple GPUs?

The abstract does not specify how jobs are prioritized among the virtual GPUs.

What impact does the technology have on overall system scalability?

How this approach scales in large computing systems is not discussed in the abstract.


Original Abstract Submitted

Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs. According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs. Allocations of existing jobs are updated to one or more vGPUs. Operational cost of operating the one or more GPUs and migration cost of allocating the new job are minimized and allocations of the existing jobs on the one or more vGPUs is updated. The new job and the existing jobs are processed by the one or more GPUs in the computing system.
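
Read as an optimization problem, the abstract's objective could be written as minimizing the GPUs' operational cost plus the migration cost incurred when existing jobs are moved, subject to vGPU capacity. The notation below is assumed for illustration and is not taken from the filing:

\min_{x}\; \sum_{g \in G} C^{\mathrm{op}}_{g}(x) \;+\; \sum_{j \in J_{\mathrm{old}}} c^{\mathrm{mig}}_{j}\, \mathbf{1}\!\left[ x_{j} \neq x^{\mathrm{prev}}_{j} \right]
\quad \text{subject to} \quad \sum_{j:\, x_{j} = v} d_{j} \;\le\; \mathrm{cap}_{v} \quad \forall\, v \in V

Here x_j is the vGPU assigned to job j, d_j is its resource demand, V is the set of vGPUs hosted on the GPUs in G, C^op_g(x) is the cost of operating GPU g under assignment x, and the indicator term charges a migration cost only for existing jobs whose placement changes.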