18036506. PREEMPTION IN A MACHINE LEARNING HARDWARE ACCELERATOR simplified abstract (GOOGLE LLC)

From WikiPatents

PREEMPTION IN A MACHINE LEARNING HARDWARE ACCELERATOR

Organization Name

GOOGLE LLC

Inventor(s)

Temitayo Fadelu of San Francisco CA (US)

Ravi Narayanaswami of San Jose CA (US)

JiHong Min of San Francisco CA (US)

Dongdong Li of Mountain View CA (US)

Suyog Gupta of Sunnyvale CA (US)

Jason Jong Kyu Park of San Jose CA (US)

PREEMPTION IN A MACHINE LEARNING HARDWARE ACCELERATOR - A simplified explanation of the abstract

This abstract first appeared for US patent application 18036506, titled 'PREEMPTION IN A MACHINE LEARNING HARDWARE ACCELERATOR'.

Simplified Explanation

The present disclosure describes a system and method for preempting a long-running process with a higher priority process in a machine learning system, such as a hardware accelerator.

  • The system and method are applicable to machine learning hardware accelerators, which are multi-chip systems including semiconductor chips designed for machine learning operations.
  • The hardware accelerator can include application-specific integrated circuits (ASICs), which are customized integrated circuits designed for specific uses.
  • The innovation focuses on preempting a long-running process, meaning its execution is interrupted before it completes rather than being allowed to run to the end.
  • Preemption is triggered by the arrival of a higher-priority process, which takes precedence over the long-running process and is allocated the accelerator's resources ahead of it.
  • This preempting mechanism allows for better resource allocation and prioritization in the machine learning system, leading to improved performance and efficiency.
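The patent does not publish implementation details, but the mechanism described in the bullets above can be illustrated with a minimal priority-preemption model. The sketch below is purely hypothetical (the `Task`, `PreemptiveScheduler`, and `steps_left` names are assumptions, not the patent's design): a running low-priority task is checkpointed and re-queued whenever a higher-priority task arrives, then resumes once the higher-priority work finishes.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                       # lower number = higher priority
    name: str = field(compare=False)
    steps_left: int = field(compare=False)  # remaining work, in quanta

class PreemptiveScheduler:
    """Toy model of priority preemption: each step executes one quantum
    of the highest-priority task; unfinished tasks are checkpointed
    (re-queued), so a newly arrived higher-priority task runs first."""

    def __init__(self):
        self.queue = []   # min-heap ordered by priority
        self.log = []     # which task ran on each quantum

    def submit(self, task):
        heapq.heappush(self.queue, task)

    def step(self):
        if not self.queue:
            return
        task = heapq.heappop(self.queue)
        task.steps_left -= 1              # execute one quantum of work
        self.log.append(task.name)
        if task.steps_left > 0:
            heapq.heappush(self.queue, task)  # checkpoint and re-queue

sched = PreemptiveScheduler()
sched.submit(Task(5, "long_running", 4))
sched.step(); sched.step()                # long_running executes 2 quanta
sched.submit(Task(1, "urgent", 1))        # higher-priority process arrives
sched.step()                              # urgent preempts long_running
sched.step(); sched.step()                # long_running resumes
print(sched.log)
# → ['long_running', 'long_running', 'urgent', 'long_running', 'long_running']
```

A real accelerator would additionally have to save and restore per-chip hardware state at the preemption point; this sketch only models the scheduling order.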

Potential Applications

  • This technology can be applied in various machine learning systems, particularly those that utilize hardware accelerators.
  • It can be used in data centers and cloud computing environments where machine learning operations are performed.
  • The technology can also be implemented in edge devices and IoT devices that require efficient machine learning capabilities.

Problems Solved

  • Long-running processes in machine learning systems can hinder overall system performance and efficiency.
  • Prioritizing and allocating resources to different processes can be challenging in complex machine learning systems.
  • This technology addresses both problems by preempting long-running processes in favor of higher-priority ones, improving resource allocation and overall system performance.

Benefits

  • Improved performance and efficiency in machine learning systems.
  • Better resource allocation and prioritization of processes.
  • Enhanced responsiveness and reduced latency in executing machine learning operations.


Original Abstract Submitted

The present disclosure describes a system and method for preempting a long-running process with a higher priority process in a machine learning system, such as a hardware accelerator. The machine learning hardware accelerator can be a multi-chip system including semiconductor chips that can be application-specific integrated circuits (ASIC) designed to perform machine learning operations. An ASIC is an integrated circuit (IC) that is customized for a particular use.