18575621. EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION simplified abstract (Google LLC)

From WikiPatents

EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION

Organization Name

Google LLC

Inventor(s)

Amir Yazdanbakhsh of San Jose CA (US)

Sergey Vladimir Levine of Berkeley CA (US)

Aviral Kumar of Berkeley CA (US)

EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18575621, titled 'EFFICIENT HARDWARE ACCELERATOR CONFIGURATION EXPLORATION'.

Simplified Explanation: The patent application describes methods, systems, and apparatus for training a surrogate neural network to predict the performance of a hardware accelerator on a specific application.
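As a rough illustration of the surrogate idea, the sketch below trains a tiny neural network to predict a latency measure from a hardware configuration vector. Everything here is assumed for illustration: the three configuration features, the toy `simulate_latency` function standing in for a cycle-accurate simulator, and the network size are all hypothetical, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical configuration features: [num_pes, sram, bus_width], normalized.
# simulate_latency is a toy stand-in for an expensive hardware simulator.
def simulate_latency(cfg):
    pes, sram, bus = cfg
    return 1.0 / (pes + 0.1) + 0.5 / (sram + 0.1) + 0.2 * bus

# Offline dataset of (configuration, measured latency) pairs.
X = rng.uniform(0.1, 1.0, size=(256, 3))
y = np.array([simulate_latency(c) for c in X])

# One-hidden-layer MLP surrogate trained with plain gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def predict(X):
    h = np.maximum(X @ W1 + b1, 0.0)          # ReLU hidden layer
    return (h @ W2 + b2).ravel(), h

initial_mse = float(np.mean((predict(X)[0] - y) ** 2))

lr = 0.05
for step in range(500):
    pred, h = predict(X)
    err = pred - y                            # gradient of 0.5 * MSE
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (h > 0)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_mse = float(np.mean((predict(X)[0] - y) ** 2))
```

Once trained, a surrogate like this can be queried thousands of times per second, whereas each call to a real simulator may take minutes.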

Key Features and Innovation:

  • Training a surrogate neural network to predict the performance of a hardware accelerator.
  • Using the trained neural network in the search process for determining hardware configurations for application-specific hardware accelerators.
  • Enabling deployment of neural networks on hardware accelerators for machine learning tasks.
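The second bullet, using the surrogate inside a configuration search, can be sketched as a simple filter-then-verify loop: a fast surrogate ranks many random candidates, and only the most promising few reach the expensive simulator. The function names (`simulate`, `surrogate_predict`) and the two-parameter configuration space are hypothetical stand-ins, not from the patent.

```python
import random

random.seed(0)

# simulate: expensive ground-truth evaluation (toy latency model here).
def simulate(cfg):
    pes, sram = cfg
    return 1.0 / pes + 100.0 / sram

# surrogate_predict: fast but approximate proxy, modeled as simulate plus noise.
def surrogate_predict(cfg):
    return simulate(cfg) * random.uniform(0.9, 1.1)

# Random search over the configuration space, scored only by the cheap surrogate.
candidates = [(random.randint(1, 64), random.choice([128, 256, 512, 1024]))
              for _ in range(1000)]
ranked = sorted(candidates, key=surrogate_predict)

# Only the few top-ranked configurations are verified with the real simulator.
best = min(ranked[:5], key=simulate)
```

The design point is that the simulator is called 5 times instead of 1000; a stronger search procedure (evolutionary or learned) can replace random sampling without changing this structure.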

Potential Applications: The technology can be applied in the development of hardware accelerators for various machine learning tasks, such as image recognition, natural language processing, and autonomous driving systems.

Problems Solved: The technology addresses the need for efficient and accurate prediction of hardware accelerator performance on specific applications, reducing the time and resources required for hardware configuration optimization.

Benefits:

  • Improved efficiency in determining hardware configurations for specific applications.
  • Enhanced performance of hardware accelerators for machine learning tasks.
  • Reduction in time and resources needed for hardware optimization.

Commercial Applications: The technology has potential commercial applications in industries such as artificial intelligence, autonomous vehicles, robotics, and data centers, where hardware accelerators are used for machine learning tasks.

Prior Art: Readers can explore prior research on surrogate neural networks, hardware accelerator optimization, and machine learning hardware design to understand the background of this technology.

Frequently Updated Research: Stay updated on advancements in surrogate neural networks, hardware accelerator optimization techniques, and machine learning hardware design to enhance the application of this technology.

Questions about Hardware Accelerator Optimization:

  1. How does the surrogate neural network training process improve hardware accelerator performance predictions?
  2. What are the key advantages of using neural networks in hardware accelerator optimization?


Original Abstract Submitted

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a surrogate neural network configured to determine a predicted performance measure of a hardware accelerator having a target hardware configuration on a target application. The trained instance of the surrogate neural network can be used, in addition to or in place of hardware simulation, during a search process for determining hardware configurations for application-specific hardware accelerators, i.e., hardware accelerators on which one or more neural networks can be deployed to perform one or more target machine learning tasks.