20240028881. DEEP NEURAL NETWORK (DNN) COMPUTE LOADING AND TRAFFIC-AWARE POWER MANAGEMENT FOR MULTI-CORE ARTIFICIAL INTELLIGENCE (AI) PROCESSING SYSTEM simplified abstract (MEDIATEK INC.)
Organization Name
MEDIATEK INC.
Inventor(s)
Chih-Chung Cheng of Hsinchu (TW)
A Simplified Explanation of the Abstract
This abstract first appeared for US patent application 20240028881, titled 'DEEP NEURAL NETWORK (DNN) COMPUTE LOADING AND TRAFFIC-AWARE POWER MANAGEMENT FOR MULTI-CORE ARTIFICIAL INTELLIGENCE (AI) PROCESSING SYSTEM'.
Simplified Explanation
The present disclosure describes a method for controlling a processing device to execute an application that runs on a neural network (NN) using a network-on-chip (NoC) architecture. The method involves obtaining compiler information related to the application and the NoC, and controlling the processing device to employ a first routing scheme to process the application when the compiler information does not meet a predefined requirement. When the compiler information meets the predefined requirement, the processing device is controlled to employ a second routing scheme to process the application.
Key steps of the method:
- The method controls a processing device to execute an application on a neural network using a network-on-chip architecture.
- Compiler information related to the application and the network-on-chip is obtained.
- The processing device is controlled to employ a first routing scheme when the compiler information does not meet a predefined requirement.
- The processing device is controlled to employ a second routing scheme when the compiler information meets the predefined requirement.
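The selection logic above can be sketched in Python. Note that the abstract does not specify what the compiler information contains or what the predefined requirement is; the fields (`compute_load`, `noc_traffic`), the threshold values, and the scheme names below are all illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass


@dataclass
class CompilerInfo:
    """Hypothetical compiler information about the application and the NoC.

    The patent only says such information is obtained; these particular
    fields are assumptions for illustration.
    """
    compute_load: float  # estimated DNN compute load (assumed, normalized 0..1)
    noc_traffic: float   # estimated NoC traffic volume (assumed, normalized 0..1)


def meets_requirement(info: CompilerInfo,
                      load_limit: float = 0.8,
                      traffic_limit: float = 0.5) -> bool:
    # Hypothetical "predefined requirement": both estimates fall
    # within their limits. The actual requirement is not disclosed
    # in the abstract.
    return info.compute_load <= load_limit and info.noc_traffic <= traffic_limit


def select_routing_scheme(info: CompilerInfo) -> str:
    # Mirrors the order stated in the abstract: the first routing
    # scheme is used when the requirement is NOT met, the second
    # when it is met.
    if meets_requirement(info):
        return "second_routing_scheme"
    return "first_routing_scheme"
```

For example, a heavily loaded application (`CompilerInfo(compute_load=0.9, noc_traffic=0.6)`) would fall back to the first routing scheme, while a light one (`CompilerInfo(compute_load=0.5, noc_traffic=0.3)`) would use the second.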
Potential applications of this technology:
- Efficient execution of applications on neural networks using a network-on-chip architecture.
- Improved performance and resource utilization in processing devices.
- Optimization of routing schemes based on compiler information.
Problems solved by this technology:
- Inefficient execution of applications on neural networks.
- Suboptimal resource utilization in processing devices.
- Lack of flexibility in routing schemes for processing applications.
Benefits of this technology:
- Improved performance and efficiency in executing applications on neural networks.
- Enhanced resource utilization in processing devices.
- Flexibility in choosing routing schemes based on compiler information.
Original Abstract Submitted
Aspects of the present disclosure provide a method for controlling a processing device to execute an application that runs on a neural network (NN). The processing device can include a plurality of processing units that are arranged in a network-on-chip (NoC) architecture. For example, the method can include obtaining compiler information relating to the application and the NoC, controlling the processing device to employ a first routing scheme to process the application when the compiler information does not meet a predefined requirement, and controlling the processing device to employ a second routing scheme to process the application when the compiler information meets the predefined requirement.