Intel Corporation (20240220785). Schedule-Aware Tensor Distribution Module simplified abstract
Schedule-Aware Tensor Distribution Module
Organization Name
Intel Corporation
Inventor(s)
Gautham Chinya of Sunnyvale CA (US)
Huichu Liu of Santa Clara CA (US)
Arnab Raha of San Jose CA (US)
Debabrata Mohapatra of San Jose CA (US)
Cormac Brick of San Francisco CA (US)
Lance Hacking of Spanish Fork UT (US)
Schedule-Aware Tensor Distribution Module - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240220785, titled 'Schedule-Aware Tensor Distribution Module'.
The abstract describes a neural network system with a neural network accelerator that includes multiple processing engines performing arithmetic operations in support of deep neural network inference. The accelerator also features schedule-aware tensor data distribution circuitry that manages data loading, extraction, reorganization, and storage (a minimal sketch of this flow follows the key points below).
- Neural network system with a neural network accelerator
- Multiple processing engines for arithmetic operations
- Schedule-aware tensor data distribution circuitry
- Data loading, extraction, reorganization, and storage capabilities
- Support for deep neural network inference
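To make the flow concrete, here is a minimal Python sketch of the load / extract / reorganize / store pipeline the abstract describes. All names (ProcessingEngine, distribute_by_schedule, the toy doubling "compute" step) are hypothetical illustrations; the patent describes hardware circuitry and does not specify an API.

```python
# Illustrative sketch only: models the four phases of schedule-aware
# tensor distribution with hypothetical names, not the patented design.
import numpy as np

class ProcessingEngine:
    """Toy stand-in for one hardware processing engine (PE)."""
    def __init__(self):
        self.tile = None

    def load(self, tile: np.ndarray) -> None:
        self.tile = tile              # load phase: tensor data enters the PE

    def compute(self) -> np.ndarray:
        return self.tile * 2          # placeholder for the PE's arithmetic ops

def distribute_by_schedule(tensor: np.ndarray,
                           engines: list[ProcessingEngine],
                           memory: dict) -> None:
    # Load phase: split the tensor into one tile per PE along axis 0.
    tiles = np.array_split(tensor, len(engines), axis=0)
    for pe, tile in zip(engines, tiles):
        pe.load(tile)
    # Extraction phase: pull each PE's output back out.
    outputs = [pe.compute() for pe in engines]
    # Reorganize: stitch the per-PE output tiles back into one tensor.
    result = np.concatenate(outputs, axis=0)
    # Store: write the reorganized output to (simulated) memory.
    memory["output"] = result

engines = [ProcessingEngine() for _ in range(4)]
memory: dict = {}
distribute_by_schedule(np.arange(16.0).reshape(8, 2), engines, memory)
print(memory["output"].shape)  # (8, 2)
```

In this sketch the "schedule" is simply an even split along one axis; the point is only that the distributor, not the PEs, owns the layout decisions in each phase.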
Potential Applications:
- Artificial intelligence
- Machine learning
- Data processing
- Image recognition
- Natural language processing
Problems Solved:
- Accelerating neural network inference
- Efficient data distribution and management
- Optimizing arithmetic operations
- Enhancing overall system performance
Benefits:
- Faster inference processing
- Improved efficiency in data handling
- Enhanced performance of deep neural networks
- Potential for real-time applications
- Scalability for large datasets
Commercial Applications: "Enhanced Neural Network Accelerator for AI Applications"
This technology can be utilized in industries such as healthcare, finance, autonomous vehicles, and cybersecurity for faster and more efficient data processing, leading to improved decision-making and productivity.
Questions about Neural Network Accelerator:
1. How does the neural network accelerator improve the performance of deep neural networks?
- The accelerator enhances performance by utilizing multiple processing engines for arithmetic operations and efficient, schedule-aware data distribution.
2. What are the potential commercial applications of this technology?
- The technology can be applied in industries such as healthcare, finance, autonomous vehicles, and cybersecurity for faster and more efficient data processing.
Original Abstract Submitted
Methods and systems include a neural network system that includes a neural network accelerator. The neural network accelerator includes multiple processing engines coupled together to perform arithmetic operations in support of an inference performed using the deep neural network system. The neural network accelerator also includes schedule-aware tensor data distribution circuitry or software that is configured to load tensor data into the multiple processing engines in a load phase, extract output data from the multiple processing engines in an extraction phase, reorganize the extracted output data, and store the reorganized extracted output data to memory.
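As a hedged illustration of why the reorganization step must be schedule-aware, the sketch below assumes a round-robin schedule in which each processing engine computes every Nth row of the output; restoring the natural row order then requires an interleave rather than a plain concatenation. The schedule, shapes, and variable names are invented for illustration and are not specified in the abstract.

```python
# Assumed round-robin schedule: PE k holds output rows k, k + num_pes, ...
import numpy as np

num_pes, rows_per_pe, cols = 4, 2, 3
full = np.arange(num_pes * rows_per_pe * cols).reshape(-1, cols)

# Extraction phase leaves one strided block per PE.
extracted = [full[k::num_pes] for k in range(num_pes)]

# Reorganize: interleave the per-PE blocks to recover row order,
# then store the result to (simulated) memory.
restored = np.stack(extracted, axis=1).reshape(-1, cols)
memory = {"layer_output": restored}

assert np.array_equal(restored, full)  # original row order recovered
```

Under a different schedule (for example, a contiguous block split), the correct reorganization would be a concatenation instead; knowing which transform to apply is what makes the distribution circuitry "schedule-aware".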