18408716. Schedule-Aware Tensor Distribution Module simplified abstract (Intel Corporation)


Schedule-Aware Tensor Distribution Module

Organization Name

Intel Corporation

Inventor(s)

Gautham Chinya of Sunnyvale, CA (US)

Huichu Liu of Santa Clara, CA (US)

Arnab Raha of San Jose, CA (US)

Debabrata Mohapatra of San Jose, CA (US)

Cormac Brick of San Francisco, CA (US)

Lance Hacking of Spanish Fork, UT (US)

Schedule-Aware Tensor Distribution Module - A simplified explanation of the abstract

This abstract first appeared for US patent application 18408716, titled 'Schedule-Aware Tensor Distribution Module'.

Simplified Explanation:

The patent application describes a neural network system built around a neural network accelerator. The accelerator contains multiple processing engines that perform the arithmetic operations behind deep neural network inference, plus schedule-aware tensor data distribution circuitry that manages how tensor data is loaded into the engines, extracted from them, reorganized, and stored back to memory (a toy sketch of this cycle follows the feature list below).

  • Key Features and Innovation:
   - Neural network system with a neural network accelerator
   - Multiple processing engines for arithmetic operations
   - Schedule-aware tensor data distribution circuitry
   - Data loading, extraction, reorganization, and storage capabilities
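
To make the four phases concrete, here is a minimal Python sketch of how a schedule-aware distribution loop might drive a set of processing engines. It is an illustration under assumptions, not the patented circuitry: the `ProcessingEngine` class, the `schedule` permutation, and the doubling "arithmetic operation" are all hypothetical stand-ins.

  import numpy as np

  class ProcessingEngine:
      """Hypothetical stand-in for one accelerator processing engine."""
      def __init__(self):
          self.buffer = None

      def load(self, tile: np.ndarray) -> None:
          self.buffer = tile                      # load phase

      def compute(self) -> np.ndarray:
          return self.buffer * 2.0                # placeholder arithmetic op

  def distribute_and_collect(tensor: np.ndarray,
                             engines: list[ProcessingEngine],
                             schedule: list[int]) -> np.ndarray:
      """Drive one load/extract/reorganize/store cycle for a tensor."""
      tiles = np.array_split(tensor, len(engines), axis=0)

      # Load phase: the schedule decides which tile each engine receives.
      for engine, tile_idx in zip(engines, schedule):
          engine.load(tiles[tile_idx])

      # Extraction phase: pull output data back from every engine.
      outputs = [engine.compute() for engine in engines]

      # Reorganization phase: undo the schedule's permutation so the
      # result matches the original tile order.
      reordered = [None] * len(outputs)
      for out, tile_idx in zip(outputs, schedule):
          reordered[tile_idx] = out

      # Store phase: write the reorganized result back to (host) memory.
      return np.concatenate(reordered, axis=0)

  engines = [ProcessingEngine() for _ in range(4)]
  result = distribute_and_collect(np.ones((8, 3)), engines,
                                  schedule=[2, 0, 3, 1])
  print(result.shape)  # (8, 3)

The point of the sketch is the separation of concerns the abstract claims: the schedule (here a simple permutation) controls distribution independently of the arithmetic, and the reorganization step restores the layout the rest of the system expects.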

Potential Applications: This technology can be applied in fields such as:
   - Artificial intelligence
   - Machine learning
   - Data analytics
   - Image and speech recognition
   - Autonomous vehicles

Problems Solved:
   - Efficient processing of neural network inferences
   - Optimized data distribution and management
   - Accelerated deep learning tasks

Benefits:
   - Faster inference processing
   - Improved performance of deep neural networks
   - Enhanced efficiency in data handling

Commercial Applications: Potential commercial uses include:
   - AI-powered applications
   - Cloud computing services
   - Autonomous systems development
   - Data processing and analysis companies

Prior Art: Readers can explore prior art related to neural network accelerators, deep learning systems, and data distribution technologies in the field of artificial intelligence and machine learning.

Frequently Updated Research: Stay current on the latest advances in neural network accelerators, deep learning algorithms, and data distribution techniques that improve the performance of AI systems.

Questions about Neural Network Accelerators:
1. How do neural network accelerators improve the efficiency of deep learning tasks?
2. What are the key components of a neural network accelerator, and how do they work together to enhance inference processing?


Original Abstract Submitted

Methods and systems include a neural network system that includes a neural network accelerator. The neural network accelerator includes multiple processing engines coupled together to perform arithmetic operations in support of an inference performed using the deep neural network system. The neural network accelerator also includes a schedule-aware tensor data distribution circuitry or software that is configured to load tensor data into the multiple processing engines in a load phase, extract output data from the multiple processing engines in an extraction phase, reorganize the extracted output data, and store the reorganized extracted output data to memory.