18534566. DEEP LEARNING HARDWARE simplified abstract (Intel Corporation)
DEEP LEARNING HARDWARE
Organization Name
Intel Corporation
Inventor(s)
Horace H. Lau of Mountain View CA (US)
Prashant Arora of Fremont CA (US)
Olivia K. Wu of Los Altos CA (US)
Tony L. Werner of Los Altos CA (US)
Carey K. Kloss of Los Altos CA (US)
Amir Khosrowshahi of San Diego CA (US)
Andrew Yang of Cupertino CA (US)
Aravind Kalaiah of San Jose CA (US)
Vijay Anand R. Korthikanti of Milpitas CA (US)
DEEP LEARNING HARDWARE - A simplified explanation of the abstract
This abstract first appeared for US patent application 18534566, titled 'DEEP LEARNING HARDWARE'.
Simplified Explanation
The abstract describes a patent application for a device containing a network of matrix processing units (MPUs) that perform matrix multiplication operations. On-device memory stores tensor data, and a master control central processing unit (MCC) receives instructions from a host device, invokes operations on the MPUs based on those instructions, and generates a result in the form of a tensor value. The key elements are listed below, followed by a sketch of this control flow.
- Matrix processing units (MPUs) connected in a network on a device
- MPUs perform matrix multiplication operations
- Computer memory stores tensor data
- Master control central processing unit (MCC) receives instructions from a host device
- MCC invokes operations on MPUs based on instructions
- Result generated as a tensor value
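The abstract does not disclose an API or partitioning scheme, so the following is only a minimal sketch of the flow it describes: a host issues an instruction with tensor operands to a master controller, which invokes matrix multiplications on the networked MPUs and returns a tensor result. All class and method names (Instruction, MPU, MCC, execute) and the row-wise work split are hypothetical placeholders invented for illustration, not terms from the patent.

```python
# Hypothetical illustration of the dataflow described in the abstract.
# Names and the partitioning strategy are invented for this sketch; they
# do not come from the patent application.
import numpy as np
from dataclasses import dataclass


@dataclass
class Instruction:
    """An instruction from the host: an opcode plus named tensor operands."""
    opcode: str
    operands: tuple


class MPU:
    """A matrix processing unit that performs matrix multiplications."""
    def matmul(self, a: np.ndarray, b: np.ndarray) -> np.ndarray:
        return a @ b


class MCC:
    """Master control CPU: receives host instructions and invokes MPU operations."""
    def __init__(self, mpus):
        self.mpus = mpus          # network of MPUs on the device
        self.tensor_memory = {}   # on-device memory holding tensor data

    def store(self, name: str, tensor: np.ndarray) -> None:
        self.tensor_memory[name] = tensor

    def execute(self, instr: Instruction) -> np.ndarray:
        a = self.tensor_memory[instr.operands[0]]
        b = self.tensor_memory[instr.operands[1]]
        # Split the left operand row-wise across the available MPUs, let each
        # MPU multiply its block, then concatenate the partial results.
        blocks = np.array_split(a, len(self.mpus), axis=0)
        partials = [mpu.matmul(block, b) for mpu, block in zip(self.mpus, blocks)]
        return np.concatenate(partials, axis=0)  # result embodied as a tensor value


# Host side: store tensor data on the device, then issue an instruction.
mcc = MCC(mpus=[MPU() for _ in range(4)])
mcc.store("x", np.random.rand(8, 16))
mcc.store("w", np.random.rand(16, 32))
result = mcc.execute(Instruction(opcode="matmul", operands=("x", "w")))
print(result.shape)  # (8, 32)
```

In the actual device each MPU would be a hardware block connected to its neighbors rather than a Python object, but the division of labor between host, MCC, and MPUs follows the abstract.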
Potential Applications
The technology described in this patent application could be applied in various fields such as:
- Artificial intelligence
- Machine learning
- Data analysis
- Scientific research
Problems Solved
This technology helps in solving the following problems:
- Efficient matrix multiplication operations
- Handling large amounts of tensor data
- Streamlining data processing tasks
Benefits
The benefits of this technology include:
- Faster matrix multiplication operations
- Improved data processing capabilities
- Enhanced performance in complex computations
Potential Commercial Applications
This technology could find commercial use in:
- High-performance computing systems
- Data centers
- Cloud computing services
Possible Prior Art
One example of possible prior art is the use of parallel processing units in supercomputers to accelerate matrix operations.
Unanswered Questions
How does this technology compare to existing matrix processing units on the market?
This article does not provide a direct comparison with existing matrix processing units in terms of performance, efficiency, or cost.
What are the specific technical specifications of the matrix processing units described in the patent application?
The article does not delve into the technical specifications of the matrix processing units, such as processing speed, memory capacity, or connectivity options.
Original Abstract Submitted
A network of matrix processing units (MPUs) is provided on a device, where each MPU is connected to at least one other MPU in the network, and each MPU is to perform matrix multiplication operations. Computer memory stores tensor data and a master control central processing unit (MCC) is provided on the device to receive an instruction from a host device, where the instruction includes one or more tensor operands based on the tensor data. The MCC invokes a set of operations on one or more of the MPUs based on the instruction, where the set of operations includes operations on the tensor operands. A result is generated from the set of operations, the result embodied as a tensor value.
Classification Codes
- G06N3/063
- G06F17/16
- G06N3/04
- G06N3/08