Intel Corporation (20240112006). DEEP LEARNING HARDWARE simplified abstract

DEEP LEARNING HARDWARE

Organization Name

Intel Corporation

Inventor(s)

Horace H. Lau of Mountain View, CA (US)

Prashant Arora of Fremont, CA (US)

Olivia K. Wu of Los Altos, CA (US)

Tony L. Werner of Los Altos, CA (US)

Carey K. Kloss of Los Altos, CA (US)

Amir Khosrowshahi of San Diego, CA (US)

Andrew Yang of Cupertino, CA (US)

Aravind Kalaiah of San Jose, CA (US)

Vijay Anand R. Korthikanti of Milpitas, CA (US)

DEEP LEARNING HARDWARE - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240112006, titled 'DEEP LEARNING HARDWARE'.

Simplified Explanation

A network of matrix processing units (MPUs) is provided on a device, where each MPU is connected to at least one other MPU in the network and each MPU performs matrix multiplication operations. Computer memory stores tensor data, and a master control central processing unit (MCC) on the device receives instructions from a host device, where each instruction includes one or more tensor operands based on the tensor data. The MCC invokes a set of operations, including operations on the tensor operands, on one or more of the MPUs, and the result of those operations is itself a tensor value. A minimal code sketch of this arrangement appears after the summary list below.

  • Network of matrix processing units (MPUs) on a device
  • MPUs connected to each other in the network
  • MPUs perform matrix multiplication operations
  • Computer memory stores tensor data
  • Master Control Central Processing Unit (MCC) receives instructions from a host device
  • MCC invokes operations on MPUs based on the instruction
  • Result generated as a tensor value
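To make the described dataflow concrete, here is a minimal Python sketch of the arrangement above. Every name in it (MPU, MasterControlCPU, the instruction dictionary, the chain topology) is a hypothetical illustration; the patent abstract does not specify an API or an interconnect layout.

```python
import numpy as np

class MPU:
    """Hypothetical matrix processing unit; its core job is matrix multiply."""
    def __init__(self, uid):
        self.uid = uid
        self.neighbors = []          # MPUs this unit is connected to

    def connect(self, other):
        # Each MPU is connected to at least one other MPU in the network.
        self.neighbors.append(other)
        other.neighbors.append(self)

    def matmul(self, a, b):
        # The MPU's defining operation: a matrix multiplication.
        return a @ b

class MasterControlCPU:
    """Hypothetical MCC: receives an instruction from the host and invokes
    operations on one or more MPUs, returning a tensor result."""
    def __init__(self, mpus, memory):
        self.mpus = mpus             # the on-device MPU network
        self.memory = memory         # computer memory holding tensor data

    def execute(self, instruction):
        # The instruction carries tensor operands, here named references
        # into the tensor data held in device memory.
        operands = [self.memory[name] for name in instruction["operands"]]
        # Invoke the requested operation on an MPU; a single matmul on the
        # first MPU stands in for the general "set of operations".
        return self.mpus[0].matmul(*operands)

# Host side: place tensor data in device memory, then send an instruction.
memory = {"A": np.random.rand(4, 8), "B": np.random.rand(8, 2)}
mpus = [MPU(i) for i in range(4)]
for left, right in zip(mpus, mpus[1:]):
    left.connect(right)              # chain topology; the patent allows others

mcc = MasterControlCPU(mpus, memory)
result = mcc.execute({"op": "matmul", "operands": ["A", "B"]})
print(result.shape)                  # (4, 2): the result is a tensor value
```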

Potential Applications

The technology can be applied in:

  • Artificial intelligence
  • Machine learning
  • Data processing

Problems Solved

This technology helps in:

  • Efficient matrix multiplication over large operands (see the tiling sketch after this list)
  • Handling large datasets
  • Accelerating computational tasks
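The abstract does not spell out how the MPU network achieves efficiency on large operands, but a standard approach is to tile one operand and distribute the independent block multiplications across units. The sketch below illustrates that idea with plain NumPy; the round-robin schedule, tile size, and sequential simulation are all assumptions for illustration, not Intel's documented method.

```python
import numpy as np

def tiled_matmul(a, b, num_mpus=4, tile=64):
    """Split A into row blocks, assign the blocks round-robin across
    (simulated) MPUs, and combine the independent block @ B products."""
    m, n = a.shape[0], b.shape[1]
    # Schedule: which row offsets each MPU would own. On real hardware the
    # MPUs would compute their blocks in parallel; NumPy runs them
    # sequentially here.
    schedule = {u: [] for u in range(num_mpus)}
    for idx, row in enumerate(range(0, m, tile)):
        schedule[idx % num_mpus].append(row)
    out = np.zeros((m, n))
    for rows in schedule.values():
        for row in rows:
            out[row:row + tile] = a[row:row + tile] @ b
    return out

a = np.random.rand(256, 128)
b = np.random.rand(128, 64)
assert np.allclose(tiled_matmul(a, b), a @ b)   # matches a plain matmul
```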

Benefits

The benefits of this technology include:

  • Faster processing, since matrix multiplications can run in parallel across the MPU network
  • Improved performance for deep learning and other tensor workloads
  • Scalability, as the MPU network can grow with additional connected units

Potential Commercial Applications

The technology can be utilized in:

  • Data centers
  • Supercomputers
  • High-performance computing systems

Possible Prior Art

One possible example of prior art for this technology is the use of parallel processing units in supercomputers for matrix operations.

Unanswered Questions

How does this technology compare to traditional matrix multiplication methods?

The design targets faster processing speeds and improved performance relative to traditional methods, but the exact speedup and efficiency gains would need to be quantified through benchmarks and direct comparisons.

What are the limitations of this technology in terms of scalability and compatibility with existing systems?

The scalability of this technology in handling extremely large datasets and its compatibility with different hardware configurations and software environments need to be further explored and evaluated to understand its limitations.


Original Abstract Submitted

a network of matrix processing units (mpus) is provided on a device, where each mpu is connected to at least one other mpu in the network, and each mpu is to perform matrix multiplication operations. computer memory stores tensor data and a master control central processing unit (mcc) is provided on the device to receive an instruction from a host device, where the instruction includes one or more tensor operands based on the tensor data. the mcc invokes a set of operations on one or more of the mpus based on the instruction, where the set of operations includes operations on the tensor operands. a result is generated from the set of operations, the result embodied as a tensor value.