17900471. ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE simplified abstract (Taiwan Semiconductor Manufacturing Company, Ltd.)

From WikiPatents

ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE

Organization Name

Taiwan Semiconductor Manufacturing Company, Ltd.

Inventor(s)

Xiaoyu Sun of San Jose CA (US)

Xiaochen Peng of San Jose CA (US)

Murat Kerem Akarvardar of Hsinchu (TW)

ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE - A simplified explanation of the abstract

This abstract first appeared for US patent application 17900471, titled 'ARTIFICIAL INTELLIGENCE ACCELERATOR DEVICE'.

Simplified Explanation

The abstract describes an artificial intelligence (AI) accelerator device with on-chip mini buffers associated with a processing element (PE) array, where each mini buffer serves a subset of the array's rows or columns. Partitioning the on-chip buffer into mini buffers can reduce its size and complexity, leading to reduced wire routing complexity, latency, and access energy for the device. This can increase operating efficiency and performance, as well as the overall bandwidth available for transferring data to and from the PE array.

  • Mini buffers associated with PE array
  • Partitioning on-chip buffer reduces size and complexity
  • Reduces wire routing complexity, latency, and access energy
  • Increases operating efficiency and performance
  • Increases overall bandwidth for data transfer
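The partitioning idea summarized above can be sketched as a toy model in Python. The class, function names, buffer counts, and bandwidth figures below are illustrative assumptions only, not details taken from the patent:

```python
# Toy model of partitioning an AI accelerator's on-chip buffer into
# per-row-group "mini buffers". All names, sizes, and bandwidth figures
# are illustrative assumptions, not values from the patent.

from dataclasses import dataclass


@dataclass
class MiniBuffer:
    rows: range          # subset of PE-array rows this mini buffer serves
    size_kib: int        # capacity of this mini buffer
    bandwidth_gbps: int  # data rate between this buffer and its row group


def partition_buffer(total_rows: int, num_buffers: int,
                     total_size_kib: int, per_buffer_bw_gbps: int):
    """Split one large on-chip buffer into mini buffers, each tied to a
    contiguous subset of PE-array rows (a column-wise partitioning would
    be symmetric)."""
    rows_per_buffer = total_rows // num_buffers
    return [
        MiniBuffer(
            rows=range(i * rows_per_buffer, (i + 1) * rows_per_buffer),
            size_kib=total_size_kib // num_buffers,
            bandwidth_gbps=per_buffer_bw_gbps,
        )
        for i in range(num_buffers)
    ]


# Example: a 64-row PE array with a 512 KiB buffer split into 8 mini buffers.
buffers = partition_buffer(total_rows=64, num_buffers=8,
                           total_size_kib=512, per_buffer_bw_gbps=32)

# Each mini buffer is smaller (shorter wires, lower access energy), and the
# buffers can serve their row groups in parallel, so the aggregate bandwidth
# to the PE array is the sum of the per-buffer bandwidths.
aggregate_bw = sum(b.bandwidth_gbps for b in buffers)
print(len(buffers), buffers[0].size_kib, aggregate_bw)  # → 8 64 256
```

The sketch shows the abstract's core trade-off: many small buffers, each wired only to its own slice of the array, can collectively offer more bandwidth than a single monolithic buffer of the same total capacity.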

Potential Applications

  • AI accelerators
  • Edge computing devices
  • IoT devices

Problems Solved

  • Reduced wire routing complexity
  • Reduced latency
  • Reduced access energy consumption

Benefits

  • Increased operating efficiency
  • Improved performance
  • Increased overall bandwidth
  • Reduced size and complexity of the on-chip buffer


Original Abstract Submitted

An artificial intelligence (AI) accelerator device may include a plurality of on-chip mini buffers that are associated with a processing element (PE) array. Each mini buffer is associated with a subset of rows or a subset of columns of the PE array. Partitioning an on-chip buffer of the AI accelerator device into the mini buffers described herein may reduce the size and complexity of the on-chip buffer. The reduced size of the on-chip buffer may reduce the wire routing complexity of the on-chip buffer, which may reduce latency and may reduce access energy for the AI accelerator device. This may increase the operating efficiency and/or may increase the performance of the AI accelerator device. Moreover, the mini buffers may increase the overall bandwidth that is available for the mini buffers to transfer data to and from the PE array.