18311124. CONFIGURABLE COMPUTE-IN-MEMORY CIRCUIT AND METHOD simplified abstract (TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.)

CONFIGURABLE COMPUTE-IN-MEMORY CIRCUIT AND METHOD

Organization Name

TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD.

Inventor(s)

Xiaoyu Sun of Hsinchu (TW)

Murat Kerem Akarvardar of Hsinchu (TW)

CONFIGURABLE COMPUTE-IN-MEMORY CIRCUIT AND METHOD - A simplified explanation of the abstract

This abstract first appeared for US patent application 18311124 titled 'CONFIGURABLE COMPUTE-IN-MEMORY CIRCUIT AND METHOD'.

Simplified Explanation

The circuit described in the patent application includes a data buffer that sequentially outputs two sets of bits, a plurality of memory macros, and a distribution network that connects the data buffer to the memory macros.

  • The distribution network divides the first set of bits into subsets and sends each subset to a corresponding memory macro.
  • The distribution network then either sends the entire second set of bits to each memory macro, or divides the second set into subsets and sends each subset to one or more memory macros.
  • Each memory macro multiplies its first subset by either the entire second set of bits or its corresponding second subset, and outputs the product; this routing is modeled in the sketch after this list.
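
The routing described above can be sketched in a few lines of Python. This is only an illustrative model of the claimed data flow, not the hardware itself: the function name `distribute_and_multiply`, the `broadcast_second` flag, the bit widths, and the macro count in the demo are assumptions chosen for the example.

```python
from typing import List


def distribute_and_multiply(first_bits: List[int],
                            second_bits: List[int],
                            num_macros: int,
                            broadcast_second: bool = True) -> List[int]:
    """Model of the claimed routing: split the first set of bits across the
    macros, then either broadcast the whole second set to every macro or
    split it as well; each macro returns the product of the integer values
    its two inputs encode (a stand-in for the in-memory multiply)."""
    # Separate the first set of bits into num_macros equal subsets.
    chunk = len(first_bits) // num_macros
    first_subsets = [first_bits[i * chunk:(i + 1) * chunk]
                     for i in range(num_macros)]

    if broadcast_second:
        # Mode 1: the entire second set goes to every macro.
        second_inputs = [second_bits] * num_macros
    else:
        # Mode 2: the second set is also split, here one subset per macro.
        chunk2 = len(second_bits) // num_macros
        second_inputs = [second_bits[i * chunk2:(i + 1) * chunk2]
                         for i in range(num_macros)]

    def to_int(bits: List[int]) -> int:
        # Interpret a bit list (MSB first) as an unsigned integer.
        value = 0
        for b in bits:
            value = (value << 1) | b
        return value

    # Each macro outputs the product of its first subset and its second input.
    return [to_int(f) * to_int(s) for f, s in zip(first_subsets, second_inputs)]


if __name__ == "__main__":
    # 8 first-set bits split across 4 macros; 4 second-set bits broadcast to all.
    first = [1, 0, 1, 1, 0, 1, 0, 1]
    second = [1, 0, 1, 0]
    print(distribute_and_multiply(first, second, num_macros=4))  # [20, 30, 10, 10]
```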

---

Potential Applications

  • This technology could be used in high-speed data processing systems where efficient distribution of data is crucial.
  • It could also be applied in memory systems where parallel processing is required.

Problems Solved

  • Efficient distribution of data to memory macros in a circuit.
  • Simplifying the multiplication of data subsets within memory macros.

Benefits

  • Improved performance and speed in data processing systems.
  • Reduced complexity in memory system design.


Original Abstract Submitted

A circuit includes a data buffer configured to sequentially output first and second pluralities of bits, a plurality of memory macros having a total number, and a distribution network coupled between the data buffer and the plurality of memory macros. The distribution network separates the first plurality of bits into the total number of first subsets, and outputs each first subset to a corresponding memory macro, and either outputs an entirety of the second plurality of bits to each memory macro, or separates the second plurality of bits into a number of second subsets less than or equal to the total number, and outputs each second subset to one or more corresponding memory macros. Each memory macro outputs a product of the corresponding first subset and the one of the entirety of the second plurality of bits or the corresponding second subset of the second plurality of bits.
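
The abstract's second routing mode, in which the second plurality of bits is separated into fewer subsets than there are macros so that each subset feeds a group of macros, can be pictured with a short Python snippet. The counts below (8 macros, 4 second subsets) and the contiguous grouping are assumptions chosen for illustration, not details from the filing.

```python
# Hypothetical routing for the split-second-plurality mode: each of the
# num_second_subsets subsets is shared by a contiguous group of macros.
# The specific counts and grouping are illustrative assumptions.
num_macros = 8
num_second_subsets = 4  # must be <= num_macros per the abstract
macros_per_subset = num_macros // num_second_subsets

routing = {subset: list(range(subset * macros_per_subset,
                              (subset + 1) * macros_per_subset))
           for subset in range(num_second_subsets)}
print(routing)  # {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}
```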