17816285. DIGITAL COMPUTE IN MEMORY simplified abstract (QUALCOMM Incorporated)

From WikiPatents

DIGITAL COMPUTE IN MEMORY

Organization Name

QUALCOMM Incorporated

Inventor(s)

Zhongze Wang of San Diego, CA (US)

Mustafa Badaroglu of Leuven (BE)

DIGITAL COMPUTE IN MEMORY - A simplified explanation of the abstract

This abstract first appeared for US patent application 17816285, titled 'DIGITAL COMPUTE IN MEMORY'.

Simplified Explanation

The abstract describes a patent application directed to computation-in-memory architectures and operations for machine learning tasks. It proposes a circuit for in-memory computation that includes multiple bit-lines, multiple word-lines, an array of compute-in-memory cells, and a plurality of accumulators.

  • The circuit is designed to perform machine learning tasks using computation-in-memory techniques.
  • Multiple bit-lines and word-lines provide access to the memory array for data storage and retrieval.
  • The array of compute-in-memory cells stores the weight bits of a neural network, with each cell coupled to one bit-line and one word-line.
  • Each accumulator is coupled to a respective bit-line and accumulates the computation results produced on that bit-line.
  • By performing computation within the memory itself, the circuit aims to improve the performance and efficiency of machine learning tasks (a minimal behavioral sketch follows this list).
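
As a concrete illustration of the structure described above, the following is a minimal Python sketch of its behavior. The names (DigitalCIMArray, weight_bits, compute) are illustrative assumptions rather than terms from the patent: the sketch simply models an array of cells that each store one weight bit, word-lines that broadcast 1-bit activations, and one accumulator per bit-line that sums the 1-bit products of the cells coupled to it.

```python
import numpy as np

# Illustrative behavioral model (names are assumptions, not from the patent):
# each cell stores one weight bit, each word-line broadcasts one activation
# bit, and each bit-line has an accumulator that sums the 1-bit products of
# the cells coupled to it.

class DigitalCIMArray:
    def __init__(self, weight_bits):
        # weight_bits[row][col] is the bit stored in the cell at word-line
        # `row` and bit-line `col`.
        self.cells = np.array(weight_bits, dtype=np.uint8)
        self.num_wordlines, self.num_bitlines = self.cells.shape

    def compute(self, activation_bits):
        # activation_bits[row] is the 1-bit activation driven on word-line `row`.
        act = np.array(activation_bits, dtype=np.uint8).reshape(-1, 1)
        # Each cell produces the AND of its stored weight bit and the
        # activation bit on its word-line.
        partial_products = self.cells & act
        # The accumulator on each bit-line sums that bit-line's partial products.
        return partial_products.sum(axis=0)

# Example: 4 word-lines x 3 bit-lines of stored weight bits.
cim = DigitalCIMArray([[1, 0, 1],
                       [0, 1, 1],
                       [1, 1, 0],
                       [1, 0, 1]])
print(cim.compute([1, 1, 0, 1]))  # per-bit-line accumulated sums -> [2 1 3]
```

This is only a functional model of the data flow; it says nothing about how the cells, adders, or accumulators would actually be implemented in hardware.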

Potential applications of this technology:

  • Machine learning and artificial intelligence systems
  • Neural network training and inference
  • Pattern recognition and image processing
  • Natural language processing and speech recognition
  • Data analytics and predictive modeling

Problems solved by this technology:

  • Improves the performance and efficiency of machine learning tasks by reducing data movement and energy consumption.
  • Enables faster and more parallel computation by performing operations within the memory itself (see the bit-serial sketch after this list).
  • Facilitates the implementation of complex machine learning algorithms by storing weight bits in the compute-in-memory cells.
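
To make the "computation within the memory itself" point concrete, here is a short follow-on sketch of one common digital compute-in-memory usage pattern (illustrative only, and not necessarily the scheme claimed in this application): multi-bit activations are applied one bit-plane at a time, the per-bit-line sums are recombined by shift-and-add according to each bit's significance, and the stored weight bits never leave the array.

```python
# Follow-on sketch (illustrative only; not necessarily the claimed scheme):
# multi-bit activations are applied bit-serially. Each cycle drives one
# bit-plane on the word-lines, the per-bit-line accumulators sum the 1-bit
# products, and the results are recombined by shift-and-add according to
# each activation bit's significance. The stored weight bits never move.

def bit_serial_mac(weight_bits, activations, num_act_bits=4):
    num_bitlines = len(weight_bits[0])
    totals = [0] * num_bitlines
    for bit in range(num_act_bits):
        # One compute cycle: drive bit-plane `bit` of every activation.
        plane = [(a >> bit) & 1 for a in activations]
        for col in range(num_bitlines):
            # Accumulator on bit-line `col` sums the 1-bit products of the
            # cells coupled to that bit-line.
            col_sum = sum(row[col] & p for row, p in zip(weight_bits, plane))
            totals[col] += col_sum << bit  # weight by the bit's significance
    return totals

weights = [[1, 0, 1],     # 4 word-lines x 3 bit-lines of stored weight bits
           [0, 1, 1],
           [1, 1, 0],
           [1, 0, 1]]
print(bit_serial_mac(weights, [3, 1, 2, 7]))  # -> [12, 3, 11]
```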

Benefits of this technology:

  • Faster and more efficient machine learning algorithms
  • Reduced energy consumption and improved battery life in AI systems
  • Higher computational throughput and parallelism
  • Improved scalability and flexibility in neural network training and inference


Original Abstract Submitted

Certain aspects generally relate to performing machine learning tasks, and in particular, to computation-in-memory architectures and operations. One aspect provides a circuit for in-memory computation. The circuit generally includes multiple bit-lines, multiple word-lines, an array of compute-in-memory cells, and a plurality of accumulators, each accumulator being coupled to a respective one of the multiple bit-lines. Each compute-in-memory cell is coupled to one of the bit-lines and to one of the word-lines and is configured to store a weight bit of a neural network.