17980544. WEIGHT-SPARSE NPU WITH FINE-GRAINED STRUCTURED SPARSITY simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)


WEIGHT-SPARSE NPU WITH FINE-GRAINED STRUCTURED SPARSITY

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Jong Hoon Shin of San Jose, CA (US)

Ardavan Pedram of Santa Clara, CA (US)

Joseph Hassoun of Los Gatos, CA (US)

WEIGHT-SPARSE NPU WITH FINE-GRAINED STRUCTURED SPARSITY - A simplified explanation of the abstract

This abstract first appeared for US patent application 17980544 titled 'WEIGHT-SPARSE NPU WITH FINE-GRAINED STRUCTURED SPARSITY'.

Simplified Explanation

The abstract describes a patent application for a neural processing unit that can be reconfigured to process fine-grained structured weight sparsity arrangements (N:M = 1:4, 2:4, 2:8, or 4:8). Its main components, illustrated by the sketch after this list, are:

  • Weight buffer stores the weight values
  • Weight multiplexer array selects weight values according to the selected sparsity arrangement
  • Activation buffer stores the activation values
  • Activation multiplexer array selects the activation values that pair with the selected weights
  • Multiplier array computes a product value for each operand pair
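
To make the list concrete, here is a minimal Python/NumPy sketch of how an N:M selection scheme can pair weights with activations. The function name nm_sparse_dot and the magnitude-based rule for which N weights survive per group are illustrative assumptions for this sketch, not the patent's hardware design.

```python
import numpy as np

def nm_sparse_dot(dense_weights, activations, n=2, m=4):
    """Toy model of an N:M weight-sparse multiply (illustrative only).

    In every group of M weights, only N values are kept; their positions act
    like sparsity metadata, which the "activation multiplexer" uses to pick
    the matching activations for the multiplier array.
    """
    assert dense_weights.size == activations.size
    assert dense_weights.size % m == 0

    total = 0.0
    for g in range(0, dense_weights.size, m):
        w_group = dense_weights[g:g + m]
        a_group = activations[g:g + m]
        # "Weight multiplexer": keep the N largest-magnitude weights per group
        # (an assumed selection rule for this sketch).
        keep = np.argsort(np.abs(w_group))[-n:]
        # "Activation multiplexer": gather activations at the same positions,
        # forming the operand pairs that feed the multipliers.
        total += float(np.dot(w_group[keep], a_group[keep]))
    return total

# With 2:4 sparsity, only half of the multiplies are performed.
w = np.array([0.0, 0.9, -0.3, 0.0, 0.5, 0.0, 0.0, -0.7])
x = np.arange(8, dtype=float)
print(nm_sparse_dot(w, x, n=2, m=4))  # -2.6, same as the dense dot product here
```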

Potential Applications

This technology could be applied in:

  • Neural networks
  • Machine learning algorithms
  • Image and speech recognition systems

Problems Solved

This innovation addresses:

  • Efficient processing of sparse weight arrangements
  • Reconfigurability for different sparsity patterns

Benefits

The benefits of this technology include:

  • Improved performance in neural network computations
  • Flexibility to adapt to various sparsity configurations

Potential Commercial Applications

The potential commercial applications of this technology include:

  • AI hardware development
  • Cloud computing services for machine learning tasks

Possible Prior Art

One possible prior art for this technology could be:

  • Reconfigurable neural processing units with fixed weight arrangements

Unanswered Questions

How does this technology compare to traditional neural processing units in terms of performance?

The abstract does not provide a direct comparison between this technology and traditional neural processing units in terms of performance.

Are there any limitations to the reconfigurability of the neural processing unit?

The abstract does not mention any potential limitations to the reconfigurability of the neural processing unit.


Original Abstract Submitted

A neural processing unit is reconfigurable to process a fine-grain structured sparsity weight arrangement selected from N:M=1:4, 2:4, 2:8 and 4:8 fine-grain structured weight sparsity arrangements. A weight buffer stores weight values and a weight multiplexer array outputs one or more weight values stored in the weight buffer as first operand values based on a selected fine-grain structured sparsity weight arrangement. An activation buffer stores activation values and an activation multiplexer array outputs one or more activation values stored in the activation buffer as second operand values based on the selected fine-grain structured weight sparsity in which each respective second operand value and a corresponding first operand value forms an operand value pair. A multiplier array outputs a product value for each operand value pair.
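
As a rough illustration of the reconfigurability described in the abstract, the sketch below packs a dense weight row into per-group nonzero values plus index metadata for any of the four listed arrangements, then forms one product value per operand pair. The compress and sparse_multiply helpers and the magnitude-based selection rule are hypothetical illustrations, not taken from the patent.

```python
import numpy as np

# The four arrangements named in the abstract: N kept weights per group of M.
ARRANGEMENTS = {"1:4": (1, 4), "2:4": (2, 4), "2:8": (2, 8), "4:8": (4, 8)}

def compress(dense_weights, arrangement):
    """Pack a dense weight row into per-group kept values plus index metadata,
    mimicking what a weight buffer for a selected N:M mode might hold."""
    n, m = ARRANGEMENTS[arrangement]
    values, indices = [], []
    for g in range(0, dense_weights.size, m):
        group = dense_weights[g:g + m]
        keep = np.sort(np.argsort(np.abs(group))[-n:])  # assumed selection rule
        values.append(group[keep])
        indices.append(keep)
    return np.concatenate(values), np.concatenate(indices), (n, m)

def sparse_multiply(values, indices, nm, activations):
    """Use the index metadata to gather the activation that pairs with each
    stored weight, then output one product value per operand pair."""
    n, m = nm
    products = []
    for g, base in enumerate(range(0, activations.size, m)):
        for k in range(n):
            w = values[g * n + k]
            a = activations[base + indices[g * n + k]]
            products.append(w * a)
    return np.array(products)

w = np.array([0.0, 0.9, -0.3, 0.0, 0.5, 0.0, 0.0, -0.7])
x = np.arange(8, dtype=float)
vals, idx, nm = compress(w, "2:4")   # switch the string to reconfigure the mode
print(sparse_multiply(vals, idx, nm, x).sum())
```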