17932527. DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS simplified abstract (QUALCOMM Incorporated)


DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS

Organization Name

QUALCOMM Incorporated

Inventor(s)

Jamie Menjay Lin of San Diego, CA (US)

Jian Shen of San Diego, CA (US)

DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17932527, titled 'DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS'.

Simplified Explanation

This application describes techniques for "desparsified" convolution. A convolutional weight tensor with unstructured sparsity (zero values scattered with no fixed pattern) is compacted into a smaller, dense tensor by "directionally squeezing" it, that is, by removing the sparse values along a chosen direction. A sparsity map is generated during the squeeze to record where each retained value originally sat, and the densified weight tensor and sparsity map are then output together for use in a convolutional neural network (see the sketch after the list below).

  • Weight tensor with unstructured sparsity is accessed.
  • Densified weight tensor is generated by directionally squeezing the weight tensor.
  • Sparsity map is generated based on the directional squeezing.
  • Output includes densified weight tensor and sparsity map for use in a convolutional neural network.
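
To make the idea concrete, here is a minimal sketch in NumPy of what a row-wise directional squeeze and its companion sparsity map might look like. The function names directional_squeeze and unsqueeze, the choice of the row direction, and the left-packing layout are all illustrative assumptions; the patent application does not disclose this exact procedure.

```python
import numpy as np

def directional_squeeze(weights: np.ndarray):
    """Illustrative row-wise 'directional squeeze' of a sparse 2-D weight tensor.

    Packs the nonzero entries of each row to the left, yielding a smaller
    dense tensor plus a boolean sparsity map recording where each value
    originally lived. Hypothetical sketch, not the patented method.
    """
    sparsity_map = weights != 0                    # True where a real value sits
    max_nnz = int(sparsity_map.sum(axis=1).max())  # widest row after squeezing
    dense = np.zeros((weights.shape[0], max_nnz), dtype=weights.dtype)
    for r, row in enumerate(weights):
        nz = row[row != 0]                         # squeeze the zeros out of this row
        dense[r, :len(nz)] = nz
    return dense, sparsity_map

def unsqueeze(dense: np.ndarray, sparsity_map: np.ndarray) -> np.ndarray:
    """Reverse the squeeze: scatter dense values back to their mapped slots."""
    out = np.zeros(sparsity_map.shape, dtype=dense.dtype)
    for r in range(sparsity_map.shape[0]):
        n = int(sparsity_map[r].sum())
        out[r, sparsity_map[r]] = dense[r, :n]
    return out

W = np.array([[0.0, 1.5, 0.0, -2.0],
              [3.0, 0.0, 0.0,  0.0],
              [0.0, 0.0, 4.5,  0.5]])
dense, smap = directional_squeeze(W)
assert np.array_equal(unsqueeze(dense, smap), W)   # round-trip check
print(dense)  # 3x2 densified tensor in place of the 3x4 sparse one
```

The round-trip check illustrates the property the abstract relies on: the sparsity map preserves enough positional information that the original sparse tensor (and hence the original convolution) can be recovered, while storage and compute operate on the smaller dense shape.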

Potential Applications

The technology could be applied in:

  • Image recognition systems
  • Natural language processing algorithms

Problems Solved

This technology helps in:

  • Improving the efficiency of convolutional neural networks
  • Reducing computational resources required for neural network training

Benefits

The benefits of this technology include:

  • Enhanced performance of convolutional neural networks
  • Reduction in memory usage during neural network operations

Potential Commercial Applications

Potential commercial applications include:

  • Developing advanced AI systems for various industries

Possible Prior Art

Possible prior art includes:

  • Techniques for sparsity optimization in neural networks

Unanswered Questions

How does this technology compare to existing methods for sparsity optimization in neural networks?

The article does not provide a direct comparison with existing methods for sparsity optimization in neural networks.

What are the limitations of this technology in terms of scalability to larger neural network models?

The article does not address the scalability of this technology to larger neural network models.


Original Abstract Submitted

Certain aspects of the present disclosure provide techniques for desparsified convolution. A weight tensor having unstructured sparsity is accessed, and a densified weight tensor is generated based on the weight tensor by directionally squeezing the weight tensor to remove sparse values, and generating a sparsity map based on the directional squeezing. The densified weight tensor and sparsity map are output for use in a convolutional neural network.