Qualcomm Incorporated (20240095493). DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS simplified abstract

From WikiPatents

DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS

Organization Name

Qualcomm Incorporated

Inventor(s)

Jamie Menjay Lin of San Diego CA (US)

Jian Shen of San Diego CA (US)

DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240095493, titled 'DESPARSIFIED CONVOLUTION FOR SPARSE TENSORS'.

Simplified Explanation

The present disclosure provides techniques for desparsified convolution in a convolutional neural network. A weight tensor with unstructured sparsity is accessed; a densified weight tensor is generated by directionally squeezing the weight tensor to remove sparse values, and a sparsity map is generated based on that directional squeezing. The densified weight tensor and sparsity map are then output for use in a convolutional neural network, as the list and sketch below illustrate.

  • Weight tensor with unstructured sparsity is accessed
  • Densified weight tensor is generated by directionally squeezing the weight tensor
  • Sparsity map is generated based on the directional squeezing
  • Densified weight tensor and sparsity map are output for use in a convolutional neural network
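The application itself does not publish reference code. Purely as an illustration, here is a minimal NumPy sketch of what the directional squeezing and sparsity-map generation described above might look like, assuming that "sparse values" means zero-valued weights and that the sparsity map is simply a boolean mask of the original nonzero positions. The function name desparsify_weights and its interface are hypothetical, not taken from the patent.

```python
import numpy as np

def desparsify_weights(weights: np.ndarray, axis: int = -1):
    """Hypothetical sketch of directional squeezing for a sparse weight tensor.

    Nonzero values in each 1-D slice along `axis` are pushed toward the
    front of the slice, producing a densified tensor, while a boolean
    sparsity map records which original positions held nonzero values.
    """
    # Sparsity map: True where the original weight is nonzero.
    sparsity_map = weights != 0

    # Move the squeeze axis to the end so each slice can be packed independently.
    moved = np.moveaxis(weights, axis, -1)
    flat = np.ascontiguousarray(moved).reshape(-1, moved.shape[-1])
    dense_flat = np.zeros_like(flat)

    # Directional squeeze: copy each slice's nonzeros to its leading positions.
    for i, row in enumerate(flat):
        nonzeros = row[row != 0]
        dense_flat[i, : nonzeros.size] = nonzeros

    densified = np.moveaxis(dense_flat.reshape(moved.shape), -1, axis)
    return densified, sparsity_map


if __name__ == "__main__":
    # Toy 3x6 weight matrix with unstructured (randomly placed) zeros.
    w = np.array([[0.0, 1.2, 0.0, 0.0, -0.7, 0.0],
                  [0.3, 0.0, 0.0, 2.1, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 0.0, 0.5]])
    dense, smap = desparsify_weights(w, axis=1)
    print(dense)  # nonzeros packed to the left of each row
    print(smap)   # boolean map of the original nonzero positions
```

In this toy form the densified tensor keeps its original shape and is merely left-packed; a practical implementation could instead truncate each slice to the maximum nonzero count to save memory and compute.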

Potential Applications

Because it operates on convolutional neural network weights, this technology can be applied in fields that rely on such networks, including image recognition, natural language processing, and signal processing.

Problems Solved

This technology addresses the inefficiency of convolving with weight tensors that contain many scattered sparse values: by squeezing those values out and tracking them in a sparsity map, the convolution can operate on a densified weight tensor rather than on unstructured sparsity.

Benefits

Potential benefits include enhanced performance of convolutional neural networks, improved training speed, and better utilization of computational resources.

Potential Commercial Applications

One potential commercial application of this technology is in developing advanced computer vision systems for autonomous vehicles.

Possible Prior Art

Possible prior art includes existing techniques for sparsity optimization in neural networks, such as weight-pruning algorithms.

Unanswered Questions

How does this technology compare to existing methods for desparsified convolution?

This article does not provide a direct comparison with existing methods for desparsified convolution.

What are the limitations of this technology in real-world applications?

This article does not discuss the potential limitations of implementing this technology in real-world applications.


Original Abstract Submitted

certain aspects of the present disclosure provide techniques for desparsified convolution. a weight tensor having unstructured sparsity is accessed, and a densified weight tensor is generated based on the weight tensor by directionally squeezing the weight tensor to remove sparse values, and generating a sparsity map based on the directional squeezing. the densified weight tensor and sparsity map are output for use in a convolutional neural network.
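The abstract only states that the densified weight tensor and sparsity map are output "for use in a convolutional neural network", without detailing how the convolution consumes them. For completeness, here is an equally hypothetical sketch of one way that consumption could work, treating a 1x1 convolution at a single spatial location as a per-output-channel dot product and using the sparsity map to gather the activations that match the packed weights; the function and argument names are illustrative only.

```python
import numpy as np

def pointwise_conv_with_densified(dense_w, sparsity_map, activations):
    """Hypothetical use of a densified weight tensor and its sparsity map.

    dense_w:       (out_channels, in_channels) weights with nonzeros packed
                   to the front of each row (as produced by the sketch above).
    sparsity_map:  (out_channels, in_channels) boolean mask of the original
                   nonzero positions.
    activations:   (in_channels,) input vector at one spatial location, so a
                   1x1 convolution reduces to a dot product per output channel.
    """
    out = np.zeros(dense_w.shape[0], dtype=activations.dtype)
    for oc in range(dense_w.shape[0]):
        idx = np.flatnonzero(sparsity_map[oc])                 # inputs that matter
        out[oc] = dense_w[oc, : idx.size] @ activations[idx]   # dense dot product
    return out
```

Gathering activations this way keeps the multiply-accumulate work proportional to the number of nonzero weights, which is the usual motivation for densifying an unstructured-sparse tensor.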