18147330. VECTORIZED SPARSE CONVOLUTION simplified abstract (QUALCOMM Incorporated)


VECTORIZED SPARSE CONVOLUTION

Organization Name

QUALCOMM Incorporated

Inventor(s)

Nilaykumar Kantibhai Patel of Nadiad (IN)

Michael Castelloe of San Diego CA (US)

VECTORIZED SPARSE CONVOLUTION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18147330, titled 'VECTORIZED SPARSE CONVOLUTION'.

The present disclosure introduces techniques and apparatus for vectorized sparse convolution: an input tensor is accessed for a convolution operation with a convolution kernel, and an aggregated output value is generated for each valid element in the input tensor based on that kernel. The main steps are listed below, followed by a short code sketch.

  • Input tensor accessed for convolution operation with convolution kernel
  • Aggregated output values generated for valid elements based on convolution kernel
  • Intermediate values calculated for affected output elements
  • Accumulation of intermediate values to generate aggregated output value
  • Output of aggregated output value
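
This step list maps naturally onto a scatter-style loop: each valid input element determines which output elements it affects, produces an intermediate value for each of them, and those values are accumulated into the outputs. Below is a minimal NumPy sketch of that pattern under assumptions of mine, not the patent's implementation: the function name sparse_scatter_conv2d, the coordinate-list input format, and the 'same'-padded 2D convolution semantics (cross-correlation, as machine learning frameworks define convolution) are all illustrative.

  import numpy as np

  def sparse_scatter_conv2d(shape, valid_coords, valid_values, kernel):
      """'Same'-padded 2D convolution that visits only the valid input elements.

      shape        -- (H, W) of the dense input tensor
      valid_coords -- list of (row, col) positions of the valid elements
      valid_values -- values at those positions
      kernel       -- (kH, kW) kernel with odd dimensions (assumed here)
      """
      H, W = shape
      kH, kW = kernel.shape
      out = np.zeros((H, W), dtype=np.result_type(valid_values, kernel))
      for (r, c), v in zip(valid_coords, valid_values):
          # Affected output elements: every output position whose kernel
          # window covers the valid element at (r, c).
          for kr in range(kH):
              for kc in range(kW):
                  orow = r - kr + kH // 2
                  ocol = c - kc + kW // 2
                  if 0 <= orow < H and 0 <= ocol < W:
                      # Intermediate value = kernel weight * valid element;
                      # accumulate it into the aggregated output value.
                      out[orow, ocol] += kernel[kr, kc] * v
      return out

  # Two valid elements in a 5x6 input; the rest of the tensor is never visited.
  coords = [(1, 1), (3, 4)]
  values = np.array([2.0, -1.0])
  print(sparse_scatter_conv2d((5, 6), coords, values, np.ones((3, 3))))

On dense inputs this performs the same arithmetic as an ordinary convolution loop; the benefit appears when the number of valid elements is small relative to the full tensor size, because the per-element work no longer depends on it.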

Potential Applications:
  • Image processing
  • Signal processing
  • Machine learning algorithms

Problems Solved:
  • Efficient computation of convolution operations
  • Handling sparse data in convolution operations

Benefits:
  • Faster processing of convolution operations
  • Reduced computational resources required
  • Improved performance in machine learning tasks

Commercial Applications:
  • AI and machine learning systems
  • Image and video processing software
  • Signal processing applications

Questions about Vectorized Sparse Convolution:

1. How does vectorized sparse convolution improve computational efficiency? Vectorized sparse convolution computes kernel contributions only for the valid elements of the input tensor, so the number of calculations scales with how many valid elements there are rather than with the size of the full tensor, which is a substantial saving when the data are sparse.

2. What are the key differences between traditional convolution and vectorized sparse convolution? Traditional convolution processes every element in the input tensor, while vectorized sparse convolution visits only the valid elements, leading to faster computation and reduced resource usage; a rough operation-count comparison is sketched below.
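
To make the difference concrete, here is a back-of-the-envelope multiply-accumulate count under purely illustrative assumptions (a 256x256 input with 1% valid elements and a 3x3 kernel); the numbers show the scaling argument only and are not figures from the patent.

  # Illustrative operation counts: dense work scales with the full tensor,
  # while sparse work scales with the number of valid elements.
  H, W, kH, kW = 256, 256, 3, 3
  density = 0.01                                # assumed fraction of valid elements

  dense_macs = H * W * kH * kW                  # every input element is visited
  sparse_macs = int(H * W * density) * kH * kW  # only valid elements are visited

  print(f"dense:  {dense_macs:,} multiply-accumulates")   # 589,824
  print(f"sparse: {sparse_macs:,} multiply-accumulates")  # 5,895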


Original Abstract Submitted

Certain aspects of the present disclosure provide techniques and apparatus for vectorized sparse convolution. An input tensor for a convolution operation using a convolution kernel is accessed, where the input tensor comprises a set of valid elements. An aggregated output value is generated for a valid element of the set of valid elements in the input tensor, by determining a set of one or more affected output elements based on the convolution kernel; generating, for each respective affected output element of the set of one or more affected output elements, a respective intermediate value based on the convolution kernel and the valid element; and accumulating the respective intermediate values to generate the aggregated output value. The aggregated output value is output.