Intel Corporation (20240134786). METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS: simplified abstract
METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS
Organization Name
Intel Corporation
Inventor(s)
Martin-Thomas Grymel of Leixlip (IE)
David Bernard of Kilcullen (IE)
Cormac Brick of San Francisco, CA (US)
METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240134786, titled 'METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS'.
Simplified Explanation
The patent application describes methods, apparatus, systems, and articles of manufacture for sparse tensor storage for neural network accelerators. The apparatus includes circuitry to generate a sparsity map, to divide a tensor into storage elements, and to compress those elements so the tensor can be stored efficiently in memory (a sketch of these steps follows the list below).
- Sparsity map generating circuitry: Generates a map indicating, for each data point of the tensor, whether that point is zero.
- Static storage controlling circuitry: Divides the tensor into one or more storage elements.
- Compressor: Performs a first compression that removes zero points from the storage elements based on the sparsity map, then a second compression that stores the compressed elements contiguously in memory.
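The filing includes no reference code, but a minimal Python sketch can make the first compression concrete. Everything below is an illustrative assumption: the function names, the use of NumPy, and the fixed storage-element size are not taken from the application.

```python
import numpy as np

def generate_sparsity_map(tensor: np.ndarray) -> np.ndarray:
    # One boolean per data point: True where the point is nonzero.
    return tensor != 0

def split_into_storage_elements(flat: np.ndarray, element_size: int) -> list:
    # Divide the flattened tensor into fixed-size storage elements.
    return [flat[i:i + element_size] for i in range(0, flat.size, element_size)]

def first_compression(elements: list, map_chunks: list) -> list:
    # First compression: drop zero points, as flagged by the sparsity map.
    return [elem[mask] for elem, mask in zip(elements, map_chunks)]

# Example: a 2x4 int8 tensor that is mostly zeros.
tensor = np.array([[0, 3, 0, 0],
                   [5, 0, 0, 7]], dtype=np.int8)
sparsity_map = generate_sparsity_map(tensor)

elements = split_into_storage_elements(tensor.ravel(), element_size=4)
map_chunks = split_into_storage_elements(sparsity_map.ravel(), element_size=4)

compressed = first_compression(elements, map_chunks)
# compressed -> [array([3], dtype=int8), array([5, 7], dtype=int8)]
```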
Potential Applications
This technology could be applied in:
- Neural network accelerators
- Machine learning systems
- Data processing applications
Problems Solved
- Efficient storage of sparse tensors
- Optimization of memory usage in neural network accelerators
Benefits
- Reduced memory footprint (see the rough illustration after this list)
- Faster data processing
- Improved performance of neural network accelerators
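As a rough illustration of the footprint reduction (the numbers are hypothetical, not from the filing): for an 8-bit tensor that is 70% zeros, storing only the nonzero bytes plus a one-bit-per-point sparsity map needs about 42.5% of the dense size.

```python
n = 1_000_000        # data points, one byte each (assumed)
zero_fraction = 0.7  # assumed sparsity
dense_bytes = n
# Nonzero payload plus a 1-bit-per-point sparsity map:
sparse_bytes = int(n * (1 - zero_fraction)) + n // 8
print(dense_bytes, sparse_bytes)  # 1000000 425000
```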
Potential Commercial Applications
"Optimizing Memory Usage in Neural Network Accelerators"
Possible Prior Art
There may be prior art related to:
- Sparse tensor storage techniques
- Compression methods for memory optimization
Unanswered Questions
How does this technology compare to existing methods for sparse tensor storage?
This article does not provide a direct comparison to existing methods for sparse tensor storage. Further research or testing may be needed to determine the advantages and limitations of this technology in comparison to others.
What impact could this technology have on the efficiency of neural network accelerators in real-world applications?
The article does not discuss the real-world impact of this technology on the efficiency of neural network accelerators. Additional studies or case studies may be necessary to evaluate the practical benefits of implementing this innovation.
Original Abstract Submitted
methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. an example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
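To make the two compressions in the abstract concrete, here is a hypothetical continuation of the earlier sketch showing the second compression, which packs the zero-stripped storage elements end-to-end in memory. The per-element offset bookkeeping is an assumption; the application only states that the compressed storage elements are stored contiguously.

```python
import numpy as np

def second_compression(compressed_elements: list):
    # Second compression: store the compressed storage elements
    # contiguously in one buffer. Offsets (an illustrative assumption)
    # record where each element starts within the buffer.
    offsets, pos = [], 0
    for elem in compressed_elements:
        offsets.append(pos)
        pos += elem.size
    buffer = np.concatenate(compressed_elements)
    return buffer, offsets

# Continuing the earlier example:
# second_compression([np.array([3]), np.array([5, 7])])
# -> (array([3, 5, 7]), [0, 1])
```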