18539955. METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS simplified abstract (Intel Corporation)
METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS
Organization Name
Intel Corporation
Inventor(s)
- Martin-Thomas Grymel of Leixlip (IE)
- David Bernard of Kilcullen (IE)
- Cormac Brick of San Francisco, CA (US)
METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS - A simplified explanation of the abstract
This abstract first appeared for US patent application 18539955 titled 'METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS'.
Simplified Explanation
The patent application describes a method for sparse tensor storage in neural network accelerators: generate a sparsity map for the tensor, divide the tensor into storage elements, and apply a two-stage compression that first removes zero points and then stores the compressed elements contiguously in memory. The main components are listed below, followed by an illustrative sketch.
- Sparsity map generating circuitry: Generates a map indicating whether a data point of the tensor is zero.
- Static storage controlling circuitry: Divides the tensor into storage elements.
- Compressor: Performs a first compression that removes zero points from the storage elements based on the sparsity map, and a second compression that stores the compressed storage elements contiguously in memory.
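The patent describes hardware circuitry, but the data flow can be sketched in software. Below is a minimal Python/NumPy sketch under our own assumptions: the function names (generate_sparsity_map, divide_into_storage_elements, compress) and the fixed element size are illustrative choices, not from the patent.

```python
import numpy as np

def generate_sparsity_map(tensor: np.ndarray) -> np.ndarray:
    """One flag per data point: 1 where the value is nonzero, 0 where it is zero."""
    return (tensor != 0).astype(np.uint8)

def divide_into_storage_elements(tensor: np.ndarray, element_size: int) -> list:
    """Split the flattened tensor into fixed-size storage elements.
    (Fixed-size splitting is an assumption; the patent does not fix a scheme.)"""
    flat = tensor.ravel()
    return [flat[i:i + element_size] for i in range(0, flat.size, element_size)]

def compress(storage_elements, sparsity_map: np.ndarray) -> np.ndarray:
    """First compression: drop the zero points from each storage element,
    guided by the sparsity map. Second compression: pack the surviving
    values back to back, mimicking contiguous placement in memory."""
    flat_map = sparsity_map.ravel().astype(bool)
    compressed, offset = [], 0
    for elem in storage_elements:
        mask = flat_map[offset:offset + elem.size]
        compressed.append(elem[mask])
        offset += elem.size
    return np.concatenate(compressed) if compressed else np.empty(0)
```

A small usage example of the sketch:

```python
tensor = np.array([[0, 3, 0, 7],
                   [5, 0, 0, 2]])
smap = generate_sparsity_map(tensor)                      # [[0,1,0,1],[1,0,0,1]]
elems = divide_into_storage_elements(tensor, element_size=4)
packed = compress(elems, smap)                            # [3, 7, 5, 2]
```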
Potential Applications
This technology could be applied in:
- Neural network accelerators
- Data compression systems
- Memory optimization tools
Problems Solved
- Efficient storage of sparse tensors
- Faster processing of neural networks
- Reduced memory usage
Benefits
- Improved performance of neural network accelerators
- Reduced memory footprint
- Enhanced data processing speed
Potential Commercial Applications
- AI hardware manufacturers
- Cloud computing providers
- Data analytics companies
Possible Prior Art
One possible example of prior art is the use of compression algorithms in data storage systems to optimize memory usage and improve processing speed.
Unanswered Questions
How does this technology compare to existing methods of sparse tensor storage?
Answer: The article does not provide a direct comparison with existing methods, so the advantages and disadvantages of this approach remain unclear.
What impact could this technology have on the overall efficiency of neural network accelerators?
Answer: The article does not quantify the efficiency gains that implementing this technology could achieve, leaving a gap in understanding its full impact.
Original Abstract Submitted
Methods, apparatus, systems and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero, static storage controlling circuitry to divide the tensor into one or more storage elements, and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map and perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
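The abstract only covers the compression path, but because the sparsity map records exactly which positions were zero, the scheme is lossless. A hypothetical inverse of the sketch above (the decompress name and NumPy usage are our own, not from the patent) shows how a consumer could rebuild the dense tensor:

```python
import numpy as np

def decompress(packed: np.ndarray, sparsity_map: np.ndarray) -> np.ndarray:
    """Rebuild the dense tensor: scatter the contiguously stored nonzero
    values to the positions flagged 1 in the sparsity map, zeros elsewhere."""
    flat_map = sparsity_map.ravel().astype(bool)
    dense = np.zeros(flat_map.size, dtype=packed.dtype)
    dense[flat_map] = packed
    return dense.reshape(sparsity_map.shape)
```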