17988739. STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING simplified abstract (Samsung Electronics Co., Ltd.)
Contents
- 1 STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING
Organization Name
Samsung Electronics Co., Ltd.
Inventor(s)
Ardavan Pedram of Santa Clara CA (US)
Jong Hoon Shin of San Jose CA (US)
Joseph H. Hassoun of Los Gatos CA (US)
STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING - A simplified explanation of the abstract
This abstract first appeared for US patent application 17988739 titled 'STRUCTURED SPARSE MEMORY HIERARCHY FOR DEEP LEARNING'.
Simplified Explanation
The patent application describes a memory system and method for training a neural network model in which compressed activation and weight tensors are decompressed to predetermined sparsity densities before a neural processing unit computes results from them.
- A decompressor unit decompresses the compressed activation and weight tensors to predetermined sparsity densities.
- A buffer unit receives the decompressed tensors.
- A neural processing unit computes results based on the sparsity densities of the tensors.
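The abstract does not specify the compression format, so as an illustration only, the decompression step above can be sketched with 2:4 structured sparsity (two nonzeros kept per group of four elements, i.e. a 50% predetermined density); the function name and the values/indices layout are assumptions, not the patented design:

```python
import numpy as np

def decompress_2to4(values, indices, group_size=4):
    """Decompress a 2:4 structured-sparse tensor (hypothetical format).

    values:  flat array of kept nonzeros, 2 per group of 4
    indices: position (0-3) of each kept value within its group
    Returns the dense 1-D tensor at the predetermined 50% density.
    """
    n_groups = len(values) // 2
    dense = np.zeros(n_groups * group_size, dtype=values.dtype)
    for g in range(n_groups):
        for k in range(2):
            # Scatter each kept nonzero back to its in-group position.
            dense[g * group_size + indices[2 * g + k]] = values[2 * g + k]
    return dense

# Example: two groups of four, two nonzeros kept per group.
vals = np.array([1.5, -2.0, 3.0, 0.5], dtype=np.float32)
idxs = np.array([0, 3, 1, 2])
print(decompress_2to4(vals, idxs))  # [ 1.5  0.   0.  -2.   0.   3.   0.5  0. ]
```

Storing only the kept values plus 2-bit in-group indices is what makes the compressed form smaller than the dense tensor the buffer unit ultimately receives.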
Potential Applications
This technology can be applied in:
- Deep learning models
- Image recognition systems
- Natural language processing tasks
Problems Solved
This technology helps in:
- Efficient training of neural network models
- Reducing memory usage during training
- Improving computational performance
Benefits
The benefits of this technology include:
- Faster training of neural networks
- Reduced memory requirements
- Enhanced accuracy of neural network models
Potential Commercial Applications
A potential commercial application for this technology could be:
- Developing advanced AI systems for various industries
Possible Prior Art
One possible prior art for this technology could be:
- Existing methods for compressing and decompressing tensors in neural networks
Unanswered Questions
How does this technology compare to existing methods for training neural networks?
This technology optimizes memory usage and computational performance by decompressing tensors to predetermined sparsity densities before processing them.
What impact does this technology have on the accuracy of neural network models?
This technology can help maintain or improve model accuracy by enabling efficient training and computation at controlled, predetermined sparsity densities.
Original Abstract Submitted
A memory system and a method are disclosed for training a neural network model. A decompressor unit decompresses an activation tensor to a first predetermined sparsity density based on the activation tensor being compressed, and decompresses a weight tensor to a second predetermined sparsity density based on the weight tensor being compressed. A buffer unit receives the activation tensor at the first predetermined sparsity density and the weight tensor at the second predetermined sparsity density. A neural processing unit receives the activation tensor and the weight tensor from the buffer unit and computes a result for the activation tensor and the weight tensor based on the first predetermined sparsity density of the activation tensor and based on the second predetermined sparsity density of the weight tensor.
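The three-stage flow in the abstract (decompressor unit, buffer unit, neural processing unit) can be sketched in software as follows; the stage names, the tensor lengths, and the zero-skipping compute are illustrative assumptions, since the abstract does not disclose the hardware's internal operation:

```python
import numpy as np
from collections import deque

def decompress(values, indices, length):
    """Decompressor stage: scatter kept nonzeros into a dense tensor
    of a predetermined length/density (hypothetical compressed format)."""
    dense = np.zeros(length, dtype=np.float32)
    dense[indices] = values
    return dense

# Buffer stage: holds decompressed (activation, weight) pairs for the NPU.
buffer_unit = deque()

# Decompress a compressed activation tensor and a compressed weight tensor.
act = decompress(np.array([1.0, 2.0]), np.array([0, 3]), length=8)
wgt = decompress(np.array([0.5, -1.0]), np.array([3, 5]), length=8)
buffer_unit.append((act, wgt))

# NPU stage: compute a result, exploiting sparsity by multiplying only
# positions where both operands are nonzero.
a, w = buffer_unit.popleft()
nz = np.nonzero(a * w)[0]
result = float(np.dot(a[nz], w[nz]))
print(result)  # 1.0  (only position 3 contributes: 2.0 * 0.5)
```

Because both tensors arrive at known sparsity densities, a real processing unit could size its multiply-accumulate array for that density rather than for fully dense operands, which is the efficiency argument the abstract makes.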