Qualcomm Incorporated (20240095872). MEMORY STORAGE FORMAT FOR SUPPORTING MACHINE LEARNING ACCELERATION - Simplified Abstract


MEMORY STORAGE FORMAT FOR SUPPORTING MACHINE LEARNING ACCELERATION

Organization Name

Qualcomm Incorporated

Inventor(s)

Colin Beaton Verrilli of Apex, NC (US)

Natarajan Vaidhyanathan of Carrboro, NC (US)

Matthew Simpson of Durham, NC (US)

Geoffrey Carlton Berry of Durham, NC (US)

Sandeep Pande of Bengaluru (IN)

MEMORY STORAGE FORMAT FOR SUPPORTING MACHINE LEARNING ACCELERATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240095872 titled 'MEMORY STORAGE FORMAT FOR SUPPORTING MACHINE LEARNING ACCELERATION'.

Simplified Explanation

The abstract describes a processor-implemented method for accelerating machine learning on a computing device by optimizing the storage format of neural network data.

  • The method receives an image in a first layer storage format of a neural network.
  • Addresses are assigned to the image pixels of each of the three channels of the first layer storage format so that the pixels can be accessed in a blocked ML storage acceleration format (a toy sketch of one possible address mapping follows this list).
  • The image pixels are stored in the blocked ML storage acceleration format according to the assigned addresses.
  • Inference video processing of the image is accelerated by accessing the pixels through the assigned addresses of the blocked ML storage acceleration format.
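
The abstract does not disclose the block dimensions or the exact address mapping, so the following Python sketch is purely illustrative: it assumes a planar CHW input as the "first layer" format, hypothetical 8x8 tiles, and a channels-innermost ordering within each tile. The function and constant names (blocked_address, to_blocked, BLOCK_H, BLOCK_W) are invented for this illustration and do not come from the patent.

import numpy as np

# Hypothetical tile size; the abstract does not specify block dimensions.
BLOCK_H, BLOCK_W = 8, 8

def blocked_address(c, y, x, width, channels=3,
                    block_h=BLOCK_H, block_w=BLOCK_W):
    # Map a (channel, row, column) pixel coordinate to a linear address in a
    # blocked layout: whole tiles are stored contiguously, and the three
    # channels of each pixel are interleaved inside the tile.
    tiles_per_row = (width + block_w - 1) // block_w
    tile_y, in_y = divmod(y, block_h)
    tile_x, in_x = divmod(x, block_w)
    tile_index = tile_y * tiles_per_row + tile_x
    return (tile_index * block_h * block_w * channels
            + (in_y * block_w + in_x) * channels
            + c)

def to_blocked(image_chw):
    # Store a planar CHW image into a flat buffer at the assigned addresses.
    channels, height, width = image_chw.shape
    tiles_h = (height + BLOCK_H - 1) // BLOCK_H
    tiles_w = (width + BLOCK_W - 1) // BLOCK_W
    buf = np.zeros(tiles_h * tiles_w * BLOCK_H * BLOCK_W * channels,
                   dtype=image_chw.dtype)
    for c in range(channels):
        for y in range(height):
            for x in range(width):
                buf[blocked_address(c, y, x, width, channels)] = image_chw[c, y, x]
    return buf

# Example: a 3-channel 16x16 image held in a planar "first layer" format.
image = np.arange(3 * 16 * 16, dtype=np.uint16).reshape(3, 16, 16)
blocked = to_blocked(image)

The design intent sketched here is that pixels which are consumed together during inference end up adjacent in memory, so the accelerator can fetch them with fewer, larger memory transactions.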

Potential Applications

This technology can be applied in various fields such as image recognition, video processing, autonomous vehicles, and medical imaging.

Problems Solved

This innovation addresses the challenge of optimizing memory storage formats to accelerate machine learning tasks, reducing the cost of accessing image data during inference and improving overall performance.

Benefits

The benefits of this technology include faster inference processing, reduced memory bandwidth and footprint, and improved overall performance of computing devices running machine learning workloads.

Potential Commercial Applications

Potential commercial applications of this technology include smart cameras, surveillance systems, medical imaging devices, autonomous vehicles, and industrial automation systems.

Possible Prior Art

Possible prior art includes blocked or tiled tensor layouts already used by machine learning frameworks and hardware accelerators to improve memory locality, as well as research and patents on memory layout optimization for neural networks in the fields of computer vision and artificial intelligence.

Unanswered Questions

How does this technology compare to existing memory optimization techniques in machine learning acceleration?

This article does not provide a direct comparison with other memory optimization techniques in the context of machine learning acceleration. Further research or comparative studies may be needed to evaluate the effectiveness of this approach.

What are the potential limitations or drawbacks of implementing this storage format optimization method?

The article does not discuss any potential limitations or drawbacks of implementing this storage format optimization method. It would be important to investigate any trade-offs or challenges that may arise from adopting this approach in practical applications.


Original Abstract Submitted

A processor-implemented method for a memory storage format to accelerate machine learning (ML) on a computing device is described. The method includes receiving an image in a first layer storage format of a neural network. The method also includes assigning addresses to image pixels of each of three channels of the first layer storage format for accessing the image pixels in a blocked ML storage acceleration format. The method further includes storing the image pixels in the blocked ML storage acceleration format according to the assigned addresses of the image pixels. The method also includes accelerating inference video processing of the image according to the assigned addresses for the image pixels corresponding to the blocked ML storage acceleration format.
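
As a rough illustration of the final step of the abstract, the sketch below shows why a blocked layout can help an inference kernel: all pixels of a tile, across the three channels, sit at consecutive addresses and can be fetched with a single contiguous read. It continues the hypothetical tile size and the buffer produced by the earlier to_blocked sketch; none of these names or parameters come from the patent.

# Hypothetical tile parameters, matching the earlier sketch; not taken from the abstract.
BLOCK_H, BLOCK_W, CHANNELS = 8, 8, 3

def read_tile(blocked_buf, tile_index):
    # Fetch one tile (all three channels) with a single contiguous slice.
    # Because the blocked format keeps a tile's pixels adjacent in memory,
    # an inference kernel can stream whole tiles instead of issuing many
    # strided per-channel reads from a planar image.
    tile_size = BLOCK_H * BLOCK_W * CHANNELS
    start = tile_index * tile_size
    flat = blocked_buf[start:start + tile_size]
    # Reshape to (block_h, block_w, channels) for the compute stage.
    return flat.reshape(BLOCK_H, BLOCK_W, CHANNELS)

# Example: read the first tile of the buffer built by to_blocked above.
tile0 = read_tile(blocked, tile_index=0)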