18393320. IMPLEMENTING NEURAL NETWORKS IN HARDWARE simplified abstract (Imagination Technologies Limited)

IMPLEMENTING NEURAL NETWORKS IN HARDWARE

Organization Name

Imagination Technologies Limited

Inventor(s)

Xiran Huang of Hertfordshire (GB)

IMPLEMENTING NEURAL NETWORKS IN HARDWARE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18393320, titled 'IMPLEMENTING NEURAL NETWORKS IN HARDWARE'.

The abstract describes a method of implementing a neural network in hardware in which the network's layers are grouped into layer groups, each processed in a single pass through the hardware, and the layer groups are in turn grouped into tile groups for evaluation. A portion of the input data for the first layer group in a tile group is pre-fetched into a buffer slot in on-chip memory, and that slot is released only after the layer group's output data has been written to memory (a minimal code sketch follows the list below).

  • Neural network implemented in hardware
  • Layers grouped into layer groups and tile groups
  • Pre-fetching input data for efficient processing
  • Utilizing on-chip memory effectively
  • Releasing buffer slots after output data is written
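
The mechanics of this scheme can be sketched in a few lines of Python. This is a minimal illustration under assumed names, not the patent's implementation: BufferPool, Dram, and evaluate_tile_group are hypothetical stand-ins, and the doubling computation is a placeholder for the real hardware pass. What it demonstrates is the ordering the abstract specifies: acquire a slot, pre-fetch the input portion, evaluate the layer group in a single pass, write the output to memory, and only then release the slot.

    from collections import deque

    class BufferPool:
        """A fixed set of on-chip buffer slots, acquired and released explicitly."""

        def __init__(self, num_slots):
            self.free = deque(range(num_slots))
            self.data = {}

        def acquire(self):
            if not self.free:
                raise RuntimeError("no free on-chip buffer slot")
            return self.free.popleft()

        def release(self, slot):
            self.data.pop(slot, None)
            self.free.append(slot)

    class Dram:
        """Stand-in for off-chip memory holding layer-group inputs and outputs."""

        def __init__(self, contents):
            self.store = dict(contents)

        def read(self, region):
            return self.store[region]

        def write(self, region, values):
            self.store[region] = values

    def evaluate_tile_group(tile_group, pool, dram):
        """Evaluate the layer groups of one tile group in sequence."""
        for layer_group in tile_group:
            slot = pool.acquire()
            # Pre-fetch a portion of the layer group's input into on-chip memory.
            pool.data[slot] = dram.read(layer_group["input"])
            # One pass through the hardware evaluates the whole layer group
            # (a placeholder computation stands in for the real pass).
            output = [2 * x for x in pool.data[slot]]
            # Write the output back to memory, and only then free the slot.
            dram.write(layer_group["output"], output)
            pool.release(slot)

    dram = Dram({"lg0_in": [1, 2, 3]})
    tile_group = [{"input": "lg0_in", "output": "lg0_out"}]
    evaluate_tile_group(tile_group, BufferPool(num_slots=2), dram)
    print(dram.store["lg0_out"])  # [2, 4, 6]

Releasing the slot only after the output write keeps the on-chip data valid until the layer group's results are safely in memory; freeing it earlier could allow a subsequent pre-fetch to overwrite data that is still needed.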

Potential Applications

  • Accelerated neural network processing
  • Efficient hardware implementation for AI applications
  • Real-time data analysis in embedded systems

Problems Solved

  • Improving the processing speed of neural networks
  • Optimizing memory usage in hardware implementations

Benefits

  • Faster execution of neural networks
  • Reduced memory overhead
  • Enhanced performance in AI applications

Commercial Applications

This technology can be utilized in industries such as:

  • Autonomous vehicles
  • Robotics
  • Medical imaging
  • Natural language processing

Prior Art

Prior research in hardware acceleration for neural networks and memory optimization techniques can provide insights into similar approaches.

Frequently Updated Research

Stay updated on advancements in hardware acceleration for neural networks and memory optimization techniques for improved efficiency.

Questions about Neural Network Hardware Implementation

1. How does pre-fetching input data improve the processing speed of neural networks? (A rough timing sketch follows below.)
2. What are the key advantages of grouping layer groups into tile groups for evaluation?
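
As a rough illustration of the first question: if pre-fetching allows the next layer group's input transfer to overlap with the current group's compute, each step costs roughly max(fetch, compute) instead of fetch + compute. The back-of-the-envelope model below uses invented timings, and the overlap assumption is ours rather than something the abstract states.

    # Invented per-layer-group timings, for illustration only.
    fetch_us, compute_us, num_groups = 40, 50, 8

    # Serial: every layer group waits for its own input fetch.
    serial = num_groups * (fetch_us + compute_us)

    # Pre-fetched: only the first fetch is exposed; later fetches
    # hide behind the previous group's compute.
    overlapped = fetch_us + num_groups * max(fetch_us, compute_us)

    print(serial, overlapped)  # 720 440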


Original Abstract Submitted

Methods of implementing a neural network in hardware, the neural network including a plurality of layers and the layers being grouped into a plurality of layer groups, each layer group comprising one or more layers of the neural network that are processed in a single pass through the hardware. The layer groups are grouped into a plurality of tile groups, each tile group comprising a set of layer groups that are evaluated when executing the neural network. The method comprises pre-fetching a portion of the input data for a first layer group in a tile group into a buffer slot in on-chip memory; and subsequently releasing the buffer slot after output data for the first layer group has been written to memory.