18393536. MAPPING NEURAL NETWORKS TO HARDWARE simplified abstract (Imagination Technologies Limited)

From WikiPatents

MAPPING NEURAL NETWORKS TO HARDWARE

Organization Name

Imagination Technologies Limited

Inventor(s)

Xiran Huang of Hertfordshire (GB)

MAPPING NEURAL NETWORKS TO HARDWARE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18393536, titled 'MAPPING NEURAL NETWORKS TO HARDWARE'.

The abstract describes a method of mapping a neural network to hardware by grouping layers into tile groups based on memory access patterns and output data criteria.

  • Neural network mapped to hardware by grouping layers into tile groups
  • Tile groups consist of layer groups processed in a single pass through hardware
  • Selection of layer groups for tile groups based on memory access patterns
  • Decision to merge layer groups based on space in on-chip memory and output data criteria
  • Optimization of hardware processing for efficient execution of neural network
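The hierarchy described in the bullets above can be sketched in code. This is an illustrative model only; the class names and fields are assumptions for exposition, not structures defined in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    """A single neural network layer."""
    name: str

@dataclass
class LayerGroup:
    """One or more layers processed in a single pass through the hardware."""
    layers: list

@dataclass
class TileGroup:
    """A set of layer groups evaluated together when executing the network."""
    layer_groups: list = field(default_factory=list)

# Example: a four-layer network split into two layer groups, both placed
# in one tile group so intermediate data can remain in on-chip memory.
lg1 = LayerGroup([Layer("conv1"), Layer("relu1")])
lg2 = LayerGroup([Layer("conv2"), Layer("relu2")])
tile = TileGroup([lg1, lg2])
```

In this sketch, which layer groups end up in the same tile group would be decided by the memory-access and output-data criteria the method describes.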

Potential Applications:

  • Accelerated execution of neural networks on hardware
  • Optimization of memory access patterns for improved performance
  • Efficient mapping of complex neural networks to specialized hardware

Problems Solved:

  • Addressing memory access bottlenecks in neural network processing
  • Improving hardware efficiency for executing complex neural networks

Benefits:

  • Faster execution of neural networks on specialized hardware
  • Enhanced performance and efficiency in processing complex neural networks
  • Optimization of memory access patterns for improved overall performance

Commercial Applications: This technology can be applied in industries such as:

  • AI hardware development
  • Edge computing devices
  • Autonomous vehicles
  • Robotics
  • Cloud computing infrastructure

Prior Art: Researchers can explore prior work on hardware acceleration of neural networks, memory access optimization, and efficient mapping techniques for complex algorithms.

Frequently Updated Research: Stay updated on advancements in hardware acceleration techniques for neural networks, memory optimization strategies, and innovations in efficient processing of complex algorithms.

Questions about Neural Network Mapping for Hardware Acceleration:

1. How does this method improve the efficiency of executing neural networks on hardware?

   By grouping layers into tile groups based on memory access patterns and output data criteria, the method optimizes hardware processing for faster and more efficient execution of neural networks.

2. What are the potential implications of this technology in the development of specialized AI hardware?

   This technology can lead to more efficient and powerful AI hardware solutions, enabling faster and more accurate neural network processing in various applications.


Original Abstract Submitted

A neural network is mapped to hardware by defining a plurality of layer groups, each layer group comprising one or more layers of the neural network that are processed in a single pass through the hardware. The layer groups are grouped into tile groups, each tile group comprising a set of layer groups that are evaluated when executing the neural network. Grouping the layer groups into a tile group comprises selecting a layer group that precedes a first layer group in the tile group and determining a number of times that input data to the layer group is read from memory. In response to this number exceeding a threshold, it is determined whether to merge the layer group into the tile group by determining an amount of space in on-chip memory required for storing pre-fetched input data and assessing one or more criteria relating to output data of the layer group.
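The merge decision in the abstract can be sketched as a simple predicate. This is a hedged illustration only: the threshold, the on-chip capacity, and the output-data criterion (modeled here as a size limit) are assumed placeholder values, since the patent does not disclose concrete numbers or the exact form of the criteria.

```python
# Assumed illustrative parameters; the patent specifies neither value.
READ_THRESHOLD = 2          # min. input reads before merging is considered
ON_CHIP_CAPACITY = 1 << 20  # assume 1 MiB of on-chip memory

def should_merge(input_reads, prefetch_bytes, output_bytes,
                 reserved_bytes=0, max_output_bytes=1 << 18):
    """Decide whether a preceding layer group should be merged into a
    tile group, following the abstract's structure: merging is only
    considered when input data would otherwise be read from memory more
    than a threshold number of times, and requires (a) enough free
    on-chip memory for the pre-fetched input data and (b) criteria on
    the layer group's output data (here, an assumed size limit) to hold.
    """
    if input_reads <= READ_THRESHOLD:
        return False  # input is re-read rarely; merging gains little
    fits_on_chip = prefetch_bytes <= ON_CHIP_CAPACITY - reserved_bytes
    output_ok = output_bytes <= max_output_bytes
    return fits_on_chip and output_ok
```

In this sketch, a layer group whose input is read often and whose pre-fetched input and output data both fit the assumed limits would be merged; any other combination leaves it outside the tile group.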