Microsoft Technology Licensing, LLC (20240134439). ANALOG MAC AWARE DNN IMPROVEMENT simplified abstract


ANALOG MAC AWARE DNN IMPROVEMENT

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Gilad Kirshenboim of Petach Tiqva (IL)

Ran Sahar of Evan Yehuda (IL)

Douglas C. Burger of Bellevue WA (US)

Yehonathan Refael Kalim of Herzeliya (IL)

ANALOG MAC AWARE DNN IMPROVEMENT - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240134439, titled 'ANALOG MAC AWARE DNN IMPROVEMENT'.

Simplified Explanation

The patent application focuses on improving the performance of a hardware accelerator, such as a neural processor, by selectively varying the precision of its multiply-and-accumulate processing elements (MAC PEs) to reduce power consumption. Power is conserved by dynamically controlling the number of analog-to-digital converter (ADC) output bits for the MAC PEs, based on precision information obtained during training and post-training (e.g., quantization) of the artificial intelligence neural network model the processor implements.

  • Selectively varying MAC PE precision reduces the power consumption of a neural processor.
  • Power is conserved by dynamically controlling the precision of the ADC output bits for one or more MAC PEs.
  • Precision information obtained during training and post-training (e.g., quantization) of the AI neural network model drives this dynamic control, as illustrated in the sketch below.
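
To make the idea concrete, here is a minimal, hypothetical sketch of the runtime side. It assumes a NodePrecision record (a minimum/maximum ADC output bit width per computation-graph node, collected during training or quantization) and a select_adc_bits helper; neither name appears in the patent text, and the sketch is not the patent's implementation.

    # Hypothetical sketch, not the patent's implementation: choosing the ADC
    # output bit width for the MAC PEs that execute one computation-graph node.
    from dataclasses import dataclass

    @dataclass
    class NodePrecision:
        """Precision range recorded for one node during training/quantization."""
        node_id: str
        min_bits: int  # lowest ADC output precision that preserved accuracy
        max_bits: int  # highest precision the ADC hardware supports

    def select_adc_bits(node: NodePrecision, power_constrained: bool) -> int:
        """Fewer ADC output bits generally cost less conversion energy, so under
        a tight power budget fall back to the minimum precision that training
        showed to be sufficient; otherwise keep full precision."""
        return node.min_bits if power_constrained else node.max_bits

    # Example: a layer whose outputs only needed 6 of the ADC's 10 bits.
    layer = NodePrecision(node_id="conv3", min_bits=6, max_bits=10)
    print(select_adc_bits(layer, power_constrained=True))   # -> 6
    print(select_adc_bits(layer, power_constrained=False))  # -> 10

In this reading, a scheduler or controller would apply such a selection per node (or per group of MAC PEs), trading conversion energy against the precision each part of the model was shown to need.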

Potential Applications

This technology can be applied in various fields such as:

  • Artificial intelligence
  • Edge computing
  • Internet of Things (IoT) devices

Problems Solved

The technology addresses the following issues:

  • High power consumption in hardware accelerators
  • Precision control in multiply and accumulate processing elements
  • Dynamic power management in neural processors

Benefits

The benefits of this technology include:

  • Reduced power consumption
  • Improved performance of hardware accelerators
  • Enhanced efficiency in artificial intelligence applications

Potential Commercial Applications

Potential commercial applications of this technology include:

  • AI processors for edge devices
  • IoT devices with AI capabilities
  • Energy-efficient neural processors for various industries

Possible Prior Art

One possible prior art in this field is the use of dynamic voltage and frequency scaling techniques to optimize power consumption in hardware accelerators.

Unanswered Questions

How does this technology compare to existing power optimization methods in hardware accelerators?

This article does not provide a direct comparison with other power optimization techniques in hardware accelerators. Further research and analysis are needed to evaluate the effectiveness of this technology in comparison to existing methods.

What are the specific parameters used to determine the dynamic precision control of ADC output bits for MAC PEs?

The article does not delve into the specific parameters or algorithms used to determine the dynamic precision control of ADC output bits. Additional information on the calculation and decision-making process would provide a deeper understanding of the technology.


Original Abstract Submitted

Methods, systems and computer program products are provided for improving performance (e.g., reducing power consumption) of a hardware accelerator (e.g., neural processor) comprising hybrid or analog multiply and accumulate (MAC) processing elements (PEs). Selective variation of the precision of an array of MAC PEs may reduce power consumption of a neural processor. Power may be conserved by dynamically controlling the precision of analog to digital (ADC) output bits for one or more MAC PEs. Dynamic control of ADC output bit precision may be based on precision information determined during training and/or post-training (e.g., quantization) of an artificial intelligence (AI) neural network (NN) model implemented by the neural processor. Precision information may include a range of dynamic precision for each of a plurality of nodes of a computation graph for the AI NN model.
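
As a closing illustration of the abstract's "range of dynamic precision for each node of a computation graph" idea, the hypothetical sketch below estimates how many ADC output bits each node would need from its observed activation range during post-training quantization. The function name, error bound, and example values are illustrative assumptions, not details taken from the application.

    # Hypothetical sketch: deriving per-node precision information after training.
    # The per-node "dynamic precision" idea is from the abstract; the formula and
    # names below are illustrative assumptions.
    import math

    def required_bits(max_abs_value: float, tolerable_error: float) -> int:
        """Smallest bit width whose uniform quantization step over
        [-max_abs_value, max_abs_value] keeps the worst-case rounding error
        at or below tolerable_error (step = max/2**(bits-1), error = step/2)."""
        return max(1, math.ceil(math.log2(max_abs_value / tolerable_error)))

    # Largest activation magnitude observed per computation-graph node.
    observed = {"conv1": 3.2, "conv2": 0.9, "fc1": 0.4}

    precision_info = {node: required_bits(peak, tolerable_error=0.01)
                      for node, peak in observed.items()}
    print(precision_info)  # {'conv1': 9, 'conv2': 7, 'fc1': 6}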