17491910. TRANSLATING ARTIFICIAL NEURAL NETWORK SOFTWARE WEIGHTS TO HARDWARE-SPECIFIC ANALOG CONDUCTANCES simplified abstract (International Business Machines Corporation)

From WikiPatents

Organization Name

International Business Machines Corporation

Inventor(s)

Charles Mackin of San Jose CA (US)

Geoffrey Burr of San Jose CA (US)

Jonathan Paul Timcheck of Palo Alto CA (US)

TRANSLATING ARTIFICIAL NEURAL NETWORK SOFTWARE WEIGHTS TO HARDWARE-SPECIFIC ANALOG CONDUCTANCES - A simplified explanation of the abstract

This abstract first appeared for US patent application 17491910, titled 'TRANSLATING ARTIFICIAL NEURAL NETWORK SOFTWARE WEIGHTS TO HARDWARE-SPECIFIC ANALOG CONDUCTANCES'.

Simplified Explanation

The abstract describes a method for translating artificial neural network (ANN) software weights into analog conductances on an analog non-volatile memory device while accounting for conductance non-idealities. The key steps are:

  • The method involves reading a set of target synaptic weights from an artificial neural network.
  • Each target synaptic weight is mapped to one or more conductance values.
  • A hardware model is applied to these conductance values, resulting in hardware-adjusted conductance values that correspond to an analog non-volatile memory device.
  • The hardware-adjusted conductance values are then mapped back to hardware-adjusted synaptic weights.
  • The conductance values are optimized to minimize the error between the target synaptic weights and the hardware-adjusted synaptic weights.
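The steps above can be sketched in code. This is a minimal illustration under stated assumptions, not the patent's actual implementation: the differential-pair mapping (each weight stored as a pair of conductances G+ and G-), the saturating `hardware_model`, the constant `G_MAX`, and the simple iterative correction rule are all hypothetical choices made for the sketch.

```python
import numpy as np

G_MAX = 25.0  # hypothetical maximum programmable conductance (arbitrary units)

def weights_to_conductances(w):
    """Map each signed weight in [-1, 1] to a differential pair (G+, G-)."""
    g_pos = np.clip(w, 0.0, None) * G_MAX
    g_neg = np.clip(-w, 0.0, None) * G_MAX
    return g_pos, g_neg

def hardware_model(g):
    """Toy non-ideality: the programmed conductance saturates toward G_MAX."""
    return G_MAX * (1.0 - np.exp(-g / G_MAX))

def conductances_to_weights(g_pos, g_neg):
    """Map hardware-adjusted conductance pairs back to synaptic weights."""
    return (g_pos - g_neg) / G_MAX

def optimize_conductances(target_w, iters=200, lr=0.5):
    """Iteratively adjust the programmed conductances so that the
    hardware-adjusted weights approach the software targets."""
    g_pos, g_neg = weights_to_conductances(target_w)
    for _ in range(iters):
        w_hw = conductances_to_weights(hardware_model(g_pos),
                                       hardware_model(g_neg))
        err = w_hw - target_w  # error metric to minimize
        # Nudge the conductance pair to cancel the residual weight error.
        g_pos = np.clip(g_pos - lr * err * G_MAX, 0.0, 3.0 * G_MAX)
        g_neg = np.clip(g_neg + lr * err * G_MAX, 0.0, 3.0 * G_MAX)
    w_hw = conductances_to_weights(hardware_model(g_pos),
                                   hardware_model(g_neg))
    return g_pos, g_neg, w_hw

target = np.array([-0.8, -0.2, 0.0, 0.3, 0.8])
g_pos, g_neg, w_hw = optimize_conductances(target)
```

Programming the naive conductances directly would leave a visible error once the saturating non-ideality is applied; the correction loop drives that error down by compensating in the programmed values.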

Potential applications of this technology:

  • This method can be used in the development of analog non-volatile memory devices that can store and process neural network weights efficiently.
  • It can be applied in various fields where artificial neural networks are used, such as machine learning, pattern recognition, and data analysis.
  • The technology can enable the deployment of ANN models on hardware platforms that utilize analog non-volatile memory.

Problems solved by this technology:

  • Conductance non-idealities can affect the accuracy and reliability of analog non-volatile memory devices when used for storing neural network weights.
  • This method addresses the challenge of mapping software weights to analog conductances while considering the non-idealities of the hardware platform.
  • It helps minimize the error between the target synaptic weights and the hardware-adjusted synaptic weights, improving the overall performance of the system.

Benefits of this technology:

  • The translation of software weights to analog conductances allows for efficient storage and processing of neural network models in analog non-volatile memory devices.
  • The optimization of conductance values helps to minimize the error between the target and hardware-adjusted synaptic weights, improving the accuracy of the system.
  • This technology enables the deployment of artificial neural networks on hardware platforms that utilize analog non-volatile memory, potentially leading to faster and more energy-efficient computations.


Original Abstract Submitted

Translation of artificial neural network (ANN) software weights to analog conductances in the presence of conductance non-idealities for deployment to an analog non-volatile memory device is provided. A plurality of target synaptic weights of an artificial neural network is read. The plurality of target synaptic weights is mapped to a plurality of conductance values, each of the plurality of target synaptic weights being mapped to at least one of the plurality of conductance values. A hardware model is applied to the plurality of conductance values, thereby determining a plurality of hardware-adjusted conductance values, the hardware model corresponding to an analog non-volatile memory device. The plurality of hardware-adjusted conductance values is mapped to a plurality of hardware-adjusted synaptic weights. The plurality of conductance values is optimized in order to minimize an error metric between the target synaptic weights and the hardware-adjusted synaptic weights.