Intel Corporation (20240111830). ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET simplified abstract

ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET

Organization Name

Intel Corporation

Inventor(s)

Umer Iftikhar Cheema of Hillsboro, OR (US)

Robert Simofi of Dumbravita, Timis (RO)

Deepak Abraham Mathaikutty of Chandler, AZ (US)

Arnab Raha of San Jose, CA (US)

Dinakar Kondru of Frisco, TX (US)

ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240111830 titled 'ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET'.

Simplified Explanation

The abstract describes a method for approximating a non-linear activation function in a neural network with one or more linear functions. The input range is divided into segments, each corresponding to a different exponent of the input values, and each exponent is assigned a target accuracy based on a statistical analysis of the input data. Linear functions then approximate the activation function to within the target accuracy of each segment, and their parameters are stored in a look-up table (LUT) that is consulted when the network executes the activation function.

Key points of the patent (see the sketch after this list):

  • Non-linear activation functions in neural networks can be approximated by piecewise linear functions.
  • The input range is divided into segments, one per exponent present in the input data.
  • Each exponent is assigned a target accuracy based on a statistical analysis of the input data elements.
  • Linear functions approximate the activation function to within the target accuracy of each segment.
  • The parameters of the linear functions are stored in a look-up table (LUT) for efficient execution during the neural network's operation.
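
As a rough illustration (not the patented design), the minimal Python sketch below builds a per-exponent look-up table of linear segments for the sigmoid function, splitting a segment whenever its midpoint error exceeds that exponent's target accuracy. The names build_lut and sigmoid, the chord-through-endpoints fit, and the midpoint error test are all assumptions made for this example.

  import math

  def build_lut(act_fn, exponent_range, target_accuracy):
      # Hypothetical sketch: one table entry per input exponent e, whose
      # input segment is [2**e, 2**(e+1)). Each segment gets the chord
      # through its endpoints; a segment whose midpoint error exceeds
      # the per-exponent target accuracy is split in half and retried.
      lut = {}
      for e in exponent_range:
          segments = []
          stack = [(2.0 ** e, 2.0 ** (e + 1))]
          while stack:
              lo, hi = stack.pop()
              slope = (act_fn(hi) - act_fn(lo)) / (hi - lo)
              intercept = act_fn(lo) - slope * lo
              mid = 0.5 * (lo + hi)
              err = abs(act_fn(mid) - (slope * mid + intercept))
              if err <= target_accuracy[e]:
                  segments.append((lo, hi, slope, intercept))
              else:
                  stack.append((lo, mid))
                  stack.append((mid, hi))
          lut[e] = sorted(segments)
      return lut

  def sigmoid(x):
      return 1.0 / (1.0 + math.exp(-x))

  # One target accuracy per exponent; the patent derives these from a
  # statistical analysis of the input data, which is omitted here.
  targets = {e: 1e-3 for e in range(-4, 4)}
  lut = build_lut(sigmoid, range(-4, 4), targets)

Note that the number of stored segments, and hence the table size, falls directly out of the per-exponent accuracy targets, which is presumably what lets a LUT area budget trade off against approximation accuracy.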
Potential Applications

This technology can be applied in fields such as image recognition, natural language processing, and predictive analytics, wherever neural networks are used.

Problems Solved

This innovation addresses the challenge of efficiently approximating non-linear activation functions in neural networks, improving the accuracy and performance of machine learning models.

Benefits

  • Enhanced accuracy in approximating non-linear activation functions
  • Improved efficiency in neural network operations
  • Better performance of machine learning models

Potential Commercial Applications

The technology can be utilized in industries such as healthcare, finance, and e-commerce for tasks like medical image analysis, fraud detection, and personalized recommendations.

Possible Prior Art

One possible example of prior art is the use of piecewise linear functions to approximate non-linear activation functions in neural networks.

Unanswered Questions

How does this method compare to other techniques for approximating non-linear activation functions in neural networks?

This article does not provide a comparison with other methods for approximating non-linear activation functions.

What impact does the approximation accuracy have on the overall performance of the neural network?

The article does not discuss the specific impact of the approximation accuracy on the neural network's performance.


Original Abstract Submitted

A non-linear activation function in a neural network may be approximated by one or more linear functions. The input range may be divided into input segments, each of which corresponds to a different exponent in the input range of the activation function and includes input data elements having the exponent. Target accuracies may be assigned to the identified exponents based on a statistics analysis of the input data elements. The target accuracy of an input segment will be used to determine one or more linear functions that approximate the activation function for the input segment. An error of an approximation of the activation function by a linear function for the input segment may be within the target accuracy. The parameters of the linear functions may be stored in a look-up table (LUT). During the execution of the DNN, the LUT may be used to execute the activation function.
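
For illustration only, the sketch below shows how such a LUT might be consulted at execution time, reusing the lut built in the earlier sketch. The function lut_activation and the exponent routing via math.frexp are assumptions made for this example, not details taken from the abstract.

  import math

  def lut_activation(x, lut):
      # Hypothetical sketch: route the input to the segment table for
      # its exponent, then evaluate the stored line y = slope*x + b.
      if x <= 0.0:
          raise ValueError("this sketch covers positive inputs only")
      # math.frexp returns (m, e) with x = m * 2**e and 0.5 <= m < 1,
      # so x falls in the exponent-(e-1) segment [2**(e-1), 2**e).
      _, e = math.frexp(x)
      for lo, hi, slope, intercept in lut.get(e - 1, []):
          if lo <= x < hi:
              return slope * x + intercept
      raise KeyError("x is outside the programmed input range")

  # Example: 1.37 has exponent 0, so the [1, 2) segments are searched.
  approx = lut_activation(1.37, lut)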