18534035. ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET simplified abstract (Intel Corporation)


ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET

Organization Name

Intel Corporation

Inventor(s)

Umer Iftikhar Cheema of Hillsboro, OR (US)

Robert Simofi of Dumbravita, Timis (RO)

Deepak Abraham Mathaikutty of Chandler, AZ (US)

Arnab Raha of San Jose, CA (US)

Dinakar Kondru of Frisco, TX (US)

ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET - A simplified explanation of the abstract

This abstract first appeared for US patent application 18534035, titled 'ACCURACY-BASED APPROXIMATION OF ACTIVATION FUNCTIONS WITH PROGRAMMABLE LOOK-UP TABLE HAVING AREA BUDGET'.

Simplified Explanation

The abstract describes a method for approximating a non-linear activation function in a neural network with one or more linear functions. The input range is divided into segments, each corresponding to a different floating-point exponent of the input values. A target accuracy is assigned to each segment based on a statistical analysis of the input data elements. Linear functions then approximate the activation function within each segment's target accuracy, and their parameters are stored in a look-up table (LUT) that is used when the neural network executes the activation function. A code sketch of this idea appears after the summary list below.

  • The non-linear activation function is approximated by one or more linear functions.
  • The input range is divided into segments based on the exponents of the input values.
  • Target accuracies are assigned to segments from a statistical analysis of the input data.
  • Linear functions approximate the activation function within each segment's target accuracy.
  • The linear-function parameters are stored in a look-up table (LUT) for use during neural network execution.
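
To make the mechanism concrete, here is a minimal Python sketch of the general piecewise-linear idea referenced above: it fits one linear function per exponent-aligned segment of a sigmoid's positive input range and stores the slope/intercept pairs in a look-up table keyed by exponent. The function names, the choice of sigmoid, and the segment bounds are illustrative assumptions, not details from the patent.

```python
import math

import numpy as np

# Illustrative sketch only: approximate a non-linear activation
# (sigmoid here) with one linear function per input segment, where a
# segment covers the positive inputs sharing a floating-point exponent.
# Names and segment choices are hypothetical, not the patent's.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_lut(activation, exponents, samples=64):
    """Fit one (slope, intercept) pair per exponent segment."""
    lut = {}
    for e in exponents:
        xs = np.linspace(2.0 ** e, 2.0 ** (e + 1), samples)  # [2^e, 2^(e+1))
        slope, intercept = np.polyfit(xs, activation(xs), deg=1)
        lut[e] = (slope, intercept)
    return lut

def apply_lut(lut, x):
    """Evaluate the approximation by looking up x's exponent segment."""
    _, e = math.frexp(x)           # x = m * 2**e with 0.5 <= m < 1
    slope, intercept = lut[e - 1]  # x lies in [2^(e-1), 2^e)
    return slope * x + intercept

lut = build_lut(sigmoid, exponents=range(-4, 3))
print(apply_lut(lut, 1.5), sigmoid(1.5))  # approximation vs. exact value
```

Keying segments by exponent means the lookup reduces to extracting the floating-point exponent of the input rather than searching segment boundaries, which is what makes a LUT-based design attractive in hardware.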
Potential Applications

  • Deep learning models
  • Artificial intelligence systems
  • Machine learning algorithms

Problems Solved

  • Efficient approximation of non-linear activation functions
  • Improved accuracy in neural network operations

Benefits

  • Enhanced performance of neural networks
  • Simplified implementation of complex activation functions

Potential Commercial Applications

  • Optimizing Neural Network Operations for Enhanced Performance

Possible Prior Art

  • Prior methods for approximating non-linear activation functions in neural networks

Unanswered Questions

How does this method compare to other techniques for approximating non-linear activation functions in neural networks?

This article does not provide a direct comparison with other techniques, leaving the reader to wonder about the relative effectiveness and efficiency of this approach.

What impact could this method have on the overall speed and efficiency of neural network operations?

The article does not delve into the potential speed and efficiency improvements that could result from using this method, leaving a gap in understanding the practical implications of this innovation.


Original Abstract Submitted

A non-linear activation function in a neural network may be approximated by one or more linear functions. The input range may be divided into input segments, each of which corresponds to a different exponent in the input range of the activation function and includes input data elements having the exponent. Target accuracies may be assigned to the identified exponents based on a statistics analysis of the input data elements. The target accuracy of an input segment will be used to determine one or more linear functions that approximate the activation function for the input segment. An error of an approximation of the activation function by a linear function for the input segment may be within the target accuracy. The parameters of the linear functions may be stored in a look-up table (LUT). During the execution of the DNN, the LUT may be used to execute the activation function.
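
The abstract's statistics-driven step, assigning tighter target accuracies to the exponents that occur most often in the input data, might look roughly like the following hypothetical sketch. The tolerance values and the frequency-based interpolation scheme are assumptions for illustration only, not taken from the patent.

```python
import numpy as np

# Hypothetical sketch of the statistics-driven step in the abstract:
# exponents that occur more often in the input data receive tighter
# target accuracies, so frequently hit segments are approximated more
# precisely within the overall LUT area budget.

def assign_target_accuracies(inputs, base_tol=1e-2, tight_tol=1e-4):
    """Map each observed exponent to an error tolerance by frequency."""
    exponents = np.floor(np.log2(np.abs(inputs) + 1e-30)).astype(int)
    values, counts = np.unique(exponents, return_counts=True)
    freq = counts / counts.sum()
    # Interpolate: the most frequent exponent gets tight_tol,
    # the least frequent gets base_tol.
    scale = (freq - freq.min()) / max(freq.max() - freq.min(), 1e-12)
    return {int(v): base_tol - s * (base_tol - tight_tol)
            for v, s in zip(values, scale)}

rng = np.random.default_rng(0)
sample = rng.normal(size=10_000)          # stand-in for observed inputs
print(assign_target_accuracies(sample))   # exponent -> target accuracy
```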