Intel Corporation (20240320490). EFFICIENT SOFTMAX COMPUTATION WITH NO LOSS IN ACCURACY simplified abstract

From WikiPatents

EFFICIENT SOFTMAX COMPUTATION WITH NO LOSS IN ACCURACY

Organization Name

Intel Corporation

Inventor(s)

Jainaveen Sundaram Priya of Hillsboro OR (US)

Prerna Budhkar of Hillsboro OR (US)

Vui Seng Chua of Hillsboro OR (US)

Srivatsa Rangachar Srinivasa of Hillsboro OR (US)

Tanay Karnik of Portland OR (US)

EFFICIENT SOFTMAX COMPUTATION WITH NO LOSS IN ACCURACY - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240320490 titled 'EFFICIENT SOFTMAX COMPUTATION WITH NO LOSS IN ACCURACY'.

Simplified Explanation

A modified 2-pass version of the softmax operation is proposed to reduce the computational cost of deep learning neural networks without sacrificing accuracy. The first pass ends with two scalar operations: a logarithm of the denominator and an operand value computed as the sum of that logarithm and the maximum value. The second pass uses only subtraction (of the operand value) and exponentiation, so it requires no divisions.
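Written out, the identity behind this reformulation is as follows (the abstract leaves the base unspecified; the natural base e is assumed here purely for illustration):

  \[
    m = \max_i x_i, \qquad D = \sum_{j=1}^{N} e^{x_j - m}
  \]
  \[
    \operatorname{softmax}(x)_i = \frac{e^{x_i - m}}{D}
      = e^{x_i - m - \ln D}
      = e^{x_i - c}, \qquad c = m + \ln D
  \]

The per-element division by the denominator D in the standard formulation is thus replaced by one scalar logarithm, one scalar addition, and a subtraction folded into each exponentiation.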

Key Features and Innovation

  • Modified 2-pass softmax operation with reduced computational cost
  • First pass ends with two scalar operations: the logarithm of the denominator and an operand value (maximum plus log-denominator)
  • Second pass performs only subtraction and exponentiation, avoiding divisions (see the code sketch after this list)
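A minimal sketch of these two passes, assuming NumPy and the natural exponential as the base; the function name modified_two_pass_softmax and the online rescaling used in the first pass are illustrative assumptions, not details taken from the patent text:

  import numpy as np

  def modified_two_pass_softmax(x: np.ndarray) -> np.ndarray:
      """Softmax of a 1-D array using the modified 2-pass scheme.

      Pass 1 sweeps the input once, tracking the running maximum m and the
      denominator D = sum(exp(x - m)), then finishes with two scalar
      operations: a logarithm of the denominator and an addition with the
      maximum. Pass 2 only subtracts and exponentiates; it has no divisions.
      """
      # Pass 1: single sweep accumulating the maximum and the denominator.
      # (The online rescaling of d is an implementation assumption; the
      # abstract only says the first pass produces the denominator.)
      m = -np.inf
      d = 0.0
      for v in x:
          m_new = max(m, v)
          d = d * np.exp(m - m_new) + np.exp(v - m_new)
          m = m_new

      # Two scalar operations at the end of the first pass.
      log_d = np.log(d)   # logarithm of the denominator
      c = m + log_d       # operand value: maximum plus log-denominator

      # Pass 2: subtraction and exponentiation only; no division.
      return np.exp(x - c)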

Potential Applications

This technology can be applied in deep learning neural networks, particularly transformer-based neural networks and large language models, to improve efficiency and performance.

Problems Solved

  • Reduces computational cost in deep learning neural networks
  • Maintains accuracy in softmax operation
  • Improves efficiency in transformer-based neural networks and large language models

Benefits

  • Enhanced performance in deep learning tasks
  • Cost-effective implementation in neural network architectures
  • Improved scalability for large language models

Commercial Applications

Potential commercial applications include optimizing deep learning models for natural language processing tasks, enhancing performance in recommendation systems, and improving efficiency in speech recognition technologies.

Prior Art

Further research can be conducted in the field of deep learning and neural network optimization to explore similar approaches to reducing computational cost in softmax operations.

Frequently Updated Research

Researchers are continuously exploring new techniques and optimizations for deep learning neural networks, including improvements in softmax operations for enhanced efficiency and accuracy.

Questions about Softmax Operation

How does the modified 2-pass softmax operation improve computational efficiency in deep learning neural networks?

The first pass ends with two scalar operations (a logarithm of the denominator and an addition with the maximum value), which replace the per-element normalization; the second pass then computes each output with only a subtraction and an exponentiation, so no divisions are required.
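A rough per-element tally for an input of length N, assuming the standard softmax normalizes with one division per element:

  \[
    \underbrace{N\ \text{divisions}}_{\text{standard normalization pass}}
    \;\longrightarrow\;
    \underbrace{1\ \text{log} + 1\ \text{add}}_{\text{scalar ops, end of pass 1}}
    \;+\;
    \underbrace{N\ \text{subtract-and-exponentiate}}_{\text{pass 2}}
  \]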

What are the potential applications of the modified softmax operation in transformer-based neural networks and large language models?

The modified softmax operation can enhance efficiency and performance in transformer-based neural networks and large language models, where softmax is applied to large matrices of attention scores, by reducing computational overhead while producing the same output values as the standard softmax, as illustrated below.
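As an illustration of the transformer use case, the sketch below reuses the modified_two_pass_softmax function from the earlier sketch (shapes and names are illustrative only), applies it row by row to a small matrix of attention scores, and checks that it matches a naive softmax with an explicit division:

  import numpy as np

  # Toy attention-score matrix (4 query positions x 4 key positions).
  rng = np.random.default_rng(0)
  scores = rng.normal(size=(4, 4))

  # Naive reference softmax with an explicit per-row division.
  def naive_softmax(row: np.ndarray) -> np.ndarray:
      e = np.exp(row - row.max())
      return e / e.sum()

  # Apply both versions row by row and compare.
  attn = np.apply_along_axis(modified_two_pass_softmax, 1, scores)
  ref = np.apply_along_axis(naive_softmax, 1, scores)
  assert np.allclose(attn, ref)  # identical results: no loss in accuracy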


Original Abstract Submitted

A modified 2-pass version of the softmax operation can be implemented to reduce computational cost without loss of accuracy, in particular for deep learning neural networks such as transformer-based neural networks and large language models (LLMs). The first pass is modified to include two scalar operations at the end. At the end of the first pass, a first scalar operation is performed to calculate a logarithm of the denominator, and a second scalar operation is performed to calculate an operand value based on a sum of the logarithm of the denominator and the maximum value. The second pass is modified to perform addition and exponentiation. In the second pass, the operand value is subtracted from an element of an input tensor to obtain an exponent, and a base is raised to the exponent. The second pass avoids divisions.