DIFFERENTIAL RECURRENT NEURAL NETWORK: abstract simplified (18209989)

From WikiPatents
  • This abstract appeared for patent application number 18209989, titled 'DIFFERENTIAL RECURRENT NEURAL NETWORK'

Simplified Explanation

The abstract describes a type of neural network called a differential recurrent neural network (RNN). This network is designed to handle dependencies between inputs that occur arbitrarily far apart in time. It does this by using recurrent loops to store and update states without adversely affecting training.

The differential RNN consists of three main components: a state component for storing states, a trainable transition and differential non-linearity component that includes a neural network, and a trainable OUT component that also includes a neural network for post-processing the current states.

The transition and differential non-linearity component takes as input the previous stored states from the state component and an input vector. It then produces positive and negative contribution vectors, which are used to create a state contribution vector. This state contribution vector is then input into the state component to generate a set of current states.
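The abstract does not give exact equations, but the data flow above can be sketched as a single step. In this hypothetical sketch, the transition network is a pair of linear maps with rectifiers producing the positive and negative contribution vectors, and the state contribution is assumed to be their difference, which the state component accumulates:

```python
import numpy as np

def transition_step(prev_state, x, W_pos, W_neg):
    """One hypothetical transition step of a differential RNN.

    Assumptions (not specified in the abstract): the trainable transition
    and differential non-linearity component maps [prev_state, x] through
    two rectified linear layers to get non-negative positive and negative
    contribution vectors, and the state contribution is their difference.
    """
    z = np.concatenate([prev_state, x])   # previous states plus input vector
    pos = np.maximum(0.0, W_pos @ z)      # positive contribution vector
    neg = np.maximum(0.0, W_neg @ z)      # negative contribution vector
    contribution = pos - neg              # assumed combination rule
    return prev_state + contribution      # state component stores current states
```

The additive update is what lets the state component hold information over long spans: if both contributions are zero, the state passes through unchanged.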

In one implementation, the current states are directly outputted. In another implementation, the current states are further processed by the trainable OUT component before being outputted.
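Both output variants can be sketched together in one small class. Everything here is an illustrative assumption (layer shapes, the tanh OUT network, the subtraction rule); the abstract only names the components, not their internals:

```python
import numpy as np

class DifferentialRNNSketch:
    """Minimal sketch of the described architecture (details hypothetical)."""

    def __init__(self, state_dim, input_dim, out_dim=None, seed=0):
        rng = np.random.default_rng(seed)
        z_dim = state_dim + input_dim
        # Trainable transition and differential non-linearity component,
        # here just two linear maps feeding rectifiers.
        self.W_pos = rng.normal(scale=0.1, size=(state_dim, z_dim))
        self.W_neg = rng.normal(scale=0.1, size=(state_dim, z_dim))
        # Optional trainable OUT component for post-processing the states.
        self.W_out = (rng.normal(scale=0.1, size=(out_dim, state_dim))
                      if out_dim else None)

    def forward(self, inputs):
        state = np.zeros(self.W_pos.shape[0])  # state component, initially empty
        outputs = []
        for x in inputs:
            z = np.concatenate([state, x])
            pos = np.maximum(0.0, self.W_pos @ z)   # positive contribution
            neg = np.maximum(0.0, self.W_neg @ z)   # negative contribution
            state = state + (pos - neg)             # state contribution vector
            # Variant 1: output the current states directly.
            # Variant 2: post-process them with the OUT network first.
            y = state if self.W_out is None else np.tanh(self.W_out @ state)
            outputs.append(y)
        return outputs
```

Constructing the sketch without `out_dim` gives the first implementation (raw states out); passing `out_dim` gives the second (post-processed states out).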

Overall, the differential RNN handles long-term dependencies by storing and updating states through recurrent loops without interfering with the training process.


Original Abstract Submitted

A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far in time by allowing the network system to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input, an output of the previous stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are employed to produce a state contribution vector. The state contribution vector is input into the state component to create a set of current states. In one implementation, the current states are simply output. In another implementation, the differential RNN includes a trainable OUT component which includes a neural network that performs post-processing on the current states before outputting them.