US Patent Application 18209989. DIFFERENTIAL RECURRENT NEURAL NETWORK simplified abstract


DIFFERENTIAL RECURRENT NEURAL NETWORK

Organization Name

Microsoft Technology Licensing, LLC


Inventor(s)

Patrice Simard of Bellevue WA (US)


DIFFERENTIAL RECURRENT NEURAL NETWORK - A simplified explanation of the abstract

  • This abstract appears in US patent application number 18209989, titled 'DIFFERENTIAL RECURRENT NEURAL NETWORK'

Simplified Explanation

The abstract describes a type of neural network called a differential recurrent neural network (RNN) that can handle dependencies between inputs arbitrarily far apart in time. It does this by letting the network store previous states in recurrent loops without adversely affecting training. The differential RNN consists of a state component for storing states and a trainable transition and differential non-linearity component that includes a neural network. This component takes the previously stored states and an input vector as input and produces positive and negative contribution vectors. These contribution vectors are combined into a state contribution vector, which is fed into the state component to generate a set of current states. The current states can either be output directly or post-processed by another neural network component called the trainable OUT component.
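The following is a minimal sketch of one possible reading of this description, not the patented implementation. The layer sizes, the tanh hidden layer, the ReLU-constrained positive and negative heads, and the additive state update (previous states plus the difference of the two contributions) are all assumptions made for illustration; the abstract does not specify them.

```python
# Illustrative sketch of a differential RNN cell as described in the abstract.
# Architecture details (hidden size, activations, update rule) are assumed.
import torch
import torch.nn as nn

class DifferentialRNNCell(nn.Module):
    def __init__(self, input_size: int, state_size: int, hidden_size: int = 64):
        super().__init__()
        # Trainable transition and differential non-linearity component:
        # a small neural network that sees [previous states, input vector]
        # and emits positive and negative contribution vectors.
        self.transition = nn.Sequential(
            nn.Linear(input_size + state_size, hidden_size),
            nn.Tanh(),
        )
        self.positive_head = nn.Linear(hidden_size, state_size)
        self.negative_head = nn.Linear(hidden_size, state_size)

    def forward(self, x: torch.Tensor, prev_state: torch.Tensor) -> torch.Tensor:
        h = self.transition(torch.cat([prev_state, x], dim=-1))
        # Positive and negative contribution vectors (kept non-negative here).
        pos = torch.relu(self.positive_head(h))
        neg = torch.relu(self.negative_head(h))
        # State contribution vector fed into the state component.
        contribution = pos - neg
        # State component: accumulate the contribution into the stored states.
        return prev_state + contribution
```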


Original Abstract Submitted

A differential recurrent neural network (RNN) is described that handles dependencies that go arbitrarily far in time by allowing the network system to store states using recurrent loops without adversely affecting training. The differential RNN includes a state component for storing states, and a trainable transition and differential non-linearity component which includes a neural network. The trainable transition and differential non-linearity component takes as input, an output of the previous stored states from the state component along with an input vector, and produces positive and negative contribution vectors which are employed to produce a state contribution vector. The state contribution vector is input into the state component to create a set of current states. In one implementation, the current states are simply output. In another implementation, the differential RNN includes a trainable OUT component which includes a neural network that performs post-processing on the current states before outputting them.
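As a hypothetical usage example, the sketch below runs the DifferentialRNNCell from the earlier snippet over a sequence and post-processes the current states with a trainable OUT network, corresponding to the second implementation mentioned in the abstract. The dimensions and the single-layer form of the OUT component are illustrative assumptions.

```python
# Assumes the DifferentialRNNCell class defined in the sketch above.
import torch
import torch.nn as nn

input_size, state_size, output_size = 8, 16, 4
cell = DifferentialRNNCell(input_size, state_size)
out_component = nn.Linear(state_size, output_size)  # trainable OUT component (assumed form)

sequence = torch.randn(10, input_size)  # 10 time steps of input vectors
state = torch.zeros(state_size)         # initial stored states

for x_t in sequence:
    state = cell(x_t, state)            # current states from the state component
    y_t = out_component(state)          # post-processed output at this time step
```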