
DeepMind Technologies Limited (20240220506). BEAM SEARCH DECODING WITH FORWARD-LOOKING SCORES simplified abstract

From WikiPatents

BEAM SEARCH DECODING WITH FORWARD-LOOKING SCORES

Organization Name

DeepMind Technologies Limited

Inventor(s)

Domenic Joseph Donato of Oviedo FL (US)

Christopher James Dyer of London (GB)

Rémi Leblond of Cachan (FR)

BEAM SEARCH DECODING WITH FORWARD-LOOKING SCORES - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240220506, titled 'BEAM SEARCH DECODING WITH FORWARD-LOOKING SCORES'.

The patent application describes methods and systems for beam search decoding, a technique used in natural language processing and machine translation to generate output sequences one token at a time. The described method includes the following steps (a brief code sketch follows the list):

  • Initializing beam data with a set of k candidate output sequences and their respective total scores.
  • Updating the beam data at each decoding step by generating a score distribution over the vocabulary, identifying expanded sequences, and computing a backwards-looking score, a forward-looking score, and a total score for each expanded sequence.
  • Updating the set of k candidate output sequences using the total scores for the expanded sequences.
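For orientation, the sketch below shows a conventional beam search decoding step in Python, in which the total score is just the accumulated log-probability of each candidate (i.e., only the backwards-looking part). The beam width k, the score_fn callback, and the three-token dummy vocabulary in the usage example are illustrative assumptions, not details taken from the patent application; the forward-looking component the application adds on top of this is sketched after the original abstract below.

 import math
 from typing import Callable, List, Tuple
 
 Candidate = Tuple[Tuple[int, ...], float]  # (token-id sequence, total score)
 
 def beam_step(beam: List[Candidate], k: int,
               score_fn: Callable[[Tuple[int, ...]], List[float]]) -> List[Candidate]:
     """One step of plain beam search: expand every candidate with every
     vocabulary token, add the token's log-probability to the running total,
     and keep the k highest-scoring expanded sequences."""
     expanded: List[Candidate] = []
     for seq, total in beam:
         log_probs = score_fn(seq)  # score distribution over the vocabulary
         for token, lp in enumerate(log_probs):
             expanded.append((seq + (token,), total + lp))
     expanded.sort(key=lambda cand: cand[1], reverse=True)
     return expanded[:k]
 
 # Example usage with a dummy uniform scorer over a three-token vocabulary.
 beam = [((0,), 0.0)]
 print(beam_step(beam, k=2, score_fn=lambda seq: [math.log(1.0 / 3.0)] * 3))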

Potential Applications

  • Machine translation
  • Speech recognition
  • Text summarization

Problems Solved

  • Improving the accuracy of machine translation systems
  • Enhancing the efficiency of natural language processing tasks

Benefits

  • Increased accuracy in generating output sequences
  • Faster decoding process
  • Improved overall performance of language processing systems

Commercial Applications

  • Integration into language translation software
  • Implementation in voice recognition technology
  • Adoption in automated text summarization tools

Questions about Beam Search Decoding

1. How does beam search decoding improve the efficiency of machine translation systems?

2. What are the key differences between beam search decoding and other decoding methods used in natural language processing?


Original Abstract Submitted

Methods and systems for beam search decoding. One of the methods includes initializing beam data specifying a set of k candidate output sequences and a respective total score for each of the candidate output sequences; updating the beam data at each of a plurality of decoding steps, comprising, at each decoding step: generating a score distribution that comprises a respective score for each token in the vocabulary; identifying a plurality of expanded sequences; generating, for each expanded sequence, a respective backwards-looking score; generating, for each expanded sequence, a respective forward-looking score; computing, for each expanded sequence, a respective total score from the respective forward-looking score for the expanded sequence and the respective backwards-looking score for the expanded sequence; and updating the set of k candidate output sequences using the respective total scores for the expanded sequences.
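
The abstract states that the total score for each expanded sequence is computed from a backwards-looking score and a forward-looking score, but it does not say how either score is produced or how they are combined. The minimal sketch below assumes the backwards-looking score is the accumulated log-probability of the prefix, the forward-looking score is an external estimate of how well the sequence can be completed (for example, from a learned value model), and the combination is a weighted sum; future_value_estimate and forward_weight are hypothetical names, not terms from the application.

 def total_score(prefix_log_prob: float,
                 future_value_estimate: float,
                 forward_weight: float = 1.0) -> float:
     """Combine the backwards-looking score (accumulated cost of the prefix so
     far) with a forward-looking score (an estimate of how the sequence will
     fare once completed). The weighted sum used here is an assumption; the
     abstract only states that the total score is computed from both parts."""
     backwards_looking = prefix_log_prob
     forward_looking = forward_weight * future_value_estimate
     return backwards_looking + forward_looking
 
 # Example: a candidate whose prefix log-probability is -2.3 and whose
 # estimated completion value is -0.5 gets a total score of -2.8.
 print(total_score(prefix_log_prob=-2.3, future_value_estimate=-0.5))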
