Google LLC (20240339106). Phonemes And Graphemes for Neural Text-to-Speech simplified abstract

Phonemes And Graphemes for Neural Text-to-Speech

Organization Name

Google LLC

Inventor(s)

Ye Jia of Mountain View, CA (US)

Byungha Chun of Tokyo (JP)

Yu Zhang of Mountain View, CA (US)

Jonathan Shen of Mountain View, CA (US)

Yonghui Wu of Fremont, CA (US)

Phonemes And Graphemes for Neural Text-to-Speech - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240339106 titled 'Phonemes And Graphemes for Neural Text-to-Speech'.

The method described in the abstract processes a text input, represented as a sequence of words, using an input encoder embedding that contains both grapheme tokens and phoneme tokens. It then generates an output encoder embedding based on the relationship between each phoneme token and the grapheme token representing the same word.

  • The method receives text input with grapheme and phoneme tokens.
  • It identifies words corresponding to phoneme tokens and determines grapheme tokens for those words.
  • An output encoder embedding is generated based on the relationship between each phoneme token and the grapheme token representing the same word, as sketched in the example below.
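
The combination of the two token streams can be pictured with a minimal NumPy sketch. This is not the patented model: the embeddings are random stand-ins for learned encoder embeddings, the word-index bookkeeping is an assumption made for illustration, and the "relationship" between tokens of the same word is reduced to a simple sum of the phoneme embedding and the grapheme embedding.

```python
# Minimal sketch (not the patented implementation): aligning hypothetical
# phoneme tokens with the grapheme token of the same word and combining
# their embeddings into an output encoder embedding.
import numpy as np

EMB_DIM = 8
rng = np.random.default_rng(0)

# Hypothetical input: each token carries its text, type, and word index.
tokens = [
    {"text": "hello", "type": "grapheme", "word": 0},
    {"text": "world", "type": "grapheme", "word": 1},
    {"text": "HH",    "type": "phoneme",  "word": 0},
    {"text": "AH",    "type": "phoneme",  "word": 0},
    {"text": "L",     "type": "phoneme",  "word": 0},
    {"text": "OW",    "type": "phoneme",  "word": 1},  # simplified
]

# Toy embedding lookup standing in for learned encoder embeddings.
embed = {t["text"]: rng.normal(size=EMB_DIM) for t in tokens}

# Map each word index to the embedding of its grapheme token.
grapheme_by_word = {
    t["word"]: embed[t["text"]] for t in tokens if t["type"] == "grapheme"
}

# For each phoneme token, identify the word it belongs to and combine its
# embedding with the grapheme embedding of that same word (here: a sum).
output_embedding = []
for t in tokens:
    if t["type"] == "phoneme":
        output_embedding.append(embed[t["text"]] + grapheme_by_word[t["word"]])
    else:
        output_embedding.append(embed[t["text"]])

output_embedding = np.stack(output_embedding)  # shape: (num_tokens, EMB_DIM)
print(output_embedding.shape)
```

In a real encoder the combination would be learned rather than a fixed sum; the sketch only shows the word-level alignment of phoneme and grapheme tokens that the abstract describes.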

Potential Applications:

  • Natural language processing systems
  • Speech recognition technology
  • Language translation tools

Problems Solved:

  • Improving accuracy in text-to-speech systems
  • Enhancing phonetic representation of words
  • Facilitating cross-linguistic analysis

Benefits:

  • Enhanced text processing capabilities
  • Improved accuracy in language-related tasks
  • Increased efficiency in phonetic analysis

Commercial Applications: Advanced Text Processing Technology for Language Applications

This technology can be utilized in various commercial applications such as:

  • Language learning platforms
  • Voice-controlled devices
  • Multilingual communication tools

Questions about the technology:

  1. How does this method improve the accuracy of text-to-speech systems?
  2. What are the potential implications of using both grapheme and phoneme tokens in text processing?


Original Abstract Submitted

A method includes receiving a text input including a sequence of words represented as an input encoder embedding. The input encoder embedding includes a plurality of tokens, with the plurality of tokens including a first set of grapheme tokens representing the text input as respective graphemes and a second set of phoneme tokens representing the text input as respective phonemes. The method also includes, for each respective phoneme token of the second set of phoneme tokens: identifying a respective word of the sequence of words corresponding to the respective phoneme token and determining a respective grapheme token representing the respective word of the sequence of words corresponding to the respective phoneme token. The method also includes generating an output encoder embedding based on a relationship between each respective phoneme token and the corresponding grapheme token determined to represent a same respective word as the respective phoneme token.
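
The two per-phoneme steps in the abstract (identifying the word a phoneme token belongs to, then determining the grapheme token for that word) can be illustrated with a short sketch. This is not the claimed method itself; it only assumes, for illustration, that each token records the index of the word it came from, and the function and variable names are hypothetical.

```python
# Sketch of the per-phoneme alignment steps described in the abstract,
# under the assumption that each token carries a word index.
from typing import List, Tuple

def align_phonemes_to_graphemes(
    grapheme_tokens: List[Tuple[str, int]],  # (grapheme token, word index)
    phoneme_tokens: List[Tuple[str, int]],   # (phoneme token, word index)
) -> List[Tuple[str, str]]:
    """For each phoneme token, identify its word and return the grapheme
    token determined to represent that same word."""
    word_to_grapheme = {word: tok for tok, word in grapheme_tokens}
    return [(ph, word_to_grapheme[word]) for ph, word in phoneme_tokens]

# "the cat" as grapheme tokens and simplified ARPAbet-style phoneme tokens.
graphemes = [("the", 0), ("cat", 1)]
phonemes = [("DH", 0), ("AH", 0), ("K", 1), ("AE", 1), ("T", 1)]
print(align_phonemes_to_graphemes(graphemes, phonemes))
# [('DH', 'the'), ('AH', 'the'), ('K', 'cat'), ('AE', 'cat'), ('T', 'cat')]
```

The resulting phoneme-to-grapheme pairs are the "relationship" the abstract refers to; the output encoder embedding would be generated from these pairs by the encoder.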