18102916. ACCENT CONVERSION FOR VIRTUAL CONFERENCES simplified abstract (Zoom Video Communications, Inc.)


ACCENT CONVERSION FOR VIRTUAL CONFERENCES

Organization Name

Zoom Video Communications, Inc.

Inventor(s)

Tuan Nam Nguyen of Karlsruhe (DE)

Alexander Waibel of Sammamish WA (US)

ACCENT CONVERSION FOR VIRTUAL CONFERENCES - A simplified explanation of the abstract

This abstract first appeared for US patent application 18102916, titled 'ACCENT CONVERSION FOR VIRTUAL CONFERENCES'.

Simplified Explanation

The abstract describes a method for converting speech from one accent to another during a virtual conference using a trained machine learning model. The method involves:

  • Receiving a first audio stream containing speech in a first accent from a participant's client device.
  • Generating, with a trained machine learning model, a second audio stream containing the same speech in a second accent.
  • Outputting the second audio stream.
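The abstract does not specify an implementation, but the three steps above can be sketched as a simple pipeline. Everything here is hypothetical: `convert_accent`, `identity_model`, and the list-of-samples audio representation are illustrative stand-ins, not part of the patent.

```python
def convert_accent(first_stream, model):
    """Map a first audio stream (speech in a first accent) to a
    second audio stream (speech in a second accent).

    `model` stands in for the trained ML model from the claim; its
    architecture is not described in the abstract, so here it is any
    callable mapping a waveform to a waveform.
    """
    return model(first_stream)

# Placeholder model: an identity mapping, used only to exercise the
# receive -> generate -> output flow end to end.
def identity_model(samples):
    return list(samples)

incoming = [0.0] * 16000  # 1 second of silence at a 16 kHz sample rate
outgoing = convert_accent(incoming, identity_model)
```

In a real system the placeholder would be replaced by the trained model, and the streams would carry encoded conference audio rather than raw sample lists.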

Potential Applications

This technology could be applied in various industries such as language translation services, virtual meetings, and online education platforms.

Problems Solved

This technology addresses the communication difficulties that can arise in virtual conferences when participants speak with different accents.

Benefits

The benefits of this technology include improved communication, enhanced understanding among participants, and increased accessibility for individuals with different accents.

Potential Commercial Applications

This technology could be integrated into virtual conference platforms to offer real-time accent conversion as a service to users.

Possible Prior Art

Prior art in this field may include speech recognition and translation technologies, as well as machine learning models for audio processing and analysis.

Unanswered Questions

How accurate is the accent conversion process using this method?

The abstract does not provide details on the accuracy of the accent conversion process or any potential limitations.

What is the computational complexity of the machine learning model used for accent conversion?

The abstract does not mention the computational requirements or efficiency of the machine learning model employed in the method.


Original Abstract Submitted

One example method includes receiving, during a virtual conference hosted by a virtual conference provider, a first audio stream comprising speech having first speech patterns according to a first accent, the first audio stream received from a first client device associated with a first participant in the virtual conference; generating, by a first trained machine learning (“ML”) model, a second audio stream comprising the speech having second speech patterns according to a second accent; and outputting the second audio stream.
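The claim specifies that conversion happens during a live conference, which implies frame-by-frame processing so output can begin before a whole utterance has been received. A minimal sketch of such a streaming loop, under the assumption of fixed-size frames (all names and the toy gain-change "model" are hypothetical, since the abstract gives no implementation details):

```python
CHUNK = 1600  # 100 ms of samples at a 16 kHz sample rate (assumed)

def stream_convert(frames, model):
    """Yield converted audio frame by frame, as a real-time
    conference pipeline would, so playback of the second stream
    can start while the first stream is still arriving."""
    for frame in frames:
        yield model(frame)

def toy_model(frame):
    # Hypothetical stand-in for the trained accent-conversion model:
    # here just a gain change, to keep the sketch self-contained.
    return [sample * 0.5 for sample in frame]

# Three 100 ms frames of a constant signal, converted one at a time.
frames = [[1.0] * CHUNK for _ in range(3)]
converted = list(stream_convert(frames, toy_model))
```

A production system would also have to manage buffering, latency, and synchronization with the conference's video streams, none of which the abstract addresses.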