18237621. ASSISTED SPEECH simplified abstract (Apple Inc.)

ASSISTED SPEECH

Organization Name

Apple Inc.

Inventor(s)

Ian M. Richter of Los Angeles, CA (US)

ASSISTED SPEECH - A simplified explanation of the abstract

This abstract first appeared for US patent application 18237621, titled 'ASSISTED SPEECH'.

Simplified Explanation

The patent application describes devices, systems, and methods for synthesizing virtual speech. A device with a display, an audio sensor, a non-transitory memory, and one or more processors displays a computer-generated reality (CGR) representation of a fictional character in a CGR environment. The device receives speech input from a person through the audio sensor and modifies that speech based on language characteristic values associated with the fictional character to generate CGR speech, which is then output in the CGR environment through the CGR representation of the character. A sketch of this modification step follows the summary list below.

  • The device includes a display, audio sensor, memory, and processors.
  • It displays a computer-generated reality (CGR) representation of a fictional character in a CGR environment.
  • Speech input is received from a person through the audio sensor.
  • The speech input is modified based on language characteristic values associated with the fictional character.
  • CGR speech is generated from the modified speech.
  • The CGR speech is output in the CGR environment through the CGR representation of the fictional character.
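
As a rough illustration of the modification step summarized above, the following minimal Swift sketch applies a character's language characteristic values (here assumed to be a pitch shift, a speaking rate, and vocabulary substitutions) to a transcribed speech input. All type, field, and function names are hypothetical illustrations, not taken from the patent.

  import Foundation

  // Hypothetical "language characteristic values" associated with a fictional
  // character (names and fields are illustrative, not from the patent).
  struct LanguageCharacteristics {
      var pitchShift: Double            // relative pitch change, e.g. -0.1 lowers pitch 10%
      var speakingRate: Double          // 1.0 = unchanged tempo
      var vocabulary: [String: String]  // word substitutions in the character's idiom
  }

  // Minimal stand-ins for a transcribed speech input and the resulting CGR speech.
  struct SpeechInput { var transcript: String }
  struct CGRSpeech {
      var transcript: String
      var pitchShift: Double
      var speakingRate: Double
  }

  // Modify the person's speech input using the character's characteristic values
  // to produce CGR speech for the character's CGR representation to output.
  func synthesizeCGRSpeech(from input: SpeechInput,
                           using character: LanguageCharacteristics) -> CGRSpeech {
      let rewritten = input.transcript
          .split(separator: " ")
          .map { character.vocabulary[$0.lowercased()] ?? String($0) }
          .joined(separator: " ")
      return CGRSpeech(transcript: rewritten,
                       pitchShift: character.pitchShift,
                       speakingRate: character.speakingRate)
  }

  // Example: an assumed "pirate" character profile applied to a user's utterance.
  let pirate = LanguageCharacteristics(pitchShift: -0.1,
                                       speakingRate: 0.9,
                                       vocabulary: ["hello": "ahoy", "friends": "mateys"])
  let speech = synthesizeCGRSpeech(from: SpeechInput(transcript: "hello friends"),
                                   using: pirate)
  print(speech.transcript)  // "ahoy mateys"

In a full system, the pitch and rate values would drive the speech synthesizer rather than being carried alongside the transcript; the sketch only shows how character-specific values could parameterize the modification.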

Potential applications of this technology:

  • Virtual reality gaming: This technology can be used to enhance the immersive experience in virtual reality games by allowing players to interact with virtual characters through speech.
  • Virtual assistants: Synthesized virtual speech can power virtual assistants in computer-generated reality environments, giving users a more natural, conversational way to engage with the assistant.
  • Entertainment industry: This technology can be utilized in movies, animations, and other forms of entertainment to create realistic and dynamic virtual characters with synthesized speech.

Problems solved by this technology:

  • Enhancing immersion: By synthesizing virtual speech based on the language characteristic values of fictional characters, this technology helps create a more immersive and realistic experience in computer-generated reality environments.
  • Natural interaction: It allows for natural and interactive communication between users and virtual characters, eliminating the need for text-based or pre-recorded responses.

Benefits of this technology:

  • Enhanced user experience: The synthesized virtual speech adds depth and realism to computer-generated reality environments, enhancing the overall user experience.
  • Increased interactivity: Users can interact with virtual characters through speech, making the experience more engaging.
  • Versatility: The approach can be applied across gaming, entertainment, virtual assistance, and other industries.


Original Abstract Submitted

Various implementations disclosed herein include devices, systems, and methods for synthesizing virtual speech. In various implementations, a device includes a display, an audio sensor, a non-transitory memory and one or more processors coupled with the non-transitory memory. A computer-generated reality (CGR) representation of a fictional character is displayed in a CGR environment on the display. A speech input is received from a first person via the audio sensor. The speech input is modified based on one or more language characteristic values associated with the fictional character in order to generate CGR speech. The CGR speech is outputted in the CGR environment via the CGR representation of the fictional character.