Electronic Arts Inc. (20240331262). ACCESSIBLE ANIMATION SELECTION AND STYLIZATION IN VIDEO GAMES simplified abstract

ACCESSIBLE ANIMATION SELECTION AND STYLIZATION IN VIDEO GAMES

Organization Name

Electronic Arts Inc.

Inventor(s)

Han Liu of Millbrae CA (US)

Jayesh Punjaram Patil of San Jose CA (US)

ACCESSIBLE ANIMATION SELECTION AND STYLIZATION IN VIDEO GAMES - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240331262 titled 'ACCESSIBLE ANIMATION SELECTION AND STYLIZATION IN VIDEO GAMES'.

**Simplified Explanation:**

An animation system uses machine learning models to curate selectable and stylized animations from vocal audio data provided by a user during gameplay.

**Key Features and Innovation:**
  • Animation system accesses vocal audio data during gameplay.
  • Machine learning model encodes vocal audio data to produce feature embeddings.
  • Feature embeddings are used to create selectable and stylized animations (a minimal sketch of this step appears after this list).
  • Users can personalize their gameplay experience using their voice.
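The selection step above can be pictured as embedding-based retrieval. The following is a minimal, hedged sketch in Python: the encoder, the animation library, and every name in it (encode_utterance, ANIMATION_LIBRARY, select_animations) are illustrative assumptions, since the abstract does not specify the model architecture or how embeddings are matched to animations. One plausible reading, used here, is ranking animations by similarity between the utterance embedding and per-animation reference embeddings.

```python
# Hypothetical sketch of voice-driven animation selection, NOT the patented
# implementation: a stand-in encoder maps vocal audio to a feature embedding,
# and animations are ranked by cosine similarity to that embedding.
import numpy as np

EMBED_DIM = 128  # assumed embedding size

def encode_utterance(audio_samples: np.ndarray) -> np.ndarray:
    """Stand-in for the machine learning encoder: maps raw vocal audio
    to a fixed-size, unit-norm feature embedding (here, a fixed random
    projection in place of a trained model)."""
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((audio_samples.size, EMBED_DIM))
    embedding = audio_samples @ projection
    return embedding / (np.linalg.norm(embedding) + 1e-8)

# Hypothetical library of animations, each tagged with a reference embedding.
ANIMATION_LIBRARY = {
    "wave": np.random.default_rng(1).standard_normal(EMBED_DIM),
    "cheer": np.random.default_rng(2).standard_normal(EMBED_DIM),
    "dance": np.random.default_rng(3).standard_normal(EMBED_DIM),
}

def select_animations(audio_samples: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank animations by cosine similarity between the utterance embedding
    and each animation's reference embedding, returning the top matches."""
    query = encode_utterance(audio_samples)
    scores = {
        name: float(query @ (ref / np.linalg.norm(ref)))
        for name, ref in ANIMATION_LIBRARY.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    fake_audio = np.random.default_rng(4).standard_normal(16000)  # ~1 s at 16 kHz
    print(select_animations(fake_audio))
```

In a real system the random projection would be replaced by the trained audio encoder, and the reference embeddings would come from the game's animation metadata.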
**Potential Applications:**

This technology can be applied in video game applications, virtual reality environments, interactive storytelling platforms, and educational tools for language learning.

**Problems Solved:**

This technology addresses the need for personalized and interactive experiences in gaming, as well as the integration of voice commands for gameplay customization.

**Benefits:**
  • Enhanced user engagement and immersion in gameplay.
  • Personalized gaming experiences based on vocal input.
  • Improved accessibility for users with limited mobility or dexterity.
**Commercial Applications:**

"Voice-Driven Animation System for Video Games: Enhancing User Experience and Personalization"

**Questions about Voice-Driven Animation System:**

1. How does the animation system use vocal audio data to create animations?
2. What are the potential implications of this technology in the gaming industry?


Original Abstract Submitted

An animation system is configured to accessibly curate selectable animations and/or stylized animations based in part on vocal audio data provided by a user during gameplay of a video game application. The vocal audio data is encoded by way of a machine learning model to produce and/or extract feature embeddings corresponding to the utterances among the vocal audio data. The feature embeddings are used in part to create a list of selectable animations and to create stylized animations that can be displayed to the user. In turn, the animation system enables users to use their voice to personalize their gameplay experience.
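For the "stylized animations" part of the abstract, one reading is that the feature embedding modulates playback parameters of a chosen animation. The sketch below is an illustrative assumption only: the statistics used (vector norm and standard deviation) and the parameters they drive (speed, intensity) are not taken from the patent.

```python
# Hedged sketch of embedding-driven stylization: derive simple playback
# parameters from an utterance embedding. The mapping below is hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class AnimationStyle:
    speed: float      # playback-rate multiplier
    intensity: float  # blend weight for an exaggerated pose layer

def stylize_from_embedding(embedding: np.ndarray) -> AnimationStyle:
    """Map an utterance embedding to illustrative style parameters."""
    energy = float(np.linalg.norm(embedding))
    variation = float(np.std(embedding))
    # Squash both statistics into sensible playback ranges.
    speed = float(0.75 + 0.5 / (1.0 + np.exp(-energy)))  # roughly 0.75x-1.25x
    intensity = min(1.0, variation * 2.0)                 # 0-1 blend weight
    return AnimationStyle(speed=speed, intensity=intensity)

if __name__ == "__main__":
    emb = np.random.default_rng(0).standard_normal(128)
    emb /= np.linalg.norm(emb)
    print(stylize_from_embedding(emb))
```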