University of Rochester (20240346735). SYSTEM AND METHOD FOR GENERATING VIDEOS DEPICTING VIRTUAL CHARACTERS simplified abstract

From WikiPatents

SYSTEM AND METHOD FOR GENERATING VIDEOS DEPICTING VIRTUAL CHARACTERS

Organization Name

University of Rochester

Inventor(s)

Luchuan Song of Buffalo NY (US)

Chenliang Xu of Pittsford NY (US)

SYSTEM AND METHOD FOR GENERATING VIDEOS DEPICTING VIRTUAL CHARACTERS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240346735 titled 'SYSTEM AND METHOD FOR GENERATING VIDEOS DEPICTING VIRTUAL CHARACTERS'.

The abstract describes a patent application related to generative machine learning techniques for generating virtual characters based on input from a video depicting a first subject.

  • Machine learning models use a video with an audio component of speech from a first subject and an image of a second subject to generate a video of the second subject (an interface-level sketch of this pipeline follows the list below).
  • The generated video can show the second subject blinking and displaying emotional reactions responsive to the speech and characteristics of the first subject.
  • The resulting video can be displayed or stored for later retrieval.
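The abstract describes this pipeline only at the interface level: the inputs are a driving video with speech audio from the first subject and a single image of the second subject, and the output is a generated video of the second subject. The minimal Python sketch below illustrates that input/output contract; all class and function names (DrivingVideo, TargetImage, CharacterGenerator, and so on) are hypothetical placeholders, and the model internals are stubbed out because the abstract does not disclose them.

```python
# A minimal sketch of the input/output flow summarized above. All names here
# are hypothetical placeholders, not APIs from the patent or any real library.
from dataclasses import dataclass
from typing import List


@dataclass
class DrivingVideo:
    frames: List[bytes]   # frames depicting the first subject
    speech_audio: bytes   # audio component of the first subject's speech


@dataclass
class TargetImage:
    pixels: bytes         # a single image depicting the second subject


@dataclass
class GeneratedVideo:
    frames: List[bytes]   # generated frames depicting the second subject


class CharacterGenerator:
    """Hypothetical wrapper around the one or more machine learning models
    described in the abstract: it consumes a driving video (with speech audio)
    of a first subject plus an image of a second subject, and produces a video
    of the second subject whose blinks and emotional reactions respond to the
    first subject's speech, facial expression, and head pose."""

    def generate(self, driving: DrivingVideo, target: TargetImage) -> GeneratedVideo:
        # Placeholder: a real system would run generative models here.
        # This stub simply emits one output frame per driving frame so the
        # sketch stays runnable.
        return GeneratedVideo(frames=[target.pixels for _ in driving.frames])


if __name__ == "__main__":
    driving = DrivingVideo(frames=[b"frame0", b"frame1"], speech_audio=b"speech")
    target = TargetImage(pixels=b"second-subject-portrait")
    result = CharacterGenerator().generate(driving, target)
    # Per the abstract, the generated video can then be displayed or stored
    # for later retrieval.
    print(f"Generated {len(result.frames)} frames depicting the second subject")
```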

Potential Applications:

  • Entertainment industry: creating lifelike virtual characters for movies and video games.
  • Educational tools for interactive learning experiences.
  • Virtual assistants or chatbots with realistic facial expressions and reactions.

Problems Solved:

  • Enhances the realism and interactivity of virtual characters.
  • Streamlines the process of generating animated content.

Benefits:

  • Improved user engagement and immersion.
  • Cost-effective production of animated content.
  • Enhanced storytelling capabilities across various media.

Commercial Applications: "Virtual Character Generation Technology for Media and Entertainment"

This technology could reshape how virtual characters are created for movies, video games, educational tools, and virtual assistants, with the potential to disrupt the entertainment industry by offering more realistic and interactive characters.

Questions about Virtual Character Generation Technology:

1. How does this technology improve user engagement with virtual characters?

This technology enhances user engagement by creating virtual characters that exhibit realistic emotional reactions and responses to stimuli.

2. What are the potential cost savings for production companies using this technology?

By streamlining the process of generating animated content, production companies can save time and resources, leading to cost savings in the long run.


Original Abstract Submitted

Features described herein pertain to generative machine learning, and more particularly, to machine learning techniques for generating virtual characters. A video that depicts a first subject and includes an audio component that corresponds to speech spoken by the first subject, and an image that depicts a second subject, are provided to and used by one or more machine learning models to generate a video that depicts the second subject. The second subject can blink and exhibit emotional characteristics and reactions that are responsive to the speech spoken by the first subject and/or a characteristic of the first subject, such as a facial expression and/or head pose motion. The generated video can be displayed and/or stored where it can be later retrieved.