20240054811. Mouth Shape Correction Model, And Model Training And Application Method simplified abstract (NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.)

Mouth Shape Correction Model, And Model Training And Application Method

Organization Name

NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.

Inventor(s)

Huapeng Sima of Nanjing (CN)

Guo Yang of Nanjing (CN)

Mouth Shape Correction Model, And Model Training And Application Method - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240054811 titled 'Mouth Shape Correction Model, And Model Training And Application Method'.

Simplified Explanation

Embodiments of this disclosure provide a mouth shape correction model, together with methods for training and applying it. The model consists of a mouth feature extraction module, a key point extraction module, a first video module, a second video module, and a discriminator. During training, the model's modules extract corresponding features from a first original video and a second original video, and training continues until the model meets a convergence condition, at which point a target mouth shape correction model is obtained. To apply the model, a video in which a digital-human actor's mouth shape is to be corrected is input together with the corresponding audio, and the model outputs a video in which the actor's mouth shape is corrected.

  • Mouth shape correction model with modules for feature extraction and video processing, plus a discriminator (see the sketch after this list).
  • Training method involves extracting features from original videos and training the model until convergence.
  • Application method involves inputting video and audio into the trained model to correct the mouth shape of a digital-human actor.
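The abstract names the modules but does not disclose their internals, so the following PyTorch-style sketch is purely illustrative: the class name MouthShapeCorrectionModel, the layer shapes, the 96×96 output resolution, and the 80-dimensional audio feature are assumptions made for the example, not details from the patent.

  import torch
  import torch.nn as nn

  class MouthShapeCorrectionModel(nn.Module):
      """Illustrative layout of the five modules named in the abstract."""

      def __init__(self, feat_dim: int = 256, audio_dim: int = 80):
          super().__init__()
          # Mouth feature extraction module: encodes the mouth region of each frame.
          self.mouth_feature_extractor = nn.Sequential(
              nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
          )
          # Key point extraction module: predicts mouth/face landmarks per frame.
          self.keypoint_extractor = nn.Sequential(
              nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 68 * 2),
          )
          # First and second video modules: treated here as generator stages that
          # fuse visual features, key points, and audio into corrected frames.
          self.first_video_module = nn.Linear(feat_dim + 68 * 2 + audio_dim, feat_dim)
          self.second_video_module = nn.Linear(feat_dim, 3 * 96 * 96)
          # Discriminator: judges whether a generated frame looks real (training only).
          self.discriminator = nn.Sequential(
              nn.Linear(3 * 96 * 96, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid(),
          )

      def forward(self, frames: torch.Tensor, audio_feat: torch.Tensor) -> torch.Tensor:
          # frames: (batch, 3, H, W) mouth-region crops of the digital-human actor.
          # audio_feat: (batch, audio_dim) features of the corresponding audio.
          mouth_feat = self.mouth_feature_extractor(frames)
          keypoints = self.keypoint_extractor(frames)
          hidden = torch.relu(
              self.first_video_module(torch.cat([mouth_feat, keypoints, audio_feat], dim=-1))
          )
          corrected = self.second_video_module(hidden)
          return corrected.view(-1, 3, 96, 96)  # corrected frames

The presence of a discriminator suggests that generated frames are judged against real ones during training, presumably in an adversarial fashion, though the abstract does not spell out the loss formulation.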

Potential applications of this technology:

  • Improving the realism of digital-human actors in movies, animations, and virtual reality experiences.
  • Enhancing the accuracy of lip-syncing in computer-generated characters.
  • Assisting in the development of virtual assistants and chatbots with more natural and realistic mouth movements.

Problems solved by this technology:

  • Inaccurate or unrealistic mouth shapes in digital-human actors can detract from the overall realism and immersion of media content.
  • Manual correction of mouth shapes in computer-generated characters can be time-consuming and labor-intensive.

Benefits of this technology:

  • Enables more realistic and natural-looking mouth movements in digital-human actors, enhancing the overall quality of media content.
  • Reduces the need for manual correction of mouth shapes, saving time and effort in the production of computer-generated characters.
  • Enhances the user experience in virtual reality and augmented reality applications by improving the realism of virtual characters.


Original Abstract Submitted

Embodiments of this disclosure provide a mouth shape correction model, and model training and application methods. The model includes a mouth feature extraction module, a key point extraction module, a first video module, a second video module, and a discriminator. The training method includes: based on a first original video and a second original video, extracting corresponding features by using various modules in the model to train the model; and when the model meets a convergence condition, completing the training to generate a target mouth shape correction model. The application method includes: inputting a video in which a mouth shape of a digital-human actor is to be corrected and corresponding audio into a mouth shape correction model, to obtain a video in which the mouth shape of the digital-human actor in the video is corrected, wherein the mouth shape correction model is a model trained by using the training method.
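One minimal way to realize the training and application steps described above is sketched below. It reuses the illustrative MouthShapeCorrectionModel from the earlier sketch; the L1 reconstruction loss, the loss-delta convergence test, the learning rate, and the tensor shapes are assumptions made for the example, not the method claimed in the patent.

  import torch
  import torch.nn.functional as F

  def train_until_convergence(model, first_video_frames, second_video_frames,
                              audio_features, epsilon: float = 1e-3,
                              max_steps: int = 10_000):
      """Train on features of two original videos until a convergence condition holds."""
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      previous_loss = float("inf")
      for _ in range(max_steps):
          corrected = model(first_video_frames, audio_features)
          # Simple reconstruction loss against the second original video; the patented
          # training also involves the discriminator, omitted here for brevity.
          loss = F.l1_loss(corrected, second_video_frames)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
          # Convergence condition (assumed): the change in loss falls below epsilon.
          if abs(previous_loss - loss.item()) < epsilon:
              break
          previous_loss = loss.item()
      return model  # the "target mouth shape correction model"

  # Application: feed a video of the digital-human actor plus its audio features
  # into the trained model to obtain mouth-shape-corrected frames.
  # corrected_frames = trained_model(actor_frames, actor_audio_features)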