17384522. ADAPTIVE BOUNDING FOR THREE-DIMENSIONAL MORPHABLE MODELS simplified abstract (QUALCOMM Incorporated)

From WikiPatents

ADAPTIVE BOUNDING FOR THREE-DIMENSIONAL MORPHABLE MODELS

Organization Name

QUALCOMM Incorporated

Inventor(s)

Kuang-Man Huang of Zhubei City, Hsinchu County (TW)

Min-Hui Lin of Gushan District (TW)

Ke-Li Cheng of San Diego, CA (US)

Michel Adib Sarkis of San Diego, CA (US)

ADAPTIVE BOUNDING FOR THREE-DIMENSIONAL MORPHABLE MODELS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17384522, titled 'ADAPTIVE BOUNDING FOR THREE-DIMENSIONAL MORPHABLE MODELS'.

Simplified Explanation

The patent application describes a system and method for generating models based on facial expressions. Here are the key points:

  • The system obtains a set of input images of faces during a training interval.
  • For each input image, it determines the value of a coefficient representing at least a portion of the facial expression.
  • From these coefficient values, it identifies the extremum value, that is, the highest or lowest coefficient value observed during the training interval.
  • The system then generates an updated bounding value for the coefficient based on the initial bounding value and the extremum value.
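The steps above can be sketched in a few lines of Python. Note that this is an illustrative sketch only: the function name, the per-coefficient lower/upper bounds, and the blending rule used to combine the initial bound with the observed extremum are assumptions, since the abstract does not specify how the two values are combined.

```python
import numpy as np

def update_bounds(coeff_values, initial_lower, initial_upper, blend=0.5):
    """Update the bounding values for one expression coefficient.

    coeff_values: values of the coefficient determined for each input
    image over the training interval.
    blend: hypothetical mixing weight between the initial bound and the
    observed extremum (not specified in the patent abstract).
    """
    # Extremum values of the coefficient over the training interval.
    observed_min = float(np.min(coeff_values))
    observed_max = float(np.max(coeff_values))
    # Combine each initial bounding value with the corresponding
    # observed extremum; this linear blend is illustrative only.
    new_lower = (1 - blend) * initial_lower + blend * observed_min
    new_upper = (1 - blend) * initial_upper + blend * observed_max
    return new_lower, new_upper

# Example: coefficient values observed during training, with initial
# bounds of [-1.0, 1.0].
lo, hi = update_bounds([0.1, 0.4, 0.9, 0.3], -1.0, 1.0)
# lo = 0.5 * (-1.0) + 0.5 * 0.1 = -0.45
# hi = 0.5 * 1.0 + 0.5 * 0.9 = 0.95
```

Tightening the bounds toward the observed extrema in this way would keep fitted coefficients within the range of expressions actually seen during training.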

Potential applications of this technology:

  • Facial recognition systems: The generated models can be used to improve the accuracy of facial recognition algorithms by incorporating facial expression information.
  • Emotion analysis: The models can help in analyzing and understanding emotions expressed in images or videos.
  • Virtual reality and gaming: The models can enhance the realism of virtual characters by enabling them to mimic facial expressions.

Problems solved by this technology:

  • Limited facial expression representation: Traditional models may not capture the full range of facial expressions. This technology provides a more comprehensive representation by adapting coefficient bounds to the expressions actually observed.
  • Inaccurate facial recognition: By incorporating facial expression information, the accuracy of facial recognition systems can be improved.
  • Lack of realism in virtual characters: The generated models can make virtual characters more lifelike by enabling them to display a wider range of facial expressions.

Benefits of this technology:

  • Improved accuracy: The generated models can enhance the accuracy of facial recognition systems and emotion analysis algorithms.
  • Enhanced realism: Virtual characters can exhibit a wider range of facial expressions, making them more realistic and engaging.
  • Better understanding of emotions: The models can aid in analyzing and understanding emotions expressed in images or videos.


Original Abstract Submitted

Systems and techniques are provided for generating one or more models. For example, a process can include obtaining a plurality of input images corresponding to faces of one or more people during a training interval. The process can include determining a value of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval. The process can include determining, from the determined values of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval, an extremum value of the coefficient representing at least the portion of the facial expression during the training interval. The process can include generating an updated bounding value for the coefficient representing at least the portion of the facial expression based on the initial bounding value and the extremum value.