CORNELL UNIVERSITY (20240212388). WEARABLE DEVICES TO DETERMINE FACIAL OUTPUTS USING ACOUSTIC SENSING simplified abstract

From WikiPatents

WEARABLE DEVICES TO DETERMINE FACIAL OUTPUTS USING ACOUSTIC SENSING

Organization Name

CORNELL UNIVERSITY

Inventor(s)

Ke Li of Ithaca NY (US)

Cheng Zhang of Ithaca NY (US)

Francois Guimbretiere of Ithaca NY (US)

Ruidong Zhang of Ithaca NY (US)

WEARABLE DEVICES TO DETERMINE FACIAL OUTPUTS USING ACOUSTIC SENSING - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240212388 titled 'WEARABLE DEVICES TO DETERMINE FACIAL OUTPUTS USING ACOUSTIC SENSING'.

Simplified Explanation: This technology involves tracking facial movements and reconstructing facial expressions by learning skin deformation patterns and facial features using wearable devices equipped with cameras or acoustic devices.

  • **Facial Movement Tracking:** The system tracks facial movements by capturing images of the user's face contours and calculating skin deformation.
  • **Facial Expression Reconstruction:** It reconstructs facial expressions based on the data training set created from frontal view images of the user making various facial expressions.
  • **Wearable Devices:** Head-mounted or neck-mounted wearable devices with cameras or acoustic devices communicate with a data processing system to capture images and calculate skin deformation.
  • **Machine Learning:** The system uses machine learning processes to analyze the data training set and track facial movements or reconstruct facial expressions accurately.
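The machine-learning step described above can be illustrated with a toy regression: map per-frame sensor features (e.g., skin-deformation descriptors from the wearable) to facial expression parameters learned from frontal-view ground truth. Everything in this sketch — the feature dimensions, the blendshape-style output, and the linear least-squares model — is an illustrative assumption, not the patent's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 training frames, 16 sensor features per frame,
# 8 expression parameters (blendshape-like weights) per frame.
n_frames, n_features, n_params = 200, 16, 8

X = rng.normal(size=(n_frames, n_features))       # wearable sensor features
W_true = rng.normal(size=(n_features, n_params))  # unknown mapping to recover
Y = X @ W_true                                    # labels from frontal-view images

# Least-squares fit: W = argmin ||X @ W - Y||^2
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Inference: reconstruct expression parameters for a new sensor frame.
x_new = rng.normal(size=n_features)
y_pred = x_new @ W
print(np.allclose(W, W_true))  # → True (noiseless, overdetermined system)
```

A real system would replace the linear map with a deep model and noisy features, but the train-on-frontal-images, infer-from-wearable-signals structure is the same.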

Potential Applications: This technology can be used in various fields such as healthcare for monitoring patient emotions, virtual reality for realistic avatars, and security for facial recognition systems.

Problems Solved: This technology addresses the need for accurate facial movement tracking and facial expression reconstruction for applications such as emotion recognition and virtual communication.

Benefits: The benefits of this technology include improved accuracy in tracking facial movements, enhanced realism in facial expression reconstruction, and potential applications in diverse industries.

Commercial Applications: Potential commercial applications of this technology include emotion recognition software, virtual reality systems, and security systems for facial recognition.

Prior Art: Prior art related to this technology may include research on facial recognition systems, machine learning algorithms for facial analysis, and wearable devices for tracking biometric data.

Frequently Updated Research: Researchers are continually exploring advancements in facial recognition technology, machine learning algorithms for facial analysis, and wearable devices for biometric data tracking.

Questions about Facial Movement Tracking and Expression Reconstruction:

  1. How does this technology improve the accuracy of facial movement tracking compared to traditional methods?
  2. What are the potential privacy concerns associated with using wearable devices equipped with cameras for facial expression reconstruction?


Original Abstract Submitted

This technology provides systems and methods for tracking facial movements and reconstructing facial expressions by learning skin deformation patterns and facial features. Frontal view images of a user making a variety of facial expressions are acquired to create a data training set for use in a machine-learning process. Head-mounted or neck-mounted wearable devices are equipped with one or more camera(s) or acoustic device(s) in communication with a data processing system. The cameras capture images of contours of the user's face from either the cheekbone or the chin profile of the user. The acoustic devices transmit and receive signals to calculate a representation of the skin deformation. A data processing system uses the images, the profile of the contours, or skin deformation to track facial movement or to reconstruct facial expressions of the user based on the data training set.
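The "transmit and receive signals to calculate a representation of the skin deformation" step is commonly realized in acoustic sensing by matched filtering: correlate the received echo against the transmitted waveform to get an echo profile whose peaks track reflecting surfaces. The sketch below assumes a linear chirp and a single simulated reflection; the band, duration, and delay are made-up parameters for illustration, not values from the patent.

```python
import numpy as np

fs = 48_000                      # sample rate (Hz), assumed
dur = 0.005                      # 5 ms chirp, assumed
t = np.arange(int(fs * dur)) / fs

# Linear chirp sweeping 17-20 kHz (a near-ultrasonic band often used
# in acoustic sensing so the signal is barely audible).
f0, f1 = 17_000, 20_000
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * dur) * t**2))

# Simulate a received signal: the chirp reflected off the skin,
# attenuated and delayed by a hypothetical round trip of 30 samples.
delay_samples = 30
received = np.zeros(len(chirp) + 100)
received[delay_samples:delay_samples + len(chirp)] += 0.5 * chirp

# Matched filter (cross-correlation) -> echo profile. The peak index
# estimates the round-trip delay; frame-to-frame changes in the profile
# form a representation of skin deformation.
profile = np.correlate(received, chirp, mode="valid")
est_delay = int(np.argmax(profile))
print(est_delay)  # → 30
```

In a wearable pipeline this profile (or a sequence of profiles) would be the sensor feature fed into the learned model that reconstructs the facial expression.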