17951943. FULL BODY POSE ESTIMATION THROUGH FEATURE EXTRACTION FROM MULTIPLE WEARABLE DEVICES simplified abstract (Apple Inc.)

Organization Name

Apple Inc.

Inventor(s)

Victoria M. Powell of San Francisco, CA (US)

Wesley W. Zuber of Mountain View, CA (US)

Miki Olivia Hansen of Parsippany, NJ (US)

FULL BODY POSE ESTIMATION THROUGH FEATURE EXTRACTION FROM MULTIPLE WEARABLE DEVICES - A simplified explanation of the abstract

This abstract first appeared for US patent application 17951943 titled 'FULL BODY POSE ESTIMATION THROUGH FEATURE EXTRACTION FROM MULTIPLE WEARABLE DEVICES'.

Simplified Explanation

The patent application describes a method for full body pose estimation using data from multiple wearable devices. Here are the key points (illustrative code sketches for each step follow the list):

  • The method involves collecting point of view (POV) video data and inertial sensor data from multiple wearable devices worn simultaneously by the user.
  • Depth data is also captured to obtain a full body representation of the user.
  • Two-dimensional (2D) keypoints are extracted from the POV video data, and a 2D skeletal model of the user's full body is reconstructed.
  • A three-dimensional (3D) mesh model of the user's full body is generated based on the depth data.
  • The nodes of the 3D mesh model are merged with the inertial sensor data to improve accuracy.
  • The respective orientations of the 2D skeletal model and the 3D mesh model are aligned in a common reference frame.
  • A machine learning model is then used to predict classification types based on the aligned 2D skeletal model and 3D mesh model.
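
The filing does not publish source code, so the sketches below are illustrative only. First, the 2D keypoint step: a minimal NumPy sketch assuming each wearable's POV video has already been run through some 2D pose detector (not specified in the patent) that outputs per-joint pixel coordinates, with NaN rows for joints that device cannot see. The JOINTS and BONES topology is a hypothetical choice.

```python
import numpy as np

# Hypothetical skeleton topology; the patent does not specify one.
JOINTS = ["head", "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_wrist", "r_wrist", "pelvis", "l_knee", "r_knee",
          "l_ankle", "r_ankle"]
BONES = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7),
         (1, 8), (8, 9), (8, 10), (9, 11), (10, 12)]

def merge_keypoints(per_device_kps):
    """Fuse 2D keypoints seen from several wearable POV cameras into one
    full-body skeletal model by averaging each joint over the devices
    that observed it (NaN rows mark unobserved joints)."""
    stacked = np.stack(per_device_kps)    # (devices, joints, 2)
    return np.nanmean(stacked, axis=0)    # (joints, 2)

# Two synthetic devices, each seeing only part of the body.
rng = np.random.default_rng(0)
dev_a = rng.uniform(size=(len(JOINTS), 2)); dev_a[9:] = np.nan  # upper body
dev_b = rng.uniform(size=(len(JOINTS), 2)); dev_b[:4] = np.nan  # lower body
skeleton_2d = merge_keypoints([dev_a, dev_b])
print(skeleton_2d.shape)  # (13, 2): a complete 2D skeletal model
```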
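
Next, generating a 3D mesh from depth data. The patent does not describe its meshing algorithm; a common baseline is to back-project each depth pixel through a pinhole camera model and triangulate neighboring pixels into faces, sketched below (the intrinsics fx, fy, cx, cy are illustrative values).

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) in meters into 3D vertices using a
    pinhole camera model, and triangulate neighboring pixels into faces.
    A minimal stand-in for the depth-to-mesh step; a real system would
    also filter sensor noise and holes."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    xs = (us - cx) * depth / fx
    ys = (vs - cy) * depth / fy
    verts = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)
    # Two triangles per quad of adjacent pixels.
    idx = np.arange(h * w).reshape(h, w)
    a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
    faces = np.concatenate([
        np.stack([a, b, c], -1).reshape(-1, 3),
        np.stack([b, d, c], -1).reshape(-1, 3),
    ])
    return verts, faces

# Synthetic 4x4 depth map at 1 m with toy intrinsics.
depth = np.full((4, 4), 1.0)
verts, faces = depth_to_mesh(depth, fx=500, fy=500, cx=2, cy=2)
print(verts.shape, faces.shape)  # (16, 3) (18, 3)
```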
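
For merging mesh nodes with inertial sensor data, the filing does not name a fusion filter. A complementary filter is one standard choice: it blends gyroscope integration (smooth but drifting) with accelerometer tilt (noisy but drift-free), and the fused orientation can then re-orient the mesh nodes associated with that device. A sketch under those assumptions:

```python
import numpy as np

def fuse_orientation(gyro, accel, dt=0.01, alpha=0.98):
    """Complementary filter: blend integrated gyro rates with the tilt
    implied by gravity in the accelerometer to get a stable roll/pitch
    estimate for one wearable device."""
    roll = pitch = 0.0
    for (gx, gy, _), (ax, ay, az) in zip(gyro, accel):
        acc_roll = np.arctan2(ay, az)                  # tilt from gravity
        acc_pitch = np.arctan2(-ax, np.hypot(ay, az))
        # Trust the gyro short-term, the accelerometer long-term.
        roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
    return roll, pitch

def reorient_nodes(nodes, roll, pitch):
    """Apply the fused orientation to the mesh nodes (N, 3) associated
    with that device, correcting drift in the depth-derived mesh."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return nodes @ (Ry @ Rx).T

# 100 samples of a device held still, tilted ~10 degrees in roll.
gyro = np.zeros((100, 3))
accel = np.tile([0.0, np.sin(0.17), np.cos(0.17)], (100, 1)) * 9.81
roll, pitch = fuse_orientation(gyro, accel)
corrected = reorient_nodes(np.eye(3), roll, pitch)  # three toy mesh nodes
```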
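
Aligning the two models in a common reference frame requires corresponding landmarks in both. Assuming joints can be located on both the 2D skeletal model (lifted to the z = 0 plane) and the 3D mesh, the Kabsch algorithm recovers the rigid rotation and translation between them:

```python
import numpy as np

def kabsch_align(src, dst):
    """Kabsch algorithm: find rotation R and translation t minimizing
    ||R @ src_i + t - dst_i||, i.e. the rigid motion that brings one
    set of landmarks into the other's reference frame."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# 2D skeleton joints lifted to z = 0, aligned to matching mesh landmarks
# that sit rotated 30 degrees and translated relative to the skeleton.
rng = np.random.default_rng(0)
skel = np.hstack([rng.uniform(size=(13, 2)), np.zeros((13, 1))])
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
mesh_joints = skel @ R_true.T + np.array([0.1, -0.2, 0.5])
R, t = kabsch_align(skel, mesh_joints)
print(np.allclose(skel @ R.T + t, mesh_joints))  # True
```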
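
Finally, the classification step. The patent names neither a model architecture nor the classification types; as a stand-in, the sketch below trains a scikit-learn random forest on feature vectors built by concatenating the aligned skeleton and mesh landmark coordinates, using synthetic data and four hypothetical classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in features: 13 aligned 2D skeleton joints plus
# 13 aligned 3D mesh landmarks per sample.
rng = np.random.default_rng(0)
n_samples, n_joints = 200, 13
skel_feats = rng.normal(size=(n_samples, n_joints * 2))
mesh_feats = rng.normal(size=(n_samples, n_joints * 3))
X = np.hstack([skel_feats, mesh_feats])
y = rng.integers(0, 4, n_samples)   # 4 hypothetical classification types

# Train on 150 samples, predict the classes of 5 held-out poses.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:150], y[:150])
print(clf.predict(X[150:155]))      # predicted class labels
```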

Potential applications of this technology:

  • Virtual reality and augmented reality applications can benefit from accurate full body pose estimation, enhancing user immersion and interaction.
  • Fitness and sports tracking applications can use this technology to provide real-time feedback on body movements and exercise performance.
  • Motion capture for animation and gaming can be improved by accurately capturing and replicating full body movements.

Problems solved by this technology:

  • Traditional methods of full body pose estimation often require complex and expensive motion capture systems, limiting their accessibility and practicality.
  • Existing wearable devices may not provide sufficient data for accurate full body pose estimation.
  • This method solves these problems by combining data from multiple wearable devices and incorporating depth data to create a more comprehensive representation of the user's body.

Benefits of this technology:

  • The use of multiple wearable devices improves the accuracy and robustness of full body pose estimation.
  • The integration of depth data allows for a more detailed and realistic representation of the user's body.
  • The alignment of 2D and 3D models in a common reference frame enables more accurate predictions and classifications.
  • This technology provides a more accessible and cost-effective solution for full body pose estimation compared to traditional methods.


Original Abstract Submitted

Embodiments are disclosed for full body pose estimation using features extracted from multiple wearable devices. In an embodiment, a method comprises: obtaining point of view (POV) video data and inertial sensor data from multiple wearable devices worn at the same time by a user; obtaining depth data capturing the user's full body; extracting two-dimensional (2D) keypoints from the POV video data; reconstructing a full body 2D skeletal model from the 2D keypoints; generating a three-dimensional (3D) mesh model of the user's full body based on the depth data; merging nodes of the 3D mesh model with the inertial sensor data; aligning respective orientations of the 2D skeletal model and the 3D mesh model in a common reference frame; and predicting, using a machine learning model, classification types based on the aligned 2D skeletal model and 3D mesh model.