18520255. USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS simplified abstract (Snap Inc.)

USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS

Organization Name

Snap Inc.

Inventor(s)

Amir Alavi of Los Angeles CA (US)

Olha Rykhliuk of Marina Del Rey CA (US)

Xintong Shi of Los Angeles CA (US)

Jonathan Solichin of Arcadia CA (US)

Olesia Voronova of Santa Monica CA (US)

Artem Yagodin of Playa del Rey CA (US)

USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18520255 titled 'USER INTERFACE FOR POSE DRIVEN VIRTUAL EFFECTS'.

Simplified Explanation

The abstract describes a method for capturing video in real time and applying virtual effects driven by pose information detected in the video. The system:

  • Provides visual pose hints to the user.
  • Identifies first pose information while the video is being captured.
  • Applies a first series of virtual effects to the video.
  • Identifies second pose information.
  • Applies a second series of virtual effects that is based on the first series (see the sketch after this list).
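
Read as a pipeline, this is essentially a small state machine over incoming frames. The following is a minimal, illustrative Python sketch of that flow, not the patent's implementation: the Pose and Frame types, the specific pose names, and the effect names are invented for the example, and a real system would take frames from a camera and classify poses with a pose-estimation model rather than read hard-coded labels.

  from dataclasses import dataclass
  from enum import Enum, auto
  from typing import Iterable, List

  # Hypothetical pose labels; the application does not name specific poses.
  class Pose(Enum):
      ARMS_RAISED = auto()
      HANDS_ON_HIPS = auto()
      UNKNOWN = auto()

  @dataclass
  class Frame:
      index: int
      detected_pose: Pose  # in practice, the output of a pose-estimation model

  def apply_effects(frame: Frame, effects: List[str]) -> None:
      # Placeholder for rendering; a real system would composite the effects onto the frame.
      print(f"frame {frame.index}: {effects}")

  def run_capture(frames: Iterable[Frame]) -> None:
      # Illustrative effect names only; the second series extends the first,
      # mirroring "based on the first series of virtual effects" in the abstract.
      first_series = ["confetti_burst", "color_shift"]
      second_series = first_series + ["slow_motion_trail"]

      print("hint: raise both arms")                    # visual pose hint for stage 1
      stage = 1
      for frame in frames:
          if stage == 1 and frame.detected_pose is Pose.ARMS_RAISED:
              stage = 2                                 # first pose information identified
              print("hint: put hands on hips")          # visual pose hint for stage 2
          if stage == 1:
              apply_effects(frame, [])                  # nothing applied before the first pose
          elif frame.detected_pose is Pose.HANDS_ON_HIPS:
              apply_effects(frame, second_series)       # second pose identified: second series
          else:
              apply_effects(frame, first_series)        # first series while waiting for pose 2

  if __name__ == "__main__":
      run_capture([Frame(0, Pose.UNKNOWN), Frame(1, Pose.ARMS_RAISED),
                   Frame(2, Pose.UNKNOWN), Frame(3, Pose.HANDS_ON_HIPS)])

The only dependency the abstract commits to is that the second series is based on the first; the sketch expresses that by building the second effect list from the first one.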

Potential Applications

This technology could be used in various applications such as:

  • Virtual reality experiences
  • Video conferencing
  • Gaming

Problems Solved

This technology solves the following problems:

  • Enhancing user experience in video applications
  • Adding interactive elements to videos
  • Providing real-time virtual effects based on pose information

Benefits

The benefits of this technology include:

  • Enhanced visual effects in real-time video
  • Improved user engagement
  • Customizable virtual effects based on pose information

Potential Commercial Applications

With its ability to enhance user experience and engagement, this technology could be applied in various commercial settings such as:

  • Entertainment industry
  • Marketing and advertising
  • Telecommunications

Possible Prior Art

One example of possible prior art is the use of motion-capture technology in video games to track and animate characters based on players' real-life movements.

Unanswered Questions

How does the system accurately identify poses and apply the corresponding virtual effects in real time?

The abstract does not detail how accurately or efficiently the system identifies pose information and applies virtual effects in real time.
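
The application does not describe a pose-identification technique, but as a general illustration, real-time systems commonly classify a pose from body keypoints produced per frame by a pose-estimation model. The heuristic below is only such an illustration; the keypoint names and coordinates are hypothetical and not taken from the patent.

  from typing import Dict, Tuple

  # Hypothetical 2D keypoints (x, y) in normalized image coordinates, with y increasing downward.
  Keypoints = Dict[str, Tuple[float, float]]

  def arms_raised(kp: Keypoints) -> bool:
      # Simple heuristic: both wrists sit above their corresponding shoulders (smaller y).
      return (kp["left_wrist"][1] < kp["left_shoulder"][1]
              and kp["right_wrist"][1] < kp["right_shoulder"][1])

  if __name__ == "__main__":
      sample = {
          "left_shoulder": (0.40, 0.50), "right_shoulder": (0.60, 0.50),
          "left_wrist": (0.35, 0.30), "right_wrist": (0.65, 0.28),
      }
      print(arms_raised(sample))  # True: both wrists are above the shoulders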

What specific virtual effects are included in the first and second series applied to the video?

The abstract does not specify which virtual effects make up either series.


Original Abstract Submitted

Systems and methods herein describe a method for capturing a video in real-time by an image capture device. The system provides a plurality of visual pose hints, identifies first pose information in the video while capturing the video, applies a first series of virtual effects to the video, identifies second pose information, and applies a second series of virtual effects to the video, the second series of virtual effects based on the first series of virtual effects.