Snap Inc. (20240314461). VISUAL-INERTIAL TRACKING USING ROLLING SHUTTER CAMERAS: simplified abstract

VISUAL-INERTIAL TRACKING USING ROLLING SHUTTER CAMERAS

Organization Name

Snap Inc.

Inventor(s)

Matthias Kalkgruber of Vienna (AT)

Erick Mendez Mendez of Vienna (AT)

Daniel Wagner of Vienna (AT)

Daniel Wolf of Mödling (AT)

Kai Zhou of Wiener Neudorf (AT)

VISUAL-INERTIAL TRACKING USING ROLLING SHUTTER CAMERAS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240314461 titled 'VISUAL-INERTIAL TRACKING USING ROLLING SHUTTER CAMERAS'.

Simplified Explanation: The patent application describes a method for tracking the position of an eyewear device using a rolling shutter camera and motion sensing technology.

Key Features and Innovation:

- Implementation of visual-inertial tracking for the eyewear device.
- Computation of multiple poses for the rolling shutter camera based on the sensed movement of the device.
- Selection of a computed pose for each feature point in the captured image by matching the feature's capture time.
- Determination of the device's position within the environment using the feature points and their selected computed poses (a minimal code sketch follows this list).
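The application itself does not publish code; the following Python sketch illustrates one plausible reading of these steps, assuming constant angular and linear velocity over the short rolling-shutter readout window and first-order rotation integration. All function and parameter names (compute_scanline_poses, select_pose_for_feature, pose_count_for_motion, row_time, and so on) are hypothetical, not taken from the patent.

```python
import numpy as np

def compute_scanline_poses(initial_pose, angular_velocity, linear_velocity,
                           frame_start, readout_time, num_poses):
    """Propagate the camera pose across the rolling-shutter readout window.

    initial_pose     : 4x4 camera-to-world matrix at frame_start
    angular_velocity : gyro reading (rad/s), assumed constant over readout
    linear_velocity  : translational velocity (m/s), likewise constant
    Returns a list of (timestamp, 4x4 pose) pairs.
    """
    poses = []
    for i in range(num_poses):
        t = frame_start + readout_time * i / max(num_poses - 1, 1)
        dt = t - frame_start
        w = angular_velocity * dt                 # small rotation vector
        skew = np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])
        delta = np.eye(4)
        delta[:3, :3] = np.eye(3) + skew          # first-order rotation update
        delta[:3, 3] = linear_velocity * dt       # translation from velocity
        poses.append((t, initial_pose @ delta))
    return poses

def select_pose_for_feature(poses, feature_row, frame_start, row_time):
    """Match a feature point to the computed pose whose timestamp is closest
    to the moment the feature's image row was exposed."""
    capture_time = frame_start + feature_row * row_time
    return min(poses, key=lambda tp: abs(tp[0] - capture_time))[1]

def pose_count_for_motion(angular_speed, base=2, per_rad_s=10.0, cap=30):
    """The abstract notes the number of computed poses is responsive to the
    sensed movement; here, faster rotation yields more intermediate poses.
    The constants are purely illustrative."""
    return min(cap, base + int(per_rad_s * angular_speed))
```

Matching each feature to the pose computed nearest its row's exposure time is what compensates for the rolling-shutter distortion that a single per-frame pose would leave uncorrected.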

Potential Applications: This technology could be used in augmented reality applications, navigation systems, and virtual reality experiences.

Problems Solved:

- Accurately tracking the position of a mobile device in real time.
- Enhancing the user experience in AR and VR environments.
- Improving the efficiency of navigation systems.

Benefits:

- Precise positioning of the eyewear device.
- Seamless integration of visual and inertial tracking.
- Enhanced user interaction in virtual environments.

Commercial Applications: The technology could be applied in AR glasses, gaming devices, and sports performance analysis tools, offering innovative solutions for various industries.

Prior Art: Earlier work on visual-inertial tracking systems and motion sensing technologies provides context for the development and applications of this innovation.

Frequently Updated Research: Stay updated on advancements in visual-inertial tracking systems, motion sensing algorithms, and AR/VR technologies to explore new possibilities for this technology.

Questions about visual-inertial tracking:

1. How does visual-inertial tracking differ from traditional GPS-based tracking systems?
2. What are the potential limitations of visual-inertial tracking technology in dynamic environments?

Question 1: How does visual-inertial tracking differ from traditional GPS-based tracking systems?

Answer 1: Visual-inertial tracking combines visual information from cameras with inertial data from motion sensors to track a device's position in real time. This offers higher accuracy and reliability than GPS-based systems, especially indoors or in obstructed environments where GPS signals are weak or unavailable.
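As a rough illustration of this fusion (not the patent's method), here is a minimal sketch in which the IMU dead-reckons position between camera frames and each camera-derived fix is blended back in with a fixed gain; a real visual-inertial system would use a proper estimator such as an extended Kalman filter. The function names and the gain value are hypothetical.

```python
import numpy as np

def propagate_with_imu(position, velocity, accel_world, dt):
    """Dead-reckon between camera frames using gravity-compensated
    acceleration expressed in the world frame (assumed constant over dt)."""
    new_velocity = velocity + accel_world * dt
    new_position = position + velocity * dt + 0.5 * accel_world * dt ** 2
    return new_position, new_velocity

def fuse_visual_fix(predicted_position, visual_position, gain=0.7):
    """Blend the inertial prediction with the camera-derived position; the
    fixed gain stands in for a proper filter's computed weighting."""
    return gain * visual_position + (1.0 - gain) * predicted_position
```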

Question 2: What are the potential limitations of visual-inertial tracking technology in dynamic environments?

Answer 2: Visual-inertial tracking may face challenges in rapidly changing or highly dynamic environments where sudden movements or occlusions can affect the accuracy of tracking. Calibration and synchronization issues between the visual and inertial sensors may also impact the performance of the system in such scenarios.
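One common mitigation for the synchronization issue, sketched below under the assumption that the camera-IMU clock offset has already been estimated elsewhere, is to shift the IMU timestamps by that offset and interpolate the inertial samples onto image-row times. The helper name align_imu_to_rows is hypothetical.

```python
import numpy as np

def align_imu_to_rows(imu_times, imu_gyro, row_times, time_offset):
    """Resample gyro readings onto rolling-shutter row timestamps after
    applying an estimated camera-IMU clock offset.

    imu_times : increasing 1-D array of IMU sample times
    imu_gyro  : (N, 3) array of gyro samples (rad/s)
    row_times : 1-D array of image-row exposure times
    Returns an (len(row_times), 3) array of interpolated gyro values.
    """
    shifted = np.asarray(imu_times) + time_offset
    gyro = np.asarray(imu_gyro)
    return np.stack([np.interp(row_times, shifted, gyro[:, k])
                     for k in range(gyro.shape[1])], axis=1)
```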


Original Abstract Submitted

Visual-inertial tracking of an eyewear device using a rolling shutter camera(s). The device includes a position determining system. Visual-inertial tracking is implemented by sensing motion of the device. An initial pose is obtained for a rolling shutter camera and an image of an environment is captured. The image includes feature points captured at a particular capture time. A number of poses for the rolling shutter camera is computed based on the initial pose and sensed movement of the device. The number of computed poses is responsive to the sensed movement of the mobile device. A computed pose is selected for each feature point in the image by matching the particular capture time for the feature point to the particular computed time for the computed pose. The position of the mobile device is determined within the environment using the feature points and the selected computed poses for the feature points.