18302622. SYSTEM AND METHOD FOR PARALLAX CORRECTION FOR VIDEO SEE-THROUGH AUGMENTED REALITY simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

SYSTEM AND METHOD FOR PARALLAX CORRECTION FOR VIDEO SEE-THROUGH AUGMENTED REALITY

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Yingen Xiong of Mountain View, CA (US)

SYSTEM AND METHOD FOR PARALLAX CORRECTION FOR VIDEO SEE-THROUGH AUGMENTED REALITY - A simplified explanation of the abstract

This abstract first appeared for US patent application 18302622 titled 'SYSTEM AND METHOD FOR PARALLAX CORRECTION FOR VIDEO SEE-THROUGH AUGMENTED REALITY'.

Simplified Explanation

The method described in the patent application generates virtual views from stereo images and depth maps for display on video see-through augmented reality (VST AR) devices; a rough code sketch follows the list below.

  • Obtaining a stereo image pair consisting of a first image and a second image
  • Generating a feature map for each image, including positions extracted for pixels in that image
  • Generating a disparity map between the two images based on a dense depth map
  • Producing a verified depth map through a pixelwise comparison of positions predicted from the disparity map with the extracted positions
  • Generating first and second virtual views, based on the verified depth map, for presentation on the display panel of a VST AR device
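As a rough illustration of these steps, the following Python sketch assumes a rectified stereo pair with known focal length and baseline; the function names, the feature-matching inputs, and the one-pixel tolerance are hypothetical stand-ins, not the patent's actual implementation.

```python
# Minimal sketch of the pipeline above (hypothetical names and parameters;
# assumes a rectified stereo pair with known focal length f and baseline b,
# and a floating-point dense depth map).
import numpy as np

def depth_to_disparity(depth, f, b):
    """Standard rectified-stereo relation: disparity = f * b / depth."""
    return f * b / np.clip(depth, 1e-6, None)

def verify_depth(depth, disparity, left_xy, right_xy, tol=1.0):
    """Keep depth only where the position predicted from the disparity map
    agrees (pixelwise) with the position extracted from the feature maps."""
    verified = depth.copy()
    for (xl, yl), (xr, yr) in zip(left_xy, right_xy):
        xr_pred = xl - disparity[int(yl), int(xl)]   # predicted position in the second image
        if abs(xr_pred - xr) > tol:                  # mismatch -> depth is unreliable here
            verified[int(yl), int(xl)] = np.nan
    return verified

def render_virtual_view(image, depth, f, eye_offset):
    """Very rough forward warp to a laterally shifted virtual viewpoint,
    standing in for the per-eye views shown on a VST AR display panel."""
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        shift = (f * eye_offset / np.clip(depth[y], 1e-6, None)).astype(int)
        x_new = np.clip(xs + shift, 0, w - 1)
        out[y, x_new] = image[y, xs]
    return out
```

In a real VST AR pipeline the view synthesis would also handle occlusions, hole filling, and the full camera-to-eye transform, but the structure mirrors the bullet points above.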

Potential Applications

This technology could be used in virtual reality applications, augmented reality devices, 3D modeling, and immersive gaming experiences.

Problems Solved

This technology addresses the problem of generating accurate, parallax-corrected virtual views from stereo images and depth maps, improving the visual experience for users of VST AR devices.

Benefits

The benefits of this technology include enhanced depth perception, realistic virtual views, and improved user immersion in virtual environments.

Potential Commercial Applications

Potential commercial applications of this technology include virtual reality headsets, augmented reality glasses, gaming consoles, and 3D content creation tools.

Possible Prior Art

Possible prior art includes existing methods for generating virtual views from stereo images and depth maps in the fields of computer vision and graphics.

Unanswered Questions

How does this technology compare to existing methods for generating virtual views based on stereo images and depth maps?

This technology improves upon existing methods by verifying the accuracy of the depth map through a pixelwise comparison of predicted and extracted positions, leading to more realistic virtual views.
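For context, with a rectified stereo pair the predicted position used in this comparison follows from standard epipolar geometry (a general relation, not a detail taken from this filing); a minimal statement of the check, with the tolerance τ as an assumed parameter:

```latex
% Predicted column of left-image pixel (x_l, y) in the right image, using the
% disparity d derived from the dense depth Z (focal length f, baseline b):
\hat{x}_r = x_l - d(x_l, y), \qquad d(x_l, y) = \frac{f\,b}{Z(x_l, y)}
% Depth at (x_l, y) counts as verified only if |\hat{x}_r - x_r| < \tau,
% where x_r is the extracted position of the matching pixel in the second feature map.
```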

What are the potential limitations or challenges of implementing this technology in VST AR devices?

One potential limitation could be the computational resources required to generate virtual views in real-time on VST AR devices, which may impact performance and battery life.


Original Abstract Submitted

A method includes obtaining a stereo image pair including a first image and a second image. The method also includes generating a first feature map of the first image and a second feature map of the second image, the first and second feature maps including extracted positions associated with pixels in the images. The method further includes generating a disparity map between the first and second images based on a dense depth map. The method also includes generating a verified depth map based on a pixelwise comparison of predicted positions and the extracted positions associated with at least some of the pixels in at least one of the images, the predicted positions determined based on the disparity map. In addition, the method includes generating a first virtual view and a second virtual view to present on a display panel of an VST AR device based on the verified depth map.