18055304. METHODS AND DEVICES FOR VIDEO RENDERING FOR VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR) simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

From WikiPatents

METHODS AND DEVICES FOR VIDEO RENDERING FOR VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR)

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Yingen Xiong of Mountain View CA (US)

METHODS AND DEVICES FOR VIDEO RENDERING FOR VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR) - A simplified explanation of the abstract

This abstract first appeared for US patent application 18055304 titled 'METHODS AND DEVICES FOR VIDEO RENDERING FOR VIDEO SEE-THROUGH (VST) AUGMENTED REALITY (AR)'.

Simplified Explanation

The patent application describes a method for capturing and processing images from multiple cameras to generate virtual views for display on different screens. Here are the key points:

  • The method captures an image from each of multiple cameras and associates it with that camera's pose.
  • For each camera, it determines a first contribution of the image to a first virtual view and a second contribution to a second virtual view, where the two views are intended for display on different screens.
  • For each camera, it also computes a confidence map for each virtual view based on the camera's pose and its position relative to the corresponding virtual camera.
  • The first virtual view is generated by combining the first contributions weighted by the first confidence maps across the cameras, and the second virtual view by combining the second contributions weighted by the second confidence maps.

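The patent does not publish code; the confidence-weighted combination step described above could be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the per-pixel normalized blend, and the array shapes are all illustrative, not the patented formulation.

```python
import numpy as np

def blend_virtual_view(contributions, confidence_maps, eps=1e-8):
    """Combine per-camera contributions into one virtual view,
    weighting each pixel by that camera's confidence map.

    contributions  : list of (H, W, 3) float arrays, one per camera
    confidence_maps: list of (H, W) float arrays in [0, 1], one per camera
    """
    num = np.zeros_like(contributions[0])
    den = np.zeros(contributions[0].shape[:2])
    for img, conf in zip(contributions, confidence_maps):
        num += img * conf[..., None]   # confidence-weighted contribution
        den += conf                    # accumulate weights per pixel
    # normalize so the per-pixel weights sum to 1 (eps avoids divide-by-zero)
    return num / (den[..., None] + eps)
```

The same routine would be called twice, once per display: once with the first contributions and first confidence maps, and once with the second.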
Potential applications of this technology:

  • Virtual reality (VR) and augmented reality (AR) systems can benefit from this method by generating realistic virtual views from multiple camera inputs.
  • This method can be used in video conferencing systems to provide different views of participants on different displays.
  • It can also be applied in surveillance systems to generate multiple virtual views from different camera angles.

Problems solved by this technology:

  • This method solves the problem of generating accurate and realistic virtual views by considering the camera pose and position in relation to virtual cameras.
  • It addresses the challenge of combining contributions from multiple cameras to create seamless virtual views.

Benefits of this technology:

  • The method improves the quality and realism of virtual views by considering camera pose and position.
  • It allows for the generation of multiple virtual views for different displays, enhancing the user experience.
  • Weighting each camera's contribution by a per-view confidence map provides a principled way of combining inputs from multiple cameras.


Original Abstract Submitted

A method includes capturing an image and associating the image with a camera pose for each of multiple cameras. The method also includes determining, for each camera, a first contribution of the image for a first virtual view for display on a first display and a second contribution of the image for a second virtual view for display on a second display. The method further includes determining, for each camera, a first confidence map for the first virtual view based on the camera pose and a position of the camera in relation to a first virtual camera and a second confidence map for the second virtual view based on the camera pose and the position of the camera in relation to a second virtual camera. In addition, the method includes generating the first virtual view by combining the first contribution using the first confidence map for each of the cameras and the second virtual view by combining the second contribution using the second confidence map for each of the cameras.
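The abstract describes confidence maps derived from each camera's pose and its position relative to a virtual camera, but does not specify the function. A toy sketch of one plausible form (the exponential distance falloff, the direction-alignment term, and the `sigma` parameter are illustrative assumptions, not the patented method):

```python
import numpy as np

def confidence_map(height, width, cam_pose, virt_pose, sigma=0.1):
    """Toy per-pixel confidence for one camera and one virtual view.

    cam_pose, virt_pose: (position xyz, forward unit vector) tuples.
    Confidence falls off with translational offset and angular
    misalignment between the physical and virtual cameras.
    """
    cam_p, cam_d = (np.asarray(a, dtype=float) for a in cam_pose)
    virt_p, virt_d = (np.asarray(a, dtype=float) for a in virt_pose)
    dist = np.linalg.norm(cam_p - virt_p)              # positional offset
    align = np.clip(np.dot(cam_d, virt_d), -1.0, 1.0)  # viewing-direction agreement
    score = np.exp(-dist / sigma) * max(align, 0.0)    # scalar confidence in [0, 1]
    return np.full((height, width), score)             # constant map for simplicity
```

A real implementation would vary the confidence per pixel (for example, based on reprojection geometry or occlusion); the constant map here only shows where the pose and position terms enter.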