20240046583. REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)


REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Yingen Xiong of Mountain View, CA (US)

Christopher A. Peri of Mountain View, CA (US)

REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240046583 titled 'REAL-TIME PHOTOREALISTIC VIEW RENDERING ON AUGMENTED REALITY (AR) DEVICE'.

Simplified Explanation

The patent application describes a method for rendering augmented reality scenes using sparse feature vectors. The method obtains images of a scene along with position data of the device that captures them, then determines the position and direction data of camera rays passing through keyframes of those images.
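As a rough illustration of that ray-setup step, the sketch below derives per-pixel ray origins and directions from a keyframe's camera pose and intrinsics. The patent does not publish an implementation; the function name, the pinhole camera model, and the axis convention here are all assumptions.

```python
import numpy as np

def keyframe_rays(c2w, fx, fy, cx, cy, width, height):
    """Per-pixel ray origins and directions for one keyframe.

    c2w: 4x4 camera-to-world pose of the capturing device.
    fx, fy, cx, cy: pinhole intrinsics (focal lengths, principal point).
    """
    i, j = np.meshgrid(np.arange(width, dtype=np.float64),
                       np.arange(height, dtype=np.float64), indexing="xy")
    # Ray directions in the camera frame (pinhole model, -z looking forward).
    dirs = np.stack([(i - cx) / fx, -(j - cy) / fy, -np.ones_like(i)], axis=-1)
    # Rotate directions into world space; all rays share the camera position.
    rays_d = dirs @ c2w[:3, :3].T
    rays_d /= np.linalg.norm(rays_d, axis=-1, keepdims=True)
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d
```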

To create the sparse feature vectors, the method uses a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP, and stores the resulting vectors in a data structure.
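A minimal sketch of how the two networks might look, assuming a NeRF-style split between a position branch and a view-direction branch; the layer widths, feature dimension, and dictionary-based store are illustrative assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

class PositionMLP(nn.Module):
    """Maps a 3D sample position to a view-independent feature vector."""
    def __init__(self, hidden=256, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, xyz):
        return self.net(xyz)

class DirectionMLP(nn.Module):
    """Combines a position feature with a view direction for shading."""
    def __init__(self, feat_dim=32, hidden=128, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, pos_feat, direction):
        return self.net(torch.cat([pos_feat, direction], dim=-1))

pos_mlp, dir_mlp = PositionMLP(), DirectionMLP()
# Placeholder "at least one data structure": a dict keyed by quantized
# position; a real system might use a sparse voxel grid or hash table.
feature_store = {}
```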

When a request to render the scene arrives on an augmented reality device, the method renders the view for the requested viewing direction directly from the sparse feature vectors held in that data structure.
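Continuing the sketch above, rendering a requested view could then reuse the cached position features and run only the lighter direction-dependent MLP per viewing direction. The voxel quantization, cache policy, and averaging "compositing" below are placeholders for whatever the patent's actual data structure and volume rendering do.

```python
import torch

def render_view(rays_o, rays_d, near=0.1, far=4.0, n_samples=32, voxel=0.05):
    """Shade rays from cached position features plus the direction MLP.

    Uses pos_mlp, dir_mlp, and feature_store from the sketch above.
    """
    t = torch.linspace(near, far, n_samples)
    # Sample points along every ray: (n_rays, n_samples, 3).
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]
    feats = []
    for p in pts.reshape(-1, 3):
        key = tuple((p / voxel).floor().int().tolist())  # quantized position
        if key not in feature_store:  # cache miss: run the position MLP once
            feature_store[key] = pos_mlp(p[None])[0].detach()
        feats.append(feature_store[key])
    feats = torch.stack(feats).reshape(*pts.shape[:2], -1)
    dirs = rays_d[:, None, :].expand(-1, n_samples, -1)
    colors = dir_mlp(feats, dirs)  # view-dependent shading per sample
    # Toy compositing: average along the ray; NeRF-style alpha blending
    # with densities would go here in a full renderer.
    return colors.mean(dim=1)
```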

Potential applications of this technology:

  • Augmented reality gaming
  • Navigation and wayfinding in augmented reality
  • Virtual home staging and interior design

Problems solved by this technology:

  • Efficient rendering of augmented reality scenes
  • Accurate alignment of virtual objects with the real world
  • Real-time rendering of complex scenes on AR devices

Benefits of this technology:

  • Improved user experience in augmented reality applications
  • Reduced computational requirements for rendering AR scenes
  • Enhanced realism and immersion in augmented reality experiences


Original Abstract Submitted

A method includes obtaining images of a scene and corresponding position data of a device that captures the images. The method also includes determining position data and direction data associated with camera rays passing through keyframes of the images. The method further includes using a position-dependent multilayer perceptron (MLP) and a direction-dependent MLP to create sparse feature vectors. The method also includes storing the sparse feature vectors in at least one data structure. The method further includes receiving a request to render the scene on an augmented reality (AR) device associated with a viewing direction. In addition, the method includes rendering the scene associated with the viewing direction using the sparse feature vectors in the at least one data structure.