17846918. LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS simplified abstract (Snap Inc.)

From WikiPatents

LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS

Organization Name

Snap Inc.

Inventor(s)

Menglei Chai of Los Angeles CA (US)

Sergey Demyanov of Santa Monica CA (US)

Yunqing Hu of Los Angeles CA (US)

Istvan Marton of Encino CA (US)

Daniil Ostashev of London CA (US)

Aleksei Podkin of Santa Monica CA (US)

LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17846918, titled 'LIGHT ESTIMATION METHOD FOR THREE-DIMENSIONAL (3D) RENDERED OBJECTS'.

Simplified Explanation

The patent application describes a method for applying lighting conditions to a virtual object in an augmented reality (AR) device. Here are the key points:

  • The method involves using a camera on a mobile device to capture an image.
  • A virtual object corresponding to an object in the image is accessed.
  • Lighting parameters of the virtual object are identified using a pre-trained machine learning model.
  • The machine learning model is trained with a paired dataset that includes synthetic source data and synthetic target data.
  • The synthetic source data includes environment maps and 3D scans of items depicted in those environment maps.
  • The synthetic target data includes a synthetic sphere image rendered in the same environment map.
  • The identified lighting parameters are then applied to the virtual object.
  • The shaded virtual object is displayed as a layer on top of the image in the display of the mobile device.
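The steps above can be sketched as a small pipeline. This is only an illustrative outline, not the patented method: the model, the shading rule, and all function names are hypothetical stand-ins (here the "model" returns a single intensity value, and shading is a simple scale-and-blend).

```python
import numpy as np

def estimate_lighting(image: np.ndarray, model) -> np.ndarray:
    """Run a pre-trained light-estimation model on a camera frame.
    `model` is assumed to map an RGB image to a small vector of
    lighting parameters (hypothetical interface)."""
    return model(image)

def shade_and_composite(frame, virtual_layer, alpha, light_params):
    """Scale the rendered virtual object by the estimated light
    intensity, then alpha-blend it over the camera frame."""
    intensity = float(np.clip(light_params.mean(), 0.0, 1.0))
    shaded = virtual_layer * intensity
    return alpha * shaded + (1.0 - alpha) * frame

# Toy stand-in model: mean image brightness as the sole lighting parameter.
toy_model = lambda img: np.array([img.mean()])

frame = np.full((4, 4, 3), 0.5)   # captured camera image
layer = np.ones((4, 4, 3))        # rendered virtual object
alpha = np.full((4, 4, 1), 1.0)   # object covers every pixel

params = estimate_lighting(frame, toy_model)
out = shade_and_composite(frame, layer, alpha, params)
```

In a real AR pipeline the lighting parameters would typically be richer (e.g. spherical-harmonic coefficients or an HDR environment estimate) and the shading would run in the renderer, but the capture → estimate → shade → composite flow is the same.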

Potential applications of this technology:

  • Augmented reality applications can benefit from realistic lighting conditions applied to virtual objects, enhancing the overall user experience.
  • This method can be used in gaming applications to provide more immersive and visually appealing virtual objects.
  • It can also be used in architectural and interior design applications to visualize how virtual objects would look in different lighting conditions.

Problems solved by this technology:

  • Traditional methods of applying lighting conditions to virtual objects in AR devices may not provide realistic or accurate results.
  • This method utilizes a machine learning model trained with a paired dataset to accurately identify and apply lighting parameters, resulting in more realistic virtual objects.

Benefits of this technology:

  • Users of AR devices can experience more realistic and visually appealing virtual objects with accurate lighting conditions.
  • The method is efficient and can be implemented on mobile devices, allowing for real-time application of lighting parameters to virtual objects.
  • By using a machine learning model, the method can adapt to different lighting conditions and provide consistent results across various environments.


Original Abstract Submitted

A method for applying lighting conditions to a virtual object in an augmented reality (AR) device is described. In one aspect, the method includes generating, using a camera of a mobile device, an image, accessing a virtual object corresponding to an object in the image, identifying lighting parameters of the virtual object based on a machine learning model that is pre-trained with a paired dataset, the paired dataset includes synthetic source data and synthetic target data, the synthetic source data includes environment maps and 3D scans of items depicted in the environment map, the synthetic target data includes a synthetic sphere image rendered in the same environment map, applying the lighting parameters to the virtual object, and displaying, in a display of the mobile device, the shaded virtual object as a layer to the image.
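The paired dataset described in the abstract couples each synthetic source sample (an environment map plus a rendered 3D scan) with a target sphere rendered under the same environment map. The sketch below illustrates only that pairing structure; the renderer is a deliberately trivial stand-in (it modulates geometry by the environment map's average colour), and all names are hypothetical.

```python
import numpy as np

def render_in_environment(geometry: np.ndarray, env_map: np.ndarray) -> np.ndarray:
    """Hypothetical renderer stand-in: modulate the geometry's albedo
    by the average colour of the environment map."""
    return geometry * env_map.mean(axis=(0, 1), keepdims=True)

def build_paired_dataset(env_maps, scans):
    """Pair each (environment map, 3D-scan render) source sample with
    a sphere rendered under the SAME environment map as the target."""
    unit_sphere = np.ones((8, 8, 3))  # stand-in for a sphere render
    dataset = []
    for env, scan in zip(env_maps, scans):
        source = (env, render_in_environment(scan, env))
        target = render_in_environment(unit_sphere, env)
        dataset.append((source, target))
    return dataset

# Two toy environment maps (dim and bright) with matching scans.
env_maps = [np.full((16, 32, 3), v) for v in (0.2, 0.8)]
scans = [np.ones((8, 8, 3)) for _ in env_maps]
pairs = build_paired_dataset(env_maps, scans)
```

Because source and target share the environment map, a model trained on such pairs can learn to map an arbitrary rendered object to the lighting that a canonical sphere probe would exhibit in the same scene.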