18513716. METHOD FOR RENDERING RELIGHTED 3D PORTRAIT OF PERSON AND COMPUTING DEVICE FOR THE SAME simplified abstract (Samsung Electronics Co., Ltd.)

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Artem Mikhailovich Sevastopolskiy of Moscow (RU)

Victor Sergeevich Lempitsky of Moscow (RU)

METHOD FOR RENDERING RELIGHTED 3D PORTRAIT OF PERSON AND COMPUTING DEVICE FOR THE SAME - A simplified explanation of the abstract

This abstract first appeared for US patent application 18513716, titled 'METHOD FOR RENDERING RELIGHTED 3D PORTRAIT OF PERSON AND COMPUTING DEVICE FOR THE SAME'.

Simplified Explanation

The disclosure describes a method, and a computing device implementing it, for rendering a relightable 3D portrait of a person with a deep neural network. The method makes it possible to obtain, in real time and on devices with limited processing resources, realistically relighted 3D portraits whose quality matches or exceeds that of prior solutions, without complex and costly capture equipment. The pipeline works as follows: a sequence of flash and no-flash images is captured by moving a camera with a blinking flash at least partly around the person's upper body; a 3D point cloud is generated from this sequence; given an input camera viewpoint and lighting conditions, latent descriptors attached to the point cloud are rasterized at several resolutions; a deep neural network processes the rasterized images to predict albedo, surface normals, environmental shadow maps, and a segmentation mask for that viewpoint; and these predictions are fused into the relighted 3D portrait according to the requested lighting conditions.
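The "rasterizing latent descriptors at different resolutions" step can be pictured as splatting per-point feature vectors onto image grids of several sizes. The sketch below is a hypothetical NumPy illustration, not the patented implementation: the abstract does not give the camera model, splatting rule, or descriptor dimensionality, so simple z-buffered nearest-pixel splatting of pre-projected points is assumed, with illustrative function names.

```python
import numpy as np

def rasterize_descriptors(points, descriptors, resolution):
    """Splat per-point latent descriptors onto an image grid (z-buffered).

    Hypothetical sketch: points are assumed already projected to
    normalized coordinates in [0, 1] x [0, 1], each with a depth value.

    points:      (N, 3) array of (x, y, depth)
    descriptors: (N, D) latent descriptor per point
    resolution:  output side length R -> raster of shape (R, R, D)
    """
    n, d = descriptors.shape
    raster = np.zeros((resolution, resolution, d))
    zbuf = np.full((resolution, resolution), np.inf)
    # Map normalized coordinates to pixel indices.
    px = np.clip((points[:, 0] * resolution).astype(int), 0, resolution - 1)
    py = np.clip((points[:, 1] * resolution).astype(int), 0, resolution - 1)
    for i in range(n):
        # Keep the closest point per pixel (simple z-test).
        if points[i, 2] < zbuf[py[i], px[i]]:
            zbuf[py[i], px[i]] = points[i, 2]
            raster[py[i], px[i]] = descriptors[i]
    return raster

def rasterize_pyramid(points, descriptors, resolutions=(16, 32, 64)):
    """Rasterize the same cloud at several resolutions, as the method
    describes, giving the network both coarse and fine views."""
    return [rasterize_descriptors(points, descriptors, r) for r in resolutions]
```

The multi-resolution output gives the downstream network context at several scales; the real method may use a more sophisticated splatting or blending rule.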

  • Realistically relighted 3D portraits can be generated in real time on computing devices with limited processing resources.
  • The method captures a sequence of flash and no-flash images by moving a camera with a blinking flash around the subject's upper body, from which a 3D point cloud is generated.
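The final fusing step combines the predicted maps under the requested lighting. The abstract does not specify the shading model, so the sketch below assumes a simple Lambertian (diffuse) model for a single directional light; the function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def fuse_relighted_portrait(albedo, normals, shadow_map, mask, light_dir, light_color):
    """Fuse per-pixel predictions into a relighted image.

    Hypothetical sketch assuming Lambertian (diffuse) shading.
    albedo:      (H, W, 3) base color in [0, 1]
    normals:     (H, W, 3) unit surface normals
    shadow_map:  (H, W)    environmental shadow factor in [0, 1]
    mask:        (H, W)    segmentation mask (1 = person, 0 = background)
    light_dir:   (3,)      unit direction toward the light
    light_color: (3,)      RGB light intensity
    """
    # Diffuse term: clamp negative dot products (surfaces facing away).
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)      # (H, W)
    shading = n_dot_l[..., None] * light_color             # (H, W, 3)
    # Attenuate by environmental shadows and keep only the person region.
    image = albedo * shading * shadow_map[..., None]
    return image * mask[..., None]
```

Because albedo, normals, and shadows are predicted separately, the light direction and color can be changed at render time without re-running capture, which is what makes the portrait "relightable".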

Potential Applications

The technology can be applied in various industries such as entertainment, virtual reality, gaming, and digital art for creating relightable 3D portraits of individuals.

Problems Solved

This technology solves the problem of generating high-quality relighted 3D portraits in real time without the need for complex and costly equipment.

Benefits

The benefits of this technology include the ability to create realistic relighted 3D portraits efficiently and effectively, even on devices with limited processing resources.

Potential Commercial Applications

Commercial applications of this technology include software development for creating relightable 3D portraits, virtual reality experiences, and gaming applications.

Possible Prior Art

Prior art solutions for generating relighted 3D portraits may involve complex equipment and extensive processing resources, which can be costly and time-consuming.

Unanswered Questions

How does the deep neural network process the rasterized images to predict albedo, normals, environmental shadow maps, and segmentation mask?

The abstract does not detail the specific architecture or algorithms the network uses to turn the rasterized images into these predictions. More information on the network architecture and training process would clarify this aspect of the technology.

What are the specific limitations of the computing devices with limited processing resources when generating relighted 3D portraits in real time?

The abstract claims real-time operation on devices with limited processing resources but does not quantify those limits. Knowing the specific constraints (e.g., memory or compute budget) under which the method still runs in real time would help assess its practical applications.


Original Abstract Submitted

The disclosure provides a method for generating relightable 3D portrait using a deep neural network and a computing device implementing the method. A possibility of obtaining, in real time and on computing devices having limited processing resources, realistically relighted 3D portraits having quality higher or at least comparable to quality achieved by prior art solutions, but without utilizing complex and costly equipment is provided. A method for rendering a relighted 3D portrait of a person, the method including: receiving an input defining a camera viewpoint and lighting conditions, rasterizing latent descriptors of a 3D point cloud at different resolutions based on the camera viewpoint to obtain rasterized images, wherein the 3D point cloud is generated based on a sequence of images captured by a camera with a blinking flash while moving the camera at least partly around an upper body, the sequence of images comprising a set of flash images and a set of no-flash images, processing the rasterized images with a deep neural network to predict albedo, normals, environmental shadow maps, and segmentation mask for the received camera viewpoint, and fusing the predicted albedo, normals, environmental shadow maps, and segmentation mask into the relighted 3D portrait based on the lighting conditions.