18609609. Inferred Shading simplified abstract (Apple Inc.)
Inventor(s)
Andrew P. Mason of Cottesloe (AU)
Olivier Soares of San Jose CA (US)
Haarm-Pieter Duiker of San Francisco CA (US)
John S. McCarten of Wellington (NZ)
Inferred Shading - A simplified explanation of the abstract
This abstract first appeared for US patent application 18609609, titled 'Inferred Shading'.
The patent application involves rendering an avatar in a selected environment using inputs such as expression geometry, head pose, camera angle, and a lighting representation. The key steps are:
- Determining inputs for an inferred shading network, including expression geometry, head pose, camera angle, and lighting representation.
- Generating a texture of a face for the avatar using the inferred shading network.
- Obtaining the lighting representation as lighting latent variables from an environment autoencoder trained on images captured under various lighting conditions.
- Utilizing the generated texture and lighting representation to render the avatar in the selected environment.
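The steps above can be sketched in code. This is a minimal illustration only: the patent does not disclose network architecture or input dimensions, so the function below stands in for the inferred shading network with a fixed random linear layer, and every shape (blendshape count, pose vector, latent size, texture resolution) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def inferred_shading_network(expression_geom, head_pose, camera_angle,
                             lighting_latent, texture_shape=(8, 8, 3)):
    """Hypothetical stand-in for the inferred shading network: concatenates
    the avatar inputs and maps them to an RGB face texture."""
    x = np.concatenate([expression_geom, head_pose, camera_angle, lighting_latent])
    # A fixed random projection plays the role of an untrained network.
    w = rng.standard_normal((x.size, int(np.prod(texture_shape))))
    texture = 1.0 / (1.0 + np.exp(-(x @ w)))  # sigmoid keeps texels in [0, 1]
    return texture.reshape(texture_shape)

# Example inputs; all sizes are illustrative assumptions, not from the patent.
expression_geom = rng.standard_normal(16)  # e.g. blendshape coefficients
head_pose       = rng.standard_normal(6)   # rotation + translation
camera_angle    = rng.standard_normal(3)   # view direction
lighting_latent = rng.standard_normal(8)   # from an environment autoencoder

texture = inferred_shading_network(expression_geom, head_pose,
                                   camera_angle, lighting_latent)
print(texture.shape)  # (8, 8, 3)
```

The generated texture would then be applied to the avatar's face mesh when rendering it in the selected environment.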
Potential Applications: This technology can be used in virtual reality applications, video games, and digital animation to create realistic avatars in different environments.
Problems Solved: This technology addresses the challenge of accurately rendering avatars with realistic textures and lighting in various environments.
Benefits: The technology allows for more immersive and visually appealing virtual experiences by enhancing the realism of avatars in different settings.
Commercial Applications: Enhanced avatar rendering technology for virtual reality applications.
Questions about Avatar Rendering Technology:
1. How does this technology improve the realism of avatars in virtual environments?
2. What key factors contribute to the accurate rendering of avatars under different lighting conditions?
Original Abstract Submitted
Rendering an avatar in a selected environment may include determining as inputs into an inferred shading network, an expression geometry to be represented by an avatar, head pose, and camera angle, along with a lighting representation for the selected environment. The inferred shading network may then generate a texture of a face to be utilized in rendering the avatar. The lighting representation may be obtained as lighting latent variables which are obtained from an environment autoencoder trained on environment images with various lighting conditions.
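The abstract's environment autoencoder can also be sketched. The patent gives no architecture, so this sketch uses a linear autoencoder, whose optimal encoder/decoder can be computed in closed form via SVD (equivalent to PCA); the synthetic data, image size, and latent dimension are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for environment images under varied lighting
# (flattened 12x12 grayscale maps; real data would be environment maps).
n_images, n_pixels, n_latent = 64, 144, 8
images = rng.standard_normal((n_images, n_pixels))

# For a linear autoencoder, the top principal components give the
# "trained" encoder and decoder weights directly.
mean = images.mean(axis=0)
_, _, vt = np.linalg.svd(images - mean, full_matrices=False)
encoder = vt[:n_latent].T  # pixels -> lighting latent variables
decoder = vt[:n_latent]    # latent -> reconstructed environment

def lighting_latents(image):
    """Encode one environment image into lighting latent variables."""
    return (image - mean) @ encoder

z = lighting_latents(images[0])
print(z.shape)  # (8,)
```

In the patent's pipeline, latents like `z` would serve as the lighting representation fed into the inferred shading network.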