INFINITE-SCALE CITY SYNTHESIS (US Patent Application 20240221309): simplified abstract
Organization: Unknown
Inventor(s)
Menglei Chai of Los Angeles CA (US)
Hsin-Ying Lee of San Jose CA (US)
Willi Menapace of Santa Monica CA (US)
Aliaksandr Siarohin of Los Angeles CA (US)
Sergey Tulyakov of Santa Monica CA (US)
INFINITE-SCALE CITY SYNTHESIS - A simplified explanation of the abstract
This abstract first appeared for US patent application 20240221309, titled 'INFINITE-SCALE CITY SYNTHESIS'.
The abstract describes an environment synthesis framework that creates virtual environments from a synthesized two-dimensional (2D) satellite map, a three-dimensional (3D) voxel environment, and a voxel-based neural rendering framework.
- The framework uses a map synthesis generative adversarial network (GAN), trained on sample city datasets, to generate a 2D satellite map.
- The 2D map is then lifted into a set of 3D octrees, which are used to generate an octree-based 3D voxel environment.
- A neural rendering GAN converts the voxel environment into a texturized 3D virtual environment using a set of pseudo ground truth images (a minimal pipeline sketch follows this list).
- The resulting virtual environment is lifelike, editable, and traversable in virtual reality (VR) and augmented reality (AR) experiences.
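The application does not disclose an implementation of this three-stage pipeline, so the sketch below is only a structural illustration under stated assumptions: random stand-ins replace the map-synthesis GAN and the neural-rendering GAN, and a dense occupancy grid replaces the octree representation. All function names, shapes, and parameters are hypothetical.

```python
# Minimal structural sketch of the three-stage pipeline described above.
# Every component here is a hypothetical stand-in, not the patent's method.
import numpy as np


def synthesize_satellite_map(size: int = 256, seed: int = 0) -> np.ndarray:
    """Stage 1 stand-in: a map-synthesis GAN would output a 2D satellite-style
    map; here we just sample a random height/semantic field."""
    rng = np.random.default_rng(seed)
    return rng.random((size, size)).astype(np.float32)


def lift_to_voxels(height_map: np.ndarray, max_height: int = 32) -> np.ndarray:
    """Stage 2 stand-in: lift the 2D map into a 3D occupancy grid.
    (The abstract describes an octree-based representation; a dense boolean
    grid is used here only to keep the sketch short.)"""
    heights = (height_map * max_height).astype(np.int32)
    z = np.arange(max_height).reshape(1, 1, max_height)
    return z < heights[:, :, None]  # (H, W, D) boolean occupancy


def render_views(voxels: np.ndarray, n_views: int = 4) -> list[np.ndarray]:
    """Stage 3 stand-in: a neural-rendering GAN would texturize the voxel
    scene; here we return simple top-down depth images as placeholders."""
    depth = voxels.sum(axis=2).astype(np.float32)
    return [np.rot90(depth, k) for k in range(n_views)]


if __name__ == "__main__":
    sat_map = synthesize_satellite_map()
    voxels = lift_to_voxels(sat_map)
    views = render_views(voxels)
    print(sat_map.shape, voxels.shape, len(views))
```

The point of the sketch is only the data flow (2D map, then voxel lift, then rendered views); each stand-in would be replaced by a trained generative model in a real system.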
Potential Applications:
- Virtual city planning and design
- Simulation training for emergency responders
- Entertainment and gaming industry for realistic virtual worlds
Problems Solved:
- Efficient generation of large-scale, detailed virtual environments
- Seamless integration of 2D and 3D data for realistic simulations
Benefits:
- Realistic and immersive virtual environments
- Scalable for various applications
- Enhanced user experience in VR and AR
Commercial Applications: "Virtual Environment Synthesis Framework for Immersive Experiences"
This technology can be used in urban planning, gaming, training simulations, and entertainment, and it has the potential to change the way large virtual environments are created and experienced.
Questions about the technology:
1. How does the framework ensure the generated virtual environments are realistic and lifelike?
2. What sets this environment synthesis framework apart from traditional methods of creating virtual environments?
Original Abstract Submitted
An environment synthesis framework generates virtual environments from a synthesized two-dimensional (2D) satellite map of a geographic area, a three-dimensional (3D) voxel environment, and a voxel-based neural rendering framework. In an example implementation, the synthesized 2D satellite map is generated by a map synthesis generative adversarial network (GAN) which is trained using sample city datasets. The multi-stage framework lifts the 2D map into a set of 3D octrees, generates an octree-based 3D voxel environment, and then converts it into a texturized 3D virtual environment using a neural rendering GAN and a set of pseudo ground truth images. The resulting 3D virtual environment is texturized, lifelike, editable, traversable in virtual reality (VR) and augmented reality (AR) experiences, and very large in scale.
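The abstract emphasizes lifting the 2D map into a set of 3D octrees before rendering, but does not describe how the octrees are built. The following is a generic sparse-voxel octree sketch over a NumPy occupancy grid, with hypothetical helper names; it only illustrates the general idea that homogeneous regions of a city-scale volume collapse into single nodes, which is what makes very large scenes tractable.

```python
# Illustrative only: a tiny octree builder over a boolean occupancy grid,
# not the framework's actual octree construction (which is not disclosed).
import numpy as np
from dataclasses import dataclass
from typing import Optional


@dataclass
class OctreeNode:
    # A leaf stores one occupancy value; an internal node stores 8 children.
    occupied: Optional[bool] = None
    children: Optional[list["OctreeNode"]] = None


def build_octree(grid: np.ndarray) -> OctreeNode:
    """Recursively merge homogeneous cubes; `grid` must be cubic with a
    power-of-two side length."""
    if grid.all():
        return OctreeNode(occupied=True)
    if not grid.any():
        return OctreeNode(occupied=False)
    half = grid.shape[0] // 2
    children = [
        build_octree(grid[x:x + half, y:y + half, z:z + half])
        for x in (0, half) for y in (0, half) for z in (0, half)
    ]
    return OctreeNode(children=children)


def count_nodes(node: OctreeNode) -> int:
    if node.children is None:
        return 1
    return 1 + sum(count_nodes(c) for c in node.children)


if __name__ == "__main__":
    # A toy scene: a ground layer plus one tall block standing in for a building.
    grid = np.zeros((16, 16, 16), dtype=bool)
    grid[:, :, 0] = True          # ground layer
    grid[4:8, 4:8, :8] = True     # a "building"
    root = build_octree(grid)
    print("octree nodes:", count_nodes(root), "vs dense voxels:", grid.size)
```

Running the example shows far fewer octree nodes than dense voxels for this mostly-empty volume, which is the usual motivation for octree-based voxel environments at city scale.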