Apple Inc. (20240104693). Deep Learning Based Causal Image Reprojection for Temporal Supersampling in AR/VR Systems simplified abstract

From WikiPatents

Deep Learning Based Causal Image Reprojection for Temporal Supersampling in AR/VR Systems

Organization Name

Apple Inc.

Inventor(s)

Vinay Palakkode of Santa Clara CA (US)

Kaushik Raghunath of Pleasanton CA (US)

Venu M. Duggineni of San Jose CA (US)

Vivaan Bahl of Palo Alto CA (US)

Deep Learning Based Causal Image Reprojection for Temporal Supersampling in AR/VR Systems - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240104693 titled 'Deep Learning Based Causal Image Reprojection for Temporal Supersampling in AR/VR Systems'.

Simplified Explanation

The patent application describes a method for generating synthesized image data on a wearable AR/VR device: captured frames, body position parameters, and scene geometry are fed to a trained network that predicts additional frames, and a predicted gaze position drives foveated rendering of each synthesized frame. The main steps, illustrated by the sketch after the list, are:

  • Capturing one or more frames of a scene at a first frame rate using cameras of a wearable device
  • Determining body position parameters for the frames
  • Obtaining geometry data for the scene
  • Applying the frames, body position parameters, and geometry data to a trained network that predicts one or more additional frames
  • Predicting a future gaze position based on the current body position parameters
  • Rendering the gaze region of a frame at a first resolution in accordance with the predicted gaze position
  • Predicting the peripheral region of the frame at a second resolution
  • Combining the regions to form a frame that drives the display
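
A minimal sketch of these two stages is given below. The function names, data shapes, dummy network, and nearest-neighbour compositing are assumptions made for this page, not details taken from the patent.

```python
"""Illustrative sketch of (1) temporal supersampling via a trained network
and (2) foveated frame composition. All names and shapes are assumptions."""
import numpy as np


def predict_additional_frames(frames, body_params, geometry, network):
    # frames: (N, H, W, 3) captured at the first (lower) frame rate
    # body_params / geometry: parameter vectors describing pose and scene
    # network: any trained callable that returns synthesized frames
    inputs = {"frames": frames, "body_position": body_params, "geometry": geometry}
    return network(inputs)  # e.g. (M, H, W, 3) predicted in-between frames


def compose_foveated_frame(gaze_region, peripheral, gaze_xy, display_hw):
    # gaze_region: high-resolution patch rendered around the predicted gaze
    # peripheral: lower-resolution prediction of the rest of the frame
    # gaze_xy: top-left corner of the gaze patch in display coordinates
    H, W = display_hw
    # Upscale the peripheral prediction to display resolution (nearest neighbour).
    ys = np.arange(H) * peripheral.shape[0] // H
    xs = np.arange(W) * peripheral.shape[1] // W
    frame = peripheral[ys][:, xs]
    # Overwrite the gaze region with the high-resolution render.
    y, x = gaze_xy
    h, w = gaze_region.shape[:2]
    frame[y:y + h, x:x + w] = gaze_region
    return frame


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_network = lambda inp: rng.random((2, 480, 640, 3))  # stand-in for the trained network
    extra = predict_additional_frames(
        rng.random((4, 480, 640, 3)), rng.random(16), rng.random(32), dummy_network
    )
    out = compose_foveated_frame(
        rng.random((120, 160, 3)), rng.random((240, 320, 3)),
        gaze_xy=(180, 240), display_hw=(480, 640),
    )
    print(extra.shape, out.shape)  # (2, 480, 640, 3) (480, 640, 3)
```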

Potential Applications

This technology could be applied in virtual reality simulations, gaming, and sports training.

Problems Solved

This technology addresses the problem of generating additional realistic frames for immersive experiences when the cameras of a wearable device capture at a lower frame rate than the display requires, without rendering the entire frame at full resolution.

Benefits

The benefits of this technology include an enhanced user experience, smoother display output at higher frame rates than the cameras capture, and reduced rendering workload outside the predicted gaze region.

Potential Commercial Applications

Potential commercial applications include virtual reality gaming, sports training software, and the entertainment industry, where realistic scenes must be generated in real time.

Possible Prior Art

One possible example of prior art is the use of motion capture technology in the entertainment industry to create realistic animations.

Unanswered Questions

How does the trained network predict additional frames accurately?

The patent application does not provide detailed information on the specific algorithms or methodologies used by the trained network to predict additional frames.

What are the limitations of using wearable devices for capturing frames?

The patent application does not address any potential limitations or challenges associated with using wearable devices for capturing frames of a scene.


Original Abstract Submitted

generating synthesized data includes capturing one or more frames of a scene at a first frame rate by one or more cameras of a wearable device, determining body position parameters for the frames, and obtaining geometry data for the scene in accordance with the one or more frames. the frames, body position parameters, and geometry data are applied to a trained network which predicts one or more additional frames. with respect to virtual data, generating a synthesized frame includes determining current body position parameters in accordance with the one or more frames, predicting a future gaze position based on the current body position parameters, and rendering, at a first resolution, a gaze region of a frame in accordance with the future gaze position. a peripheral region is predicted for the frame at a second resolution, and the combined regions form a frame that is used to drive a display.
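
The abstract does not specify how the future gaze position is computed from the current body position parameters. As a purely hypothetical illustration (not a detail from the patent), the sketch below linearly extrapolates the two most recent gaze estimates:

```python
"""Hypothetical gaze predictor: the patent does not describe this step;
linear extrapolation is an assumption made for illustration only."""
import numpy as np


def predict_future_gaze(body_params_history, horizon_s, dt_s):
    # Assume (for this sketch only) that each body position parameter vector
    # stores a 2-D gaze point in display coordinates in its first two entries.
    prev, curr = (np.asarray(p[:2], dtype=float) for p in body_params_history[-2:])
    velocity = (curr - prev) / dt_s        # gaze velocity in pixels per second
    return curr + velocity * horizon_s     # predicted gaze at render time


# Example: gaze moving right by 100 px per 90 Hz frame, predicted one frame ahead.
history = [np.array([300.0, 240.0]), np.array([400.0, 240.0])]
print(predict_future_gaze(history, horizon_s=1 / 90, dt_s=1 / 90))  # -> [500. 240.]
```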