20240040103. LIGHT FIELD SAMPLING METHOD simplified abstract (Avalon Holographics Inc.)

LIGHT FIELD SAMPLING METHOD

Organization Name

Avalon Holographics Inc.

Inventor(s)

Matthew Hamilton of St. John's (CA)

Chuck Rumbolt of St. John's (CA)

Donovan Benoit of St. John's (CA)

Matthew Troke of St. John's (CA)

Robert Lockyer of St. John's (CA)

Thomas Butyn of St. John's (CA)

LIGHT FIELD SAMPLING METHOD - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240040103, titled 'LIGHT FIELD SAMPLING METHOD'.

Simplified Explanation

The patent application describes a system and methods for driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications. The system applies a layered scene decomposition strategy, dividing the multi-dimensional scene data into data layers whose depth extents increase with distance from the display surface. Each layer is sampled using an effective resolution function to determine a suitable sampling rate, then rendered with hybrid techniques, such as perspective and oblique rendering, to encode the light field corresponding to that layer. The resulting compressed core representation of the scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time using view synthesis protocols, including edge adaptive interpolation, which rebuild pixel arrays in stages from reference elemental images.
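
To make the decomposition step concrete, here is a minimal sketch in Python (NumPy). The point-cloud scene format, the function name decompose_by_depth, and the layer boundaries are illustrative assumptions, not details from the patent:

    import numpy as np

    def decompose_by_depth(points, layer_bounds):
        """Partition scene points into depth layers.

        points: (N, 4) array of (x, y, z, value), with z the distance
        from the display surface (a hypothetical scene format).
        layer_bounds: increasing depth boundaries; successive intervals
        widen, so deeper layers span more depth than nearer ones.
        Returns one point array per layer, nearest layer first.
        """
        layer_index = np.digitize(points[:, 2], layer_bounds)
        return [points[layer_index == i] for i in range(len(layer_bounds) + 1)]

    # Boundaries that double in spacing: layer depth extents grow
    # with distance from the display surface, as described above.
    bounds = [0.5, 1.0, 2.0, 4.0, 8.0]
    scene = np.random.rand(10_000, 4) * np.array([1.0, 1.0, 10.0, 1.0])
    layers = decompose_by_depth(scene, bounds)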

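The effective resolution function itself is not spelled out in this summary. Below is a hedged stand-in, assuming only that the resolution a layer can usefully contribute falls off with its depth; the names effective_resolution and sampling_rates and the depth_of_field parameter are hypothetical:

    def effective_resolution(depth, display_res, depth_of_field):
        """Hypothetical effective resolution function: usable resolution
        decays once a layer lies beyond the display's depth of field."""
        return max(1, int(display_res * min(1.0, depth_of_field / depth)))

    def sampling_rates(layer_depths, display_res=512, depth_of_field=1.0):
        # One suitable sampling rate per layer; nearer layers keep more samples.
        return [effective_resolution(d, display_res, depth_of_field)
                for d in layer_depths]

    print(sampling_rates([0.25, 0.75, 1.5, 3.0, 6.0]))
    # [512, 512, 341, 170, 85] -- deeper layers tolerate coarser sampling
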
  • The system and methods drive a real-time light field display for multi-dimensional video streaming and interactive gaming.
  • The layered scene decomposition strategy divides the multi-dimensional scene data into data layers whose depth extents increase with distance from the display surface.
  • Each data layer is sampled using an effective resolution function to determine a suitable sampling rate.
  • Hybrid rendering techniques, such as perspective and oblique rendering, encode the light field corresponding to each data layer (see the first sketch after this list).
  • The resulting compressed core representation of the scene data is produced at predictable rates.
  • View synthesis protocols, including edge adaptive interpolation, reconstruct and merge the scene data at the light field display in real time.
  • The pixel arrays are reconstructed in stages (columns, then rows) from reference elemental images (see the second sketch after this list).
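
First sketch: the hybrid rendering bullet pairs two standard projection styles. The matrices below are textbook perspective and sheared orthographic (oblique) projections; the per-layer dispatch at the end is an assumption, since the abstract does not say how the two styles are assigned to layers:

    import math
    import numpy as np

    def perspective_matrix(fov_y, aspect, near, far):
        """Standard perspective projection: rays converge at one viewpoint."""
        f = 1.0 / math.tan(fov_y / 2.0)
        return np.array([
            [f / aspect, 0.0,  0.0,                         0.0],
            [0.0,        f,    0.0,                         0.0],
            [0.0,        0.0,  (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0,        0.0, -1.0,                         0.0],
        ])

    def oblique_matrix(shear_x, shear_y):
        """Sheared orthographic (oblique) projection: parallel rays at a
        fixed angle, a natural fit for rendering many parallel views."""
        return np.array([
            [1.0, 0.0, shear_x, 0.0],
            [0.0, 1.0, shear_y, 0.0],
            [0.0, 0.0, 1.0,     0.0],
            [0.0, 0.0, 0.0,     1.0],
        ])

    # Hypothetical dispatch: the abstract names both styles but not the
    # rule for choosing between them, so this threshold is invented.
    def projection_for_layer(layer_depth, near_threshold=1.0):
        if layer_depth < near_threshold:
            return perspective_matrix(math.radians(40.0), 1.0, 0.1, 10.0)
        return oblique_matrix(0.1, 0.05)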

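Second sketch: the staged reconstruction can be pictured as two one-dimensional interpolation passes, columns then rows, growing a full pixel array from a reference elemental image. The edge test here is a deliberately simple stand-in for the patent's edge adaptive interpolation, and the 0.2 threshold is invented:

    import numpy as np

    def edge_adaptive_1d(a, b, threshold=0.2):
        """Blend two reference pixels, but keep a single reference value
        across a strong edge instead of smearing it (illustrative rule)."""
        return a if abs(float(a) - float(b)) > threshold else (a + b) / 2.0

    def reconstruct(reference, out_h, out_w):
        """Upsample a reference elemental image in stages: columns first,
        then rows, echoing the staged scheme in the abstract."""
        h, w = reference.shape
        cols = np.zeros((h, out_w))
        for y in range(h):                      # stage 1: fill columns
            for x in range(out_w):
                src = x * (w - 1) / (out_w - 1)
                x0 = int(src)
                x1 = min(x0 + 1, w - 1)
                cols[y, x] = edge_adaptive_1d(reference[y, x0], reference[y, x1])
        full = np.zeros((out_h, out_w))
        for x in range(out_w):                  # stage 2: fill rows
            for y in range(out_h):
                src = y * (h - 1) / (out_h - 1)
                y0 = int(src)
                y1 = min(y0 + 1, h - 1)
                full[y, x] = edge_adaptive_1d(cols[y0, x], cols[y1, x])
        return full

    ref = np.random.rand(8, 8)                  # toy reference elemental image
    pixels = reconstruct(ref, 32, 32)           # staged 4x upsampling
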
Potential applications of this technology:

  • Real-time light field displays for multi-dimensional video streaming
  • Real-time light field displays for interactive gaming
  • Light field display applications in virtual reality (VR) and augmented reality (AR)
  • Light field display applications in medical imaging and visualization
  • Light field display applications in architectural design and visualization

Problems solved by this technology:

  • Efficient encoding and rendering of multi-dimensional scene data for real-time light field displays
  • Effective sampling and resolution determination for different data layers
  • Real-time reconstruction and merging of compressed scene data at the light field display
  • Accurate and high-quality view synthesis protocols for pixel array reconstruction

Benefits of this technology:

  • Real-time display of multi-dimensional scene data with depth information
  • Enhanced visual experience with realistic depth perception
  • Efficient compression and rendering techniques for light field displays
  • Improved image quality and resolution for different data layers
  • Versatile applications in fields such as entertainment, healthcare, and design


Original Abstract Submitted

A system and methods for a codec driving a real-time light field display for multi-dimensional video streaming, interactive gaming and other light field display applications is provided applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depths as the distance between a given layer and the display surface increases. Data layers which are sampled using an effective resolution function to determine a suitable sampling rate and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed, (layered) core representation of the multi-dimensional scene data is produced at predictable rates, reconstructed and merged at the light field display in real-time by applying view synthesis protocols, including edge adaptive interpolation, to reconstruct pixel arrays in stages (e.g. columns then rows) from reference elemental images.