18290838. CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS simplified abstract (KONINKLIJKE PHILIPS N.V.)

From WikiPatents

CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS

Organization Name

KONINKLIJKE PHILIPS N.V.

Inventor(s)

Christiaan Varekamp of Veldhoven (NL)

Bart Kroon of Eindhoven (NL)

CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18290838, titled 'CODING HYBRID MULTI-VIEW SENSOR CONFIGURATIONS'.

The method described in the abstract involves transmitting multi-view image frame data obtained from multiple sensors capturing different views of a scene, some with depth information and some without. Its main steps, sketched in code after this list, are:

  • Obtaining multi-view components from sensors, with each component corresponding to a sensor and some including depth information.
  • Obtaining a virtual sensor pose for each sensor in a virtual scene, representing the pose of the sensor in the real scene when the corresponding component was captured.
  • Creating sensor parameter metadata containing extrinsic parameters for the multi-view components, including virtual sensor poses.
  • Enabling, via the extrinsic parameters, the generation of additional depth components by warping existing depth components based on their virtual sensor pose and a target position in the virtual scene.
  • Transmitting the multi-view components and sensor parameter metadata for further processing.
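
The abstract does not define any concrete data structures or bitstream syntax, so the following is only a minimal sketch of how the steps above might fit together. All names (VirtualSensorPose, MultiViewComponent, build_sensor_metadata, transmit) and the serialization choices are hypothetical, chosen to illustrate packaging components with and without depth alongside their extrinsic metadata; they are not Philips' actual implementation.

```python
import json
import struct
from dataclasses import asdict, dataclass
from typing import Optional, Sequence

import numpy as np


@dataclass
class VirtualSensorPose:
    """Virtual pose of a sensor in the virtual scene (extrinsic parameters)."""
    position: tuple       # (x, y, z) translation in virtual-scene coordinates
    orientation: tuple    # rotation, e.g. a quaternion (w, x, y, z)


@dataclass
class MultiViewComponent:
    """One view of the scene; the depth map is optional (hybrid configuration)."""
    sensor_id: int
    texture: np.ndarray                  # H x W x 3 colour image
    depth: Optional[np.ndarray] = None   # H x W depth map, or None for depth-less sensors


def build_sensor_metadata(poses: dict) -> bytes:
    """Serialize per-sensor extrinsic parameters (here just the virtual sensor pose)."""
    return json.dumps({sid: asdict(pose) for sid, pose in poses.items()}).encode()


def transmit(components: Sequence[MultiViewComponent], poses: dict, send) -> None:
    """Send the sensor parameter metadata followed by every multi-view component."""
    metadata = build_sensor_metadata(poses)
    send(struct.pack("<I", len(metadata)) + metadata)   # length-prefixed metadata first
    for c in components:
        payload = c.texture.tobytes()
        if c.depth is not None:                          # only some sensors provide depth
            payload += c.depth.tobytes()
        header = struct.pack("<IB", c.sensor_id, c.depth is not None)
        send(header + payload)
```

In this sketch a receiver would parse the metadata block first, then use the per-component flag to know whether a depth plane follows the texture for that sensor.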

Potential Applications:

  • Virtual reality and augmented reality applications
  • 3D modeling and reconstruction
  • Surveillance and security systems
  • Autonomous vehicles and robotics
  • Medical imaging and diagnostics

Problems Solved:

  • Efficient transmission of multi-view image data
  • Accurate depth information integration
  • Simplified sensor parameter management
  • Enhanced 3D visualization capabilities
  • Improved scene understanding and analysis

Benefits:

  • Enhanced depth perception in multi-view images
  • Increased accuracy in virtual scene representation
  • Streamlined data transmission and processing
  • Improved spatial awareness in various applications
  • Enhanced user experience in immersive technologies

Commercial Applications:

As an advanced multi-view image data transmission method for virtual reality and 3D modeling, this technology can be utilized in industries such as virtual reality gaming, architectural visualization, medical imaging, and security systems. The method offers improved depth perception and scene reconstruction capabilities, making it valuable for companies developing immersive experiences, advanced visualization tools, and intelligent surveillance systems.

Questions about Multi-View Image Frame Data Transmission:

  1. How does the method ensure accurate alignment of depth components with virtual sensor poses?
  2. What are the potential challenges in implementing this technology in real-time applications?


Original Abstract Submitted

A method for transmitting multi-view image frame data. The method comprises obtaining multi-view components representative of a scene generated from a plurality of sensors, wherein each multi-view component corresponds to a sensor and wherein at least one of the multi-view components includes a depth component and at least one of the multi-view components does not include a depth component. A virtual sensor pose is obtained for each sensor in a virtual scene, wherein the virtual scene is a virtual representation of the scene and wherein the virtual sensor pose is a virtual representation of the pose of the sensor in the scene when generating the corresponding multi-view component. Sensor parameter metadata is generated for the multi-view components, wherein the sensor parameter metadata contains extrinsic parameters for the multi-view components and the extrinsic parameters contain at least the virtual sensor pose of a sensor for each of the corresponding multi-view components. The extrinsic parameters enable the generation of additional depth components by warping the depth components based on their corresponding virtual sensor pose and a target position in the virtual scene. The multi-view components and the sensor parameter metadata is thus transmitted.
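
The abstract states that the extrinsic parameters enable additional depth components to be generated by warping existing depth components towards a target position, but it gives no warp equations. The sketch below assumes a conventional pinhole reprojection (back-project each depth pixel using the source virtual sensor pose, re-project it at the target pose); the function name warp_depth, the shared intrinsics K, and the 4x4 camera-to-world pose matrices are all assumptions for illustration, not the claimed method itself.

```python
import numpy as np


def warp_depth(depth: np.ndarray, K: np.ndarray,
               src_pose: np.ndarray, dst_pose: np.ndarray) -> np.ndarray:
    """Warp a depth map from a source virtual sensor pose to a target pose.

    depth    : H x W depth map of the source view.
    K        : 3 x 3 camera intrinsics (assumed shared by source and target).
    src_pose : 4 x 4 camera-to-world matrix of the source virtual sensor pose.
    dst_pose : 4 x 4 camera-to-world matrix of the target position.
    Returns an H x W depth map as seen from the target pose (simple forward
    warp, no hole filling or occlusion handling).
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]

    # Back-project every pixel of the source view into source camera space.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
    pts_cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)             # 3 x N

    # Move the points into the virtual scene, then into the target camera frame.
    pts_world = src_pose @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_dst = np.linalg.inv(dst_pose) @ pts_world                       # 4 x N

    # Project into the target image and forward-splat the new depth values.
    z = pts_dst[2]
    proj = K @ pts_dst[:3]
    valid = z > 1e-6
    x = np.round(proj[0, valid] / z[valid]).astype(int)
    y = np.round(proj[1, valid] / z[valid]).astype(int)
    inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)

    out = np.full((h, w), np.inf)
    np.minimum.at(out, (y[inside], x[inside]), z[valid][inside])  # keep nearest depth
    out[np.isinf(out)] = 0.0  # pixels with no projected sample remain holes (zero)
    return out
```

A real system would additionally fill holes and handle occlusions; the point of the sketch is only how a source virtual sensor pose and a target position drive the warp described in the abstract.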