NVIDIA Corporation (20240265712). BELIEF PROPAGATION FOR RANGE IMAGE MAPPING IN AUTONOMOUS MACHINE APPLICATIONS - simplified abstract


BELIEF PROPAGATION FOR RANGE IMAGE MAPPING IN AUTONOMOUS MACHINE APPLICATIONS

Organization Name

NVIDIA Corporation

Inventor(s)

David Wehr of Redmond WA (US)

Ibrahim Eden of Redmond WA (US)

Joachim Pehserl of Lynnwood WA (US)

BELIEF PROPAGATION FOR RANGE IMAGE MAPPING IN AUTONOMOUS MACHINE APPLICATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240265712, titled 'BELIEF PROPAGATION FOR RANGE IMAGE MAPPING IN AUTONOMOUS MACHINE APPLICATIONS'.

The patent application describes systems and methods that generate scene flow in 3D space by simplifying 3D LiDAR data into "2.5D" optical flow space, including x, y, and depth flow.
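The "2.5D" idea rests on representing a LiDAR sweep as a range image, where each pixel holds the measured depth along one laser direction. As a rough illustration (not the patent's actual method), a spherical projection with hypothetical sensor parameters (32 beams, 1024 azimuth bins, a +15°/-25° vertical field of view) might look like:

```python
import numpy as np

def points_to_range_image(points, h=32, w=1024,
                          fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project 3D LiDAR points of shape (N, 3) into an (h, w) range image.

    Each pixel stores the range of the nearest point that projects into
    it; empty pixels remain 0. Sensor parameters here are illustrative.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    u = ((yaw + np.pi) / (2.0 * np.pi) * w).astype(int) % w  # column index
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int),
                0, h - 1)                                    # row index

    image = np.zeros((h, w), dtype=np.float32)
    # Write farthest points first so the nearest return wins per pixel.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

A point 10 m straight ahead of the sensor, for example, lands in the center column of the row corresponding to zero elevation.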

  • LiDAR range images are used to create 2.5D representations of depth flow information between frames of LiDAR data.
  • Two or more range images are compared to generate depth flow information.
  • Messages are passed using a belief propagation algorithm to update pixel values in the 2.5D representation.
  • The resulting images are used to generate 2.5D motion vectors.
  • The 2.5D motion vectors are converted back to 3D space to create a 3D scene flow representation of an environment around an autonomous machine.
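The message-passing step above can be sketched with a minimal min-sum belief propagation on a 1D chain of pixels, each choosing among discrete depth-flow labels. This is a generic textbook formulation with made-up costs, not the patent's algorithm: a unary data cost per pixel plus a truncated-linear smoothness penalty between neighbors, with messages iterated until beliefs settle.

```python
import numpy as np

def bp_chain(data_cost, smooth_weight=1.0, iters=10):
    """Min-sum belief propagation on a 1D chain of pixels.

    data_cost: (n_pixels, n_labels) array of unary costs.
    Pairwise cost is a truncated linear penalty on label difference.
    Returns the per-pixel label minimizing belief (data + messages).
    """
    n, k = data_cost.shape
    labels = np.arange(k)
    pair = smooth_weight * np.minimum(np.abs(labels[:, None] - labels[None, :]), 2)

    msg_right = np.zeros((n, k))  # message arriving at i from i-1
    msg_left = np.zeros((n, k))   # message arriving at i from i+1
    for _ in range(iters):
        new_right = np.zeros((n, k))
        new_left = np.zeros((n, k))
        for i in range(1, n):
            # Sender i-1 combines its data cost with messages from its
            # other neighbor, then minimizes over its own labels.
            h = data_cost[i - 1] + msg_right[i - 1]
            new_right[i] = (h[:, None] + pair).min(axis=0)
        for i in range(n - 2, -1, -1):
            h = data_cost[i + 1] + msg_left[i + 1]
            new_left[i] = (h[:, None] + pair).min(axis=0)
        msg_right, msg_left = new_right, new_left

    belief = data_cost + msg_right + msg_left
    return belief.argmin(axis=1)
```

With costs where the middle pixel is ambiguous but both neighbors strongly prefer label 0, the messages resolve the ambiguity in favor of the smoother assignment, which is the intuition behind using belief propagation to regularize noisy per-pixel depth-flow estimates.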

Potential Applications:

  • Autonomous driving systems
  • Robotics
  • Augmented reality

Problems Solved:

  • Efficient generation of scene flow in 3D space
  • Simplification of complex LiDAR data
  • Real-time processing of depth flow information

Benefits:

  • Improved navigation for autonomous machines
  • Enhanced object detection and tracking
  • Increased safety and accuracy in various applications

Commercial Applications:

  • LiDAR technology companies
  • Autonomous vehicle manufacturers
  • Robotics companies

Questions about the technology:

 1. How does the belief propagation algorithm help update pixel values in the 2.5D representation?
 2. What are the key advantages of converting 2.5D motion vectors back to 3D space?


Original Abstract Submitted

In various examples, systems and methods are described that generate scene flow in 3D space through simplifying the 3D LiDAR data to "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, and two or more range images may be compared to generate depth flow information, and messages may be passed—e.g., using a belief propagation algorithm—to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
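The final step in the abstract, converting 2.5D motion vectors back into 3D scene flow, can be sketched by unprojecting a pixel before and after its 2.5D motion and subtracting. The projection geometry and sensor parameters below are illustrative assumptions (matching a hypothetical 32 x 1024 range image with a +15°/-25° vertical field of view), not details from the application:

```python
import numpy as np

def flow_25d_to_3d(u, v, r, du, dv, dr, h=32, w=1024,
                   fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Convert a 2.5D motion vector (du, dv, dr) at range-image pixel
    (u, v) with measured range r into a 3D scene-flow vector.

    Unprojects the start and end pixel positions to 3D and subtracts.
    """
    def unproject(u, v, r):
        yaw = u / w * 2.0 * np.pi - np.pi
        pitch = fov_up - v / h * (fov_up - fov_down)
        return np.array([r * np.cos(pitch) * np.cos(yaw),
                         r * np.cos(pitch) * np.sin(yaw),
                         r * np.sin(pitch)])

    return unproject(u + du, v + dv, r + dr) - unproject(u, v, r)
```

For a pixel looking straight down the sensor's x-axis, a pure range flow of +1 m yields a 3D flow of one meter along x, which is the expected round trip through the 2.5D representation.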