18511031. METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA simplified abstract (Hyundai Motor Company)


METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA

Organization Name

Hyundai Motor Company

Inventor(s)

Sung Moon Jang of Seongnam-si (KR)

Ki Chun Jo of Seoul (KR)

Jin Su Ha of Seoul (KR)

Ha Min Song of Yeosu-si (KR)

Chan Soo Kim of Seoul (KR)

Ji Eun Cho of Seoul (KR)

METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA - A simplified explanation of the abstract

This abstract first appeared for US patent application 18511031 titled 'METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA'.

Simplified Explanation

The patent application describes a method for fusing LiDAR and camera data into a single, more comprehensive dataset for analysis and decision-making. The key steps are:

  • Voxel-wise feature map generation based on LiDAR point cloud data
  • Pixel-wise feature map generation based on camera image data
  • Conversion of 3D coordinates to 2D coordinates using calibration parameters
  • Fusion of pixel data from the pixel-wise feature map with the projected point data to produce the fused dataset
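
Below is a minimal Python sketch of the projection-and-fusion steps listed above, not the patented implementation. It assumes a pinhole camera with an intrinsic matrix K and LiDAR-to-camera extrinsics R, t as the "calibration parameters", and uses random arrays as stand-ins for the voxel-wise (per-point) and pixel-wise (per-pixel) feature maps; none of these specifics come from the patent itself.

```python
# Minimal sketch of the projection-and-fusion steps (assumptions noted above).
import numpy as np


def project_points(points_3d, K, R, t):
    """Convert 3D LiDAR points (N, 3) to 2D pixel coordinates.

    Returns integer pixel coordinates for points in front of the camera,
    plus the boolean mask of those points.
    """
    cam = points_3d @ R.T + t            # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 1e-6          # keep points with positive depth
    cam = cam[in_front]
    uv = cam @ K.T                       # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide
    return uv.astype(int), in_front


def fuse(point_feats, pixel_feat_map, uv):
    """Concatenate each point's feature with the pixel feature it projects onto."""
    h, w = pixel_feat_map.shape[:2]
    u = np.clip(uv[:, 0], 0, w - 1)
    v = np.clip(uv[:, 1], 0, h - 1)
    return np.concatenate([point_feats, pixel_feat_map[v, u]], axis=1)


# Toy usage with synthetic data.
rng = np.random.default_rng(0)
points = rng.uniform(-10.0, 10.0, (1000, 3)) + np.array([0.0, 0.0, 20.0])
point_feats = rng.normal(size=(1000, 16))         # stand-in voxel-wise features
pixel_feat_map = rng.normal(size=(480, 640, 32))  # stand-in pixel-wise features
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

uv, mask = project_points(points, K, R, t)
fused = fuse(point_feats[mask], pixel_feat_map, uv)
print(fused.shape)  # (visible points, 16 + 32) fused point+pixel features
```

Concatenating the sampled pixel feature with the point feature is just one way to realize the "fused data" described here; the learned feature extractors and fusion operator in the actual filing may differ.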

Potential Applications

This technology can be applied in autonomous vehicles, robotics, urban planning, and environmental monitoring.

Problems Solved

This method solves the challenge of integrating data from LiDAR sensors and cameras to provide a more detailed and accurate representation of the environment.

Benefits

  • Improved accuracy in object detection and recognition
  • Enhanced depth perception and spatial awareness
  • Increased efficiency in data processing and analysis

Potential Commercial Applications

"LiDAR and Camera Data Fusion Method for Autonomous Vehicles and Robotics"

Possible Prior Art

There are existing methods for LiDAR and camera data fusion, but this specific approach may offer unique advantages in terms of accuracy and efficiency.

What are the limitations of this fusion method in terms of real-time applications?

The patent application does not specify the processing speed or latency of the fusion method, which could be crucial for real-time applications such as autonomous driving.

How does this method compare to existing LiDAR-camera fusion techniques in terms of computational complexity?

The patent application does not provide a comparison with other fusion techniques in terms of computational resources required, which could be a key factor for practical implementation.


Original Abstract Submitted

A LiDAR and camera data fusion method includes generating a voxel-wise feature map based on point cloud data of a LiDAR sensor, generating a pixel-wise feature map based on image data of a camera, converting three-dimensional (3D) coordinates of point data of the voxel-wise feature map to two-dimensional (2D) coordinates, based on at least one predefined calibration parameter, and generating fused data by combining pixel data of the pixel-wise feature map and point data of the 2D coordinates.
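
The conversion of 3D point data to 2D coordinates "based on at least one predefined calibration parameter" is commonly modeled as a pinhole projection; the formulation below is an assumed standard model, since the abstract does not name the camera model or the specific calibration parameters.

```latex
% Assumed pinhole model (not stated in the abstract): a LiDAR point X maps to
% pixel (u, v) via extrinsics [R | t] and the intrinsic matrix K.
\[
  s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
    = K \left( R \mathbf{X} + \mathbf{t} \right),
  \qquad
  K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
\]
```

Here s is the point's depth in the camera frame; the fused sample then pairs the pixel-wise feature at (u, v) with the point's voxel-wise feature.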