18511031. METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA simplified abstract (Kia Corporation)


METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA

Organization Name

Kia Corporation

Inventor(s)

Sung Moon Jang of Seongnam-si (KR)

Ki Chun Jo of Seoul (KR)

Jin Su Ha of Seoul (KR)

Ha Min Song of Yeosu-si (KR)

Chan Soo Kim of Seoul (KR)

Ji Eun Cho of Seoul (KR)

METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA - A simplified explanation of the abstract

This abstract first appeared for US patent application 18511031, titled 'METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA'.

Simplified Explanation

The patent application describes a method for fusing LiDAR and camera data to generate a more comprehensive dataset for various applications.

  • Fuses LiDAR and camera data into a single dataset
  • Generates a voxel-wise feature map from LiDAR point cloud data
  • Generates a pixel-wise feature map from camera image data
  • Converts 3D point coordinates to 2D image coordinates based on predefined calibration parameters
  • Combines pixel data and point data to generate fused data (see the sketch below)
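
A minimal sketch of this pipeline is given below, using NumPy and a standard pinhole camera model. The voxelization scheme, the function names, and the calibration inputs (an intrinsic matrix K and a 4x4 LiDAR-to-camera extrinsic transform T_lidar_to_cam) are illustrative assumptions; the application does not disclose these specifics.

    import numpy as np

    def voxelize(points, voxel_size=0.2):
        # Group LiDAR points into voxels and use voxel centroids as a
        # simple stand-in for a learned voxel-wise feature map.
        idx = np.floor(points[:, :3] / voxel_size).astype(np.int64)
        _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                       return_counts=True)
        centroids = np.zeros((counts.size, 3))
        np.add.at(centroids, inverse, points[:, :3])
        return centroids / counts[:, None]

    def project_to_image(points_3d, K, T_lidar_to_cam):
        # Convert 3D LiDAR-frame coordinates to 2D pixel coordinates
        # using the calibration parameters (extrinsics, then intrinsics).
        pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])
        pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]
        in_front = pts_cam[:, 2] > 0          # drop points behind the camera
        uv = (K @ pts_cam[in_front].T).T
        uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
        return uv, in_front

    def fuse(points_3d, uv, in_front, pixel_features):
        # Combine pixel data and point data: sample the pixel-wise feature
        # map at each projected location and concatenate with the 3D point.
        h, w = pixel_features.shape[:2]
        u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
        v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
        return np.hstack([points_3d[in_front], pixel_features[v, u]])

In a real system the pixel-wise features would come from an image backbone rather than raw pixel values, and points projecting outside the image bounds would also be masked out; the sketch keeps only the geometric core of the fusion step.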

Potential Applications

This technology can be applied in autonomous vehicles, robotics, augmented reality, and urban planning for enhanced perception and decision-making.

Problems Solved

1. Limited information from individual sensors
2. Lack of comprehensive data for accurate analysis and decision-making

Benefits

1. Improved accuracy and reliability of data
2. Enhanced perception capabilities
3. Better decision-making in various applications

Potential Commercial Applications

"LiDAR and Camera Data Fusion Technology for Enhanced Perception in Autonomous Vehicles"

Possible Prior Art

One possible example of prior art is existing work on fusing LiDAR and camera data in autonomous vehicles for improved object detection and scene understanding.

Unanswered Questions

1. What specific calibration parameters are used in the conversion of 3D to 2D coordinates?
2. How does the fusion of LiDAR and camera data improve the overall performance of the system compared to using each sensor individually?


Original Abstract Submitted

A LiDAR and camera data fusion method includes generating a voxel-wise feature map based on point cloud data of a LiDAR sensor, generating a pixel-wise feature map based on image data of a camera, converting three-dimensional (3D) coordinates of point data of the voxel-wise feature map to two-dimensional (2D) coordinates, based on at least one predefined calibration parameter, and generating fused data by combining pixel data of the pixel-wise feature map and point data of the 2D coordinates.
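
For reference, the 3D-to-2D conversion described in the abstract is conventionally the pinhole projection below; the specific parameters (intrinsic matrix K, rotation R, translation t) are the standard reading of "predefined calibration parameters" and are an assumption here, not something the application states:

    \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \propto K \, [R \mid t] \, \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
    \qquad
    K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

where (X, Y, Z) is a point in LiDAR coordinates, [R | t] transforms it into the camera frame, and (u, v) is the resulting pixel location after the perspective divide.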