18511031. METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA simplified abstract (Kia Corporation)
Contents
- 1 METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA
Organization Name
Kia Corporation
Inventor(s)
Sung Moon Jang of Seongnam-si (KR)
METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA - A simplified explanation of the abstract
This abstract first appeared for US patent application 18511031 titled 'METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA'.
Simplified Explanation
The patent application describes a method for fusing LiDAR and camera data to generate a more comprehensive dataset for various applications.
- LiDAR and camera data fusion method
- Generates voxel-wise feature map from LiDAR point cloud data
- Generates pixel-wise feature map from camera image data
- Converts 3D coordinates to 2D coordinates based on calibration parameters
- Combines pixel data and point data to generate fused data
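The steps above can be sketched in code. This is a minimal illustration, not the patent's implementation: the calibration parameters (an intrinsic matrix `K` and a LiDAR-to-camera extrinsic `[R|t]`), the feature dimensions, and all function names are hypothetical, and the "feature maps" are stand-in arrays rather than learned features.

```python
import numpy as np

# Hypothetical calibration parameters (not from the patent): a camera
# intrinsic matrix K and a LiDAR-to-camera extrinsic transform [R|t].
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project_points(points_3d, K, R, t):
    """Convert Nx3 3D LiDAR coordinates to 2D pixel coordinates."""
    cam = points_3d @ R.T + t        # LiDAR frame -> camera frame
    uvw = cam @ K.T                  # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> (u, v)

def fuse(points_3d, point_feats, image_feats, K, R, t):
    """Concatenate each point's feature with the pixel feature it lands on."""
    h, w, _ = image_feats.shape
    uv = np.round(project_points(points_3d, K, R, t)).astype(int)
    # Keep only points whose projection falls inside the image.
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pix = image_feats[uv[ok, 1], uv[ok, 0]]  # gather pixel-wise features
    return np.concatenate([point_feats[ok], pix], axis=1)

# Toy data: 4 points in front of the camera, 8-dim point features,
# and a 480x640 pixel-wise feature map with 16 channels.
pts = np.array([[ 0.0,  0.0,   5.0],
                [ 1.0,  0.5,  10.0],
                [-1.0, -0.5,   8.0],
                [ 0.2,  0.1, 100.0]])
fused = fuse(pts, np.ones((4, 8)), np.zeros((480, 640, 16)), K, R, t)
print(fused.shape)  # (4, 24): each surviving point gets 8 + 16 features
```

In practice the voxel-wise and pixel-wise feature maps would come from neural network backbones, and the concatenation here is just one simple way to combine pixel data and point data into fused data.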
Potential Applications
This technology can be applied in autonomous vehicles, robotics, augmented reality, and urban planning for enhanced perception and decision-making.
Problems Solved
1. Limited information from individual sensors
2. Lack of comprehensive data for accurate analysis and decision-making
Benefits
1. Improved accuracy and reliability of data
2. Enhanced perception capabilities
3. Better decision-making in various applications
Potential Commercial Applications
LiDAR and camera data fusion technology for enhanced perception in autonomous vehicles.
Possible Prior Art
One possible prior art is the use of LiDAR and camera data fusion in the field of autonomous vehicles for improved object detection and scene understanding.
Unanswered Questions
1. What specific calibration parameters are used in the conversion of 3D to 2D coordinates?
2. How does the fusion of LiDAR and camera data improve the overall performance of the system compared to using each sensor individually?
Original Abstract Submitted
A LiDAR and camera data fusion method includes generating a voxel-wise feature map based on point cloud data of a LiDAR sensor, generating a pixel-wise feature map based on image data of a camera, converting three-dimensional (3D) coordinates of point data of the voxel-wise feature map to two-dimensional (2D) coordinates, based on at least one predefined calibration parameter, and generating fused data by combining pixel data of the pixel-wise feature map and point data of the 2D coordinates.