Hyundai Motor Company (20240161481). METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA simplified abstract

From WikiPatents

METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA

Organization Name

Hyundai Motor Company

Inventor(s)

Sung Moon Jang of Seongnam-si (KR)

Ki Chun Jo of Seoul (KR)

Jin Su Ha of Seoul (KR)

Ha Min Song of Yeosu-si (KR)

Chan Soo Kim of Seoul (KR)

Ji Eun Cho of Seoul (KR)

METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240161481 titled 'METHOD AND SYSTEM FOR FUSING DATA FROM LIDAR AND CAMERA'.

Simplified Explanation

The patent application describes a method for fusing lidar and camera data to generate a more comprehensive dataset for various applications.

  • The method involves creating a feature map based on point cloud data from a lidar sensor.
  • Another feature map is generated based on image data from a camera.
  • The 3D coordinates from the lidar data are converted to 2D coordinates using calibration parameters.
  • The fused data is produced by combining the pixel data from the camera with the point data from the lidar sensor.
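The projection step in the list above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes a standard pinhole camera model, with a hypothetical intrinsic matrix K and extrinsic rotation R and translation t standing in for the "predefined calibration parameters".

```python
import numpy as np

# Hypothetical calibration parameters (illustrative values, not from the
# patent): a 3x3 camera intrinsic matrix K, and an extrinsic rotation R
# and translation t mapping lidar coordinates into the camera frame.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project_points(points_3d, K, R, t):
    """Project Nx3 lidar points to Nx2 pixel coordinates (u, v)."""
    cam = points_3d @ R.T + t           # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]            # keep only points in front of camera
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> (u, v)

pts = np.array([[1.0, 0.5, 5.0],
                [0.0, 0.0, 2.0]])
uv = project_points(pts, K, R, t)       # 2D coordinates for each lidar point
```

Once each 3D point has a 2D pixel coordinate, the corresponding camera pixel data can be looked up and combined with that point's lidar data.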

Potential Applications

This technology can be used in autonomous vehicles for improved object detection and scene understanding. It can also be applied in robotics for navigation and mapping tasks.

Problems Solved

This technology addresses the challenge of integrating data from different sensors to provide a more complete and accurate representation of the environment. It also helps in reducing the complexity of data processing and analysis.

Benefits

The fusion of lidar and camera data enhances the capabilities of perception systems, leading to better decision-making and increased safety in various applications. It also enables more efficient and effective data interpretation.

Potential Commercial Applications

This technology has potential applications in the automotive industry, robotics, surveillance systems, and smart city infrastructure development.

Possible Prior Art

One example of prior art in this field is the use of sensor fusion techniques in robotics and autonomous systems to improve perception and decision-making capabilities. Another example is the integration of lidar and camera data in mapping and navigation applications.

Unanswered Questions

How does this technology handle occlusions in the environment?

The method described in the patent application does not specifically address how occlusions in the environment are handled when fusing lidar and camera data. This could be a potential limitation in scenarios where objects are partially or fully obstructed from view.

What is the computational overhead of implementing this fusion method?

The patent application does not provide information on the computational resources required to implement the lidar and camera data fusion method. Understanding the computational overhead is crucial for assessing the feasibility of deploying this technology in real-time applications.


Original Abstract Submitted

a lidar and camera data fusion method includes generating a voxel-wise feature map based on point cloud data of a lidar sensor, generating a pixel-wise feature map based on image data of a camera, converting three-dimensional (3D) coordinates of point data of the voxel-wise feature map to two-dimensional (2D) coordinates, based on at least one predefined calibration parameter, and generating fused data by combining pixel data of the pixel-wise feature map and point data of the 2D coordinates.
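The final step of the abstract, combining pixel-wise features with point data at the projected 2D coordinates, could be sketched as follows. All names and shapes here are illustrative assumptions (the abstract does not specify feature dimensions or a sampling scheme); nearest-neighbor sampling is used for simplicity.

```python
import numpy as np

def fuse_features(pixel_feat_map, point_feats, uv):
    """Combine image and lidar features at projected point locations.

    pixel_feat_map: HxWxC pixel-wise feature map from the camera branch.
    point_feats:    NxD per-point features from the lidar branch.
    uv:             Nx2 projected 2D coordinates of the lidar points.
    Returns an Nx(C+D) array of fused features.
    """
    h, w = pixel_feat_map.shape[:2]
    # Round to the nearest pixel and clamp to the image bounds.
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    sampled = pixel_feat_map[v, u]                  # nearest-neighbor sampling
    return np.concatenate([sampled, point_feats], axis=1)

# Toy example: a 4x4 feature map with one marked pixel, one lidar point.
pm = np.zeros((4, 4, 3))
pm[1, 2] = [1.0, 2.0, 3.0]
pf = np.array([[9.0, 8.0]])
fused = fuse_features(pm, pf, np.array([[2.0, 1.0]]))
```

A real system would likely use bilinear interpolation instead of nearest-neighbor lookup, but the principle of indexing the pixel-wise map at the converted 2D coordinates is the same.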