US Patent Application 17733101: Self-Calibration for Decoration Based Sensor Fusion Method (Simplified Abstract)


Self-Calibration for Decoration Based Sensor Fusion Method

Organization Name

TOYOTA JIDOSHA KABUSHIKI KAISHA


Inventor(s)

Jie Li of Ann Arbor, MI (US)

Vitor Guizilini of Santa Clara, CA (US)

Adrien Gaidon of San Jose, CA (US)

Self-Calibration for Decoration Based Sensor Fusion Method - A simplified explanation of the abstract

This abstract first appeared for US patent application 17733101 titled 'Self-Calibration for Decoration Based Sensor Fusion Method'.

Simplified Explanation

The patent application describes a method for automatically aligning image data and point cloud data using a machine learning model. This alignment is important for tasks like object detection and tracking in autonomous systems.
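
In a typical camera-LiDAR setup, aligning point cloud data with image data "based on a current calibration" amounts to projecting the 3D points into the image plane using the sensor extrinsics and the camera intrinsics. The application does not spell this step out, so the following is a minimal NumPy sketch under standard pinhole-camera assumptions; the function name and the K, R, t parameters are illustrative, not from the filing.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project 3D points (N x 3, LiDAR frame) into the image plane
    using the current calibration: extrinsics (R, t) and intrinsics K.
    Hypothetical pinhole model with no lens distortion."""
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Apply the intrinsics and normalize by depth to get pixel coordinates.
    homogeneous = points_cam @ K.T
    pixels = homogeneous[:, :2] / homogeneous[:, 2:3]
    return pixels, points_cam[:, 2]
```

A miscalibrated (R, t) shifts every projected point, so depth edges in the point cloud stop lining up with object boundaries in the image; that residual shift is the kind of "difference in alignment" the model would be trained to detect.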

  • The method receives image data from a vision sensor and point cloud data from a depth sensor.
  • An electronic control unit implements a machine learning model trained to align the point cloud data with the image data based on a current calibration.
  • The model also detects differences in alignment between the two types of data.
  • If a difference is detected, the current calibration is adjusted based on that difference (see the sketch after this list).
  • The method then outputs a calibrated embedding feature map, a representation of the aligned data.
  • This self-calibrating alignment improves the accuracy and reliability of object detection and tracking in autonomous systems.
  • Learning the alignment reduces the need for manual calibration and makes the process more efficient and accurate.
  • The method can be applied to autonomous vehicles, robotics, augmented reality systems, and similar applications.
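
Putting the steps together, the claimed loop (align, detect the alignment difference, adjust the calibration, output a calibrated embedding feature map) could be organized as below. The application does not disclose an architecture, so this PyTorch sketch is purely illustrative: the rasterized depth-map input, the 6-parameter calibration correction, and all layer shapes are assumptions.

```python
import torch
import torch.nn as nn

class SelfCalibratingFusion(nn.Module):
    """Hypothetical sketch of the claimed pipeline, not the filed design."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, feat_dim, 3, padding=1)
        # Assumes the point cloud was rasterized to a 1-channel depth map
        # using the current calibration (see the projection sketch above).
        self.cloud_encoder = nn.Conv2d(1, feat_dim, 3, padding=1)
        # Predicts a small 6-DoF correction (3 rotation + 3 translation)
        # from the fused features: the "difference in alignment".
        self.misalignment_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(2 * feat_dim, 6),
        )
        self.fusion = nn.Conv2d(2 * feat_dim, feat_dim, 1)

    def forward(self, image, depth_map, calibration):
        img_feat = self.image_encoder(image)
        cloud_feat = self.cloud_encoder(depth_map)
        fused = torch.cat([img_feat, cloud_feat], dim=1)
        # Detect the alignment difference and adjust the current calibration.
        delta = self.misalignment_head(fused)
        new_calibration = calibration + delta
        # Output the calibrated embedding feature map.
        return self.fusion(fused), new_calibration

# Example usage with dummy tensors.
model = SelfCalibratingFusion()
feat_map, calibration = model(
    torch.randn(1, 3, 64, 64),   # camera image
    torch.randn(1, 1, 64, 64),   # rasterized point cloud depth map
    torch.zeros(1, 6),           # current 6-DoF calibration parameters
)
```

In deployment, the returned calibration would replace the current one for the next frame, so the system keeps correcting drift online rather than requiring manual recalibration.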


Original Abstract Submitted

A method for self-calibrating alignment between image data and point cloud data utilizing a machine learning model includes receiving, with an electronic control unit, image data from a vision sensor and point cloud data from a depth sensor, implementing, with the electronic control unit, a machine learning model trained to: align the point cloud data and the image data based on a current calibration, detect a difference in alignment of the point cloud data and the image data, adjust the current calibration based on the difference in alignment, and output a calibrated embedding feature map based on adjustments to the current calibration.