Intel Corporation (20240111295). INTELLIGENT AND ADAPTIVE MULTI-MODAL REAL-TIME SIMULTANEOUS LOCALIZATION AND MAPPING BASED ON LIGHT DETECTION AND RANGING AND CAMERA OR IMAGE SENSORS simplified abstract

INTELLIGENT AND ADAPTIVE MULTI-MODAL REAL-TIME SIMULTANEOUS LOCALIZATION AND MAPPING BASED ON LIGHT DETECTION AND RANGING AND CAMERA OR IMAGE SENSORS

Organization Name

Intel Corporation

Inventor(s)

Mohammad Haghighipanah of Tigard OR (US)

Rita Chattopadhyay of Chandler AZ (US)

INTELLIGENT AND ADAPTIVE MULTI-MODAL REAL-TIME SIMULTANEOUS LOCALIZATION AND MAPPING BASED ON LIGHT DETECTION AND RANGING AND CAMERA OR IMAGE SENSORS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240111295 titled 'INTELLIGENT AND ADAPTIVE MULTI-MODAL REAL-TIME SIMULTANEOUS LOCALIZATION AND MAPPING BASED ON LIGHT DETECTION AND RANGING AND CAMERA OR IMAGE SENSORS'.

Simplified Explanation

The abstract describes a method for motion tracking that receives data from a camera and a light detection and ranging (lidar) sensor, transforms the lidar data so that it corresponds to the camera frame, weights the data from both sources, and combines the weighted data to generate combined image data. The steps are summarized below; an illustrative sketch follows the list.

  • Camera data and lidar data are received.
  • Lidar data is transformed to match the camera data.
  • Weighting factors are determined for the camera data and transformed lidar data.
  • The camera data and the transformed lidar data are each weighted using their respective factors.
  • Weighted camera data and weighted lidar data are combined to generate image data.
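
The abstract does not disclose implementation details, but a minimal sketch of how such a camera-lidar fusion pipeline might look is shown below. All names here (transform_lidar_to_camera, fuse, the extrinsic and intrinsic matrices, and the 0.6/0.4 weighting factors) are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def transform_lidar_to_camera(points_lidar, extrinsics, intrinsics, image_shape):
    """Project lidar points into the camera image plane as a sparse depth map.

    points_lidar : (N, 3) array of x, y, z points in the lidar frame.
    extrinsics   : (3, 4) lidar-to-camera rigid transform [R | t].
    intrinsics   : (3, 3) camera intrinsic matrix.
    image_shape  : (height, width) of the camera frame.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (extrinsics @ pts_h.T).T            # points expressed in the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    uv = (intrinsics @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide to pixel coordinates
    depth = np.zeros(image_shape, dtype=np.float32)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image_shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image_shape[0] - 1)
    depth[v, u] = pts_cam[:, 2]                   # lidar data now aligned with the camera frame
    return depth

def fuse(camera_frame, lidar_depth, w_camera, w_lidar):
    """Weight each modality and combine them into a single fused frame."""
    return w_camera * camera_frame + w_lidar * lidar_depth

# Illustrative usage with synthetic data
rng = np.random.default_rng(0)
camera_frame = rng.random((480, 640)).astype(np.float32)      # normalized grayscale frame
points_lidar = rng.uniform(1.0, 20.0, size=(5000, 3))         # synthetic lidar point cloud
extrinsics = np.hstack([np.eye(3), np.zeros((3, 1))])          # identity transform, for illustration
intrinsics = np.array([[500.0,   0.0, 320.0],
                       [  0.0, 500.0, 240.0],
                       [  0.0,   0.0,   1.0]])
lidar_depth = transform_lidar_to_camera(points_lidar, extrinsics, intrinsics, (480, 640))
fused = fuse(camera_frame, lidar_depth / lidar_depth.max(), w_camera=0.6, w_lidar=0.4)
```

In the patent's framing the weighting factors would be determined by the method itself (for example, adaptively per sensor or per scene) rather than fixed constants as in this sketch.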

Potential Applications

This technology can be applied in fields such as autonomous vehicles, robotics, augmented reality, and virtual reality, where accurate motion tracking and scene reconstruction are required.

Problems Solved

1. Accurate motion tracking in dynamic environments.
2. Seamless integration of data from different sensors for comprehensive scene analysis.

Benefits

1. Improved accuracy in motion tracking.
2. Enhanced scene reconstruction capabilities.
3. Real-time data fusion for better decision-making.

Potential Commercial Applications

"Motion Tracking Technology for Autonomous Vehicles and Robotics"

Possible Prior Art

Prior art in the field of computer vision and sensor fusion techniques for motion tracking and scene reconstruction may exist, but specific examples are not provided in the abstract.

Unanswered Questions

How does this technology handle occlusions in the scene during motion tracking?

The abstract does not mention how the method deals with occlusions when combining data from the camera and lidar sensor.

What is the computational complexity of this motion tracking method?

The abstract does not provide information on the computational resources required for implementing this technology.


Original Abstract Submitted

a method for motion tracking is provided including receive first data, receive second data, transform the second data to generate transformed second data corresponding to the first frame; determine a first weighting factor for the first data and a second weighting factor for the transformed second data; weight the first data using the first weighting factor to generate first weighted data; weight the transformed second data using the second weighting factor to generate second weighted data; and combine the weighted first data and the weighted second data to generate combined image data. the first data include a first frame of a first scene of an environment detected by a camera or image sensor. the second data include a second frame of a second scene of an environment detected by a light detection and ranging (lidar) sensor. at least a subset of the second scene corresponds to the first scene.
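
Read literally, the combination step in the abstract can be interpreted as a per-element weighted sum. The notation below is one possible reading introduced here for illustration; the symbols are not taken from the abstract:

$$ x_{\text{combined}} = w_1 \, x_1 + w_2 \, T(x_2) $$

where $x_1$ is the camera frame, $T(x_2)$ is the lidar frame transformed to correspond to it, and $w_1$, $w_2$ are the weighting factors determined for each modality.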