18507593. DETERMINING TRANSLATION SCALE IN A MULTI-CAMERA DYNAMIC CALIBRATION SYSTEM simplified abstract (Intel Corporation)

From WikiPatents

DETERMINING TRANSLATION SCALE IN A MULTI-CAMERA DYNAMIC CALIBRATION SYSTEM

Organization Name

Intel Corporation

Inventor(s)

Avinash Kumar of Karnataka (IN)

DETERMINING TRANSLATION SCALE IN A MULTI-CAMERA DYNAMIC CALIBRATION SYSTEM - A simplified explanation of the abstract

This abstract first appeared for US patent application 18507593 titled 'DETERMINING TRANSLATION SCALE IN A MULTI-CAMERA DYNAMIC CALIBRATION SYSTEM'.

Simplified Explanation

Multi-camera dynamic calibration involves using multiple images from different cameras to determine translation scales and magnitudes within a 3D scene. By comparing translations between pairs of cameras, a relative translation scale can be established and expanded to larger camera configurations.

  • Multi-camera dynamic calibration uses three or more images, each from a separate camera viewing the same 3D scene.
  • Translation magnitudes are determined by incorporating information from an additional image.
  • A relative translation scale is determined for a three-camera configuration from the ratio of translation magnitudes between camera pairs.
  • The translation scale can be extended to configurations with more than three cameras by using the relative scales of the pair-wise camera translations.
  • If the ground-truth translation is known for one camera pair, translation magnitudes for all pairs can be determined to ground-truth accuracy.
  • Multi-camera scale estimation is divided into smaller overlapping triplet-camera scale estimations, applied iteratively to overlapping sets of three images.
  • Overlapping sets of estimates are merged by linear alignment.
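The steps above can be sketched in code. The following is an illustrative example, not the patent's actual method: it assumes each camera triplet yields ratios of translation magnitudes between its camera pairs, and merges the overlapping estimates by a linear least-squares alignment in log space, anchored by one ground-truth magnitude. All function and variable names here are hypothetical.

```python
import numpy as np

def merge_triplet_scales(n_pairs, ratio_constraints, anchor_pair, anchor_scale):
    """Recover per-pair translation magnitudes from triplet ratio constraints.

    ratio_constraints: list of (pair_a, pair_b, ratio), meaning
        |t_a| / |t_b| = ratio, as estimated within one camera triplet.
    anchor_pair / anchor_scale: one known ground-truth magnitude that
        fixes the global scale (otherwise scales are only relative).

    Working in log space makes each ratio constraint linear:
        log|t_a| - log|t_b| = log(ratio)
    so merging the overlapping estimates reduces to linear least squares.
    """
    rows, rhs = [], []
    for a, b, r in ratio_constraints:
        row = np.zeros(n_pairs)
        row[a], row[b] = 1.0, -1.0   # log|t_a| - log|t_b|
        rows.append(row)
        rhs.append(np.log(r))
    # Anchor equation: log|t_anchor| = log(anchor_scale)
    row = np.zeros(n_pairs)
    row[anchor_pair] = 1.0
    rows.append(row)
    rhs.append(np.log(anchor_scale))
    log_scales, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(log_scales)

# Example: one triplet with pairs 0, 1, 2 and consistent magnitude ratios,
# anchored by a ground-truth magnitude of 2.0 for pair 0.
scales = merge_triplet_scales(
    n_pairs=3,
    ratio_constraints=[(0, 1, 0.5), (1, 2, 4.0), (0, 2, 2.0)],
    anchor_pair=0,
    anchor_scale=2.0,
)
print(scales)  # -> approximately [2.0, 4.0, 1.0]
```

With more than three cameras, each overlapping triplet simply contributes additional rows to the same system, so inconsistent estimates are averaged out by the least-squares solve.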

Potential Applications

This technology can be applied in various fields such as computer vision, robotics, augmented reality, and surveillance systems.

Problems Solved

  • Accurate calibration of multiple cameras in a dynamic environment.
  • Determining translation scales and magnitudes for different camera configurations.

Benefits

  • Improved accuracy in multi-camera systems.
  • Enhanced performance in 3D scene reconstruction.
  • Increased reliability in object tracking and localization.

Potential Commercial Applications

Commercial uses include optimizing surveillance systems, enhancing augmented reality experiences, improving robotics navigation, and advancing computer vision technologies.

Unanswered Questions

How does this technology handle occlusions in the 3D scene?

The abstract does not mention how occlusions are addressed when determining translation scales and magnitudes between cameras.

What is the computational complexity of this multi-camera dynamic calibration process?

The abstract does not provide information on the computational resources required for implementing this technology.


Original Abstract Submitted

Multi-camera dynamic calibration can be performed using three or more images, each from a separate camera viewing the same 3D scene. Multi-camera translation magnitude can be determined by incorporating information from an additional image. A relative translation scale is determined for a configuration of three cameras using a ratio of translation magnitudes. The translation scale can be expanded to configurations having more than three cameras using the relative scale of the pair-wise camera translations to determine translation scales for a multi-camera set-up. If the ground-truth translation is known for a pair of cameras, then the translation magnitude can be determined for all pairs of cameras to ground-truth accuracy. Multi-camera scale estimation is divided into smaller overlapping triplet-camera scale estimation, and the translation scale determination corresponding to each image pair is applied iteratively to overlapping sets of three images. The estimates can be merged by linearly aligning overlapping sets of estimates.