18112822. EFFICIENT FLOW-GUIDED MULTI-FRAME DE-FENCING simplified abstract (Samsung Electronics Co., Ltd.)

From WikiPatents

EFFICIENT FLOW-GUIDED MULTI-FRAME DE-FENCING

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Stavros Tsogkas of Toronto (CA)

Fengjia Zhang of Toronto (CA)

Aleksai Levinshtein of Thornhill (CA)

Allen Douglas Jepson of Oakville (CA)

EFFICIENT FLOW-GUIDED MULTI-FRAME DE-FENCING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18112822 titled 'EFFICIENT FLOW-GUIDED MULTI-FRAME DE-FENCING'.

Simplified Explanation

The present disclosure describes a method for performing multi-frame de-fencing on a device, that is, removing an opaque obstruction (such as a fence) from the background scene captured in a burst of images. The method comprises the following steps (a rough code sketch follows the list):

  • Obtaining an image burst with obstructed portions of the background scene.
  • Generating obstruction masks to mark the obstructed portions in the images.
  • Computing the motion of the background scene with respect to a keyframe using an occlusion-aware optical flow model.
  • Reconstructing the keyframe by feeding a combination of features to an image fusion and inpainting network.
  • Providing the user with the reconstructed keyframe showing an unobstructed version of the background scene.
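The application does not prescribe a specific implementation, but the way these stages chain together can be illustrated with a short Python sketch. The function names (predict_obstruction_mask, occlusion_aware_flow, fuse_and_inpaint) and their trivial placeholder bodies are assumptions standing in for the learned components the abstract refers to; only the overall structure of the pipeline is meant to be informative.

```python
import numpy as np

# Placeholder stand-ins for the learned components described in the abstract.
def predict_obstruction_mask(frame):
    """Return a binary mask (True = obstructed pixel, e.g. a fence wire)."""
    return np.zeros(frame.shape[:2], dtype=bool)  # placeholder: no obstruction

def occlusion_aware_flow(frame, keyframe, frame_mask, key_mask):
    """Estimate background motion from `frame` to `keyframe`, ignoring masked pixels."""
    return np.zeros((*frame.shape[:2], 2), dtype=np.float32)  # placeholder: zero flow

def warp(frame, flow):
    """Warp `frame` toward the keyframe with the estimated flow (nearest-neighbour)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    src_x = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    return frame[src_y, src_x]

def fuse_and_inpaint(keyframe, key_mask, warped_frames, warped_masks):
    """Fill obstructed keyframe pixels from aligned frames; average where several agree."""
    out = keyframe.astype(np.float32)
    acc = np.zeros_like(out)
    cnt = np.zeros(keyframe.shape[:2], dtype=np.float32)
    for f, m in zip(warped_frames, warped_masks):
        usable = key_mask & ~m                  # keyframe blocked, this frame not blocked
        acc[usable] += f[usable]
        cnt[usable] += 1.0
    filled = cnt > 0
    out[filled] = acc[filled] / cnt[filled][:, None]
    return out.astype(keyframe.dtype)

def de_fence_burst(burst, key_index=0):
    keyframe = burst[key_index]
    masks = [predict_obstruction_mask(f) for f in burst]
    warped, warped_masks = [], []
    for i, frame in enumerate(burst):
        if i == key_index:
            continue
        flow = occlusion_aware_flow(frame, keyframe, masks[i], masks[key_index])
        warped.append(warp(frame, flow))
        warped_masks.append(warp(masks[i].astype(np.uint8), flow).astype(bool))
    return fuse_and_inpaint(keyframe, masks[key_index], warped, warped_masks)

if __name__ == "__main__":
    burst = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
    result = de_fence_burst(burst, key_index=2)
    print(result.shape)  # (64, 64, 3)
```

In the actual method the fusion and inpainting would be performed by a trained network rather than the simple averaging shown here; the sketch only conveys the obtain-mask-flow-warp-fuse ordering described above.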

---

      1. Potential Applications
  • Surveillance systems
  • Photography editing software
  • Video editing tools
      2. Problems Solved
  • Removing obstructions from images
  • Enhancing visibility of background scenes
  • Improving image quality
      3. Benefits
  • Clearer and more visually appealing images
  • Enhanced surveillance footage
  • Better user experience in photography and video editing applications


Original Abstract Submitted

The present disclosure provides methods, apparatuses, and computer-readable mediums for performing multi-frame de-fencing by a device. In some embodiments, a method includes obtaining an image burst having at least one portion of a background scene obstructed by an opaque obstruction. The method further includes generating a plurality of obstruction masks marking the at least one portion of the background scene obstructed by the opaque obstruction in images of the image burst. The method further includes computing a motion of the background scene, with respect to a keyframe selected from the plurality of images, by applying an occlusion-aware optical flow model. The method further includes reconstructing the selected keyframe by providing a combination of features to an image fusion and inpainting network. The method further includes providing, to the user, the reconstructed keyframe comprising an unobstructed version of the background scene of the image burst.
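For a more concrete picture of the flow-guided alignment step mentioned in the abstract, the sketch below shows one common way such warping is done in PyTorch with torch.nn.functional.grid_sample, and how warped frames and obstruction masks could be stacked into the "combination of features" fed to a fusion and inpainting network. The flow_warp helper, the backward-flow convention, and the channel layout are illustrative assumptions, not details taken from the application.

```python
import torch
import torch.nn.functional as F

def flow_warp(frame, flow):
    """
    Differentiably warp `frame` (N, C, H, W) toward the keyframe using a
    backward optical-flow field `flow` (N, 2, H, W) in pixels, where
    flow[:, 0] is the horizontal and flow[:, 1] the vertical displacement.
    """
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]          # (N, H, W)
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # Normalise to [-1, 1] as expected by grid_sample.
    grid_x = 2.0 * grid_x / (w - 1) - 1.0
    grid_y = 2.0 * grid_y / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (N, H, W, 2)
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Hypothetical feature combination for the fusion/inpainting network:
# keyframe, keyframe obstruction mask, and each aligned auxiliary frame
# with its warped obstruction mask, concatenated along the channel axis.
keyframe  = torch.rand(1, 3, 128, 128)
aux_frame = torch.rand(1, 3, 128, 128)
key_mask  = torch.zeros(1, 1, 128, 128)            # 1 = obstructed pixel
aux_mask  = torch.zeros(1, 1, 128, 128)
flow      = torch.zeros(1, 2, 128, 128)            # background motion toward the keyframe

warped_frame = flow_warp(aux_frame, flow)
warped_mask  = flow_warp(aux_mask, flow)
fusion_input = torch.cat([keyframe, key_mask, warped_frame, warped_mask], dim=1)
print(fusion_input.shape)  # torch.Size([1, 8, 128, 128])
```

In practice an occlusion-aware flow model would exclude the masked (obstructed) pixels when estimating the background motion, so that the warp aligns the background rather than the fence itself; the zero flow used here is only a placeholder so the example runs.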