18406006. HALLUCINATING DETAILS FOR OVER-EXPOSED PIXELS IN VIDEOS USING LEARNED REFERENCE FRAME SELECTION simplified abstract (NVIDIA Corporation)

HALLUCINATING DETAILS FOR OVER-EXPOSED PIXELS IN VIDEOS USING LEARNED REFERENCE FRAME SELECTION

Organization Name

NVIDIA Corporation

Inventor(s)

Iuri Frosio of Bergamo (IT)

Yazhou Xing of Shenzhen (CN)

Chao Liu of Pittsburgh PA (US)

Anjul Patney of Kirkland WA (US)

Hongxu Yin of San Jose CA (US)

Amrita Mazumdar of San Francisco CA (US)

Jan Kautz of Lexington MA (US)

HALLUCINATING DETAILS FOR OVER-EXPOSED PIXELS IN VIDEOS USING LEARNED REFERENCE FRAME SELECTION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18406006 titled 'HALLUCINATING DETAILS FOR OVER-EXPOSED PIXELS IN VIDEOS USING LEARNED REFERENCE FRAME SELECTION'.

Simplified Explanation: This patent application describes enhancing live video frames in real time by reconstructing details lost in over-exposed regions, using neural networks and reference frames captured at different exposure levels.

Key Features and Innovation:

  • Receiving live video frames from a capturing device
  • Identifying reference frames whose exposure levels differ from that of the current frame
  • Using neural networks to generate missing details for the current frame (see the sketch after this list)
  • Outputting an updated version of the current frame in real-time
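
A minimal sketch of this pipeline, assuming a PyTorch implementation, follows. The DetailHallucinationNet architecture, the 0.95 saturation threshold, and the mask-based blending are illustrative assumptions; the patent abstract does not disclose a concrete network design or how over-exposed regions are detected.

```python
# Minimal sketch of the enhancement loop described in the feature list above.
# DetailHallucinationNet, the saturation threshold, and the blending rule are
# assumptions for illustration; they are not taken from the patent.
import torch
import torch.nn as nn


class DetailHallucinationNet(nn.Module):
    """Hypothetical network that predicts details for over-exposed regions of
    the current frame from a stack of differently exposed reference frames."""

    def __init__(self, num_refs: int = 2):
        super().__init__()
        # Current frame plus reference frames, concatenated along channels.
        self.body = nn.Sequential(
            nn.Conv2d(3 * (1 + num_refs), 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, current, references):
        x = torch.cat([current, *references], dim=1)
        return self.body(x)  # predicted replacement details


def enhance_frame(current, references, model):
    """Blend predicted details into the over-exposed regions of the current frame."""
    with torch.no_grad():
        details = model(current, references)
    # Treat near-saturated pixels as over-exposed (0.95 threshold is an assumption).
    overexposed = (current.mean(dim=1, keepdim=True) > 0.95).float()
    return current * (1.0 - overexposed) + details * overexposed


if __name__ == "__main__":
    model = DetailHallucinationNet(num_refs=2).eval()
    current = torch.rand(1, 3, 64, 64)                          # most recently captured frame
    references = [torch.rand(1, 3, 64, 64) for _ in range(2)]   # differently exposed frames
    updated = enhance_frame(current, references, model)
    print(updated.shape)  # torch.Size([1, 3, 64, 64])
```

The sketch only replaces pixels inside the over-exposure mask, since well-exposed pixels in the current frame need no hallucinated detail; whether the patented method blends this way is not stated in the abstract.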

Potential Applications: This technology could be used in video streaming services, security systems, video conferencing platforms, and virtual reality applications.

Problems Solved: This technology addresses the loss of detail in over-exposed regions of live video by reconstructing those details in real time from reference frames with different exposure levels.

Benefits: Improved video quality in over-exposed regions, real-time enhancement, and a better user experience across applications.

Commercial Applications: Enhancing live video quality in real time has commercial potential in the entertainment industry, security and surveillance systems, video conferencing platforms, and virtual reality experiences.

Questions about the Technology:

  1. How does this technology improve the overall user experience in video applications?
  2. What are the potential limitations of using neural networks to enhance live video frames?


Original Abstract Submitted

One or more embodiments include receiving one or more frames of a live video captured by a video capturing device, wherein the one or more frames include a current frame that is most-recently captured, identifying a set of reference frames included in the one or more frames based on at least the current frame, wherein each frame in the set of reference frames has a different exposure level relative to the current frame, determining, using one or more neural networks, a set of missing details for one or more regions of the current frame based on the set of reference frames, generating an updated version of the current frame based on the set of details, and outputting the updated version of the current frame in real-time.
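
The title refers to learned reference frame selection, while the abstract only states that each reference frame has a different exposure level relative to the current frame. The sketch below shows one plausible hand-crafted heuristic for that selection step, not the patented learned selector: candidate frames are scored by how much usable (non-saturated) detail they retain where the current frame is over-exposed, weighted by the exposure gap. The threshold and the scoring rule are assumptions.

```python
# A sketch of reference frame selection by a simple heuristic (not the
# learned selector named in the title). Threshold and scoring are assumptions.
import torch


def overexposure_mask(frame: torch.Tensor, threshold: float = 0.95) -> torch.Tensor:
    """Boolean mask of pixels considered over-exposed in an RGB frame (C, H, W)."""
    return frame.mean(dim=0) > threshold


def score_reference(current: torch.Tensor, candidate: torch.Tensor) -> float:
    """Higher score = the candidate keeps more non-saturated detail where the
    current frame is over-exposed, and its overall exposure differs more."""
    mask = overexposure_mask(current)
    if mask.sum() == 0:
        return 0.0
    usable = (~overexposure_mask(candidate))[mask].float().mean()
    exposure_gap = (current.mean() - candidate.mean()).abs()
    return float(usable * exposure_gap)


def select_references(current, candidates, k: int = 2):
    """Pick the k highest-scoring candidate frames as references."""
    ranked = sorted(candidates, key=lambda c: score_reference(current, c), reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    current = torch.rand(3, 64, 64).clamp(0.7, 1.0)                    # mostly bright current frame
    candidates = [torch.rand(3, 64, 64) * s for s in (0.3, 0.6, 1.0)]  # differently exposed frames
    refs = select_references(current, candidates, k=2)
    print(len(refs))  # 2
```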