17491120. PARALLEL METADATA GENERATION BASED ON A WINDOW OF OVERLAPPED FRAMES simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

PARALLEL METADATA GENERATION BASED ON A WINDOW OF OVERLAPPED FRAMES

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Hunsop Hong of Irvine CA (US)

Seongnam Oh of Irvine CA (US)

PARALLEL METADATA GENERATION BASED ON A WINDOW OF OVERLAPPED FRAMES - A simplified explanation of the abstract

This abstract first appeared for US patent application 17491120 titled 'PARALLEL METADATA GENERATION BASED ON A WINDOW OF OVERLAPPED FRAMES'.

Simplified Explanation

The patent application describes a method for segmenting a video into chunks and generating metadata for each chunk in parallel. For every chunk after the first, a window of overlapped frames is formed by selecting frames from the chunk immediately preceding it; each chunk is then processed in parallel to generate metadata with its window taken into account, the portion of metadata specific to the window is discarded, and the chunks are merged into a single output video that retains the remaining metadata. In summary (a code sketch follows the list):

  • Method for segmenting a video into chunks and generating metadata for each chunk
  • Select frames from the immediately preceding chunk to create a window of overlapped frames for each subsequent chunk
  • Process each chunk in parallel to generate metadata, considering the corresponding window of frames
  • Discard a portion of the metadata specific to the window of frames for each subsequent chunk
  • Merge the video chunks into a single output video, preserving the associated metadata
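
A minimal Python sketch of this pipeline is given below, assuming frames are represented as flat lists of pixel values and using a stand-in per-frame statistic as the metadata; the chunk size, window size, and all function names are illustrative and not taken from the application.

  from concurrent.futures import ProcessPoolExecutor

  # Illustrative parameters; the application does not fix chunk or window sizes.
  CHUNK_SIZE = 120   # frames per chunk
  WINDOW_SIZE = 8    # overlapped frames borrowed from the preceding chunk

  def segment(frames, chunk_size=CHUNK_SIZE):
      """Split the input video (a sequence of frames) into fixed-size chunks."""
      return [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]

  def frame_metadata(frame):
      """Stand-in per-frame statistic; the application leaves the metadata type open."""
      return {"mean_value": sum(frame) / len(frame)}

  def process_chunk(job):
      """Generate metadata for one chunk together with its window, then discard
      the portion of the metadata that is specific to the window frames."""
      window, chunk = job
      metadata = [frame_metadata(f) for f in window + chunk]
      return metadata[len(window):]

  def parallel_metadata(frames):
      chunks = segment(frames)
      if not chunks:
          return [], []
      # The first chunk has no window; each subsequent chunk is paired with a
      # subsequence (here, the tail) of the chunk immediately preceding it.
      jobs = [([], chunks[0])] + [
          (chunks[i - 1][-WINDOW_SIZE:], chunks[i]) for i in range(1, len(chunks))
      ]
      with ProcessPoolExecutor() as pool:
          per_chunk_metadata = list(pool.map(process_chunk, jobs))
      # Merge the chunks into a single output video; each chunk keeps only
      # its remaining metadata.
      output_video = [f for chunk in chunks for f in chunk]
      output_metadata = [m for chunk_md in per_chunk_metadata for m in chunk_md]
      return output_video, output_metadata

In this sketch the overlap lets each worker see frames just before its chunk boundary, while discarding the window-specific entries leaves exactly one metadata entry per frame in the merged output.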

Potential Applications

  • Video editing and post-production workflows
  • Video compression and encoding algorithms
  • Video streaming platforms and services
  • Video analytics and content analysis systems

Problems Solved

  • Efficient segmentation and processing of large video files
  • Seamless merging of video chunks while retaining relevant metadata
  • Optimization of video processing and analysis tasks
  • Enhanced video compression and streaming techniques

Benefits

  • Improved efficiency and speed in video processing workflows
  • Enhanced accuracy and reliability of video analytics and content analysis
  • Reduced storage and bandwidth requirements for video streaming
  • Simplified video editing and post-production tasks


Original Abstract Submitted

One embodiment provides a method comprising segmenting an input video into a first video chunk and one or more subsequent video chunks. The method further comprises, for each subsequent video chunk, generating a corresponding window of overlapped frames by selecting a subsequence of frames from a different video chunk immediately preceding the subsequent video chunk. The method further comprises generating metadata corresponding to each video chunk by processing each video chunk in parallel. Each subsequent video chunk is processed based in part on a corresponding window of overlapped frames. The method further comprises, for each subsequent video chunk, discarding a portion of metadata corresponding to the subsequent video chunk, where the portion discarded is specific to a corresponding window of overlapped frames. The method further comprises merging each video chunk into a single output video. Each video chunk merged is associated with any remaining corresponding metadata.
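
To make the window and discard bookkeeping concrete, consider a hypothetical example (the numbers and the per-frame metadata assumption are illustrative, not from the application): a 300-frame input is split into three 100-frame chunks, with a 10-frame window taken, say, from the end of each preceding chunk. The second chunk is then processed together with frames 90-99 of the first chunk and, assuming one metadata entry per frame, yields 110 entries; the 10 entries corresponding to the window are discarded, so after merging, the output video carries exactly one metadata entry per original frame with no duplication at chunk boundaries.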