18430882. Advanced Fusion Mode For Adaptive Loop Filter In Video Coding simplified abstract (Beijing Bytedance Network Technology Co., Ltd.)

Advanced Fusion Mode For Adaptive Loop Filter In Video Coding

Organization Name

Beijing Bytedance Network Technology Co., Ltd.

Inventor(s)

Wenbin Yin of Beijing (CN)

Kai Zhang of San Diego, CA (US)

Li Zhang of San Diego, CA (US)

Advanced Fusion Mode For Adaptive Loop Filter In Video Coding - A simplified explanation of the abstract

This abstract first appeared for US patent application 18430882, titled 'Advanced Fusion Mode For Adaptive Loop Filter In Video Coding'.

Simplified Explanation

The method described in the patent application processes media data by filtering a sample of a video unit with one or more virtual filters in a fusion mode, and then performing a conversion between the video and its bitstream based on the filtered sample. The innovation aims to improve the quality and efficiency of video coding. The two steps are listed below; a minimal sketch of the fusion step follows the list.

  • Filtering a sample of a video unit using virtual filters in a fusion mode
  • Performing a conversion between the video and a bitstream based on the filtered sample
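The abstract does not say how the virtual filters are combined. The sketch below assumes, purely for illustration, that the fusion mode blends the outputs of two or more filters applied to the same sample with a normalized weighted sum; the filter shapes, weights, and the `fuse_sample` name are assumptions, not details from the filing.

```python
import numpy as np

def apply_filter(block: np.ndarray, coeffs: np.ndarray) -> float:
    """Apply one (virtual) loop filter to the neighborhood around the current sample.

    `block` is a small neighborhood centered on the sample; `coeffs` is a kernel of the
    same shape. Both are illustrative stand-ins for whatever filters the codec derives.
    """
    return float(np.sum(block * coeffs))

def fuse_sample(block: np.ndarray, filters: list[np.ndarray], weights: list[float]) -> float:
    """Fusion-mode sketch: filter the sample with each virtual filter and blend the results.

    Assumes the fusion is a normalized weighted sum; the actual rule in the application
    may differ (e.g. fixed-point weights signaled in the bitstream).
    """
    outputs = [apply_filter(block, c) for c in filters]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total

# Usage: fuse two hypothetical 5x5 filters over one sample's neighborhood.
rng = np.random.default_rng(0)
neighborhood = rng.integers(0, 256, size=(5, 5)).astype(float)
f1 = np.full((5, 5), 1 / 25.0)           # smoothing-like kernel (illustrative)
f2 = np.zeros((5, 5))
f2[2, 2] = 1.0                            # identity kernel (illustrative)
print(fuse_sample(neighborhood, [f1, f2], weights=[0.25, 0.75]))
```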

Potential Applications

This technology could be applied in video streaming services, video editing software, and video surveillance systems.

Problems Solved

This technology helps improve the quality of videos by applying virtual filters in a fusion mode, leading to better compression and encoding of video data.

Benefits

The benefits of this technology include enhanced video quality, improved compression efficiency, and better overall performance in video processing tasks.

Potential Commercial Applications

The potential commercial applications of this technology include video streaming platforms, video editing software companies, and manufacturers of video surveillance systems.

Possible Prior Art

One possible prior art could be the use of virtual filters in video processing to enhance video quality and compression efficiency.

What are the specific virtual filters used in the fusion mode?

The specific virtual filters used in the fusion mode are not detailed in the abstract. Further information on the types of filters and how they are combined could provide a clearer understanding of the innovation.
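Purely as an illustration of what such filters might look like (the filing does not confirm this), conventional adaptive loop filter designs use symmetric diamond-shaped kernels, such as a 7x7 diamond for luma samples. The helper below builds a support mask of that kind; the shape and the `diamond_mask` name are assumptions for illustration only.

```python
import numpy as np

def diamond_mask(size: int) -> np.ndarray:
    """Build a binary diamond-shaped filter support, as used by typical ALF kernels.

    This only illustrates a plausible filter shape; the actual virtual filters in the
    application are not described in the abstract.
    """
    c = size // 2
    y, x = np.mgrid[0:size, 0:size]
    return (np.abs(y - c) + np.abs(x - c) <= c).astype(float)

print(diamond_mask(7))  # 7x7 diamond support with 25 active taps
```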

How does the conversion process between the video and bitstream work?

The abstract mentions a conversion process between the video and a bitstream based on the filtered sample, but it does not elaborate on the specifics of this process. Exploring the details of this conversion could shed light on the technical aspects of the innovation.
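In video coding, "conversion" typically covers both encoding (video to bitstream) and decoding (bitstream to video), with the loop-filtered samples feeding the reconstructed picture on both sides. The decoder-side sketch below rests on that assumption; the callables `parse`, `reconstruct`, and `fuse_filter` are hypothetical placeholders, not functions named in the filing.

```python
def decode_with_fusion_alf(bitstream, parse, reconstruct, fuse_filter):
    """Sketch of the decoding direction of the conversion.

    `parse`, `reconstruct`, and `fuse_filter` stand in for bitstream parsing,
    sample reconstruction, and the fusion-mode adaptive loop filter.
    """
    decoded_frames = []
    for unit in parse(bitstream):          # entropy-decode one video unit at a time
        samples = reconstruct(unit)        # prediction + residual -> reconstructed samples
        filtered = fuse_filter(samples)    # apply the fusion-mode adaptive loop filter
        decoded_frames.append(filtered)    # filtered samples form the output/reference picture
    return decoded_frames
```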


Original Abstract Submitted

A method of processing media data. The method includes filtering a sample of a video unit using one or more virtual filters of a fusion mode to produce a filtering result; and performing a conversion between a video including the video unit and a bitstream of the video based on the sample as filtered. A corresponding video coding apparatus and non-transitory computer-readable recording medium are also disclosed.