18433054. ENCODING AND DECODING METHOD AND APPARATUS simplified abstract (HUAWEI TECHNOLOGIES CO., LTD.)

From WikiPatents
Revision as of 06:42, 22 June 2024 by Wikipatents (talk | contribs) (Creating a new page)

ENCODING AND DECODING METHOD AND APPARATUS

Organization Name

HUAWEI TECHNOLOGIES CO., LTD.

Inventor(s)

Jiaying Liu of Beijing (CN)

Dezhao Wang of Beijing (CN)

Jing Wang of Beijing (CN)

Tiansheng Guo of Beijing (CN)

Ze Cui of Beijing (CN)

Yunying Ge of Beijing (CN)

ENCODING AND DECODING METHOD AND APPARATUS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18433054, titled 'ENCODING AND DECODING METHOD AND APPARATUS'.

Simplified Explanation

This patent application discusses methods and devices for encoding and decoding data to enhance rate distortion performance using artificial intelligence technologies.

  • The method proceeds in five steps: (1) obtain the to-be-encoded data; (2) input the data into a first encoding network to obtain a target parameter; (3) construct a second encoding network based on the target parameter; (4) input the data into the second encoding network to obtain a first feature; and (5) encode the first feature to generate an encoded bitstream.
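The five steps above can be sketched as a small Python pipeline. This is a hedged illustration, not the patent's actual networks: the function names, the scalar "target parameter", and the use of a simple gain as the second network are all illustrative assumptions.

```python
import struct

def first_encoding_network(data):
    # Step 2: map the to-be-encoded data to a target parameter.
    # (Here the "network" is just the mean of the samples - an assumption.)
    return sum(data) / len(data)

def build_second_encoding_network(target_param):
    # Step 3: construct the second encoding network from the target
    # parameter; here the parameter acts as a gain applied per sample.
    def second_network(data):
        return [target_param * x for x in data]
    return second_network

def entropy_encode(feature):
    # Step 5: stand-in for entropy coding - serialize the feature to bytes.
    return b"".join(struct.pack(">f", v) for v in feature)

def encode(data):
    target_param = first_encoding_network(data)                   # steps 1-2
    second_network = build_second_encoding_network(target_param)  # step 3
    first_feature = second_network(data)                          # step 4
    return entropy_encode(first_feature)                          # step 5

bitstream = encode([1.0, 2.0, 3.0, 4.0])
```

The key structural idea this sketch preserves is that the second encoding network is not fixed in advance: it is constructed per input from a parameter that the first network predicts.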

Key Features and Innovation

  • Utilizes artificial intelligence technologies to improve data encoding and decoding performance.
  • Involves a multi-step process including obtaining data, constructing encoding networks, and generating an encoded bitstream.
  • Focuses on enhancing rate distortion performance of encoding and decoding methods.
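The rate distortion performance mentioned above is commonly measured with a loss of the form L = R + λ·D, where R is the bit rate, D is the distortion, and λ trades one off against the other. The numbers and the λ weight below are made up for illustration only.

```python
def rate_distortion_loss(bits_per_sample, mse, lam=0.01):
    # Rate R (bits per sample) plus weighted distortion D (mean squared
    # error); a lower loss indicates better rate-distortion performance.
    return bits_per_sample + lam * mse

# Hypothetical comparison: the "improved" codec uses fewer bits AND
# reconstructs with less error, so it dominates the baseline.
baseline = rate_distortion_loss(bits_per_sample=0.50, mse=30.0)
improved = rate_distortion_loss(bits_per_sample=0.42, mse=28.0)
```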

Potential Applications

This technology can be applied in various fields such as telecommunications, data compression, image and video processing, and artificial intelligence systems.

Problems Solved

  • Improves the efficiency and performance of data encoding and decoding methods.
  • Enhances the rate distortion performance of encoding and decoding processes.
  • Enables better compression and transmission of data.

Benefits

  • Higher efficiency in data encoding and decoding.
  • Improved rate distortion performance.
  • Enhanced data compression capabilities.
  • Better quality of transmitted data.

Commercial Applications

Artificial intelligence-based data encoding and decoding methods can be utilized in industries such as telecommunications, multimedia processing, data storage, and cloud computing, leading to faster and more efficient data processing and transmission.

Prior Art

Related prior art spans data encoding and decoding methods, artificial intelligence technologies, and rate distortion optimization in information theory and signal processing.

Frequently Updated Research

Researchers are constantly working on improving data encoding and decoding techniques using artificial intelligence, exploring new algorithms and architectures to enhance performance and efficiency.

Questions about Data Encoding and Decoding with AI

How does this technology impact data transmission efficiency?

This technology significantly improves data transmission efficiency by enhancing the rate distortion performance of encoding and decoding methods, leading to better compression and quality of transmitted data.

What are the potential applications of AI-based data encoding and decoding methods?

AI-based data encoding and decoding methods have diverse applications in telecommunications, multimedia processing, data storage, and artificial intelligence systems, offering improved performance and efficiency in various industries.


Original Abstract Submitted

This application discloses encoding and decoding methods and apparatuses, and relates to the field of artificial intelligence technologies, to improve rate distortion performance of data encoding and decoding methods. The method includes: first obtaining to-be-encoded data, and then inputting the to-be-encoded data into a first encoding network to obtain a target parameter; then constructing a second encoding network based on the target parameter; next inputting the to-be-encoded data into the second encoding network to obtain a first feature; and finally encoding the first feature to obtain an encoded bitstream.