US Patent Application 18353882. CROSS-MODAL SEARCH METHOD AND RELATED DEVICE simplified abstract

From WikiPatents

CROSS-MODAL SEARCH METHOD AND RELATED DEVICE

Organization Name

TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED

Inventor(s)

Ke Mei of Shenzhen (CN)

Huan Zheng of Shenzhen (CN)

Ming Li of Shenzhen (CN)

CROSS-MODAL SEARCH METHOD AND RELATED DEVICE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18353882, titled 'CROSS-MODAL SEARCH METHOD AND RELATED DEVICE'.

Simplified Explanation

- The patent application describes a method for cross-modal search, which involves searching for data in one modality based on the content and semantic information of data in another modality.
- The method begins by acquiring data in the first modality.
- The first modality data is used to search a database in the second modality based on its content information. This search yields a first set of second modality data that matches the content information of the first modality data.
- The first modality data is also used to search the second modality database based on its semantic information. This search yields a second set of second modality data that matches the semantic information of the first modality data.
- The first set and the second set are then merged to obtain a cross-modal search result corresponding to the first modality data.
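The two-branch search-and-merge flow above can be sketched in Python. This is a minimal illustration only, not the patent's implementation: the function names (`search_by_content`, `search_by_semantics`, `cross_modal_search`), the tag-overlap content match, the cosine-similarity semantic match, and the toy database are all assumptions made for the example.

```python
def search_by_content(query_tokens, database):
    """Content branch: return items whose tags overlap the query's tokens."""
    return [item for item in database
            if set(query_tokens) & set(item["tags"])]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def search_by_semantics(query_vec, database, threshold=0.8):
    """Semantic branch: return items whose embedding is close to the query's."""
    return [item for item in database
            if cosine(query_vec, item["vec"]) >= threshold]

def cross_modal_search(query_tokens, query_vec, database):
    """Run both branches and merge the two sets, deduplicating by item id."""
    first_set = search_by_content(query_tokens, database)
    second_set = search_by_semantics(query_vec, database)
    merged = {item["id"]: item for item in first_set + second_set}
    return list(merged.values())

# Toy second-modality database (e.g. images described by tags and an embedding).
db = [
    {"id": 1, "tags": ["cat", "grass"], "vec": [0.9, 0.1]},
    {"id": 2, "tags": ["dog"],          "vec": [0.1, 0.9]},
    {"id": 3, "tags": ["cat"],          "vec": [0.2, 0.8]},
]

# Content branch matches ids 1 and 3; semantic branch matches ids 2 and 3.
result = cross_modal_search(["cat"], [0.0, 1.0], db)
print(sorted(item["id"] for item in result))  # → [1, 2, 3]
```

Merging by item id means an item found by both branches appears only once in the result, which matches the abstract's description of combining the first and second sets into a single cross-modal search result.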


Original Abstract Submitted

A cross-modal search method includes: acquiring first modality data; searching in a second modality database based on content information of the first modality data to obtain a first set, the first set including at least one piece of second modality data matched with the content information of the first modality data; searching in the second modality database based on semantic information of the first modality data to obtain a second set, the second set including at least one piece of second modality data matched with the semantic information of the first modality data; and merging the first set and the second set to obtain a cross-modal search result corresponding to the first modality data.