Tencent Technology (Shenzhen) Company Limited patent applications published on December 14th, 2023

From WikiPatents

Patent applications for Tencent Technology (Shenzhen) Company Limited on December 14th, 2023

CLOUD APPLICATION-BASED DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM (18239665)

Main Inventor

Shili XU


Brief explanation

The abstract describes a cloud-based device control method performed by an electronic device. The method involves establishing a communication connection with a cloud application server, receiving a video stream and multimedia feedback information from the server, and controlling a physical device associated with the client to execute a multimedia feedback operation while the video stream is being played.
  • The method involves establishing a connection with a cloud application server.
  • The electronic device receives a video stream and multimedia feedback information from the server.
  • A physical device associated with the client is controlled to execute a multimedia feedback operation.
  • The multimedia feedback operation is performed while the video stream is being played.

Potential Applications

  • Remote control of physical devices through a cloud application.
  • Interactive multimedia feedback operations during video streaming.
  • Automation and control of devices through a cloud-based platform.

Problems Solved

  • Enables remote control and interaction with physical devices through a cloud application.
  • Provides a method for executing multimedia feedback operations during video streaming.
  • Simplifies device control and automation through a cloud-based platform.

Benefits

  • Allows users to control physical devices remotely using a cloud application.
  • Enhances user experience by enabling multimedia feedback operations during video streaming.
  • Provides a convenient and centralized platform for device control and automation.

Abstract

This application provides a cloud application-based device control method performed by an electronic device. The method includes: establishing a communication connection with a cloud application server; receiving a video stream corresponding to a cloud application scene transmitted by the cloud application server and multimedia feedback information corresponding to the cloud application scene; and controlling a physical device associated with the cloud application client to execute a multimedia feedback operation in accordance with the multimedia feedback information while the video stream corresponding to the cloud application scene is being played.
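
Below is a minimal Python sketch of the control flow the abstract describes: receive a stream of messages from the cloud application server and drive an associated physical device while frames play. The socket transport, JSON message framing, field names, and the device's execute_feedback interface are assumptions made for illustration, not details from the application.

    # Hypothetical sketch of the described control flow; the server API,
    # message format, and device interface are assumptions, not the patent's.
    import json
    import socket

    def run_cloud_client(server_addr, device):
        """Receive video frames plus multimedia feedback info and drive a device."""
        conn = socket.create_connection(server_addr)   # communication connection
        buf = conn.makefile("rb")
        for line in buf:                               # one JSON message per line (assumed framing)
            msg = json.loads(line)
            if msg["type"] == "video_frame":
                render_frame(msg["frame"])             # play the cloud-application video stream
            elif msg["type"] == "multimedia_feedback":
                # Control the associated physical device while the stream plays,
                # e.g. vibration intensity/duration carried in the feedback info.
                device.execute_feedback(msg["feedback"])

    def render_frame(frame):
        pass  # hand the decoded frame to the local player (placeholder)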

VIRTUAL OBJECT CONTROL METHOD AND APPARATUS, STORAGE MEDIUM, AND ELECTRONIC DEVICE (18457152)

Main Inventor

Xiongfei HUANG


Brief explanation

The present disclosure describes a method for controlling a virtual object using an electronic device. This method involves several steps:
  • Obtaining a first operation instruction when the virtual object performs an acceleration operation. This instruction guides the virtual object to perform a specific action.
  • Obtaining a second operation instruction within a certain time period after the completion of the first action. This second instruction guides the virtual object to perform another action.
  • Adjusting the movement state of the virtual object from a first state to a second state. In the first state, the virtual object collects less energy per unit time compared to the second state.
  • Activating an acceleration control button when the energy accumulation value reaches a specific threshold.

Potential applications of this technology:

  • Gaming: This method can be used to control virtual objects in video games, allowing for more dynamic and interactive gameplay experiences.
  • Virtual reality (VR): The method can enhance the immersion and realism of VR environments by providing more precise control over virtual objects.
  • Augmented reality (AR): AR applications can benefit from this method by enabling users to manipulate virtual objects in real-world environments.

Problems solved by this technology:

  • Improved control: The method allows for more precise and intuitive control of virtual objects, enhancing the user experience.
  • Faster energy accumulation: By switching the virtual object to the second movement state, the method increases the in-game energy the object collects per unit time, so the acceleration control becomes available sooner.

Benefits of this technology:

  • Enhanced user experience: The method provides more dynamic and interactive control over virtual objects, leading to a more engaging user experience.
  • Responsive acceleration control: Quicker energy accumulation reactivates the acceleration control button sooner, rewarding well-timed action combinations.

Abstract

The present disclosure provides a method for controlling a virtual object performed by an electronic device. The method includes obtaining a first operation instruction when a virtual object performs an acceleration operation, the first operation instruction instructing the virtual object to perform a first target action; obtaining a second operation instruction within a first target time period after the first target action is completed, the second operation instruction instructing the virtual object to perform a second target action; adjusting a movement state of the virtual object from a first state to a second state, a first energy value collected by the virtual object per unit time in the first state being less than a second energy value collected by the virtual object per unit time in the second state; and adjusting an acceleration control button to an active state when an energy accumulation value reaches a trigger threshold.
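
The state logic above can be pictured with a short Python sketch. The class, method names, energy rates, threshold, and timing window are illustrative placeholders; only the overall flow (first action during acceleration, second action inside a time window, faster energy collection in the second state, button activation at a threshold) follows the abstract.

    # Illustrative state sketch of the described control logic; thresholds,
    # energy rates, and timing values are made-up placeholders.
    import time

    class VirtualObjectController:
        def __init__(self, rate_state1=1.0, rate_state2=3.0,
                     trigger_threshold=100.0, combo_window=0.5):
            self.energy = 0.0
            self.rate = rate_state1              # first state: lower energy per unit time
            self.rate_state2 = rate_state2       # second state: higher energy per unit time
            self.trigger_threshold = trigger_threshold
            self.combo_window = combo_window     # first target time period
            self.accel_button_active = False
            self._first_action_done_at = None

        def on_first_instruction(self, accelerating):
            if accelerating:                     # obtained while an acceleration operation runs
                self.perform("first_target_action")
                self._first_action_done_at = time.monotonic()

        def on_second_instruction(self):
            done = self._first_action_done_at
            if done is not None and time.monotonic() - done <= self.combo_window:
                self.perform("second_target_action")
                self.rate = self.rate_state2     # adjust movement state: collect energy faster

        def accumulate(self, dt):
            self.energy += self.rate * dt
            if self.energy >= self.trigger_threshold:
                self.accel_button_active = True  # acceleration control button becomes active

        def perform(self, action):
            print("performing", action)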

VIRTUAL ITEM PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (18227860)

Main Inventor

Jieqi XIE


Brief explanation

The abstract describes a virtual item processing method performed by an electronic device. Here are the key points:
  • The method involves displaying a virtual scene that includes an entry to a first virtual item list associated with a first virtual object.
  • When the entry to the first virtual item list is triggered, the method displays the first virtual item list within the virtual scene.
  • The first virtual item list consists of different types of virtual items owned by the first virtual object.
  • When a user selects a specific type of virtual items from the first virtual item list, the method discards at least one virtual item associated with the selected type.

Potential applications of this technology:

  • Virtual reality gaming platforms or applications that involve collecting and managing virtual items.
  • Virtual marketplaces where users can buy, sell, or trade virtual items.
  • Virtual training or simulation environments where users need to interact with and manage virtual objects.

Problems solved by this technology:

  • Simplifies the process of managing virtual items within a virtual scene or environment.
  • Provides a user-friendly interface for users to view and select virtual items.
  • Allows for efficient discarding of unwanted virtual items.

Benefits of this technology:

  • Enhances the user experience by providing a visually appealing and interactive virtual scene.
  • Streamlines the management of virtual items, making it easier for users to navigate and organize their collections.
  • Improves efficiency by allowing users to quickly discard unwanted virtual items.

Abstract

This application provides a virtual item processing method performed by an electronic device. The method includes: displaying a virtual scene, the virtual scene comprising an entry to a first virtual item list associated with a first virtual object; displaying the first virtual item list in the virtual scene in response to a trigger operation on the entry to the first virtual item list, the first virtual item list including at least one type of virtual items owned by the first virtual object; and in response to a first selection operation on one type of virtual items in the first virtual item list, discarding at least one virtual item associated with the selected type of virtual items.
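
A small Python data-model sketch of the list-and-discard flow; FirstVirtualObject, item_list_by_type, and discard_type are invented names used only to illustrate grouping owned items by type and discarding at least one item of a selected type.

    # A minimal data-model sketch of the described list/discard flow; the class
    # and method names are illustrative, not taken from the application.
    from collections import defaultdict

    class FirstVirtualObject:
        def __init__(self, items):
            # items: list of (item_id, item_type) owned by the first virtual object
            self.items = list(items)

        def item_list_by_type(self):
            grouped = defaultdict(list)
            for item_id, item_type in self.items:
                grouped[item_type].append(item_id)
            return dict(grouped)                     # the "first virtual item list"

        def discard_type(self, item_type, count=1):
            """Discard at least one virtual item of the selected type."""
            kept, dropped = [], 0
            for item_id, t in self.items:
                if t == item_type and dropped < count:
                    dropped += 1                     # discarded from the scene
                else:
                    kept.append((item_id, t))
            self.items = kept
            return dropped

    obj = FirstVirtualObject([(1, "potion"), (2, "potion"), (3, "sword")])
    print(obj.item_list_by_type())                   # shown when the list entry is triggered
    obj.discard_type("potion")                       # first selection operation on a type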

VIRTUAL OBJECT LOCATION DISPLAY (18456392)

Main Inventor

Zhihong LIU


Brief explanation

The patent application describes a method for displaying information in a virtual scene.
  • The method involves displaying a virtual scene with a first virtual object.
  • If the first virtual object is within a target region and the view of at least one second virtual object is obstructed, a graphical representation of the second virtual object is displayed.
  • The graphical representation indicates the location of the second virtual object within the virtual scene.

Potential applications of this technology:

  • Virtual reality gaming: The method can be used to display graphical representations of objects that are obstructed in the virtual scene, providing players with information about their location.
  • Architectural visualization: When exploring a virtual building or space, the method can display graphical representations of objects that are obstructed, helping users understand the layout and location of objects.

Problems solved by this technology:

  • Limited visibility: In virtual scenes, objects can be obstructed from view, making it difficult for users to understand their location and surroundings. This method solves this problem by displaying graphical representations of obstructed objects.
  • Spatial awareness: Users may struggle to navigate and understand their position within a virtual scene. The method helps users gain a better understanding of the location of objects within the scene.

Benefits of this technology:

  • Improved user experience: By providing graphical representations of obstructed objects, users can have a more immersive and informative experience in virtual scenes.
  • Enhanced spatial understanding: Users can better comprehend the layout and location of objects within a virtual scene, improving their spatial awareness and navigation abilities.

Abstract

In an information display method in a virtual scene, the virtual scene of a first virtual object is displayed. A graphical representation of each of at least one second virtual object in the virtual scene is displayed based on the first virtual object being within a target region in the virtual scene and a view of each of the at least one second virtual object being obstructed. The displayed graphical representation of each of the at least one second virtual object indicates a location of the respective second virtual object within the virtual scene.
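
A hedged Python sketch of the display rule, with simple 2-D stand-ins for the scene: the region test, the obstruction test, and the marker drawing are placeholders for engine-specific code.

    # Hedged sketch of the display rule using simple 2-D stand-ins; the region
    # test, occlusion test, and marker drawing are placeholders for engine code.

    def in_region(pos, region):
        (x, y), (x0, y0, x1, y1) = pos, region
        return x0 <= x <= x1 and y0 <= y <= y1

    def update_indicators(first_pos, second_objects, target_region, is_obstructed, draw_marker):
        """Show a location marker for every obstructed second virtual object."""
        if not in_region(first_pos, target_region):
            return                                   # rule applies only inside the target region
        for name, pos in second_objects.items():
            if is_obstructed(first_pos, pos):
                # View of this second virtual object is obstructed: display a graphical
                # representation indicating where it is in the virtual scene.
                draw_marker(name, pos)

    update_indicators(
        first_pos=(2, 2),
        second_objects={"enemy_1": (8, 3)},
        target_region=(0, 0, 5, 5),
        is_obstructed=lambda a, b: True,             # pretend a wall blocks the view
        draw_marker=lambda name, pos: print("marker for", name, "at", pos),
    )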

IMIDAZO[1,2-a]PYRAZINE OR PYRAZOLO[1,5-a]PYRIMIDINE DERIVATIVE AND USE THEREOF (18455578)

Main Inventor

Ding XUE


Brief explanation

The abstract of the patent application describes a compound represented by Formula (I) and its various forms, such as stereoisomers, tautomers, deuterated compounds, oxynitrides, solvates, metabolites, pharmaceutically acceptable salts, and prodrugs.
  • The compound described in the patent application is represented by Formula (I) or its various forms.
  • The compound can exist as stereoisomers, tautomers, deuterated compounds, oxynitrides, solvates, metabolites, pharmaceutically acceptable salts, or prodrugs.
  • The patent application covers the compound and its various forms for potential use in pharmaceutical applications.

Potential Applications:

  • The compound described in the patent application may have potential applications in the field of pharmaceuticals.
  • It can be used as a starting point for the development of new drugs or therapeutic agents.

Problems Solved:

  • The patent application does not explicitly mention any specific problems solved by the compound. However, it suggests that the compound and its various forms may have potential applications in the pharmaceutical industry, indicating a potential solution to unmet medical needs.

Benefits:

  • The compound and its various forms described in the patent application may offer new opportunities for drug discovery and development.
  • The patent application provides protection for the compound and its various forms, allowing for potential commercialization and further research in the pharmaceutical field.

Abstract

The present disclosure provides a compound. The compound is a compound as shown in Formula (I) or a stereoisomer, tautomer, deuterated compound, oxynitride, solvate, metabolite, pharmaceutically acceptable salt or prodrug of the compound having a structure as shown in Formula (I).

NAVIGATION INTERFACE DISPLAY METHOD AND APPARATUS, TERMINAL, AND STORAGE MEDIUM (18455705)

Main Inventor

Honglong ZHANG


Brief explanation

The patent application describes a method for displaying a navigation interface that incorporates real-time environment information. The method involves obtaining the environment information, determining a suitable interface component based on the current navigation scene, and displaying a fused navigation interface.
  • The method involves obtaining real-time environment information.
  • The first interface component is determined based on the current navigation scene.
  • The first interface component includes a base map and a sky box.
  • The navigation interface is displayed by fusing the base map and the sky box.
  • The base map represents the road surface environment.
  • The sky box represents the sky environment.
  • Different navigation scenes have different styles of interface components.

Potential Applications

This technology can be applied in various navigation systems, such as:

  • In-car navigation systems
  • Mobile navigation apps
  • Augmented reality navigation systems

Problems Solved

This technology addresses the following problems:

  • Lack of real-time environment information in navigation interfaces
  • Difficulty in visually representing different navigation scenes
  • Inability to provide a comprehensive and immersive navigation experience

Benefits

The benefits of this technology include:

  • Enhanced navigation experience with real-time environment information
  • Improved visual representation of different navigation scenes
  • Increased user engagement and immersion in the navigation interface

Abstract

A navigation interface display method includes: obtaining real-time environment information; determining a first interface component based on a first navigation scene corresponding to the real-time environment information, the first interface component including a first base map and a first sky box; and displaying a navigation interface obtained by fusing the first base map and the first sky box. The first base map indicates a road surface environment, the first sky box indicates a sky environment, and styles of interface components corresponding to different navigation scenes are different.
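
A short Python sketch of the scene-to-component mapping described above. The scene names, component styles, classification rule, and the way layers are fused are assumptions for illustration only.

    # Illustrative mapping from real-time environment info to interface components;
    # scene names, styles, and the compositing step are assumptions for the sketch.

    SCENE_STYLES = {
        "day_clear":  {"base_map": "light_road", "sky_box": "blue_sky"},
        "night":      {"base_map": "dark_road",  "sky_box": "star_sky"},
        "rain":       {"base_map": "wet_road",   "sky_box": "overcast_sky"},
    }

    def classify_scene(env):
        # Hypothetical rule: derive the navigation scene from environment info.
        if env.get("raining"):
            return "rain"
        return "night" if env.get("hour", 12) >= 19 else "day_clear"

    def build_navigation_interface(env):
        scene = classify_scene(env)                 # first navigation scene
        component = SCENE_STYLES[scene]             # first interface component
        # Fuse the base map (road surface environment) with the sky box (sky
        # environment) into a single navigation interface.
        return {"scene": scene,
                "layers": [component["base_map"], component["sky_box"]]}

    print(build_navigation_interface({"raining": False, "hour": 21}))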

VIBRATION EVALUATION METHOD AND APPARATUS, COMPUTER DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (18239081)

Main Inventor

Shili XU


Brief explanation

This patent application describes a method for evaluating vibrations using a computer device. The method generates an actual vibration attribute curve of a target object across a series of timestamps, determines the delay of that actual curve relative to a reference vibration attribute curve from the time difference between the two, and corrects the actual curve using the delay to obtain a corrected curve. It then measures the deviation between the reference curve and the corrected curve and adjusts the vibration effect of the target object based on that deviation.

  • Method for evaluating vibrations using a computer device
  • Generates a curve representing vibration attributes of a target object
  • Determines delay in transformation of actual vibration attributes
  • Corrects the actual vibration attributes using the delay information
  • Acquires information about deviation between reference curve and corrected curve
  • Adjusts vibration effect of the target object based on the deviation

Potential Applications

  • Industrial machinery: This technology can be used to evaluate the vibration effects of various industrial machinery, allowing for better maintenance and optimization.
  • Structural engineering: The method can be applied to assess the vibrations in buildings, bridges, and other structures, aiding in their design and safety.
  • Automotive industry: The technology can be utilized to evaluate the vibrations in vehicles, helping to improve ride comfort and reduce noise levels.
  • Consumer electronics: This method can be used to assess the vibrations in smartphones, laptops, and other electronic devices, ensuring their performance and durability.

Problems Solved

  • Improved accuracy: The method enhances the accuracy of evaluating the vibration effects of a target object, leading to more reliable results.
  • Efficient analysis: By using a computer device, the evaluation process can be automated and performed more efficiently, saving time and resources.
  • Real-time adjustments: The ability to correct the actual vibration attributes in real-time allows for immediate adjustments to the vibration effect of the target object, improving its performance.

Benefits

  • Enhanced maintenance: By accurately evaluating vibrations, potential issues in machinery and structures can be identified early, enabling proactive maintenance and preventing costly breakdowns.
  • Improved safety: The method helps in assessing the vibrations in structures, ensuring their stability and safety.
  • Enhanced user experience: By evaluating and adjusting vibrations in consumer electronics and vehicles, the technology improves user comfort and satisfaction.
  • Cost savings: The efficient evaluation process and proactive maintenance enabled by this technology can result in cost savings by reducing downtime and avoiding major repairs.

Abstract

Embodiments of this application provide a vibration evaluation method performed by a computer device. The method includes: generating an actual vibration attribute curve of a target object including reference vibration attribute values at a plurality of timestamps; determining at least one delay duration of N actual transformation points according to a time difference between a reference vibration attribute curve of the reference vibration attribute information and the actual vibration attribute curve; correcting the actual vibration attribute curve by using the delay duration to obtain a corrected vibration attribute curve; and acquiring target curve deviation information between the reference vibration attribute curve and the corrected vibration attribute curve, and adjusting a vibration effect of the target object based on the target curve deviation information. Through this application, the accuracy of an evaluation result of the vibration effect of the target object can be improved.
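
The evaluation idea can be sketched numerically with NumPy. Estimating the delay by cross-correlation and scoring the deviation with RMSE are assumptions; the application only states that a delay is determined from the time difference between the curves and that a curve deviation is computed.

    # A numerical sketch of the evaluation idea using NumPy; cross-correlation
    # for the delay and RMSE for the deviation are assumptions for illustration.
    import numpy as np

    def estimate_delay(reference, actual):
        """Lag (in samples) at which the actual curve best matches the reference."""
        ref = reference - reference.mean()
        act = actual - actual.mean()
        corr = np.correlate(act, ref, mode="full")
        return np.argmax(corr) - (len(ref) - 1)

    def evaluate_vibration(reference, actual):
        delay = estimate_delay(reference, actual)
        corrected = np.roll(actual, -delay)        # correct the actual curve by the delay
        deviation = float(np.sqrt(np.mean((reference - corrected) ** 2)))
        return delay, deviation                    # used to adjust the vibration effect

    t = np.linspace(0, 1, 200)
    reference = np.sin(2 * np.pi * 5 * t)
    actual = np.roll(reference, 7) + 0.05 * np.random.randn(t.size)   # delayed, noisy measurement
    print(evaluate_vibration(reference, actual))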

MODEL OPTIMIZATION METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT (18455717)

Main Inventor

Zhiling YE


Brief explanation

The patent application describes a method for adjusting a model in a project by encapsulating a model operator in a project model to obtain a super-model. The super-model has a dynamically variable space structure and is trained based on a configuration search space and the project model. The method aims to find an adjusted model corresponding to the project model by searching the convergence super-model.
  • The method involves encapsulating a model operator in a project model to obtain a super-model.
  • The super-model has a dynamically variable space structure.
  • A configuration search space is determined based on the model operator and a control parameter.
  • The super-model is trained using the configuration search space and the project model.
  • A convergence super-model is obtained when a training end condition is reached.
  • The convergence super-model is searched for an adjusted model corresponding to the project model.

Potential Applications

  • This method can be applied in various fields where model adjustment is required, such as machine learning, data analysis, and optimization.
  • It can be used to fine-tune models in complex projects to improve their performance and accuracy.

Problems Solved

  • The method solves the problem of adjusting a model in a project by providing a systematic approach to encapsulate a model operator and train a super-model.
  • It addresses the challenge of finding an adjusted model that corresponds to the project model by searching the convergence super-model.

Benefits

  • The method allows for the adjustment of models in a project by encapsulating a model operator, providing flexibility in the model's structure.
  • It enables the training of a super-model based on a configuration search space, allowing for optimization and fine-tuning.
  • The method provides a systematic approach to finding an adjusted model corresponding to the project model, improving the overall performance and accuracy of the project.

Abstract

A model adjustment method includes: encapsulating a model operator in a project model to obtain a super-model corresponding to the project model, the model operator at least including: a network layer in the project model, the super-model being a model with a dynamically variable space structure; determining a configuration search space corresponding to the project model according to the model operator and a control parameter; training the super-model based on the configuration search space and the project model and obtaining a convergence super-model corresponding to the project model in response to that a training end condition is reached; and searching the convergence super-model for an adjusted model corresponding to the project model.
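
A very schematic Python sketch of the encapsulate/train/search flow. The width-multiplier search space, the sampled "training" loop, and the exhaustive scoring search are simplified placeholders, not the claimed method.

    # A very schematic sketch of the search flow; the operator wrapping, search
    # space, and scoring are simplified placeholders rather than the claimed method.
    import itertools, random

    def build_search_space(operators, control_param):
        # e.g. candidate widths per wrapped network layer, capped by a control parameter
        return {op: [w for w in (0.25, 0.5, 0.75, 1.0) if w <= control_param]
                for op in operators}

    def train_super_model(search_space, steps=100):
        # Placeholder "training": sample one configuration per step so every
        # sub-structure of the dynamically variable super-model gets exercised.
        for _ in range(steps):
            config = {op: random.choice(widths) for op, widths in search_space.items()}
            _ = config   # a real implementation would run a weight-sharing training step here
        return search_space  # stands in for the converged super-model

    def search_adjusted_model(converged_space, score_fn):
        # Enumerate configurations of the convergence super-model and keep the best.
        best, best_score = None, float("-inf")
        keys = list(converged_space)
        for combo in itertools.product(*(converged_space[k] for k in keys)):
            config = dict(zip(keys, combo))
            s = score_fn(config)
            if s > best_score:
                best, best_score = config, s
        return best

    space = build_search_space(["conv1", "conv2", "fc"], control_param=0.75)
    trained = train_super_model(space)
    print(search_adjusted_model(trained, score_fn=lambda c: -sum(c.values())))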

VIDEO PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM (18456027)

Main Inventor

Boyuan JIANG


Brief explanation

The patent application describes a method for processing videos using neural networks. Here is a simplified explanation of the abstract:
  • The method starts by obtaining two consecutive frames from a target video.
  • These frames are then inputted into a neural network.
  • The neural network is trained using an optical flow distillation constraint and a feature consistency constraint.
  • The neural network outputs an intermediate video frame.
  • Finally, the method interpolates this intermediate frame between the two original frames.

Potential applications of this technology:

  • Video editing and post-production: The method can be used to enhance the quality of videos by generating high-quality intermediate frames.
  • Video compression: By generating intermediate frames, the method can help reduce the size of video files without significant loss of quality.
  • Virtual reality and gaming: The technology can improve the visual experience in virtual reality environments and video games by generating smoother and more realistic video sequences.

Problems solved by this technology:

  • Low-quality video interpolation: Traditional methods of video interpolation often result in blurry or distorted frames. This technology aims to generate high-quality intermediate frames.
  • Time-consuming video processing: Manual video processing can be time-consuming and labor-intensive. This method automates the process using neural networks, saving time and effort.

Benefits of this technology:

  • Improved video quality: The method aims to generate high-quality intermediate frames, resulting in smoother and more visually appealing videos.
  • Time and cost savings: By automating the video processing using neural networks, the method can save time and reduce the need for manual intervention.
  • Versatile applications: The technology can be applied to various video-related fields, including video editing, compression, virtual reality, and gaming.

Abstract

A video processing method includes: obtaining a first video frame and a second video frame in a target video, the first video frame being a previous frame of the second video frame; inputting the first video frame and the second video frame to a target neural network, and obtaining a target intermediate video frame output by the target neural network, the target neural network being trained based on an optical flow distillation constraint and a feature consistency constraint; and interpolating the target intermediate video frame between the first video frame and the second video frame.
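
A toy PyTorch sketch of the inference path: two consecutive frames go in, one intermediate frame comes out and is placed between them. The tiny network and the loss terms are placeholders; the application's actual architecture and how its optical flow distillation and feature consistency constraints are computed are not specified here.

    # Toy sketch only: the network and losses below are illustrative placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyInterpolator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame1, frame2):
            return self.net(torch.cat([frame1, frame2], dim=1))  # target intermediate frame

    model = TinyInterpolator()
    f1 = torch.rand(1, 3, 64, 64)      # first video frame (previous frame)
    f2 = torch.rand(1, 3, 64, 64)      # second video frame
    mid = model(f1, f2)
    clip = [f1, mid, f2]               # interpolate the intermediate frame between the two

    # Training-side placeholders: distill toward a teacher's optical flow and keep
    # student/teacher features consistent, plus a reconstruction term (all assumed).
    def training_loss(pred, gt_mid, student_flow, teacher_flow, student_feat, teacher_feat):
        return (F.l1_loss(pred, gt_mid)
                + F.l1_loss(student_flow, teacher_flow)     # optical flow distillation constraint
                + F.l1_loss(student_feat, teacher_feat))    # feature consistency constraint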

DATA PROCESSING METHOD AND APPARATUS, AND DEVICE AND MEDIUM (18238321)

Main Inventor

Liang ZHANG


Brief explanation

The patent application describes a method, apparatus, device, and computer-readable medium for data processing in fields like artificial intelligence and assisted driving.
  • The technology involves acquiring object pose detection results and part pose detection results from an image frame.
  • Some object parts may be missing from the object pose detection results.
  • Interpolation processing is performed on the missing object parts using the part pose detection results and a standard pose associated with the object.
  • This interpolation process helps obtain a global pose that corresponds to the object.
  • The global pose can be used to control a computer and enable a service function related to the global pose.

Potential Applications

This technology has potential applications in various fields, including:

  • Artificial intelligence: It can be used in object recognition and pose estimation tasks, improving the accuracy and completeness of the results.
  • Assisted driving: The technology can aid in detecting and tracking objects, enhancing the safety and efficiency of autonomous vehicles.
  • Robotics: It can be utilized in robot perception systems, allowing robots to better understand and interact with their environment.

Problems Solved

The technology addresses several problems in data processing:

  • Incomplete object pose detection: By interpolating missing object parts, the technology improves the completeness of object pose detection results.
  • Limited accuracy: The interpolation process helps enhance the accuracy of the global pose estimation by incorporating part pose detection results.
  • Service function realization: The global pose obtained through this method enables the computer to perform specific service functions related to the object.

Benefits

The technology offers several benefits:

  • Improved object recognition: By filling in missing object parts, the technology enhances the accuracy and reliability of object recognition systems.
  • Enhanced pose estimation: The interpolation process improves the accuracy of pose estimation by considering part pose detection results.
  • Increased functionality: The global pose obtained enables the computer to provide service functions based on the object's pose, expanding the capabilities of applications in various fields.

Abstract

A data processing method, apparatus, a device, and a computer-readable medium are provided for use in fields such as artificial intelligence, assisted driving, and the like. An object pose detection result corresponding to an object in an image frame and a part pose detection result corresponding to a first object part of the object in the image frame are acquired. At least one object part of the object is missing from the object pose detection result. The first object part is one or more parts of the object. Interpolation processing is performed on the at least one object part missing from the object pose detection result according to the part pose detection result and a standard pose associated with the object to obtain a global pose corresponding to the object. The global pose is used for controlling a computer to realize a service function corresponding to the global pose.
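
A simplified NumPy sketch of the interpolation idea: align the standard pose to the parts that were detected, then fill missing parts from the aligned standard pose. The real processing may be more elaborate; the translation-only alignment is an assumption.

    # Simplified sketch: fill missing parts by aligning a standard pose to the
    # detected parts. The translation-only alignment is an assumption.
    import numpy as np

    def complete_global_pose(detected, standard):
        """detected: {part_name: (x, y) or None}; standard: {part_name: (x, y)}."""
        seen = [p for p, v in detected.items() if v is not None]
        if not seen:
            return dict(standard)
        # Estimate a rigid translation from the standard pose to the detected parts.
        offset = np.mean([np.subtract(detected[p], standard[p]) for p in seen], axis=0)
        global_pose = {}
        for part, ref in standard.items():
            if detected.get(part) is not None:
                global_pose[part] = tuple(detected[part])
            else:
                global_pose[part] = tuple(np.add(ref, offset))   # interpolated missing part
        return global_pose

    standard = {"head": (0, 0), "hand": (1, 0), "foot": (0, 2)}
    detected = {"head": (5, 5), "hand": (6, 5.1), "foot": None}   # foot missing from detection
    print(complete_global_pose(detected, standard))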

ANIMATION FRAME DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (18455592)

Main Inventor

Shuchang LIU


Brief explanation

An animation frame display method is disclosed. The method involves obtaining a resource consumption index based on the initial animation update frequency of at least one virtual model. This index represents the amount of resources required for animation updates based on the initial frequency. The method further includes obtaining a target animation update frequency for the virtual model based on the resource consumption index and the initial frequency. Finally, the method displays an animation frame corresponding to the virtual model according to the target animation update frequency.

  • The method determines the amount of resources needed for animation updates based on the initial animation update frequency of virtual models.
  • It calculates a target animation update frequency for the virtual models based on the resource consumption index and the initial frequency.
  • The method then displays animation frames for the virtual models according to the target animation update frequency.

Potential Applications:

  • This technology can be applied in various fields that involve animation, such as video games, computer-generated movies, and virtual reality applications.
  • It can be used to optimize resource consumption and improve the overall performance and efficiency of animation rendering.

Problems Solved:

  • The method addresses the issue of resource consumption in animation rendering by dynamically adjusting the animation update frequency based on resource availability.
  • It helps prevent resource overconsumption and ensures smooth and efficient animation display.

Benefits:

  • By dynamically adjusting the animation update frequency, the method optimizes resource consumption and improves the overall performance of animation rendering.
  • It allows for smoother and more efficient animation display, enhancing the user experience in applications that involve animation.

Abstract

An animation frame display method includes obtaining a resource consumption index based on an initial animation update frequency of at least one virtual model, the resource consumption index indicating a quantity of resources required to be consumed for animation update according to the initial animation update frequency of the at least one virtual model; obtaining a target animation update frequency of the at least one virtual model based on the resource consumption index and the initial animation update frequency; and displaying an animation frame corresponding to the at least one virtual model according to the target animation update frequency.
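
A small Python sketch of the frequency adjustment. The consumption index, the budget, and the proportional scaling rule are invented placeholders that only illustrate "measure cost at the initial update frequency, then derive a target frequency from it".

    # Hedged sketch: the consumption index, budget, and scaling rule below are
    # invented placeholders, not the application's actual adjustment rule.

    def resource_consumption_index(initial_hz, cost_per_update):
        # Resources consumed per second if every model updates at its initial frequency.
        return sum(hz * cost_per_update[m] for m, hz in initial_hz.items())

    def target_update_frequencies(initial_hz, cost_per_update, budget):
        index = resource_consumption_index(initial_hz, cost_per_update)
        scale = min(1.0, budget / index) if index else 1.0
        # Lower every model's frequency proportionally when over budget.
        return {m: hz * scale for m, hz in initial_hz.items()}

    initial = {"hero": 60.0, "npc_1": 30.0, "npc_2": 30.0}      # initial animation update Hz
    cost = {"hero": 2.0, "npc_1": 1.0, "npc_2": 1.0}            # cost units per update
    print(target_update_frequencies(initial, cost, budget=120)) # drives frame display timing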

SCENE ELEMENT PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM (18238413)

Main Inventor

Zhixiu ZHU


Brief explanation

The patent application describes a method performed by a computer device for processing scene elements in a virtual scene. Here is a simplified explanation of the abstract:
  • The method starts by displaying a virtual scene that includes a specific target region.
  • When there is an update operation on the target region, the method acquires information about the distribution of scene element density in that region.
  • Based on this density distribution information, the method determines density values for different candidate positions within the target region.
  • Finally, the method renders a scene element at a specific position within the target region, which is determined based on the density values of the candidate positions.

Potential applications of this technology:

  • Virtual reality and augmented reality applications that require realistic rendering of scene elements.
  • Video game development, where dynamic and realistic scene elements are crucial.
  • Architectural and interior design software, where virtual scenes need to be rendered with accurate and visually appealing elements.

Problems solved by this technology:

  • Efficiently processing and rendering scene elements in a virtual environment.
  • Ensuring that scene elements are placed in a visually realistic and aesthetically pleasing manner.
  • Handling updates and changes to scene elements in real-time.

Benefits of this technology:

  • Improved realism and visual quality in virtual scenes.
  • Faster and more efficient processing of scene elements.
  • Flexibility in updating and modifying scene elements in real-time.

Abstract

This application relates to a scene element processing method performed by a computer device. The method includes: displaying a virtual scene including a target region; in response to a scene element update operation on the target region, acquiring scene element density distribution information corresponding to the target region; determining element density values corresponding to candidate positions in the target region based on the scene element density distribution information; and rendering a scene element at an element generation position determined from the candidate positions based on the element density values.
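
A short Python sketch of density-guided placement: read a density value for each candidate position and pick the generation position with probability proportional to density. The density map, sampling rule, and rendering call are stand-ins for the engine-side details.

    # A small sketch of density-guided placement; the density map, sampling rule,
    # and rendering call are stand-ins for the engine-side details.
    import random

    def element_density(pos, density_map):
        # Density value for a candidate position, read from the region's
        # scene-element density distribution information.
        return density_map.get(pos, 0.0)

    def choose_generation_position(candidates, density_map):
        weights = [element_density(p, density_map) for p in candidates]
        if sum(weights) == 0:
            return random.choice(candidates)
        return random.choices(candidates, weights=weights, k=1)[0]

    candidates = [(0, 0), (0, 1), (1, 0), (1, 1)]
    density_map = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.1, (1, 1): 0.1}
    pos = choose_generation_position(candidates, density_map)
    print("render scene element at", pos)        # rendering happens at this position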

METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM, FOR FEATURE FUSION MODEL TRAINING AND SAMPLE RETRIEVAL (18450463)

Main Inventor

Hui GUO


Brief explanation

The patent application describes a method for training a feature fusion model and retrieving samples. Here are the key points:
  • The method starts by inputting a training sample into an initial feature fusion model to obtain a training semantic feature and a training global feature.
  • Classification and recognition are performed based on the training semantic feature to obtain an initial training category.
  • The training semantic feature and the training global feature are spliced together to obtain a spliced training feature.
  • Autocorrelation feature calculation is performed on the spliced training feature to obtain an autocorrelation feature.
  • Self-attention weight calculation is performed based on the autocorrelation feature to obtain a self-attention weight.
  • The spliced training feature is adjusted using the self-attention weight to obtain a fused training feature.
  • The initial feature fusion model is updated based on the training global feature, training semantic feature, fused training feature, initial training category, and training sample category label.
  • The process iterates in a loop to obtain a target fusion model.

Potential applications of this technology:

  • Image recognition and classification systems
  • Natural language processing and sentiment analysis
  • Speech recognition and transcription
  • Video analysis and object detection

Problems solved by this technology:

  • Improved accuracy and performance in feature fusion models
  • Efficient training and retrieval of samples
  • Enhanced classification and recognition capabilities

Benefits of this technology:

  • Higher accuracy in classification and recognition tasks
  • Improved efficiency in training and retrieval processes
  • Enhanced feature fusion capabilities for complex data analysis

Abstract

A method for feature fusion model training and sample retrieval includes: inputting training sample into an initial feature fusion model to obtain a training semantic feature and a training global feature, performing classification and recognition based on the training semantic feature to obtain an initial training category, splicing the training semantic feature and the training global feature to obtain a spliced training feature, performing autocorrelation feature calculation based on the spliced training feature to obtain an autocorrelation feature, performing self-attention weight calculation based on the autocorrelation feature to obtain a self-attention weight, and adjusting the spliced training feature through the self-attention weight to obtain a fused training feature; and updating the initial feature fusion model based on the training global feature, the training semantic feature, the fused training feature, the initial training category, and a training sample category label, and performing a loop iteration to obtain a target fusion model.
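
A compact PyTorch sketch of just the fusion step: splice the semantic and global features, form an autocorrelation term, convert it to self-attention weights, and reweight the spliced feature. The tensor shapes and the softmax choice are assumptions, and the training loop and losses are omitted.

    # Fusion-step sketch only; shapes and the softmax choice are assumptions.
    import torch
    import torch.nn.functional as F

    def fuse_features(semantic, global_feat):
        # semantic: (B, Ds), global_feat: (B, Dg)
        spliced = torch.cat([semantic, global_feat], dim=1)          # spliced training feature
        auto = spliced.unsqueeze(2) * spliced.unsqueeze(1)           # (B, D, D) autocorrelation
        attn = F.softmax(auto.mean(dim=2), dim=1)                    # self-attention weight per dim
        return spliced * attn                                        # fused training feature

    fused = fuse_features(torch.randn(4, 128), torch.randn(4, 256))
    print(fused.shape)   # torch.Size([4, 384])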

MESSAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND MEDIUM (18456582)

Main Inventor

Yuzhi CHEN


Brief explanation

The abstract describes a method for processing messages that involves displaying two message contents, moving the first message content to the second message content through a continuous action operation, and displaying the merged message content once the continuous action operation ends.
  • The method involves displaying two message contents.
  • The first message content is moved to the second message content through a continuous action operation.
  • Once the continuous action operation ends, the merged message content is displayed, which includes both the first and second message contents.

Potential Applications

  • Messaging applications
  • Email clients
  • Chat platforms

Problems Solved

  • Streamlining message processing
  • Enhancing user experience in managing messages
  • Improving message organization and consolidation

Benefits

  • Simplifies message management
  • Reduces clutter in message interfaces
  • Increases efficiency in processing messages
  • Provides a more seamless and intuitive user experience

Abstract

A message processing method includes: displaying first message content and second message content; moving the first message content to the second message content in response to a continuous action operation for the first message content; and displaying merged message content in response to an end of the continuous action operation and that the first message content is moved to an area where the second message content is located, the merged message content including the first message content and the second message content.
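
A toy Python sketch of the drag-to-merge interaction; the drag-end callback, hit test, and the way the merged content is represented are simplified stand-ins for real UI-framework events.

    # Toy sketch of the merge interaction; callbacks and hit test are simplified.

    class MessageList:
        def __init__(self, messages):
            self.messages = list(messages)     # displayed message contents

        def on_drag_end(self, dragged_index, drop_position, hit_test):
            # Called when the continuous action (drag) operation ends.
            target_index = hit_test(drop_position)
            if target_index is None or target_index == dragged_index:
                return
            first = self.messages.pop(dragged_index)
            if dragged_index < target_index:
                target_index -= 1
            second = self.messages[target_index]
            # Display merged message content containing both messages.
            self.messages[target_index] = second + "\n" + first

    msgs = MessageList(["Meeting at 3pm", "Bring the slides"])
    msgs.on_drag_end(0, (120, 40), hit_test=lambda pos: 1)
    print(msgs.messages)     # ['Bring the slides\nMeeting at 3pm']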

DATA PROCESSING METHOD AND APPARATUS, COMPUTER AND READABLE STORAGE MEDIUM (18239080)

Main Inventor

Ying HU


Brief explanation

The abstract of this patent application describes a data processing method performed by a computer device. The method involves obtaining multiple component tracks in a target file, where each component track has a component track data box containing segment quantity and space region information. The segment quantity represents the number of atlas segments corresponding to the component track, and the space region information refers to the spatial region of the component track. The target file is then unpacked based on each component track data box to obtain the corresponding segment display content.
  • The method involves obtaining and processing multiple component tracks in a target file.
  • Each component track has a component track data box containing segment quantity and space region information.
  • The segment quantity represents the number of atlas segments corresponding to the component track.
  • The space region information refers to the spatial region of the component track.
  • The target file is unpacked based on each component track data box.
  • The unpacking process allows for obtaining the corresponding segment display content.

Potential Applications:

  • This data processing method can be used in various computer applications that involve handling target files with multiple component tracks.
  • It can be applied in multimedia editing software to efficiently process and display different segments of a multimedia file.
  • The method can be utilized in data compression algorithms to unpack compressed files and retrieve specific segments of data.

Problems Solved:

  • The method provides a systematic approach to handle and process multiple component tracks within a target file.
  • It allows for efficient unpacking of the target file based on the component track data boxes.
  • The method solves the problem of extracting and displaying specific segments of data from a target file.

Benefits:

  • The method simplifies the data processing of target files with multiple component tracks.
  • It enables efficient retrieval and display of specific segments of data.
  • The method improves the overall performance and usability of applications that involve handling target files with component tracks.

Abstract

Embodiments of this application provide a data processing method performed by a computer device, the method including: obtaining k component tracks in a target file, each component track having a component track data box, which comprises a segment quantity and space region information; the segment quantity referring to a quantity of atlas segments corresponding to the component track to which the component track data box belongs; the space region information referring to a space region of the component track to which the component track data box belongs; k being a positive integer; and unpacking the target file according to each component track data box, to obtain a corresponding segment display content.
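
A Python data-model sketch of the structures named in the abstract. The field names follow the abstract's wording, but the byte layout of the boxes and the even split of the payload into atlas segments are assumptions for illustration.

    # Data-model sketch; the real box layout and unpacking logic are not reproduced.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ComponentTrackDataBox:
        segment_quantity: int                              # number of atlas segments for this track
        space_region: Tuple[float, float, float, float]    # spatial region of the track

    @dataclass
    class ComponentTrack:
        track_id: int
        data_box: ComponentTrackDataBox
        payload: bytes

    def unpack(tracks: List[ComponentTrack]):
        # Split each track's payload into its atlas segments and associate them
        # with the track's space region, yielding the segment display content.
        for track in tracks:
            n = max(track.data_box.segment_quantity, 1)
            step = len(track.payload) // n
            segments = [track.payload[i * step:(i + 1) * step] for i in range(n)]
            yield track.track_id, track.data_box.space_region, segments

    tracks = [ComponentTrack(1, ComponentTrackDataBox(2, (0, 0, 1, 1)), b"abcdef")]
    print(list(unpack(tracks)))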

FILE DECAPSULATION METHOD AND APPARATUS FOR FREE VIEWPOINT VIDEO, DEVICE, AND STORAGE MEDIUM (18239654)

Main Inventor

Ying HU


Brief explanation

The abstract describes a method for decapsulating a media file of free viewpoint video data. The media file contains video data from multiple viewpoints, and the method involves decoding the video bitstreams to obtain reconstructed video data.
  • The method is used for decapsulating a media file of free viewpoint video data.
  • The media file includes video data from multiple viewpoints.
  • The video track in the media file contains codec independence indication information and video bitstreams.
  • The codec independence indication information indicates whether video data of one viewpoint depends on video data of other viewpoints during codec.
  • The method decapsulates the media file based on the codec independence indication information.
  • The decapsulation process obtains a video bitstream corresponding to at least one of the viewpoints.
  • The video bitstream is then decoded to obtain reconstructed video data of the corresponding viewpoint.

Potential Applications

  • Free viewpoint video systems and applications
  • Virtual reality (VR) and augmented reality (AR) experiences
  • Interactive video games and simulations
  • 360-degree video streaming platforms

Problems Solved

  • Efficient decapsulation of media files containing free viewpoint video data
  • Handling codec dependencies between different viewpoints
  • Ensuring accurate reconstruction of video data from multiple viewpoints

Benefits

  • Enables seamless playback and viewing of free viewpoint video content
  • Enhances user experience in VR, AR, and interactive video applications
  • Simplifies the processing and decoding of video data from multiple viewpoints
  • Facilitates the development of immersive and interactive media experiences

Abstract

A file decapsulation method for a free viewpoint video includes receiving a media file of free viewpoint video data. The media file includes a video track, the free viewpoint video data includes video data of N viewpoints, and the video track includes codec independence indication information and video bitstreams of M viewpoints. The codec independence indication information indicates whether video data of one of the M viewpoints in the video track depends on video data of other viewpoints during codec. The method further includes decapsulating the media file according to the codec independence indication information, to obtain a video bitstream corresponding to at least one of the M viewpoints, and decoding the video bitstream corresponding to the at least one of the M viewpoints, to obtain reconstructed video data of the at least one of the M viewpoints.
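
A Python control-flow sketch only: the media-file structure and the decoder are placeholders; the point is how the codec independence indication decides which viewpoint bitstreams must be extracted before decoding.

    # Control-flow sketch; file structures and the decoder are placeholders.

    def decapsulate_and_decode(video_track, wanted_viewpoints, decode):
        """video_track: {'independent': bool, 'bitstreams': {viewpoint_id: bytes}}."""
        if video_track["independent"]:
            # Each viewpoint can be decoded on its own: extract only what was asked for.
            selected = {v: video_track["bitstreams"][v] for v in wanted_viewpoints}
        else:
            # Viewpoints depend on each other during codec: keep all M bitstreams.
            selected = dict(video_track["bitstreams"])
        return {v: decode(bs) for v, bs in selected.items()}   # reconstructed video data

    track = {"independent": True, "bitstreams": {0: b"...", 1: b"...", 2: b"..."}}
    print(decapsulate_and_decode(track, [1], decode=lambda bs: len(bs)))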

MULTI-CHANNEL ECHO CANCELLATION METHOD AND RELATED APPARATUS (18456054)

Main Inventor

Rui ZHU


Brief explanation

The patent application describes a method for multi-channel echo cancellation. Here are the key points:
  • The method involves obtaining audio signals from multiple channels and a filter coefficient matrix corresponding to a specific frame of microphone signal from a target microphone.
  • Frame-partitioning and block-partitioning processing is performed on the audio signals to determine a frequency domain signal matrix corresponding to the microphone signal.
  • Filtering processing is then carried out using the filter coefficient matrix and the frequency domain signal matrix to obtain an echo signal in the microphone signal frame.
  • Finally, echo cancellation is performed using the frequency domain signal of the microphone frame and the echo signal to obtain a near-end audio signal from the target microphone.

Potential applications of this technology:

  • Teleconferencing systems: The method can be used to cancel echo in multi-channel audio setups, improving the audio quality during teleconferences.
  • Voice assistants: This technology can enhance the performance of voice-controlled devices by reducing echo and improving the accuracy of voice recognition.
  • Audio recording and broadcasting: The method can be applied to eliminate echo in multi-channel audio recordings or live broadcasts, resulting in clearer and more professional sound.

Problems solved by this technology:

  • Echo cancellation: The method effectively removes echo caused by audio feedback, resulting in improved audio quality and intelligibility.
  • Multi-channel audio processing: The method addresses the challenge of processing audio signals from multiple channels simultaneously, ensuring accurate echo cancellation across all channels.

Benefits of this technology:

  • Improved audio quality: By canceling echo, the method enhances the clarity and intelligibility of audio signals, leading to a better user experience.
  • Real-time processing: The method can be implemented in real-time, making it suitable for applications that require immediate echo cancellation, such as live audio broadcasts or teleconferencing.
  • Scalability: The method can be applied to various multi-channel audio setups, making it adaptable to different devices and systems.

Abstract

A multi-channel echo cancellation method includes obtaining far-end audio signals outputted by channels, obtaining a filter coefficient matrix corresponding to a k-th frame of microphone signal outputted by a target microphone and including frequency domain filter coefficients of filter sub-blocks corresponding to the channels, performing frame-partitioning and block-partitioning processing on the far-end audio signals to determine a far-end frequency domain signal matrix corresponding to the k-th frame of microphone signal and including far-end frequency domain signals of the filter sub-blocks, performing filtering processing according to the filter coefficient matrix and the far-end frequency domain signal matrix to obtain an echo signal in the k-th frame of microphone signal, and performing echo cancellation according to a frequency domain signal of the k-th frame of microphone signal and the echo signal in the k-th frame of microphone signal to obtain a near-end audio signal outputted by the target microphone.
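
A simplified NumPy sketch of multichannel frequency-domain echo estimation and subtraction. Block sizes, windowing/overlap, and filter adaptation are omitted or simplified; only the "filter sub-blocks per channel, filter, then subtract" structure of the abstract is mirrored.

    # Simplified sketch; overlap handling and filter adaptation are left out.
    import numpy as np

    def estimate_echo(far_end_blocks, filter_coeffs):
        """far_end_blocks: (channels, sub_blocks, bins) far-end frequency-domain signals.
        filter_coeffs:  (channels, sub_blocks, bins) frequency-domain filter coefficients."""
        # Multiply each sub-block by its coefficients and sum over sub-blocks and channels.
        return np.sum(far_end_blocks * filter_coeffs, axis=(0, 1))

    def cancel_echo(mic_frame, far_end_blocks, filter_coeffs):
        mic_spec = np.fft.rfft(mic_frame)                    # frequency-domain mic signal
        echo_spec = estimate_echo(far_end_blocks, filter_coeffs)
        near_end = np.fft.irfft(mic_spec - echo_spec, n=len(mic_frame))
        return near_end                                      # near-end audio estimate

    channels, sub_blocks, frame_len = 2, 4, 256
    bins = frame_len // 2 + 1
    far_end = np.random.randn(channels, sub_blocks, bins) + 1j * np.random.randn(channels, sub_blocks, bins)
    coeffs = 0.1 * (np.random.randn(channels, sub_blocks, bins) + 1j * np.random.randn(channels, sub_blocks, bins))
    mic = np.random.randn(frame_len)
    print(cancel_echo(mic, far_end, coeffs).shape)   # (256,)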