17941417. FRIEND-TRAINING: METHODS, SYSTEMS, AND APPARATUS FOR LEARNING FROM MODELS OF DIFFERENT BUT RELATED TASKS simplified abstract (TENCENT AMERICA LLC)

Organization Name

TENCENT AMERICA LLC

Inventor(s)

Lifeng Jin of Mill Creek WA (US)

FRIEND-TRAINING: METHODS, SYSTEMS, AND APPARATUS FOR LEARNING FROM MODELS OF DIFFERENT BUT RELATED TASKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17941417 titled 'FRIEND-TRAINING: METHODS, SYSTEMS, AND APPARATUS FOR LEARNING FROM MODELS OF DIFFERENT BUT RELATED TASKS'.

Simplified Explanation

The abstract describes a method, apparatus, and non-transitory storage medium for training two or more cross-task neural network models on multiple neural network tasks. Pseudo labels produced by different models for different but related tasks are mapped into a common space, a matching score between the pseudo labels is computed in that space, cross-task pseudo labels are selected based on the matching score and the models' accuracies, and the models are then trained on the selected labels. In outline (a rough code sketch follows the list):

  • Training method for multiple cross-task neural network models
  • Mapping pseudo labels from different models to a common space
  • Computing matching scores between pseudo labels
  • Selecting cross-task pseudo labels based on scores and accuracies
  • Training neural network models based on selected pseudo labels
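
A minimal sketch of these steps, assuming learned projection heads for the shared space, cosine similarity as the matching score, and an accuracy-weighted selection threshold (none of which the abstract specifies), might look like this:

```python
# Toy illustration of the friend-training selection step, not the patent's
# implementation. The projection layers, cosine-similarity matching score,
# and accuracy-weighted threshold are all assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Two "friend" models on different but related tasks, each producing
# pseudo-label distributions over its own label space.
num_labels_a, num_labels_b, shared_dim, batch = 5, 8, 16, 32
logits_a = torch.randn(batch, num_labels_a)  # pseudo labels from model A (task 1)
logits_b = torch.randn(batch, num_labels_b)  # pseudo labels from model B (task 2)

# Map each task's pseudo labels into the same shared space.
map_a = torch.nn.Linear(num_labels_a, shared_dim)
map_b = torch.nn.Linear(num_labels_b, shared_dim)
z_a = map_a(F.softmax(logits_a, dim=-1))
z_b = map_b(F.softmax(logits_b, dim=-1))

# Matching score: cosine similarity in the shared space (one plausible choice).
match = F.cosine_similarity(z_a, z_b, dim=-1)  # shape: (batch,)

# Select cross-task pseudo labels using the score and each model's held-out
# accuracy, trusting stronger models more. Accuracies and rule are made up.
acc_a, acc_b = 0.82, 0.74
tau = 0.5 * (2.0 - acc_a - acc_b)  # lower threshold when models are accurate
selected = match > tau

print(f"selected {int(selected.sum())}/{batch} cross-task pseudo labels")
# The selected pseudo labels would then supervise training of both models.
```

Any agreement measure in the shared space, and any accuracy-based weighting scheme, would fit the abstract's wording equally well; the specific choices above are placeholders.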

Potential Applications

This technology could be applied in fields such as computer vision, natural language processing, and speech recognition to improve the performance of neural network models across different but related tasks.

Problems Solved

1. Enhancing the performance of neural network models by training them on cross-task pseudo labels.
2. Addressing the challenge of transferring knowledge between different neural network tasks.

Benefits

1. Improved accuracy and efficiency of neural network models.
2. Enhanced generalization capabilities across multiple tasks.
3. Easier knowledge transfer and learning from diverse datasets.

Potential Commercial Applications

Optimizing neural network models for specific industries such as healthcare, finance, and autonomous vehicles to improve decision-making processes and enhance overall performance.

Possible Prior Art

One possible prior art in this field is the concept of transfer learning, where knowledge gained from one task is applied to another related task to improve performance and efficiency of neural network models.

Unanswered Questions

How does this method compare to existing techniques for training cross-task neural network models?

This method introduces the use of cross-task pseudo labels and matching scores to train neural network models. It would be interesting to see a comparison with other approaches such as transfer learning or multi-task learning in terms of performance and efficiency.

What are the potential limitations or challenges of implementing this method in real-world applications?

It would be important to consider factors such as dataset size, computational resources, and model complexity when implementing this method in practical scenarios. Additionally, the impact of noise or errors in pseudo labels on the overall performance of the neural network models should be explored.

Original Abstract Submitted

Method, apparatus, and non-transitory storage medium for training two or more cross-task neural network models based on two or more neural network tasks, including mapping first pseudo labels based on a first model associated with a first task among the two or more neural network tasks and second pseudo labels based on a second model associated with a second task among the two or more neural network tasks to a same space, and computing a matching score indicating a cross-task matching between the first pseudo labels and the second pseudo labels based on the mapping. The method may further include selecting one or more cross-task pseudo labels based on the matching score and accuracies associated with the first model and the second model, and training the two or more cross-task neural network models based on the one or more cross-task pseudo labels.
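
To round out the picture, here is an equally hedged sketch of the final training step, treating the selected cross-task pseudo labels as supervision; the model, optimizer, and data are placeholders, since the abstract specifies no architectures or loss functions:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny classifier standing in for one of the cross-task models.
model = torch.nn.Linear(16, 5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

features = torch.randn(8, 16)              # inputs whose pseudo labels survived selection
pseudo_labels = torch.randint(0, 5, (8,))  # hardened cross-task pseudo labels

# One supervised update treating the selected pseudo labels as ground truth.
loss = F.cross_entropy(model(features), pseudo_labels)
opt.zero_grad()
loss.backward()
opt.step()
print(f"pseudo-label training loss: {loss.item():.3f}")
```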