17966568. SYSTEMS AND METHODS FOR ARTIFICIAL-INTELLIGENCE MODEL TRAINING USING UNSUPERVISED DOMAIN ADAPTATION WITH MULTI-SOURCE META-DISTILLATION simplified abstract (Huawei Technologies Co., Ltd.)
Organization Name
Huawei Technologies Co., Ltd.
Inventor(s)
SYSTEMS AND METHODS FOR ARTIFICIAL-INTELLIGENCE MODEL TRAINING USING UNSUPERVISED DOMAIN ADAPTATION WITH MULTI-SOURCE META-DISTILLATION - A simplified explanation of the abstract
This abstract first appeared for US patent application 17966568, titled 'SYSTEMS AND METHODS FOR ARTIFICIAL-INTELLIGENCE MODEL TRAINING USING UNSUPERVISED DOMAIN ADAPTATION WITH MULTI-SOURCE META-DISTILLATION'.
Simplified Explanation
The abstract of the patent application describes a method with the following steps:
- Obtaining a set of training samples from one or more domains.
- Using the set of training samples to query multiple artificial-intelligence (AI) models.
- Combining the outputs of the queried AI models.
- Adapting a target AI model using knowledge distillation with the combined outputs.
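The query-and-combine steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented procedure: the linear teacher models, the softmax temperature, and the uniform averaging of teacher outputs are all assumptions made for the example.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combine_teacher_outputs(teachers, x, T=2.0):
    """Query each source-domain teacher model on the batch x and average
    their softened class probabilities (a simple uniform ensemble)."""
    probs = [softmax(t(x), T) for t in teachers]
    return np.mean(probs, axis=0)

# Hypothetical linear teachers, standing in for models trained on
# different source domains.
rng = np.random.default_rng(0)
d, k = 8, 3                       # feature dimension, number of classes
teachers = [
    (lambda W: (lambda x: x @ W))(rng.normal(size=(d, k)))
    for _ in range(4)
]

x = rng.normal(size=(16, d))      # a batch of target-domain samples
soft_targets = combine_teacher_outputs(teachers, x)
print(soft_targets.shape)         # (16, 3); each row sums to 1
```

The combined probabilities (`soft_targets`) then serve as the "soft labels" for distilling into the target model, as described in the next step.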
Potential applications of this technology:
- Improving the performance of AI models by leveraging knowledge from multiple domains.
- Enhancing the accuracy and reliability of AI systems in various fields such as image recognition, natural language processing, and recommendation systems.
Problems solved by this technology:
- Overcoming the limitations of training AI models on a single domain by incorporating knowledge from multiple domains.
- Addressing the challenge of generalizing AI models to perform well on diverse datasets and real-world scenarios.
Benefits of this technology:
- Increased accuracy and robustness of AI models due to the combination of outputs from multiple models.
- Improved transfer learning capabilities, allowing AI models to apply knowledge learned from one domain to another.
- Enhanced efficiency in training AI models by leveraging pre-trained models and distilling their knowledge into a target model.
Original Abstract Submitted
A method has the steps of obtaining a set of training samples from one or more domains, using the set of training samples to query a plurality of artificial-intelligence (AI) models, combining the outputs of the queried AI models, and adapting a target AI model via knowledge distillation using the combined outputs.
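The final step of the abstract, adapting the target model via knowledge distillation on the combined outputs, can be sketched as gradient descent on a cross-entropy loss between the ensemble's soft targets and the target model's predictions. The linear student, learning rate, and step count are assumptions for illustration, not details from the patent.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax along the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_step(W, x, soft_targets, T=1.0, lr=0.1):
    """One gradient step on the distillation loss
    L = cross-entropy(soft_targets, student_probs) for a linear student.
    For a softmax student, the gradient of L w.r.t. the logits is
    (student_probs - soft_targets) / T."""
    p = softmax(x @ W, T)
    grad = x.T @ (p - soft_targets) / (T * len(x))
    loss = -np.mean(np.sum(soft_targets * np.log(p + 1e-12), axis=1))
    return W - lr * grad, loss

# Stand-in combined teacher outputs for a batch of target-domain samples.
rng = np.random.default_rng(1)
n, d, k = 32, 8, 3
x = rng.normal(size=(n, d))
soft_targets = softmax(rng.normal(size=(n, k)))

W = np.zeros((d, k))              # target (student) model parameters
losses = []
for _ in range(50):
    W, loss = distill_step(W, x, soft_targets)
    losses.append(loss)
```

Because the loss is convex in `W` for a linear student, the loss decreases over the training loop; in practice the student would be a neural network trained with an autodiff framework.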