18154574. METHOD FOR TRAINING NEURAL NETWORK AND RELATED DEVICE simplified abstract (HUAWEI TECHNOLOGIES CO., LTD.)
METHOD FOR TRAINING NEURAL NETWORK AND RELATED DEVICE
Organization Name
HUAWEI TECHNOLOGIES CO., LTD.
Inventor(s)
Lanqing Hong of Hong Kong (CN)
METHOD FOR TRAINING NEURAL NETWORK AND RELATED DEVICE - A simplified explanation of the abstract
This abstract first appeared for US patent application 18154574 titled 'METHOD FOR TRAINING NEURAL NETWORK AND RELATED DEVICE'.
Simplified Explanation
The disclosed method and device relate to training a neural network. Here is a simplified explanation of the abstract:
- The method inputs a first sample subset into a first neural network to generate first feature information and a first prediction result for a first query sample.
- A second prediction result for the query sample is then generated from the first feature information together with second feature information corresponding to M groups of support samples in a sample set and the labeling results of those groups.
- The first neural network is trained with two losses: a first loss function that measures the similarity between the first prediction result and a labeling result, and a second loss function that measures the similarity either between the first prediction result and the second prediction result, or between the second prediction result and the labeling result.
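The training signal described above can be sketched in a few lines of code. This is a minimal, dependency-free illustration, not the patent's actual implementation: the encoder, prediction head, and similarity-weighted support-set prediction below are all assumed stand-ins, and the squared-error losses merely play the role of the first and second loss functions.

```python
import math

def encode(x, w):
    # Toy "first neural network" feature extractor: elementwise scaling.
    return [w * v for v in x]

def head(feat):
    # First prediction result: squash the mean feature into (0, 1).
    m = sum(feat) / len(feat)
    return 1.0 / (1.0 + math.exp(-m))

def support_prediction(feat, support_feats, support_labels):
    # Second prediction result: a similarity-weighted vote over the
    # M groups of support samples (a metric-learning-style estimate).
    def cos_sim(a, b):
        dot = sum(p * q for p, q in zip(a, b))
        na = math.sqrt(sum(p * p for p in a)) or 1.0
        nb = math.sqrt(sum(q * q for q in b)) or 1.0
        return dot / (na * nb)
    weights = [math.exp(cos_sim(feat, s)) for s in support_feats]
    z = sum(weights)
    return sum(w * y for w, y in zip(weights, support_labels)) / z

def combined_loss(x, label, w, support_feats, support_labels, alpha=0.5):
    feat = encode(x, w)                  # first feature information
    p1 = head(feat)                      # first prediction result
    p2 = support_prediction(feat, support_feats, support_labels)
    loss1 = (p1 - label) ** 2            # first loss: prediction vs. label
    loss2 = (p1 - p2) ** 2               # second loss: consistency term
    return loss1 + alpha * loss2
```

In a real setting the two terms would be differentiable losses (e.g. cross-entropy and a consistency/distillation term) backpropagated through the first neural network; the second term here uses the prediction-vs-prediction variant, while the patent also allows comparing the second prediction against the labeling result.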
Potential applications of this technology:
- This method can be used in various fields where neural networks are employed, such as image recognition, natural language processing, and recommendation systems.
- It can improve the accuracy and performance of neural networks by incorporating support samples and labeling results during training.
Problems solved by this technology:
- Neural networks often struggle with generalization and may not perform well on new, unseen data.
- This method addresses this problem by leveraging support samples and labeling results to improve the training process and enhance the network's ability to make accurate predictions.
Benefits of this technology:
- By incorporating support samples and labeling results, the neural network can learn from a wider range of data and improve its ability to generalize.
- The method provides a more robust and effective training process, leading to better performance and accuracy in various applications.
- It can potentially reduce the need for extensive manual labeling of training data, making the training process more efficient.
Original Abstract Submitted
This disclosure discloses a method for training a neural network and a related device. The method includes: inputting a first sample subset into a first neural network, to generate first feature information and a first prediction result of a first query sample; generating a second prediction result of the first query sample based on second feature information corresponding to M groups of support samples included in a sample set, first labeling results corresponding to the M groups of support samples, and the first feature information, and training the first neural network. A first loss function indicates a similarity between the first prediction result and a second labeling result, and a second loss function indicates a similarity between the first prediction result and the second prediction result, or indicates a similarity between the second prediction result and the second labeling result.