18459320. TRAINING ENSEMBLE MODELS TO IMPROVE PERFORMANCE IN THE PRESENCE OF UNRELIABLE BASE CLASSIFIERS simplified abstract (Western Digital Technologies, Inc.)
Organization Name
Western Digital Technologies, Inc.
Inventor(s)
Yongjune Kim of San Jose CA (US)
Yuval Cassuto of Sunnyvale CA (US)
TRAINING ENSEMBLE MODELS TO IMPROVE PERFORMANCE IN THE PRESENCE OF UNRELIABLE BASE CLASSIFIERS - A simplified explanation of the abstract
This abstract first appeared for US patent application 18459320 titled 'TRAINING ENSEMBLE MODELS TO IMPROVE PERFORMANCE IN THE PRESENCE OF UNRELIABLE BASE CLASSIFIERS'.
Simplified Explanation
The patent application describes a system and method for training base classifiers in a boosting algorithm. The base classifiers are trained optimally with respect to an unreliability model, and an aggregator decoder then reverse-flips corrupted inputs using inter-classifier redundancy introduced during training.
- The system and method train base classifiers in a boosting algorithm.
- The training process accounts for an unreliability model, so the base classifiers are optimized for the errors they are expected to suffer.
- An aggregator decoder reverse-flips its inputs (the base classifiers' outputs) using inter-classifier redundancy introduced during training.
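The idea above can be illustrated with a toy simulation. This is a minimal sketch, not the patent's actual scheme: it assumes a simple bit-flip unreliability model (each base classifier's vote is flipped with fixed probability) and uses plain repetition as a stand-in for the inter-classifier redundancy, with a majority-vote decoder reverse-flipping corrupted votes before aggregation. All names, thresholds, and parameters here are illustrative.

```python
import random

random.seed(0)

# Toy 1-D data: the true label is the sign of x.
X = [random.uniform(-1, 1) for _ in range(200)]
y = [1 if x >= 0 else -1 for x in X]

# Hypothetical base classifiers: fixed decision stumps.
THRESHOLDS = [-0.3, -0.1, 0.0, 0.1, 0.3]

FLIP_P = 0.2    # unreliability model: each delivered vote flips w.p. 0.2
REDUNDANCY = 5  # repetition factor standing in for inter-classifier redundancy


def stump(t, x):
    return 1 if x >= t else -1


def noisy(vote):
    # Apply the bit-flip unreliability model to a single vote.
    return -vote if random.random() < FLIP_P else vote


def aggregate(x, decode):
    votes = []
    for t in THRESHOLDS:
        clean = stump(t, x)
        copies = [noisy(clean) for _ in range(REDUNDANCY)]
        if decode:
            # Aggregator decoder: a majority vote over the redundant
            # copies reverse-flips votes that were likely corrupted.
            v = 1 if sum(copies) > 0 else -1
        else:
            v = copies[0]  # no decoding: use one unreliable vote as-is
        votes.append(v)
    # Final ensemble decision: majority over the base classifiers.
    return 1 if sum(votes) > 0 else -1


def accuracy(decode):
    return sum(aggregate(x, yi_true) == yi
               for (x, yi), yi_true in zip(zip(X, y), [decode] * len(X))
               ) / len(X) if False else \
           sum(aggregate(x, decode) == yi for x, yi in zip(X, y)) / len(X)


print(f"accuracy without decoding: {accuracy(False):.2f}")
print(f"accuracy with decoding:    {accuracy(True):.2f}")
```

Under this model, decoding the redundant votes before aggregation recovers most flipped outputs, so the decoded ensemble should score noticeably higher than the undecoded one; the patent's scheme instead introduces the redundancy during training rather than by naive repetition.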
Potential Applications
- Machine learning and artificial intelligence systems
- Pattern recognition systems
- Data analysis and prediction models
Problems Solved
- Improves the training process of base classifiers in a boosting algorithm
- Addresses the issue of unreliability in training base classifiers
- Enhances the accuracy and reliability of the boosting algorithm
Benefits
- Optimal training of base classifiers considering an unreliability model
- Improved accuracy and reliability of the boosting algorithm
- Enhanced performance of machine learning and pattern recognition systems
Original Abstract Submitted
A system and method for training base classifiers in a boosting algorithm includes optimally training base classifiers considering an unreliability model, and then using a scheme with an aggregator decoder that reverse-flips inputs using inter-classifier redundancy introduced in training.