18530331. MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA simplified abstract (NEC Corporation)
Contents
- 1 MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Original Abstract Submitted
MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA
Organization Name
NEC Corporation
Inventor(s)
Ammar Shaker of Heidelberg (DE)
Francesco Alesiani of Heidelberg (DE)
MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA - A simplified explanation of the abstract
This abstract first appeared for US patent application 18530331, titled 'MACHINE LEARNING KNOWLEDGE MANAGEMENT BASED ON LIFELONG BOOSTING IN PRESENCE OF LESS DATA'.
Simplified Explanation
The method described in the abstract performs lifelong machine learning via boosting: when a new task arrives, a distribution of weights over its learning sample is initialized by leveraging classifiers previously learned on old tasks. Task-specific classifiers are then trained for the new task with a boosting algorithm under that weight distribution, and the distribution is updated using the newly learned task-specific classifiers.
- Leveraging previously learned classifiers from old tasks
- Learning task-specific classifiers for new tasks using a boosting algorithm
- Updating the distribution of weights over the learning sample using task-specific classifiers
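The steps above can be sketched in code. The patent abstract does not disclose the exact weight-initialization rule, so the scheme below (upweighting samples that old-task classifiers misclassify, then running AdaBoost-style rounds) is an illustrative assumption, not the patented method; `lifelong_boost` and its parameters are hypothetical names.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def lifelong_boost(X, y, old_classifiers, n_rounds=10):
    """Hypothetical sketch of lifelong boosting.

    Labels y are assumed to be in {-1, +1}. Weights over the learning
    sample are initialized from old-task classifiers, then updated by
    standard AdaBoost rounds that learn task-specific weak classifiers.
    """
    n = len(y)
    # Step 1 (assumption): initialize weights so that samples the old-task
    # classifiers misclassify get more mass -- boosting then focuses on
    # what prior knowledge does not already cover.
    w = np.ones(n)
    for clf in old_classifiers:
        w *= np.where(clf.predict(X) == y, 0.5, 2.0)
    w /= w.sum()

    classifiers, alphas = [], []
    for _ in range(n_rounds):
        # Step 2: learn a task-specific weak classifier under the weights.
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()
        if err == 0 or err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # Step 3: update the weight distribution using the new classifier.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        classifiers.append(stump)
        alphas.append(alpha)
    return classifiers, alphas
```

In this sketch the lifelong aspect enters only through the initial distribution; a faithful implementation of the patent would also manage the growing pool of classifiers across tasks.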
Potential Applications
The technology described in this patent application could have potential applications in various fields such as:
- Autonomous vehicles
- Fraud detection systems
- Personalized recommendation systems
Problems Solved
This technology addresses several key problems in machine learning, including:
- Continual adaptation to new tasks without forgetting previous knowledge
- Efficient utilization of previously learned information
- Improving the performance of classifiers over time
Benefits
The benefits of this technology include:
- Enhanced accuracy and efficiency in learning new tasks
- Reduced computational resources required for lifelong machine learning
- Improved adaptability to changing environments and tasks
Potential Commercial Applications
Commercial applications of this technology could include:
- Software development for intelligent systems
- Data analytics platforms for businesses
- Machine learning services for various industries
Possible Prior Art
One possible prior art for this technology could be the concept of transfer learning, where knowledge gained from one task is applied to another related task to improve learning efficiency and performance.
What are the limitations of this technology in real-world applications?
One limitation of this technology in real-world applications could be the computational resources required to update the distribution of weights over the learning sample for each new task, especially in scenarios with a large number of tasks.
How does this technology compare to existing lifelong machine learning methods?
This technology differs from existing lifelong machine learning methods by incorporating a boosting algorithm to learn task-specific classifiers for new tasks, which can lead to improved performance and adaptability over time.
Original Abstract Submitted
A method for lifelong machine learning using boosting includes receiving a new task and a learning sample for the new task. A distribution of weights is learned over the learning sample using previously learned classifiers from old tasks. A set of task-specific classifiers are learned for the new task using a boosting algorithm and the distribution of weights over the learning sample, whereby the distribution of weights over the learning sample is updated using the task-specific classifiers for the new task.