17837636. SCALABLE KNOWLEDGE DISTILLATION TECHNIQUES FOR MACHINE LEARNING simplified abstract (Microsoft Technology Licensing, LLC)

SCALABLE KNOWLEDGE DISTILLATION TECHNIQUES FOR MACHINE LEARNING

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Adit Krishnan of Mountain View, CA (US)

Ji Li of San Jose, CA (US)

Yixuan Wei of Beijing (CN)

Xiaozhi Yu of San Jose, CA (US)

Han Hu of Beijing (CN)

Qi Dai of Beijing (CN)

SCALABLE KNOWLEDGE DISTILLATION TECHNIQUES FOR MACHINE LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 17837636, titled 'SCALABLE KNOWLEDGE DISTILLATION TECHNIQUES FOR MACHINE LEARNING'.

Simplified Explanation

The abstract describes a data processing system that implements a dynamic knowledge distillation process. In outline:

  • The data processing system divides the training data into multiple batches of samples.
  • It distills a student model from a teacher model using an iterative knowledge distillation process.
  • The system instantiates both the teacher model and the student model in memory.
  • It retrieves a batch of training data from memory.
  • The system trains both the teacher and the student model on each sample in the batch.
  • It evaluates the performance of the student model against that of the teacher model.
  • Based on this evaluation, the system provides feedback to the student model to adjust its behavior (see the sketch after this list).
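
A minimal PyTorch sketch of one iteration of such a loop is below. It is an illustration under assumptions, not the patent's implementation: the abstract does not specify the loss, so the sketch uses the common soft-target distillation loss (Hinton et al., 2015), and the function name, optimizers, and hyperparameters (temperature T, mixing weight alpha) are hypothetical.

```python
# Hypothetical sketch of one iteration of the distillation loop described
# above. The soft-target loss, optimizers, and hyperparameters are
# illustrative assumptions, not details from the patent.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, t_opt, s_opt, x, y, T=4.0, alpha=0.5):
    """Train both models on one batch, then compare their performance."""
    # Train the teacher on the batch with the ordinary supervised loss.
    t_logits = teacher(x)
    t_loss = F.cross_entropy(t_logits, y)
    t_opt.zero_grad()
    t_loss.backward()
    t_opt.step()

    # Train the student against both the labels and the softened teacher
    # outputs; the KL term is the "feedback" from the teacher.
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, y)
    s_loss = alpha * kd + (1.0 - alpha) * ce
    s_opt.zero_grad()
    s_loss.backward()
    s_opt.step()

    # Evaluate the student's performance relative to the teacher's.
    s_acc = (s_logits.argmax(dim=1) == y).float().mean().item()
    t_acc = (t_logits.argmax(dim=1) == y).float().mean().item()
    return s_acc, t_acc

# Hypothetical driver over the plurality of batches:
# t_opt = torch.optim.SGD(teacher.parameters(), lr=1e-2)
# s_opt = torch.optim.SGD(student.parameters(), lr=1e-2)
# for x, y in train_loader:  # one batch of samples at a time
#     s_acc, t_acc = distillation_step(teacher, student, t_opt, s_opt, x, y)
```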

Potential Applications:

  • Model Deployment: distilled student models are small enough to serve in latency- or resource-constrained settings where the full teacher model would be impractical.
  • Machine Learning: the knowledge distillation process can transfer knowledge from complex, computationally expensive models to simpler, more efficient models.
  • Artificial Intelligence: the system can improve the performance of compact AI models by leveraging the knowledge of more advanced models.

Problems Solved:

  • Knowledge Transfer: The technology solves the problem of transferring knowledge from a teacher model to a student model, allowing for more efficient learning and improved performance.
  • Model Optimization: By distilling knowledge from a teacher model, the system helps optimize the student model, making it more accurate and efficient.

Benefits:

  • Improved Learning: the dynamic distillation process transfers the teacher model's knowledge to the student model as training proceeds.
  • Computational Efficiency: the distilled student model approximates the teacher while requiring fewer computational resources.
  • Performance Enhancement: the iterative process improves the student model by adjusting its behavior based on feedback from the teacher model (a common concrete form of this feedback is shown below).
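
In common knowledge distillation formulations, this feedback is a loss term that pulls the student's output distribution toward the teacher's. A standard form is the soft-target loss of Hinton et al. (2015); the patent abstract does not specify the exact loss, so this is an assumption:

  L_student = α · CE(y, σ(z_s)) + (1 − α) · T² · KL( σ(z_t / T) ‖ σ(z_s / T) )

where z_s and z_t are the student and teacher logits, σ is the softmax function, T is a temperature that softens both distributions, and α balances the hard-label term against the teacher-matching term. This is the loss used in the code sketch above.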


Original Abstract Submitted

A data processing system implements a dynamic knowledge distillation process including dividing training data into a plurality of batches of samples and distilling a student model from a teacher model using an iterative knowledge distillation. The process includes instantiating an instance of the teacher model and the student model in a memory of the data processing system and obtaining a respective batch of training data from the plurality of batches of samples in the memory. The process includes training the teacher and student models using each of the samples in the respective batch of the training data, evaluating the performance of the student model compared with the performance of the teacher model, and providing feedback to student model to adjust the behavior of the student model based on the performance of the student model.