18057266. PROTECTION OF A MACHINE-LEARNING MODEL simplified abstract (International Business Machines Corporation)
Contents
- 1 PROTECTION OF A MACHINE-LEARNING MODEL
- 1.1 Organization Name
- 1.2 Inventor(s)
- 1.3 PROTECTION OF A MACHINE-LEARNING MODEL - A simplified explanation of the abstract
- 1.4 Simplified Explanation
- 1.5 Potential Applications
- 1.6 Problems Solved
- 1.7 Benefits
- 1.8 Potential Commercial Applications
- 1.9 Possible Prior Art
- 1.10 Unanswered Questions
- 1.11 Original Abstract Submitted
PROTECTION OF A MACHINE-LEARNING MODEL
Organization Name
International Business Machines Corporation
Inventor(s)
Matthias Seul of Folsom CA (US)
Andrea Giovannini of Zurich (CH)
Frederik Frank Flother of Schlieren (CH)
Tim Uwe Scheideler of Schoenenberg (CH)
PROTECTION OF A MACHINE-LEARNING MODEL - A simplified explanation of the abstract
This abstract first appeared for US patent application 18057266 titled 'PROTECTION OF A MACHINE-LEARNING MODEL'.
Simplified Explanation
The abstract describes a method for protecting a machine-learning model against training data attacks: the model is first trained with controlled data, high-impact training data are identified, an artificial pseudo-malicious training data set is built from those data, and the model is retrained on that set.
- The method involves performing an initial training of a machine-learning system with controlled training data.
- High-impact training data is identified from a larger training data set based on their impact on the training cycle.
- An artificial pseudo-malicious training data set is created from the high-impact training data.
- The machine-learning system is retrained using the artificial pseudo-malicious training data set.
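The four steps above can be sketched end to end. This is a minimal illustration, not the patented implementation: it stands in a tiny logistic-regression trainer for the machine-learning system, measures a sample's "impact" as the loss change one training step on that sample causes, and uses a quantile as the predefined impact threshold (the abstract does not specify how impact or the threshold are computed).

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, epochs=200, lr=0.1, w=None):
    """Toy logistic-regression trainer standing in for the ML system."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def loss(X, y, w):
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Step 1: initial training on controlled (vetted) training data.
X_ctrl = rng.normal(size=(100, 3))
y_ctrl = (X_ctrl[:, 0] > 0).astype(float)
w0 = train(X_ctrl, y_ctrl)

# Step 2: score each sample of a larger data set by its impact on a
# training cycle -- here, the controlled-set loss change caused by one
# gradient step on that sample alone (an assumed proxy for "impact").
X_big = rng.normal(size=(500, 3))
y_big = (X_big[:, 0] > 0).astype(float)
base = loss(X_ctrl, y_ctrl, w0)
impact = np.array([
    abs(loss(X_ctrl, y_ctrl,
             train(X_big[i:i + 1], y_big[i:i + 1], epochs=1, w=w0.copy()))
        - base)
    for i in range(len(X_big))
])

# Step 3: samples whose impact exceeds a predefined threshold form the
# artificial pseudo-malicious training data set.
threshold = np.quantile(impact, 0.95)
mask = impact > threshold
X_mal, y_mal = X_big[mask], y_big[mask]

# Step 4: retrain the initial model using that data set.
w1 = train(X_mal, y_mal, w=w0.copy())
```

In this sketch the retrained model `w1` has been exposed to exactly the samples that move it most, which is the intuition behind hardening the model against high-impact (potentially poisoned) data.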
Potential Applications
This technology could be applied in industries where machine-learning models are vulnerable to attacks on training data, such as cybersecurity, finance, and healthcare.
Problems Solved
This technology addresses the issue of protecting machine-learning models from training data attacks, which can compromise the integrity and effectiveness of the models.
Benefits
The method helps improve the robustness and security of machine-learning models by identifying and mitigating potential vulnerabilities in the training data.
Potential Commercial Applications
Potential commercial applications of this technology include developing secure and reliable machine-learning systems for various industries, offering enhanced protection against data attacks.
Possible Prior Art
One possible prior art in this field is the use of adversarial training techniques to enhance the robustness of machine-learning models against adversarial attacks on the input data.
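Adversarial training of that kind typically perturbs each input in the direction of its loss gradient (an FGSM-style step) and trains on the perturbed examples. A minimal sketch for logistic regression, with an assumed perturbation budget `eps`:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

eps, lr = 0.1, 0.1
for _ in range(300):
    # FGSM-style perturbation: move each input in the sign of the
    # per-sample loss gradient w.r.t. the input, then train on the
    # perturbed batch instead of the clean one.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_x = (p - y)[:, None] * w[None, :]   # dLoss/dX per sample
    X_adv = X + eps * np.sign(grad_x)
    p_adv = 1.0 / (1.0 + np.exp(-X_adv @ w))
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
```

Note the contrast with the abstract's method: adversarial training perturbs input data at training time, whereas the claimed method selects high-impact samples from a larger training set and retrains on a pseudo-malicious set built from them.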
Unanswered Questions
1. How does the method determine the impact threshold value for identifying high-impact training data?
2. Are there any limitations or challenges associated with retraining machine-learning models using artificial pseudo-malicious training data sets?
Original Abstract Submitted
A computer-implemented method for protecting a machine-learning model against training data attacks is disclosed. The method comprises performing an initial training of a machine-learning system with controlled training data, thereby building a trained initial machine-learning model and identifying high-impact training data from a larger training data set than in the controlled training data, wherein the identified individual training data have an impact on a training cycle of the training of machine-learning model, wherein the impact is larger than a predefined impact threshold value. The method also comprises building an artificial pseudo-malicious training data set from the identified high-impact training data and retraining the machine-learning system comprising the trained initial machine-learning model using the artificial pseudo-malicious training data set.