International Business Machines Corporation (20240119137). PROTECTION OF A MACHINE-LEARNING MODEL simplified abstract


PROTECTION OF A MACHINE-LEARNING MODEL

Organization Name

International Business Machines Corporation

Inventor(s)

Matthias Seul of Folsom CA (US)

Andrea Giovannini of Zurich (CH)

Frederik Frank Flother of Schlieren (CH)

Tim Uwe Scheideler of Schoenenberg (CH)

PROTECTION OF A MACHINE-LEARNING MODEL - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240119137, titled 'PROTECTION OF A MACHINE-LEARNING MODEL'.

Simplified Explanation

The abstract describes a method for protecting a machine-learning model against attacks on its training data: the model is first trained on controlled data, high-impact training data are identified from a larger data set, an artificial pseudo-malicious training data set is built from those high-impact data, and the model is retrained on that data set (a hypothetical code sketch follows the list below).

  • Initial training of machine-learning system with controlled training data
  • Identifying high-impact training data from a larger data set
  • Building an artificial pseudo-malicious training data set
  • Retraining the machine-learning system using the artificial data set
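
The abstract leaves the concrete mechanics of each step open. As a purely hypothetical sketch in Python, assuming a scikit-learn classifier, a per-sample log-loss proxy for "impact," and additive-noise perturbation to create the pseudo-malicious set (none of which the patent specifies), the four steps could be wired together along these lines:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  def initial_training(X_ctrl, y_ctrl):
      """Step 1: initial training on controlled (vetted) data."""
      model = LogisticRegression(max_iter=1000)
      model.fit(X_ctrl, y_ctrl)
      return model

  def identify_high_impact(model, X_large, y_large, impact_threshold):
      """Step 2: flag samples from the larger set whose impact exceeds a
      predefined threshold. Impact is approximated here by the per-sample
      log-loss under the initial model (an assumption, not the patent's metric)."""
      proba = model.predict_proba(X_large)
      nll = -np.log(proba[np.arange(len(y_large)), y_large] + 1e-12)
      mask = nll > impact_threshold
      return X_large[mask], y_large[mask]

  def build_pseudo_malicious(X_hi, y_hi, rng):
      """Step 3: derive an artificial pseudo-malicious set, here by lightly
      perturbing the high-impact samples (again an assumption)."""
      return X_hi + rng.normal(scale=0.1, size=X_hi.shape), y_hi

  def retrain(model, X_ctrl, y_ctrl, X_pm, y_pm):
      """Step 4: retrain on controlled data augmented with the pseudo-malicious set."""
      model.fit(np.vstack([X_ctrl, X_pm]), np.concatenate([y_ctrl, y_pm]))
      return model

  # Usage with synthetic data (all thresholds and scales are placeholders):
  rng = np.random.default_rng(0)
  X_ctrl = rng.normal(size=(200, 5)); y_ctrl = (X_ctrl[:, 0] > 0).astype(int)
  X_large = rng.normal(size=(2000, 5)); y_large = (X_large[:, 0] > 0).astype(int)

  model = initial_training(X_ctrl, y_ctrl)
  X_hi, y_hi = identify_high_impact(model, X_large, y_large, impact_threshold=1.0)
  X_pm, y_pm = build_pseudo_malicious(X_hi, y_hi, rng)
  model = retrain(model, X_ctrl, y_ctrl, X_pm, y_pm)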

Potential Applications

This technology can be applied in industries where machine-learning models are vulnerable to attacks on training data, such as cybersecurity, finance, healthcare, and autonomous vehicles.

Problems Solved

This technology addresses the issue of machine-learning models being manipulated or compromised through attacks on the training data, ensuring the models remain robust and reliable in real-world applications.

Benefits

  • Enhanced security and protection for machine-learning models
  • Improved accuracy and reliability of predictions
  • Safeguarding of sensitive data and systems from malicious attacks

Potential Commercial Applications

  • Cybersecurity companies can use this technology to protect their AI systems from adversarial attacks.
  • Financial institutions can implement this method to secure their fraud-detection algorithms.
  • Healthcare organizations can utilize this approach to safeguard patient data in medical diagnosis systems.

Possible Prior Art

One possible piece of prior art is the use of adversarial training techniques in machine learning to enhance model robustness against attacks on training data.
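
For context, adversarial training typically augments each training batch with inputs perturbed so as to increase the loss. A minimal fast-gradient-sign (FGSM-style) sketch for a logistic-regression model, offered only as an illustration of this prior-art technique (the weights w, bias b, and step size eps are placeholders, not from the patent):

  import numpy as np

  def fgsm_perturb(w, b, X, y, eps=0.1):
      """One fast-gradient-sign step for logistic regression: shift each
      input in the direction that increases its own log-loss."""
      p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
      grad_x = (p - y)[:, None] * w            # d(log-loss)/d(input), per sample
      return X + eps * np.sign(grad_x)

Training then continues on the union of the clean and perturbed samples.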

What are the limitations of this method in real-world applications?

The abstract does not mention the scalability of the method to large datasets or the computational resources required for retraining the machine-learning system with the artificial data set.

How does this method compare to existing techniques for protecting machine-learning models?

The abstract does not provide a comparison with other existing techniques for protecting machine-learning models, such as adversarial training, data augmentation, or model distillation.


Original Abstract Submitted

A computer-implemented method for protecting a machine-learning model against training data attacks is disclosed. The method comprises performing an initial training of a machine-learning system with controlled training data, thereby building a trained initial machine-learning model, and identifying high-impact training data from a larger training data set than the controlled training data, wherein the identified individual training data have an impact on a training cycle of the training of the machine-learning model that is larger than a predefined impact threshold value. The method also comprises building an artificial pseudo-malicious training data set from the identified high-impact training data and retraining the machine-learning system comprising the trained initial machine-learning model using the artificial pseudo-malicious training data set.
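
The abstract does not define how a sample's impact on a training cycle is measured. One plausible reading, offered strictly as an assumption, is the magnitude of the gradient contribution a single example makes to one update step, compared against the predefined threshold:

  import numpy as np

  def per_sample_impact(w, b, X, y):
      """Impact proxy (assumption): the L2 norm of each sample's gradient
      contribution to one logistic-regression training cycle."""
      p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
      grad = (p - y)[:, None] * X              # per-sample gradient w.r.t. w
      return np.linalg.norm(grad, axis=1)

  # Samples exceeding the predefined threshold feed the pseudo-malicious set:
  # impacts = per_sample_impact(w, b, X_large, y_large)
  # high_impact = X_large[impacts > impact_threshold]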