18147774. SYSTEM AND METHOD FOR PREVENTING INTRODUCTION OF POISONED TRAINING DATA TO ARTIFICIAL INTELLIGENCE MODELS simplified abstract (Dell Products L.P.)

From WikiPatents

SYSTEM AND METHOD FOR PREVENTING INTRODUCTION OF POISONED TRAINING DATA TO ARTIFICIAL INTELLIGENCE MODELS

Organization Name

Dell Products L.P.

Inventor(s)

Ofir Ezrielev of Be'er Sheva (IL)

Amihai Savir of Newton, MA (US)

Tomer Kushnir of Omer (IL)

SYSTEM AND METHOD FOR PREVENTING INTRODUCTION OF POISONED TRAINING DATA TO ARTIFICIAL INTELLIGENCE MODELS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18147774, titled 'SYSTEM AND METHOD FOR PREVENTING INTRODUCTION OF POISONED TRAINING DATA TO ARTIFICIAL INTELLIGENCE MODELS'.

Simplified Explanation

The patent application describes methods and systems for managing AI models so that attackers cannot shift a model's behavior by supplying poisoned training data.

Key Features and Innovation

  • Prevents re-training of AI models with training data that is too similar to previously used training data.
  • Analyzes candidate training data sets, before re-training, to detect poisoned data.
  • Scores candidate data sets against a threshold to determine whether they are usable for training (see the sketch after this list).
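
The filing does not disclose how the score is computed, so the following is a minimal illustrative sketch in Python rather than the patented method: it assumes the score is the average nearest-neighbor cosine similarity between candidate samples and previously used samples, and the function name `similarity_score` is hypothetical.

```python
import numpy as np

def similarity_score(candidate: np.ndarray, previous: np.ndarray) -> float:
    """Hypothetical scorer: for each candidate sample (row), take its highest
    cosine similarity to any previously used sample, then average those maxima.
    Candidate sets that largely duplicate earlier training data score near 1.0."""
    # Normalize rows to unit length so dot products become cosine similarities.
    c = candidate / np.linalg.norm(candidate, axis=1, keepdims=True)
    p = previous / np.linalg.norm(previous, axis=1, keepdims=True)
    sims = c @ p.T                           # pairwise cosine similarities
    return float(np.mean(sims.max(axis=1)))  # mean of per-sample best matches
```
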
Potential Applications

This technology can be applied in various industries such as cybersecurity, finance, healthcare, and autonomous vehicles to enhance the security and accuracy of AI models.

Problems Solved

This technology addresses the issue of malicious attacks on AI models using poisoned training data, ensuring the integrity and reliability of the models.

Benefits

  • Enhances the security of AI models.
  • Prevents malicious attacks on AI systems.
  • Improves the accuracy and reliability of AI models.
Commercial Applications

The technology can be used by companies developing AI systems for various applications to ensure the security and integrity of their models, potentially leading to increased trust from users and clients.

Prior Art

Researchers can explore prior art related to AI model security, data poisoning attacks, and methods for detecting and preventing such attacks in the field of artificial intelligence.

Frequently Updated Research

Stay updated on the latest advancements in AI model security, data poisoning prevention techniques, and the evolving landscape of cybersecurity in AI applications.

Questions about AI Model Security

1. How does this technology differentiate between normal training data and poisoned training data?
2. What are the potential implications of not detecting poisoned data in AI models?


Original Abstract Submitted

Methods and systems for managing artificial intelligence (AI) models are disclosed. To manage AI models, an instance of an AI model may not be re-trained using training data determined to be too similar to previously used training data. By doing so, malicious attacks intending to shift the AI model in a particular direction using poisoned training data may be prevented. To do so, a candidate training data set may be analyzed prior to performing re-training of an instance of an AI model using the candidate training data set. The analysis may result in a score. If the score exceeds a score threshold, the candidate training data set may be considered to contain poisoned training data. If the score does not exceed the score threshold, the candidate training data set may be accepted as usable to train an instance of the AI model.
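
The abstract specifies only the control flow: score the candidate training data set, compare the score to a threshold, reject the set as potentially poisoned if the threshold is exceeded, and otherwise accept it for re-training. A minimal sketch of that gate follows, reusing the hypothetical `similarity_score` sketched earlier; the threshold value of 0.9 is an assumed placeholder, not a figure from the filing.

```python
import numpy as np

def is_usable_for_training(candidate: np.ndarray,
                           previous: np.ndarray,
                           score_threshold: float = 0.9) -> bool:
    """Gate re-training per the abstract: a score above the threshold marks the
    candidate set as potentially poisoned and it is rejected; otherwise it is
    accepted as usable for re-training. The 0.9 threshold is a placeholder."""
    return similarity_score(candidate, previous) <= score_threshold

# Example: a candidate set that nearly duplicates previously used data is
# rejected, while independently drawn data is typically accepted.
rng = np.random.default_rng(0)
previous = rng.normal(size=(100, 16))
near_duplicate = previous + rng.normal(scale=1e-3, size=(100, 16))
fresh = rng.normal(size=(100, 16))
print(is_usable_for_training(near_duplicate, previous))  # False: too similar
print(is_usable_for_training(fresh, previous))           # True (typically)
```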