17643896. DEFENDING DEEP GENERATIVE MODELS AGAINST ADVERSARIAL ATTACKS simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)


DEFENDING DEEP GENERATIVE MODELS AGAINST ADVERSARIAL ATTACKS

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Mathieu Sinn of Dublin (IE)

Killian Levacher of Dublin (IE)

Ambrish Rawat of Dublin (IE)

DEFENDING DEEP GENERATIVE MODELS AGAINST ADVERSARIAL ATTACKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17643896, titled 'DEFENDING DEEP GENERATIVE MODELS AGAINST ADVERSARIAL ATTACKS'.

Simplified Explanation

The abstract of this patent application describes a method for detecting and defending against adversarial attacks on deep generative models. Here is a simplified explanation:

  • One or more adversarial attack detection operations are applied to one or more deep generative models to protect them from adversarial attacks.
  • An attack is detected when one or more of a plurality of detection operations flags the model or its outputs as anomalous.
  • If an attack is detected, the affected deep generative models are sanitized to mitigate the effects of the attack (see the illustrative sketch after this list).
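The sketch below illustrates this detect-then-sanitize workflow in Python. It is a minimal, hypothetical example, not the patent's actual implementation: the model class, the two detection operations (an output-distribution check and an outlier-rate check), the thresholds, and the sanitization step are all illustrative assumptions chosen to show the control flow described in the abstract.

```python
# Hypothetical sketch of the detect-then-sanitize workflow. All names, detectors,
# and thresholds are illustrative assumptions, not the patented method.
import numpy as np


class ToyGenerativeModel:
    """Stand-in for a deep generative model: samples from a learned Gaussian."""

    def __init__(self, mean=0.0, std=1.0):
        self.mean = mean
        self.std = std

    def sample(self, n, rng):
        return rng.normal(self.mean, self.std, size=n)


def output_distribution_check(samples, expected_mean=0.0, expected_std=1.0, tol=0.5):
    """Detection operation 1: flag if sample statistics drift from expected values."""
    return (abs(samples.mean() - expected_mean) > tol
            or abs(samples.std() - expected_std) > tol)


def outlier_rate_check(samples, z_thresh=4.0, max_rate=0.01):
    """Detection operation 2: flag if too many samples are extreme outliers."""
    z = np.abs((samples - samples.mean()) / (samples.std() + 1e-8))
    return (z > z_thresh).mean() > max_rate


def sanitize(model, clean_mean=0.0, clean_std=1.0):
    """Sanitization step: restore the model to known-good (clean) parameters."""
    model.mean = clean_mean
    model.std = clean_std
    return model


def defend(model, detectors, n_samples=1000, seed=0):
    """Apply each detection operation; sanitize the model if any of them fires."""
    rng = np.random.default_rng(seed)
    samples = model.sample(n_samples, rng)
    if any(detector(samples) for detector in detectors):
        return sanitize(model), True
    return model, False


if __name__ == "__main__":
    # Simulate a model whose parameters were manipulated by an adversarial attack.
    poisoned = ToyGenerativeModel(mean=3.0, std=2.5)
    model, attacked = defend(poisoned, [output_distribution_check, outlier_rate_check])
    print(f"attack detected: {attacked}, restored mean/std: {model.mean}/{model.std}")
```

In a real deployment the detection operations would inspect richer signals (e.g., latent-space statistics or reconstruction error) and sanitization might mean reloading verified weights or retraining on clean data; the example only shows the overall pattern of applying multiple detectors and sanitizing on a positive detection.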

Potential Applications

This technology has potential applications in various fields, including:

  • Cybersecurity: Protecting deep generative models used in sensitive applications, such as fraud detection or malware detection, from adversarial attacks.
  • Image and video processing: Safeguarding deep generative models used in image and video generation tasks, such as content creation or data augmentation, from adversarial attacks.
  • Natural language processing: Defending deep generative models used in text generation or language translation tasks from adversarial attacks.

Problems Solved

This technology addresses the following problems:

  • Vulnerability to adversarial attacks: Deep generative models are susceptible to adversarial attacks, where malicious inputs can manipulate the model's output.
  • Security risks: Adversarial attacks can compromise the integrity and reliability of deep generative models, leading to incorrect or biased results.
  • Limited trustworthiness: without attack detection and defense, the trustworthiness of deep generative models, and of the applications that rely on them, is undermined.

Benefits

The use of this technology offers several benefits:

  • Enhanced security: By detecting and mitigating adversarial attacks, the security of deep generative models is improved, reducing the risk of malicious manipulation.
  • Improved reliability: Sanitizing deep generative models after an attack helps maintain their reliability and ensures more accurate and trustworthy outputs.
  • Trustworthy AI applications: By defending against adversarial attacks, this technology contributes to the development of more trustworthy and robust AI applications in various domains.


Original Abstract Submitted

Adversarial attack detection operations may be applied on one or more deep generative models for defending deep generative models from adversarial attacks. The adversarial attack may be detected on the one or more deep generative models based on the one or more of a plurality of adversarial attack detection operations. The one or more deep generative models may be sanitized based on the adversarial attack.