17937242. DEFENSE AGAINST XAI ADVERSARIAL ATTACKS BY DETECTION OF COMPUTATIONAL RESOURCE FOOTPRINTS simplified abstract (Dell Products L.P.)

From WikiPatents

DEFENSE AGAINST XAI ADVERSARIAL ATTACKS BY DETECTION OF COMPUTATIONAL RESOURCE FOOTPRINTS

Organization Name

Dell Products L.P.

Inventor(s)

Iam Palatnik De Sousa of Rio de Janeiro (BR)

Adriana Bechara Prado of Niteroi (BR)

DEFENSE AGAINST XAI ADVERSARIAL ATTACKS BY DETECTION OF COMPUTATIONAL RESOURCE FOOTPRINTS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17937242, titled 'DEFENSE AGAINST XAI ADVERSARIAL ATTACKS BY DETECTION OF COMPUTATIONAL RESOURCE FOOTPRINTS'.

Simplified Explanation

The abstract of the patent application describes a method for auditing a machine learning model by analyzing its computational resource footprint to determine whether it is under an adversarial attack. The method comprises the following steps:

  • Initiating an audit of a machine learning model
  • Providing input data to the machine learning model during the audit
  • Receiving information on the operation of the machine learning model, including its computational resource footprint
  • Analyzing the computational resource footprint
  • Determining if the footprint indicates an adversarial attack on the machine learning model
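The steps above can be illustrated with a minimal sketch. The patent does not specify which resources are measured or how the attack decision is made, so the metrics here (wall-clock time and peak Python-heap memory), the threshold heuristic, and the names `measure_footprint`, `audit_model`, and `toy_model` are all illustrative assumptions:

```python
import time
import tracemalloc

def measure_footprint(model, inputs):
    """Run the model on the audit inputs while recording its computational
    resource footprint: wall-clock time and peak Python-heap memory."""
    tracemalloc.start()
    start = time.perf_counter()
    for x in inputs:
        model(x)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"time": elapsed, "peak_memory": peak}

def audit_model(model, inputs, baseline, tolerance=3.0):
    """Compare an audited run's footprint against a known-clean baseline.
    A footprint far above baseline is flagged as potentially characteristic
    of an adversarial attack (illustrative threshold rule, not the patent's)."""
    footprint = measure_footprint(model, inputs)
    footprint["suspicious"] = (
        footprint["time"] > tolerance * baseline["time"]
        or footprint["peak_memory"] > tolerance * baseline["peak_memory"]
    )
    return footprint

# Hypothetical stand-in for a model whose work scales with its input,
# so crafted inputs inflate the resource footprint.
def toy_model(x):
    return sum(i * i for i in range(int(x)))

baseline = measure_footprint(toy_model, [1_000] * 5)
report = audit_model(toy_model, [200_000] * 5, baseline)
```

In this sketch, the audited run performs far more work than the baseline run, so `report["suspicious"]` comes back true; a real deployment would need a statistically robust baseline rather than a single clean run.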

Potential Applications

This technology could be applied in various industries where machine learning models are used, such as cybersecurity, finance, healthcare, and autonomous vehicles.

Problems Solved

This technology helps in identifying and mitigating adversarial attacks on machine learning models, which can compromise the integrity and reliability of the models.

Benefits

  • Enhances the security and trustworthiness of machine learning models
  • Helps maintain the performance and accuracy of machine learning systems
  • Provides insight into potential vulnerabilities in machine learning models

Potential Commercial Applications

Enhancing machine learning model security, with applications in cybersecurity and beyond.

Possible Prior Art

Prior research has been conducted on detecting adversarial attacks on machine learning models using various techniques such as adversarial training, input sanitization, and model verification.
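Of the prior-art techniques listed, input sanitization is the simplest to illustrate. The sketch below is a generic clip-and-quantize defense, not anything described in this patent; the function name `sanitize` and all parameters are assumptions:

```python
def sanitize(features, lo=0.0, hi=1.0, levels=16):
    """Input sanitization: clip each feature to its valid range and
    quantize it to a coarse grid. Small adversarial perturbations are
    often destroyed by this reduction in input precision."""
    step = (hi - lo) / (levels - 1)
    return [round(min(max(x, lo), hi) / step) * step for x in features]
```

For example, two inputs that differ only by a small perturbation (e.g. ±0.01 around a grid point) map to the same sanitized value, so a perturbation of that size cannot change the model's input.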

Unanswered Questions

How does this method compare to existing techniques for detecting adversarial attacks on machine learning models?

The article does not provide a direct comparison with other methods or technologies in the field.

What are the limitations of using computational resource footprints as an indicator of adversarial attacks on machine learning models?

The article does not discuss any potential drawbacks or limitations of this approach.


Original Abstract Submitted

One example method includes initiating an audit of a machine learning model, providing input data to the machine learning model as part of the audit, while the audit is running, receiving information regarding operation of the machine learning model, wherein the information comprises a computational resource footprint, analyzing the computational resource footprint, and determining, based on the analyzing, that the computational resource footprint is characteristic of an adversarial attack on the machine learning model.