18048341. GENERATING LOCALLY INVARIANT EXPLANATIONS FOR MACHINE LEARNING simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)

From WikiPatents
Revision as of 09:13, 26 April 2024 by Wikipatents (Creating a new page)

GENERATING LOCALLY INVARIANT EXPLANATIONS FOR MACHINE LEARNING

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Amit Dhurandhar of Yorktown Heights NY (US)

Karthikeyan Natesan Ramamurthy of Pleasantville NY (US)

Kartik Ahuja of White Plains NY (US)

Vijay Arya of Gurgaon (IN)

GENERATING LOCALLY INVARIANT EXPLANATIONS FOR MACHINE LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18048341 titled 'GENERATING LOCALLY INVARIANT EXPLANATIONS FOR MACHINE LEARNING'.

Simplified Explanation

The patent application describes techniques for explaining machine learning model outputs: identify the model, an output, and a set of constraints; generate neighborhoods based on those constraints; fit a predictor for each neighborhood; combine the per-neighborhood predictors into a single combined predictor; and generate explanations using the combined predictor.

  • Identifying machine learning model, output, and constraints
  • Generating neighborhoods based on constraints
  • Creating predictors for each neighborhood
  • Combining predictors into a combined predictor
  • Generating explanations using the combined predictor
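The abstract leaves the constraints, the predictor class, and the combination rule unspecified. As an illustration only, the five steps above might be sketched as follows, assuming Gaussian perturbations as the neighborhood "constraints", least-squares linear surrogates as the per-neighborhood predictors, and coefficient averaging as the combination (all function names and parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: a stand-in black-box model (hypothetical; any ML model would do).
def black_box(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def fit_linear_predictor(X, y):
    # A simple local surrogate: least-squares linear fit with intercept.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # per-feature weights, then intercept

def explain(x, n_neighborhoods=5, n_samples=200, radius=0.1):
    predictors = []
    for _ in range(n_neighborhoods):
        # Step 2: generate a neighborhood around x; here the "constraint"
        # is just a sampling radius (the patent leaves constraints abstract).
        X = x + rng.normal(scale=radius, size=(n_samples, len(x)))
        y = black_box(X)  # query the model on the perturbed inputs
        # Step 3: one local predictor per neighborhood.
        predictors.append(fit_linear_predictor(X, y))
    # Step 4: combine the predictors (here, by averaging coefficients).
    combined = np.mean(predictors, axis=0)
    # Step 5: the combined predictor's feature weights are the explanation.
    return combined[:-1]

weights = explain(np.array([1.0, 2.0]))
# Near x = (1, 2): the local slope of x0^2 is about 2, and of 3*x1 is 3.
```

The returned weights approximate the model's local sensitivities at the explained point, which is the usual reading of a local linear explanation.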

Potential Applications

The technology could be applied in various fields such as healthcare, finance, and autonomous systems where interpretability of machine learning models is crucial.

Problems Solved

This technology addresses the black-box nature of machine learning models by providing explanations for their outputs, increasing transparency and trust in the decision-making process.

Benefits

  • Improved interpretability of machine learning models
  • Enhanced trust in the decision-making process
  • Better understanding of model behavior and predictions

Potential Commercial Applications

Enhancing machine learning model interpretability for better decision-making.

Possible Prior Art

One possible prior art is the use of LIME (Local Interpretable Model-agnostic Explanations) technique for explaining machine learning models.
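For contrast with the multi-neighborhood approach above, LIME fits a single proximity-weighted linear surrogate around the instance being explained. Below is a minimal, library-free sketch of that idea (this is not the actual `lime` package API; the kernel width, sampling scale, and model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box model to be explained.
def black_box(X):
    return np.sin(X[:, 0]) + 2.0 * X[:, 1]

def lime_style_explanation(x, n_samples=500, scale=0.5, kernel_width=0.25):
    # Perturb the instance and query the black-box model.
    X = x + rng.normal(scale=scale, size=(n_samples, len(x)))
    y = black_box(X)
    # Weight samples by proximity to x (exponential kernel, as in LIME).
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # Weighted least squares: fit a linear surrogate that is most faithful
    # near x, then read its coefficients as feature attributions.
    A = np.hstack([X, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * np.sqrt(w), rcond=None)
    return coef[:-1]

attr = lime_style_explanation(np.array([0.0, 1.0]))
# Near x = (0, 1): the local slope of sin(x0) is about cos(0) = 1,
# and of 2*x1 is 2.
```

The patented technique differs in that it fits predictors over several constrained neighborhoods and combines them, rather than a single weighted fit, which is presumably where the "locally invariant" property comes from.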

=== What are the specific constraints used in generating neighborhoods? ===

The specific constraints used in generating neighborhoods are not mentioned in the abstract. It would be helpful to know whether these constraints are based on data characteristics, model architecture, or other factors.

=== How is the combined predictor constructed from individual predictors? ===

The abstract mentions constructing a combined predictor by combining the individual predictors for each neighborhood. It would be interesting to learn more about the methodology or algorithm used to combine these predictors and how it contributes to generating explanations for the machine learning model.
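Since the abstract does not specify the combination rule, here are two plausible options sketched on toy data (the predictors and the R² scores below are entirely hypothetical):

```python
import numpy as np

# Two hypothetical per-neighborhood linear predictors (feature weights only).
p1 = np.array([2.1, 2.9])
p2 = np.array([1.9, 3.1])

# Simplest combination: uniform averaging of coefficients.
uniform = np.mean([p1, p2], axis=0)

# Alternative: weight each predictor by its neighborhood's fit quality
# (hypothetical R^2 scores), so better-fitting local models dominate.
r2 = np.array([0.9, 0.6])
weighted = (r2[:, None] * np.vstack([p1, p2])).sum(axis=0) / r2.sum()
```

Whether the patent uses averaging, quality weighting, or something else (e.g. an invariance-motivated objective across neighborhoods) cannot be determined from the abstract alone.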


Original Abstract Submitted

Techniques for generating explanations for machine learning (ML) are disclosed. These techniques include identifying an ML model, an output from the ML model, and a plurality of constraints, and generating a plurality of neighborhoods relating to the ML model based on the plurality of constraints. The techniques further include generating a predictor for each of the plurality of neighborhoods using the ML model and the plurality of constraints, constructing a combined predictor based on combining each of the respective predictors for the plurality of neighborhoods, and creating one or more explanations relating to the ML model and the output from the ML model using the combined predictor.