17985457. DATA MINIMIZATION USING LOCAL MODEL EXPLAINABILITY simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)

From WikiPatents

DATA MINIMIZATION USING LOCAL MODEL EXPLAINABILITY

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Abigail Goldsteen of Haifa (IL)

Ron Shmelkin of Haifa (IL)

DATA MINIMIZATION USING LOCAL MODEL EXPLAINABILITY - A simplified explanation of the abstract

This abstract first appeared for US patent application 17985457, titled 'DATA MINIMIZATION USING LOCAL MODEL EXPLAINABILITY'.

Simplified Explanation

The abstract describes a method for generating feature explainability data for a predictive model. This data quantifies how significant a feature is in predicting the model's output for a given sample. The method extracts the feature's value from the input data, constructs a generalization group when the feature value and the explainability value satisfy a predetermined condition, and generates generalized domain data in which a single generalized value stands in for a range of feature values.

  • Feature explainability data is generated for a predictive model.
  • The explainability value reflects the significance of a feature for the model's output on a given sample.
  • Feature value data is extracted from the input data.
  • A generalization group is constructed when the feature value and explainability value satisfy a predetermined condition.
  • Generalized domain data is generated, replacing the exact feature value with a generalized one.
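The steps above can be sketched in Python. This is a minimal illustrative sketch, not the patented method: the perturbation-based importance score stands in for a proper local explainer (e.g., LIME or SHAP), and the threshold condition, baseline, and bin edges are all assumptions for the example.

```python
import numpy as np

def local_importance(model, x, baseline):
    """Per-feature local explainability score for one sample: the change
    in the model's prediction when the feature is replaced by a baseline
    value (a crude stand-in for a proper local explainer)."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]
        scores.append(abs(model(x_pert) - base_pred))
    return np.array(scores)

def generalize_sample(x, scores, bins, threshold):
    """Data-minimization step: replace each low-importance feature value
    with the coarse bin (generalized domain) that contains it, keeping
    high-importance values exact."""
    generalized = []
    for value, score, edges in zip(x, scores, bins):
        if score < threshold:  # assumed predetermined condition: low explainability
            idx = np.searchsorted(edges, value, side="right") - 1
            idx = min(max(idx, 0), len(edges) - 2)
            generalized.append((edges[idx], edges[idx + 1]))  # a range, not the exact value
        else:
            generalized.append(value)
    return generalized

# Toy model: feature 0 dominates the prediction, feature 1 barely matters.
model = lambda x: 2.0 * x[0] + 0.001 * x[1]
x = np.array([3.0, 40.0])
baseline = np.zeros(2)
scores = local_importance(model, x, baseline)   # roughly [6.0, 0.04]
bins = [np.array([0.0, 5.0, 10.0]),
        np.array([0.0, 25.0, 50.0, 75.0, 100.0])]
result = generalize_sample(x, scores, bins, threshold=0.05)
# Feature 0 stays exact; feature 1 is generalized to the bin (25.0, 50.0).
```

A sample transformed this way reveals only that the second feature lies somewhere in a range, which is the data-minimization payoff: low-significance values need not be stored exactly.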

Potential Applications

This technology could be applied in fields such as finance, healthcare, and marketing to improve predictive modeling, to understand the impact of individual features on model outcomes, and, as the title suggests, to minimize the data that must be collected or retained without degrading model predictions.

Problems Solved

This technology helps interpret and explain the importance of individual features in predictive models, supporting better decision-making and model optimization.

Benefits

The method provides insight into the significance of features in predictive models, enhancing transparency and trust in model predictions. It also aids in model debugging and refinement.

Potential Commercial Applications

"Enhancing Predictive Model Explainability for Improved Decision Making"

Possible Prior Art

There may be prior art related to feature explainability in predictive modeling, such as methods for interpreting model outputs or visualizing feature importance.

Unanswered Questions

How does this method compare to existing techniques for feature explainability in predictive modeling?

The article does not provide a direct comparison with other methods or tools available for feature explainability in predictive modeling.

What are the limitations or challenges of implementing this method in real-world applications?

The article does not address potential obstacles or difficulties that may arise when implementing this method in practical scenarios.


Original Abstract Submitted

An embodiment includes generating feature explainability data associated with a feature in a set of features of a sample represented by input data for a predictive model, where the explainability value is based at least in part on a significance of the feature for an output of the predictive model for the sample. The embodiment extracts feature value data from the input data that is representative of a feature value of the feature for the sample. The embodiment constructs a generalization group comprising the feature of the sample by detecting that the feature value and the explainability value satisfy a predetermined condition. The embodiment generates generalized domain data indicative of a generalized domain that comprises a generalized feature value that corresponds to a plurality of feature values in a domain of the generalization group such that the generalized feature is a generalization of the feature of the sample.
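The last two steps of the abstract, constructing a generalization group and generating its generalized domain, can be illustrated with a short sketch. The score-below-threshold condition and the min-max range are illustrative assumptions, not details from the filing.

```python
def build_generalization_group(values, explain_scores, threshold):
    """Collect the feature values (across samples) whose local
    explainability satisfies the condition -- assumed here to be a
    simple 'score below threshold' cutoff."""
    return [v for v, s in zip(values, explain_scores) if s < threshold]

def generalized_domain(group):
    """Generalized feature value covering the group's domain: a single
    min-max range standing in for every exact value in the group."""
    return (min(group), max(group))

# An 'age' feature for four samples, with its local importance per sample.
ages = [18, 22, 35, 61]
scores = [0.01, 0.02, 0.30, 0.04]
group = build_generalization_group(ages, scores, threshold=0.05)  # [18, 22, 61]
domain = generalized_domain(group)                                # (18, 61)
```

The generalized feature value (the range) is a generalization of each sample's exact value, so those samples can be stored or shared in minimized form while the high-importance sample (age 35) keeps its exact value.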