18170724. UNCERTAINTY ESTIMATION USING UNINFORMATIVE FEATURES simplified abstract (Capital One Services, LLC)
UNCERTAINTY ESTIMATION USING UNINFORMATIVE FEATURES
Organization Name: Capital One Services, LLC
Inventor(s)
- Samuel Sharpe of Cambridge, MA (US)
- Brian Barr of Schenectady, NY (US)
- Justin Au-yeung of Somerville, MA (US)
- Daniel Barcklow of McLean, VA (US)
UNCERTAINTY ESTIMATION USING UNINFORMATIVE FEATURES - A simplified explanation of the abstract
This abstract first appeared for US patent application 18170724, titled 'UNCERTAINTY ESTIMATION USING UNINFORMATIVE FEATURES'. The key ideas are:
- A computing system generates uninformative features (e.g., random values) that do not correlate with what the model is tasked with predicting
- The uninformative features are added to a dataset of real, informative features
- A machine learning model is trained on the combined dataset containing both feature types
- Feature attributions are generated for model output, covering both the real and the uninformative features
- Attributions on the uninformative features serve as a baseline for judging the quality of an explanation of model output
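The steps above can be sketched with a small synthetic example. This is an illustrative toy, not the patent's actual method: the model is ordinary least squares, and the "attribution" used here (mean |coefficient × feature value| per column) is one simple stand-in for whatever attribution technique a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real (informative) features: the target genuinely depends on them.
n, d_real, d_noise = 500, 3, 3
X_real = rng.normal(size=(n, d_real))
y = X_real @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

# Uninformative features: random values, uncorrelated with the target.
X_noise = rng.normal(size=(n, d_noise))

# Train a model on the combined dataset (least squares as a toy model).
X = np.hstack([X_real, X_noise])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Toy feature attribution: mean |coefficient * feature value| per column.
attributions = np.abs(X * coef).mean(axis=0)

# Baseline from the uninformative columns: a feature is flagged as
# informative only if its attribution exceeds the largest attribution
# assigned to any of the random columns.
baseline = attributions[d_real:].max()
informative_mask = attributions[:d_real] > baseline
print(informative_mask)
```

In this synthetic setup all three real features clear the baseline, so the mask prints as all `True`; an explanation method that assigned a real feature less attribution than the random columns would be suspect by this criterion.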
Potential Applications:
- Improving the interpretability of machine learning models
- Enhancing the trustworthiness of model predictions
- Facilitating the identification of influential features in model output
Problems Solved:
- Addressing the challenge of explaining complex model outputs
- Providing a method to distinguish informative from uninformative features in a dataset
Benefits:
- Increased transparency in machine learning processes
- Better understanding of model decision-making
- Improved model performance through feature analysis
Commercial Applications: Enhancing machine learning model interpretability for improved decision-making and trust
Questions about the technology:
1. How does the inclusion of uninformative features impact the interpretability of machine learning models?
2. What are the potential implications of using feature attributions for model output in real-world applications?
Original Abstract Submitted
In some aspects, a computing system may generate uninformative features that may be added to a dataset of real features to use as a baseline for determining the quality of an explanation of model output. The uninformative features may be features that do not correlate with what a model is tasked with predicting (e.g., the uninformative features may be random values), and the real features may be informative and correlate with what the model is tasked with predicting (e.g., variables of a dataset sample). A machine learning model may be trained on a dataset that includes both the real features and the uninformative features. The computing system may generate feature attributions for model output, which may include feature attributions for the uninformative features and the real features in the dataset.