18577739. METHOD, APPARATUS AND DEVICE FOR EXPLAINING MODEL AND COMPUTER STORAGE MEDIUM simplified abstract (BOE Technology Group Co., Ltd.)

From WikiPatents

METHOD, APPARATUS AND DEVICE FOR EXPLAINING MODEL AND COMPUTER STORAGE MEDIUM

Organization Name

BOE Technology Group Co., Ltd.

Inventor(s)

Yusi Chen of Beijing (CN)

METHOD, APPARATUS AND DEVICE FOR EXPLAINING MODEL AND COMPUTER STORAGE MEDIUM - A simplified explanation of the abstract

This abstract first appeared for US patent application 18577739 titled 'METHOD, APPARATUS AND DEVICE FOR EXPLAINING MODEL AND COMPUTER STORAGE MEDIUM'.

Simplified Explanation

The method disclosed in the patent application explains a model by transforming an input sample into an embedding space, generating perturbation data there, weighting the neighborhood vectors, training an explainable surrogate model, and deriving an explanation result from it.

  • Transform target input sample to embedding space
  • Generate perturbation data set in embedding space
  • Acquire weight of neighborhood vector in perturbation data set
  • Transform perturbation data set back to original feature space
  • Acquire explainable model of target data analysis model by training
  • Obtain explanation result based on explainable model
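The steps above can be sketched end to end in pure NumPy. Everything here is a hypothetical stand-in, since the patent does not specify concrete components: the "encoder" is a random linear map (with its pseudo-inverse as the return transform), the target data analysis model is an arbitrary black-box function, the neighborhood weights come from an RBF kernel, and the explainable model is a weighted linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_embed, n_samples = 5, 3, 200

# Hypothetical linear encoder/decoder between feature and embedding space.
W = rng.normal(size=(n_features, n_embed))
encode = lambda x: x @ W                       # feature space -> embedding space
decode = lambda z: z @ np.linalg.pinv(W)       # embedding space -> feature space

black_box = lambda x: np.sin(x).sum(axis=-1)   # stand-in target data analysis model
x0 = rng.normal(size=n_features)               # target input sample

# 1. Transform the target input sample to the embedding space.
z0 = encode(x0)

# 2. Generate a perturbation data set around z0 in the embedding space.
Z = z0 + 0.1 * rng.normal(size=(n_samples, n_embed))

# 3. Acquire a weight for each neighborhood vector (RBF proximity kernel).
weights = np.exp(-np.sum((Z - z0) ** 2, axis=1) / 0.25)

# 4. Transform the perturbation data set back to the original feature space
#    and label it with the black-box model.
X = decode(Z)
y = black_box(X)

# 5. Train the explainable model: weighted least-squares linear regression.
Xb = np.hstack([X, np.ones((n_samples, 1))])   # add intercept column
Wd = np.diag(weights)
coef = np.linalg.lstsq(Xb.T @ Wd @ Xb, Xb.T @ Wd @ y, rcond=None)[0]

# 6. Obtain the explanation result: per-feature contributions at x0.
explanation = coef[:-1] * x0
print(explanation)
```

The key difference from vanilla LIME is that steps 2 and 3 happen in the embedding space, so the perturbations stay on (or near) the data manifold before being mapped back for labeling.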

Potential Applications

This technology could be applied in various fields such as machine learning, data analysis, and artificial intelligence to provide explanations for complex models and predictions.

Problems Solved

This method helps in understanding and interpreting the decisions made by machine learning models, which can be crucial for ensuring transparency, accountability, and trust in automated systems.

Benefits

  • Enhances interpretability of machine learning models
  • Facilitates trust and accountability in AI systems
  • Enables users to understand the reasoning behind model predictions

Potential Commercial Applications

"Explaining a Model" technology could be utilized in industries such as finance, healthcare, and e-commerce to explain the rationale behind automated decisions, improve model performance, and comply with regulatory requirements.

Possible Prior Art

One possible piece of prior art in this field is the LIME (Local Interpretable Model-agnostic Explanations) technique, which explains machine learning models by fitting local approximations around individual predictions.
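For contrast with the patented method, classic LIME perturbs directly in the original (interpretable) feature space rather than in an embedding space. A minimal sketch with a hypothetical linear black box, using NumPy in place of the `lime` package:

```python
import numpy as np

rng = np.random.default_rng(1)
black_box = lambda x: x[:, 0] * 2.0 - x[:, 1]     # hypothetical model to explain

x0 = np.array([1.0, 0.5])                         # instance being explained
X = x0 + 0.1 * rng.normal(size=(100, 2))          # perturb in the feature space
y = black_box(X)
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02) # locality (proximity) kernel

# Weighted least squares for the local linear surrogate model.
A = np.hstack([X, np.ones((100, 1))])
theta = np.linalg.lstsq((A * w[:, None]).T @ A, (A * w[:, None]).T @ y,
                        rcond=None)[0]
print(theta[:2])   # recovers the local gradient, here approximately [2, -1]
```

Because the toy black box is exactly linear, the surrogate recovers its coefficients; on a real nonlinear model the result is only a local approximation near `x0`.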

What are the specific techniques used for transforming input samples in the method described?

The specific techniques used for transforming input samples in the method described involve converting the target input sample from the original feature space to an embedding space and then back to the original feature space after generating perturbation data.
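The abstract does not name a concrete transform pair, but one plausible (hypothetical) instance is a PCA-style linear projection, which gives a matched forward and inverse mapping between the original feature space and a lower-dimensional embedding space:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))                  # data in the original 8-D feature space
mu = X.mean(axis=0)

# Fit an orthonormal basis for a 3-D embedding space via SVD (PCA directions).
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
B = Vt[:3]                                     # (3, 8) projection basis

to_embedding   = lambda x: (x - mu) @ B.T      # forward: feature -> embedding
from_embedding = lambda z: z @ B + mu          # inverse: embedding -> feature (lossy)

x = X[0]
z = to_embedding(x)                            # 8-D sample -> 3-D embedding vector
x_rec = from_embedding(z)                      # mapped back to the feature space
```

Any encoder with a usable inverse (an autoencoder's decoder, for example) would fill the same role; the linear case just keeps the round trip easy to inspect.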

How does the acquired explainable model differ from the original target data analysis model?

The acquired explainable model differs from the original target data analysis model in that it is trained based on the acquired data and is designed to provide a more interpretable and transparent explanation of the model's decisions.


Original Abstract Submitted

Disclosed is a method for explaining a model. The method comprises: transforming a target input sample from an original feature space to an embedding space; generating a perturbation data set in the embedding space; acquiring a weight of a neighborhood vector in the perturbation data set; after that, transforming the perturbation data set back to the original feature space; acquiring an explainable model of a target data analysis model by training based on the acquired data; and acquiring an explanation result based on the explainable model.