ARTIFICIAL INTELLIGENCE EXPLAINABILITY FOR INTENT CLASSIFICATION
Organization Name
Accenture Global Solutions Limited
Inventor(s)
Debapriya Mukherjee of Kolkata (IN)
Raghavender Surya Upadhyayula of Bengaluru (IN)
Raghavendra Kotala of Bangalore (IN)
This abstract first appeared for US patent application 18150671, titled 'ARTIFICIAL INTELLIGENCE EXPLAINABILITY FOR INTENT CLASSIFICATION'.
Simplified Explanation
The patent application describes an explainability framework for AI systems that uses a surrogate BERT-Siamese model to explain the predictions of an intent classification model.
Key Features and Innovation
- AI explainability system for intent classification
- Surrogate BERT-Siamese model approach
- Trains the surrogate model on sentence similarity between a query and its top matching sentence
- Extracts token/word-level embeddings from attention weights to build explanations (see the sketch after this list)
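The abstract does not spell out the surrogate model's architecture, so the sketch below is a minimal illustration, assuming a shared bert-base-uncased encoder with mean pooling and cosine similarity; the query/match pair is an invented example, not from the patent.

```python
# Minimal sketch of a Siamese BERT sentence-similarity model.
# Assumptions (not specified in the patent abstract): bert-base-uncased,
# mean pooling over tokens, cosine similarity between the two branches.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SiameseBert(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        # A single encoder shared by both branches: the "Siamese" weight tying.
        self.encoder = AutoModel.from_pretrained(model_name)

    def embed(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool the token embeddings, ignoring padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        return (out.last_hidden_state * mask).sum(1) / mask.sum(1)

    def forward(self, a, b):
        # Similarity between the two sentence embeddings.
        return nn.functional.cosine_similarity(self.embed(**a), self.embed(**b))

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SiameseBert()

def inputs(text):
    enc = tokenizer(text, return_tensors="pt")
    return {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}

# Training pair per the abstract: a query whose intent was predicted,
# paired with the top matching sentence for that intent (invented examples).
similarity = model(inputs("reset my account password"),
                   inputs("how do I change my password"))
print(float(similarity))
# During training, a loss such as nn.CosineEmbeddingLoss would pull matched
# pairs together and push mismatched pairs apart.
```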
Potential Applications
This technology can be applied across industries such as healthcare, finance, and customer service to improve the transparency and interpretability of AI systems.
Problems Solved
This technology addresses the lack of transparency and interpretability in AI systems by providing insight into how classification decisions are made.
Benefits
- Enhanced trust in AI systems
- Improved understanding of model predictions
- Easier compliance with AI transparency regulations
Commercial Applications
- AI-powered customer service chatbots
- Financial risk assessment models
- Medical diagnosis support systems
Prior Art
Researchers can explore prior work on AI explainability, BERT models, and Siamese networks to understand how this technology evolved.
Frequently Updated Research
Stay updated on advances in explainability frameworks for AI systems, applications of BERT-Siamese models, and the impact of transparency on AI decision-making.
Questions about AI Explainability
How does the BERT-Siamese model approach improve explainability in AI systems?
It trains a surrogate model on sentence similarity and extracts token/word-level embeddings from the model's attention weights, so each word in a query can be scored and matched against the best matching sentence (see the sketch below).
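The abstract says token/word-level embeddings are extracted from attention weights but not how those weights are aggregated. The sketch below shows one plausible reading, assuming the last layer's attention heads are averaged and the [CLS] row is used to score each query token; bert-base-uncased and the example query are assumptions.

```python
# One plausible way to pull token-level scores out of BERT attention weights.
# Assumptions: bert-base-uncased, averaging the last layer's heads, and
# scoring tokens by the attention they receive from [CLS]; the patent does
# not specify this aggregation.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

enc = tokenizer("reset my account password", return_tensors="pt")
with torch.no_grad():
    out = encoder(**enc, output_attentions=True)

# out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
last_layer = out.attentions[-1].mean(dim=1)[0]   # average heads -> (seq, seq)
scores = last_layer[0]                           # attention from [CLS] to each token

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, s in sorted(zip(tokens, scores.tolist()), key=lambda p: -p[1]):
    print(f"{tok:>12s}  {s:.3f}")                # highest-weight tokens first
```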
What are the potential implications of this technology in industries like healthcare and finance?
It can make AI systems more transparent and interpretable, for example in diagnosis-support systems in healthcare and risk-assessment models in finance.
Original Abstract Submitted
Systems and methods for providing an explainability framework for use with AI systems are described. In one example, such an AI explainability system for intent classification uses a surrogate Bert-Siamese model approach. For example, a prediction from an intent classification model is paired with a top matching sentence and used as input to train a Bert-Siamese model for sentence similarity. Using the sentence similarity, the token/word level embedding can be extracted from attention weights of the sentences and correlations between query tokens/words, and the best matching sentences may be used for explanations.
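As one illustration of the query-to-match token correlation step the abstract mentions, the following sketch compares each query token's embedding against every token of the best matching sentence; the highest-correlation word alignments are the kind of signal an explanation would surface. The cosine-based correlation measure and the example sentences are assumptions, not the patent's specified method.

```python
# Illustrative sketch of the query-to-match token correlation step described
# in the abstract. Assumptions: contextual token embeddings from
# bert-base-uncased and cosine similarity as the correlation measure; the
# example sentences are invented.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def token_embeddings(text):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**enc)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return tokens, out.last_hidden_state[0]      # (seq, hidden)

q_toks, q_emb = token_embeddings("reset my account password")
m_toks, m_emb = token_embeddings("how do I change my password")

# Cosine similarity between every query token and every match token.
q_norm = torch.nn.functional.normalize(q_emb, dim=-1)
m_norm = torch.nn.functional.normalize(m_emb, dim=-1)
corr = q_norm @ m_norm.T                         # (query_len, match_len)

# For each query word, report the match word it correlates with most:
# these alignments are what the explanation would surface to the user.
for i, tok in enumerate(q_toks):
    j = corr[i].argmax().item()
    print(f"{tok:>12s} -> {m_toks[j]:<12s} ({corr[i, j].item():.3f})")
```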