18178768. ONTOLOGY-BASED FRAMEWORK FOR INTERPRETABLE FEATURE ENGINEERING simplified abstract (SAP SE)
Organization Name
SAP SE
Inventor(s)
Mohamed Bouadi of La Garenne-Colombes (FR)
Salima Benbernou of Colombes (FR)
Mourad Ouziri of Gennevilliers (FR)
ONTOLOGY-BASED FRAMEWORK FOR INTERPRETABLE FEATURE ENGINEERING - A simplified explanation of the abstract
This abstract first appeared for US patent application 18178768 titled 'ONTOLOGY-BASED FRAMEWORK FOR INTERPRETABLE FEATURE ENGINEERING'.
The patent application describes a system and method that generates features using a learning network, determines the interpretability of those features based on a domain ontology and symbolic rules, selects the interpretable features for model training, evaluates model performance, calculates a reward based on both performance and interpretability, and uses that reward to guide the generation of new features.
- Features are generated using a learning network.
- Interpretability of features is determined using a domain ontology and symbolic rules.
- Interpretable features are selected for model training.
- Model performance is evaluated.
- A reward is calculated based on performance and interpretability.
- New features are generated using the reward.
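The loop described in the steps above can be sketched in Python. This is a minimal, hypothetical illustration: the function names, the toy ontology, the stand-in "learning network", and the reward formula are all assumptions for the sake of the example, not details from the patent.

```python
import random

ONTOLOGY = {"age", "height", "income", "debt"}  # toy domain ontology

def generate_features(n, temperature=1.0):
    # Stand-in for the learning network: emits candidate feature
    # expressions (here drawn from a fixed pool for illustration).
    pool = ["age / height", "income * 12", "log(debt)", "noise_37"]
    return random.sample(pool, k=min(n, len(pool)))

def is_interpretable(feature):
    # Illustrative symbolic rule: every identifier in the feature
    # expression must map to an entity of the domain ontology.
    tokens = feature.replace("(", " ").replace(")", " ").split()
    idents = [t for t in tokens if t.isidentifier() and t != "log"]
    return all(t in ONTOLOGY for t in idents)

def train_and_score(features):
    # Stand-in for model training and evaluation; returns a
    # performance metric in [0, 1].
    return min(1.0, 0.5 + 0.1 * len(features))

def feature_engineering_round(n=3):
    candidates = generate_features(n)
    # Keep only the features determined to be interpretable.
    interpretable = [f for f in candidates if is_interpretable(f)]
    performance = train_and_score(interpretable)
    # Reward blends performance with the fraction of candidates
    # that passed the interpretability check.
    reward = performance * (len(interpretable) / max(len(candidates), 1))
    return interpretable, reward
```

In a full system, the reward would feed back into the learning network (for example, via reinforcement learning) so that the next round of `generate_features` is biased toward interpretable, high-performing candidates.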
Potential Applications: This technology can be applied in various fields such as machine learning, artificial intelligence, data analysis, and predictive modeling.
Problems Solved: This technology addresses the challenge of selecting interpretable features for model training, which can improve the performance and explainability of machine learning models.
Benefits: The system and method described in the patent application can lead to more transparent and accurate machine learning models, enhancing decision-making processes in various industries.
Commercial Applications: This technology could be utilized in industries such as healthcare, finance, marketing, and cybersecurity to improve the accuracy and interpretability of predictive models.
Questions about the technology:
1. How does the system determine the interpretability of features based on a domain ontology and symbolic rules?
2. What are the potential implications of using interpretable features for model training in various industries?
Original Abstract Submitted
Systems and methods include generation of a first plurality of features using a learning network, determination of an interpretability of each of the first plurality of features based on a domain ontology and on symbolic rules associated with entities of the domain ontology, determination of a first set of the first plurality of features which were determined as interpretable, determination of a performance of a model trained using the first set of the plurality of features, determine a reward based on the performance and the interpretability, and generation of a second plurality of features using the learning network based on the reward.
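The abstract's reward step, which combines performance and interpretability, could take many forms. One plausible sketch is a weighted blend; the weighting scheme and function signature below are assumptions for illustration, not specified by the patent.

```python
def reward(performance, interpretability_scores, alpha=0.7):
    """Blend model performance with mean feature interpretability.

    performance: model metric in [0, 1] (e.g., validation accuracy).
    interpretability_scores: per-feature scores in [0, 1].
    alpha: assumed hyperparameter weighting performance vs. interpretability.
    """
    if not interpretability_scores:
        return 0.0  # no interpretable features -> no reward
    mean_interp = sum(interpretability_scores) / len(interpretability_scores)
    return alpha * performance + (1 - alpha) * mean_interp
```

Under this sketch, a model with 0.8 accuracy whose features score 1.0 and 0.5 on interpretability would receive a reward of 0.7 * 0.8 + 0.3 * 0.75 = 0.785.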