International Business Machines Corporation (20240211727). LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK - Simplified Abstract

LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK

Organization Name

International Business Machines Corporation

Inventor(s)

Zhong Fang Yuan of Xi'an (CN)

Tong Liu of Xi'an (CN)

Li Ni Zhang of Beijing (CN)

Wei Ting Hsieh of Tainan (TW)

Huan Meng of Wuhan (CN)

Qi Liang Zhou of Xi'an (CN)

Jia Wei He of Xi'an (CN)

Yi Fan Liu of Beijing (CN)

Lin Ji of Chengdu (CN)

LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240211727, titled 'LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK'.

The patent application describes a computer-implemented process for training a neural network with multiple transform layers; a code sketch of the per-layer flow follows the list below.

  • Input data is transformed by each transform layer into output data.
  • A neural-backed decision tree is generated for each transform layer.
  • The process is repeated for all transform layers.
  • A neural-backed decision tree map converts a layer's output data into a list of interpretable words drawn from a generative search domain of facts and evidence.
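
The following is a minimal, hypothetical sketch of this per-layer flow, assuming a small PyTorch network and scikit-learn decision trees as stand-ins: each transform layer maps its input into output activations, and a surrogate decision tree is fit on those activations. The layer sizes, the toy labels, and the use of DecisionTreeClassifier are illustrative assumptions; the patent does not specify how its neural-backed decision trees are constructed.

```python
# Hypothetical sketch: transform input through each layer and generate a
# surrogate decision tree per layer (a stand-in for the patent's
# "neural-backed decision tree"; the exact construction is not given here).
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

# Toy network with a plurality of transform layers (sizes are illustrative).
transform_layers = nn.ModuleList([
    nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)
])

x = torch.randn(256, 32)               # input data: a batch of examples
labels = torch.randint(0, 4, (256,))   # toy labels used to fit the surrogate trees

layer_trees = []
for layer in transform_layers:
    x = layer(x)                       # transform input data into output data
    tree = DecisionTreeClassifier(max_depth=4)
    tree.fit(x.detach().numpy(), labels.numpy())  # generate a tree for this layer
    layer_trees.append(tree)           # repeat for each transform layer
```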

Potential Applications:
  • Natural language processing
  • Data analysis and interpretation
  • Machine learning algorithms

Problems Solved:
  • Enhancing the interpretability of neural network outputs
  • Improving decision-making processes in complex data sets

Benefits:
  • Increased accuracy in data analysis
  • Enhanced understanding of neural network decisions
  • Improved transparency in machine learning models

Commercial Applications: This technology could be used in industries such as finance, healthcare, and marketing for data analysis, pattern recognition, and predictive modeling.

Prior Art: Researchers in the field of machine learning and artificial intelligence have explored methods to improve the interpretability of neural networks, including decision trees and generative models.

Frequently Updated Research: Stay updated on advancements in neural network training techniques, interpretability in machine learning, and applications in various industries.

Questions about Neural Network Training:
  1. How does this technology improve the interpretability of neural network outputs?
  2. What are the potential real-world applications of this patented innovation?


Original Abstract Submitted

A computer-implemented process for training a neural network having a plurality of transform layers includes the following operations. Input data for one transform layer of the plurality of transform layers is transformed by the one transform layer into output data. A neural-backed decision tree is generated for the transform layer. The transforming and the generating are repeated for each of the plurality of transform layers. A neural-backed decision tree map for a particular one of the plurality of transform layers maps output data of the particular one of the plurality of transform layers into a list of interpretable words from a generative search domain of facts and evidence.
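
As a rough illustration of the final mapping step in the abstract, the sketch below maps one transform layer's output data to a short list of interpretable words by nearest-neighbour scoring against a toy vocabulary of "facts and evidence". The vocabulary, the random embeddings, and the dot-product scoring are assumptions made for the example; the patent's neural-backed decision tree map may work quite differently.

```python
# Hypothetical illustration of mapping a transform layer's output data to a
# list of interpretable words from a domain vocabulary (assumed scoring scheme).
import numpy as np

# Assumed "generative search domain of facts and evidence" vocabulary.
vocabulary = ["revenue", "diagnosis", "risk", "trend", "anomaly", "evidence"]
word_embeddings = np.random.randn(len(vocabulary), 32)   # toy word embeddings

def map_output_to_words(layer_output, top_k=3):
    """Return the top_k vocabulary words whose embeddings score highest
    against the given layer output (illustrative similarity only)."""
    scores = word_embeddings @ layer_output              # similarity scores
    top_indices = np.argsort(scores)[::-1][:top_k]
    return [vocabulary[i] for i in top_indices]

layer_output = np.random.randn(32)        # output data of one transform layer
print(map_output_to_words(layer_output))  # e.g. ['trend', 'risk', 'evidence']
```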