18145417. LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK simplified abstract (International Business Machines Corporation)

From WikiPatents
Organization Name

International Business Machines Corporation

Inventor(s)

Zhong Fang Yuan of Xi'an (CN)

Tong Liu of Xi'an (CN)

Li Ni Zhang of Beijing (CN)

Wei Ting Hsieh of Tainan (TW)

Huan Meng of Wuhan (CN)

Qi Liang Zhou of Xi'an (CN)

Jia Wei He of Xi'an (CN)

Yi Fan Liu of Beijing (CN)

Lin Ji of Chengdu (CN)

LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK - A simplified explanation of the abstract

This abstract first appeared for US patent application 18145417, titled 'LOCAL INTERPRETABILITY ARCHITECTURE FOR A NEURAL NETWORK'.

Simplified Explanation

The patent application describes a computer-implemented process for training a neural network with multiple transform layers. Each layer transforms input data into output data, and a neural-backed decision tree is generated for each layer. The output data is mapped into interpretable words from a generative search domain.

  • The process involves transforming input data through multiple layers of a neural network.
  • A neural-backed decision tree is generated for each transform layer.
  • The output data is mapped into interpretable words from a generative search domain.
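The three steps above can be sketched as follows. This is a minimal, illustrative reconstruction, not the patent's actual method: the linear-plus-ReLU layers, the binary labels, and the depth-1 decision stump standing in for each layer's neural-backed decision tree are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(x, W):
    """One 'transform layer': linear map + ReLU (an assumed layer type)."""
    return np.maximum(x @ W, 0.0)

def fit_stump(H, y):
    """Fit a depth-1 decision tree (stump) on layer activations H against
    labels y -- a minimal stand-in for a per-layer neural-backed decision
    tree. Returns (feature index, threshold, error of the best split)."""
    best = (0, 0.0, 1.0)
    for j in range(H.shape[1]):
        for t in H[:, j]:
            pred = (H[:, j] > t).astype(int)
            # Either side of the split may be the positive class.
            err = min(np.mean(pred != y), np.mean((1 - pred) != y))
            if err < best[2]:
                best = (j, float(t), float(err))
    return best

# Toy data: 8 samples, 4 features, binary labels.
X = rng.standard_normal((8, 4))
y = (X[:, 0] > 0).astype(int)

# Three transform layers; one surrogate tree is fitted per layer.
layers = [rng.standard_normal((4, 4)) * 0.5 for _ in range(3)]
H = X
for i, W in enumerate(layers):
    H = transform(H, W)          # step 1: transform input into output
    j, t, err = fit_stump(H, y)  # step 2: decision tree for this layer
    print(f"layer {i}: split on feature {j} at {t:.2f} (err {err:.2f})")
```

A real implementation would use deeper trees (or neural-backed trees proper) and would add the word-mapping step, which the abstract describes only for the final output.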

Potential Applications

This technology can be applied in various fields such as natural language processing, machine learning, and data analysis. It can be used to improve the interpretability of neural networks and enhance decision-making processes.

Problems Solved

This technology addresses the challenge of interpreting the output of neural networks, which can often be complex and difficult to understand. By mapping the output data into interpretable words, it helps users make sense of the information generated by the neural network.
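As an illustration of that mapping step, one simple realization maps a layer's output vector to its nearest entries in a vocabulary of reference vectors. The vocabulary, vectors, and distance metric below are placeholders for the patent's "generative search domain of facts and evidence", which the abstract does not specify:

```python
import numpy as np

# Hypothetical vocabulary: interpretable words paired with reference
# vectors in the layer's output space (a stand-in for the generative
# search domain of facts and evidence).
VOCAB = {
    "edge":    np.array([1.0, 0.0, 0.0]),
    "texture": np.array([0.0, 1.0, 0.0]),
    "shape":   np.array([0.0, 0.0, 1.0]),
}

def to_words(output, vocab=VOCAB, k=2):
    """Map a layer's output vector to its k nearest interpretable words."""
    dists = {w: float(np.linalg.norm(output - v)) for w, v in vocab.items()}
    return sorted(dists, key=dists.get)[:k]

print(to_words(np.array([0.9, 0.2, 0.1])))  # → ['edge', 'texture']
```

The list of words returned for each layer is what makes the network's intermediate state readable to a user.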

Benefits

  • Improved interpretability of neural networks
  • Enhanced decision-making processes
  • Better understanding of complex data

Commercial Applications

Title: Enhanced Neural Network Interpretability for Improved Decision Making

This technology can be used in industries such as healthcare, finance, and cybersecurity to analyze data, make predictions, and improve decision-making processes. It can also be integrated into existing machine learning systems to enhance their interpretability and performance.

Prior Art

No specific prior art is cited here; further research into existing work on neural network interpretability and decision-making processes would be needed to identify related technologies and methodologies.

Frequently Updated Research

Researchers are constantly working on improving the interpretability of neural networks and developing new techniques to enhance decision-making processes. Stay updated on the latest advancements in this field to leverage the benefits of this technology.

Questions about Neural Network Interpretability

1. How does this technology improve the interpretability of neural networks?

  - This technology enhances interpretability by mapping output data into interpretable words, making it easier for users to understand the information generated by the neural network.

2. What are the potential applications of this technology in different industries?

  - This technology can be applied in various fields such as healthcare, finance, and cybersecurity to analyze data, make predictions, and improve decision-making processes.


Original Abstract Submitted

A computer-implemented process for training a neural network having a plurality of transform layers includes the following operations. Input data for one transform layer of the plurality of transform layers is transformed by the one transform layer into output data. A neural-backed decision tree is generated for the one transform layer. The transforming and the generating are repeated for each of the plurality of transform layers. A neural-backed decision tree map for a particular one of the plurality of transform layers maps output data of the particular one of the plurality of transform layers into a list of interpretable words from a generative search domain of facts and evidence.