18579451. CAUSAL DISCOVERY AND MISSING VALUE IMPUTATION simplified abstract (Microsoft Technology Licensing, LLC)

From WikiPatents

CAUSAL DISCOVERY AND MISSING VALUE IMPUTATION

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Cheng Zhang of Cambridge (GB)

Miltiadis Allamanis of Cambridge (GB)

Simon Loftus Peyton Jones of Cambridge (GB)

Angus James Lamb of East Yorkshire (GB)

Pablo Morales-Álvarez of Granada (ES)

CAUSAL DISCOVERY AND MISSING VALUE IMPUTATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18579451 titled 'CAUSAL DISCOVERY AND MISSING VALUE IMPUTATION'.

The abstract describes a computer-implemented method that encodes the values of an input vector into latent vectors using a first neural network, determines an output vector by passing those latent vectors through a second neural network that includes a graph neural network, and minimizes a loss function by tuning the parameters of both neural networks and the edge probabilities of the graph.

  • Input vector values are encoded into latent vectors using a first neural network.
  • The latent vectors are input into a second neural network with a graph neural network to determine an output vector.
  • The loss function is minimized by tuning the edge probabilities of the graph and the parameters of the two neural networks.
  • The loss function includes a measure of difference between the input vector and the output vector, as well as a function of the graph.
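
The steps above can be made concrete with a small sketch. The following is a hypothetical, minimal PyTorch illustration of the general idea only, not Microsoft's claimed implementation: the module names, dimensions, the sigmoid parameterization of edge probabilities, and the sparsity-style graph penalty are all assumptions made for illustration.

```python
# Hypothetical sketch (assumptions, not the patented method): a per-variable encoder,
# one message-passing step weighted by learnable edge probabilities, a decoder,
# and a loss combining reconstruction error with a penalty on the graph.
import torch
import torch.nn as nn

class CausalAutoencoder(nn.Module):
    def __init__(self, num_vars: int, latent_dim: int = 8):
        super().__init__()
        # First neural network: encodes each variable's value into a latent vector.
        self.encoder = nn.Sequential(nn.Linear(1, latent_dim), nn.ReLU(),
                                     nn.Linear(latent_dim, latent_dim))
        # Learnable logits for the edge probabilities of the graph over variables.
        self.edge_logits = nn.Parameter(torch.zeros(num_vars, num_vars))
        # Second neural network: graph-weighted message passing plus per-variable decoding.
        self.message = nn.Linear(latent_dim, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                     nn.Linear(latent_dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_vars) -> latents z: (batch, num_vars, latent_dim)
        z = self.encoder(x.unsqueeze(-1))
        # Edge probabilities form a soft adjacency matrix that weights messages.
        adj = torch.sigmoid(self.edge_logits)                    # (num_vars, num_vars)
        messages = torch.einsum("ij,bjd->bid", adj, self.message(z))
        # Decode an output value for each variable from its aggregated messages.
        return self.decoder(z + messages).squeeze(-1)            # (batch, num_vars)

    def loss(self, x: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
        x_hat = self(x)
        recon = ((x - x_hat) ** 2).mean()                        # input/output difference
        graph_term = torch.sigmoid(self.edge_logits).sum()       # assumed sparsity penalty
        return recon + lam * graph_term

# Usage: tune edge probabilities and network parameters jointly by minimizing the loss.
model = CausalAutoencoder(num_vars=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 5)
for _ in range(100):
    optimizer.zero_grad()
    model.loss(x).backward()
    optimizer.step()
```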

Potential Applications
  • Predictive modeling
  • Data analysis
  • Pattern recognition

Problems Solved
  • Efficient encoding and decoding of input vectors
  • Optimization of neural network parameters and graph edge probabilities

Benefits
  • Improved accuracy in predicting output vectors
  • Enhanced understanding of causal relationships between variables
  • Efficient data processing and analysis

Commercial Applications: Predictive analytics software for businesses to optimize decision-making processes based on data analysis.

Questions about the technology
  1. How does this method improve upon traditional neural network approaches?
  2. What are the potential limitations of using a graph neural network in this context?

Frequently Updated Research: Stay updated on advancements in graph neural networks and their applications in predictive modeling and data analysis.


Original Abstract Submitted

A computer-implemented method comprising: receiving an input vector comprising values of variables; using a first neural network to encode the values of the variables of the input vector into a plurality of latent vectors; determining an output vector by inputting the plurality of latent vectors into a second neural network comprising a graph neural network, wherein the graph neural network is parametrized by a graph comprising edge probabilities indicating causal relationships between the variables; and minimising a loss function by tuning the edge probabilities of the graph, at least one parameter of the first neural network and at least one parameter of the second neural network, wherein the loss function comprises a function of the graph and a measure of difference between the input vector and the output vector
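
Read literally, the claimed loss can be written as below. The notation is ours, and the concrete choices of the difference measure d and the graph function f (for example, a squared error and a sparsity or acyclicity penalty with a hypothetical weight λ) are not specified in the abstract:

```latex
\mathcal{L}(\theta_1, \theta_2, G) \;=\; d\!\left(\mathbf{x}, \hat{\mathbf{x}}\right) \;+\; \lambda\, f(G),
\qquad
\hat{\mathbf{x}} \;=\; \mathrm{NN}_2\!\big(\mathrm{GNN}_G(\mathrm{NN}_1(\mathbf{x}))\big)
```

Here NN1 is the first (encoder) network, NN2 is the second network containing the graph neural network parametrized by the graph G, and the loss is minimized over the network parameters θ1, θ2 and the edge probabilities of G.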