18343173. REDUCING DATA COMMUNICATIONS IN DISTRIBUTED INFERENCE SCHEMES simplified abstract (Western Digital Technologies, Inc.)

From WikiPatents

Organization Name

Western Digital Technologies, Inc.

Inventor(s)

Minghai Qin of Milpitas CA (US)

Jaco Hofmann of Santa Clara CA (US)

Chao Sun of San Jose CA (US)

Qingbo Wang of Irvine CA (US)

Dejan Vucinic of San Jose CA (US)

REDUCING DATA COMMUNICATIONS IN DISTRIBUTED INFERENCE SCHEMES - A simplified explanation of the abstract

This abstract first appeared for US patent application 18343173 titled 'REDUCING DATA COMMUNICATIONS IN DISTRIBUTED INFERENCE SCHEMES'.

Simplified Explanation

The patent application describes methods and apparatus for processing data in a distributed inference scheme based on sparse inputs. The system involves generating sparsified inputs for different nodes in a neural network, transmitting these inputs between nodes, and combining them to generate an inference.

  • The method involves receiving an input at a first node, generating a sparsified input for a second node based on the set of features on which the second node's processing depends (identified via a weight mask), transmitting that sparsified input to the second node, receiving a sparsified input from the second node in return, and combining the received input with the local input to produce the first node's output.
  • The neural network is configured to generate an inference based on processing the outputs of the first and second nodes.
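The exchange described above can be sketched in a few lines. This is an illustrative toy, not the patented implementation: the mask values, feature count, and the stand-in `tanh` "processing" step are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FEATURES = 8

# Hypothetical per-node masks: 1 where the *other* node's processing
# depends on a feature, 0 elsewhere. Only masked features are worth sending.
mask_for_node2 = np.array([1, 1, 0, 0, 1, 0, 0, 0])
mask_for_node1 = np.array([0, 0, 1, 1, 0, 0, 1, 1])

def sparsify(x, mask):
    """Zero out features the receiving node does not depend on."""
    return x * mask

x1 = rng.normal(size=FEATURES)   # input received at node 1
x2 = rng.normal(size=FEATURES)   # input received at node 2

# Node 1 -> node 2 and node 2 -> node 1: only needed features are non-zero.
sent_to_node2 = sparsify(x1, mask_for_node2)
sent_to_node1 = sparsify(x2, mask_for_node1)

# Node 1 combines its local input with the sparsified input it received,
# then processes the combination (a stand-in nonlinearity here).
combined_at_node1 = x1 + sent_to_node1
out1 = np.tanh(combined_at_node1)

print("nonzero features sent to node 2:",
      int(np.count_nonzero(sent_to_node2)), "of", FEATURES)
```

Only 3 of the 8 feature values carry information in the message to node 2, which is the communication saving the scheme targets.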

Potential Applications

This technology could be applied in various fields such as:

  • Distributed computing
  • Machine learning
  • Artificial intelligence

Problems Solved

This technology helps in:

  • Efficient processing of data in a distributed system
  • Optimizing neural network performance
  • Handling sparse inputs effectively

Benefits

The benefits of this technology include:

  • Improved inference accuracy
  • Reduced computational complexity
  • Enhanced scalability of neural networks

Potential Commercial Applications

This technology could be valuable in industries such as:

  • Healthcare for medical diagnosis
  • Finance for fraud detection
  • Autonomous vehicles for real-time decision making

Possible Prior Art

One possible prior art could be the use of distributed computing techniques in neural networks to improve processing efficiency. Another could be the optimization of sparse inputs in machine learning algorithms.

What are the specific features of the weight mask used in this method?

The specific features of the weight mask include non-zero values for weights associated with features upon which processing by the second node depends, and zeroed values for weights associated with other features.
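One plausible way to derive such a mask, assuming (as the answer above states) that a feature is needed only if the receiving node has a non-zero weight attached to it, is to inspect the columns of the second node's weight matrix. The matrix values here are made up for illustration.

```python
import numpy as np

# Hypothetical weights of the second node's first layer:
# rows = units, columns = input features.
# An all-zero column means the node never uses that feature.
W2 = np.array([[0.5, 0.0, 0.0,  1.2],
               [0.0, 0.0, 0.0, -0.7],
               [0.3, 0.0, 0.0,  0.0]])

# Feature mask: 1 where any unit of the second node depends on the feature,
# 0 for features whose weights are all zeroed.
weight_mask = (np.abs(W2).sum(axis=0) > 0).astype(int)
print(weight_mask)  # -> [1 0 0 1]: features 1 and 2 need not be transmitted
```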

How does this method improve the efficiency of processing data in a distributed inference scheme?

This method improves efficiency by generating sparsified inputs for different nodes based on relevant features, reducing the amount of data transmitted between nodes and optimizing the overall processing of the neural network.
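A back-of-envelope sketch of that saving, assuming the sparsified input is shipped as (index, value) pairs instead of a dense vector. The sizes and the 1-in-8 dependency ratio are illustrative figures, not from the application.

```python
# Communication cost of one node-to-node message.
FEATURES = 1024          # dense input vector length
NEEDED = 128             # features the receiving node depends on
BYTES_PER_VALUE = 4      # float32 payload
BYTES_PER_INDEX = 4      # int32 feature index

dense_bytes = FEATURES * BYTES_PER_VALUE
sparse_bytes = NEEDED * (BYTES_PER_VALUE + BYTES_PER_INDEX)
print(dense_bytes, "bytes dense vs", sparse_bytes, "bytes sparsified")
```

Under these assumptions each message shrinks from 4096 to 1024 bytes, a 4x reduction that grows as the dependency set gets sparser.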


Original Abstract Submitted

Methods and apparatus for processing data in a distributed inference scheme based on sparse inputs. An example method includes receiving an input at a first node. A first sparsified input is generated for a second node based on a set of features associated with the second node, which are identified based on a weight mask having non-zero values for weights associated with features upon which processing by the second node depends and zeroed values for weights associated with other features. The first sparsified input is transmitted to the second node for generating an output of the second node. A second sparsified input is received from the second node and combined into a combined input. The combined input is processed into an output of the first node. The neural network is configured to generate an inference based on processing the outputs of the first node and the second node.