17900522. Method for Optimizing Neural Networks simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)


Method for Optimizing Neural Networks

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Weiran Deng of Woodland Hills, CA (US)

Method for Optimizing Neural Networks - A simplified explanation of the abstract

This abstract first appeared for US patent application 17900522 titled 'Method for Optimizing Neural Networks'.

Simplified Explanation

The patent application describes a method for training a deep neural network (DNN) model with high sparsity in the weights. Here is a simplified explanation of the abstract:

  • The method involves using a DNN model with multiple layers and nodes.
  • Each weight in the model corresponds to a node in the network.
  • A distribution function is used to sample a change in weight for each weight.
  • Each weight is then updated by the sampled change multiplied by the sign of that weight.
  • This process of sampling and updating is iterated to train the DNN model (a minimal sketch follows this list).
  • After training, the weights in the model exhibit a high rate of sparsity.
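The abstract does not specify the distribution function, the training framework, or how weights reach exactly zero, so the following is only a minimal NumPy sketch of the sample-and-update loop described above. The negated half-normal change, the zero-clamping threshold, and the function name sign_update_step are illustrative assumptions, not details from the application.

```python
import numpy as np

def sign_update_step(weights, rng, scale=0.01):
    """One illustrative iteration: sample a change for every weight from a
    distribution function, then update each weight by that change multiplied
    by the sign of the weight."""
    # Assumed distribution: a negated half-normal, so each step shrinks weight magnitudes.
    change = -np.abs(rng.normal(0.0, scale, size=weights.shape))
    return weights + change * np.sign(weights)

# Toy training loop for a single layer's weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=(256, 256))
for _ in range(100):
    w = sign_update_step(w, rng)
    # Assumption: clamp near-zero weights to exactly zero so sparsity can be measured.
    w[np.abs(w) < 1e-2] = 0.0

print(f"fraction of zero weights after training: {np.mean(w == 0.0):.2%}")
```

Because the sampled change is applied along the sign of each weight, repeated iterations drive many weight magnitudes toward zero, which is how the high rate of sparsity emerges in this sketch.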

Potential applications of this technology:

  • This method can be applied in various fields where deep neural networks are used, such as image recognition, natural language processing, and speech recognition.
  • It can help improve the efficiency and speed of training deep neural networks by reducing the number of non-zero weights.

Problems solved by this technology:

  • Deep neural networks often have a large number of weights, which can lead to high computational and memory requirements during training.
  • The high sparsity achieved through this method helps address these issues by reducing the number of non-zero weights, resulting in more efficient training.

Benefits of this technology:

  • The method allows for training deep neural networks with high sparsity in the weights, leading to more efficient models.
  • The reduced number of non-zero weights can result in faster inference and lower memory requirements during deployment (a rough storage illustration follows this list).
  • The approach can potentially lead to improved performance in terms of accuracy and generalization of the trained models.
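The abstract does not quantify these memory or speed benefits. As a rough illustration only, the snippet below assumes a hypothetical layer whose weights are 90% zero after training and compares dense storage against a compressed sparse row (CSR) representation.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical 1024x1024 layer with 90% of its weights pruned to exactly zero.
rng = np.random.default_rng(0)
dense = rng.normal(size=(1024, 1024))
dense[rng.random(dense.shape) < 0.90] = 0.0

sparse = csr_matrix(dense)
dense_mb = dense.nbytes / 1e6
sparse_mb = (sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes) / 1e6
print(f"non-zero weights: {sparse.nnz} of {dense.size}")
print(f"dense storage: {dense_mb:.1f} MB, CSR storage: {sparse_mb:.1f} MB")
```

Actual inference speedups additionally depend on hardware and library support for sparse operations.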


Original Abstract Submitted

A method includes: providing a deep neural networks (DNN) model comprising a plurality of layers, each layer of the plurality of layers includes a plurality of nodes; sampling a change of a weight for each of a plurality of weights based on a distribution function, each weight of the plurality of weights corresponds to each node of the plurality of nodes; updating the weight with the change of the weight multiplied by a sign of the weight; and training the DNN model by iterating the steps of sampling the change and updating the weight. The plurality of weights has a high rate of sparsity after the training.