18148418. SYSTEMS AND METHODS FOR NEURAL ARCHITECTURE SEARCH simplified abstract (Samsung Electronics Co., Ltd.)


SYSTEMS AND METHODS FOR NEURAL ARCHITECTURE SEARCH

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Mostafa El-khamy of San Diego, CA (US)

Yanlin Zhou of San Diego, CA (US)

SYSTEMS AND METHODS FOR NEURAL ARCHITECTURE SEARCH - A simplified explanation of the abstract

This abstract first appeared for US patent application 18148418 titled 'SYSTEMS AND METHODS FOR NEURAL ARCHITECTURE SEARCH'.

Simplified Explanation

The abstract describes a system and method for neural architecture search in which a neural network processes a training data set and the training loss is computed using a smooth maximum unit regularization value. The network's multiplicative and parametric connection weights are then adjusted in a direction that reduces the training loss.

  • The training data set is processed with the neural network during the first epoch of training.
  • The training loss is computed using a smooth maximum unit regularization value (a minimal sketch of a smooth maximum follows this list).
  • The multiplicative and parametric connection weights of the neural network are adjusted in a direction that reduces the training loss.
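Neither the simplified explanation nor the abstract defines the smooth maximum unit itself. As a rough illustration only, the sketch below uses one common erf-based smooth-maximum formulation from the activation-function literature; the exact form used in the patent may differ, and the function name and sharpness parameter `mu` are assumptions made for this example.

```python
import math


def smooth_max(a: float, b: float, mu: float = 10.0) -> float:
    """Differentiable approximation of max(a, b).

    Assumed erf-based form; mu controls how sharply it approaches the hard max.
    """
    return 0.5 * ((a + b) + (b - a) * math.erf(mu * (b - a)))


if __name__ == "__main__":
    a, b = 0.3, 0.7
    for mu in (1.0, 10.0, 100.0):
        # Values tend toward max(a, b) = 0.7 as mu grows.
        print(f"mu={mu:g}: smooth_max={smooth_max(a, b, mu):.4f}")
```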

Potential applications of this technology:

  • Automated machine learning model development
  • Optimization of neural network architectures
  • Improving performance of deep learning models

Problems solved by this technology:

  • Manual tuning of neural network architectures
  • Time-consuming process of architecture search
  • Limited efficiency and accuracy of neural network training

Benefits of this technology:

  • Faster development of machine learning models
  • Enhanced performance of neural networks
  • Reduction in human effort required for architecture search


Original Abstract Submitted

A system and a method are disclosed for neural architecture search. In some embodiments, the method includes: processing a training data set with a neural network during a first epoch of training of the neural network; computing a training loss using a smooth maximum unit regularization value; and adjusting a plurality of multiplicative connection weights and a plurality of parametric connection weights of the neural network in a direction that reduces the training loss.
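To show how the pieces of the abstract fit together, here is a minimal, hedged sketch of one training step, assuming a DARTS-style search space implemented in PyTorch. The names (MixedOp, training_step), the softmax over architecture weights, and the pairwise smooth-maximum regularizer are all illustrative assumptions; the patent does not disclose these implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def smooth_max(a: torch.Tensor, b: torch.Tensor, mu: float = 10.0) -> torch.Tensor:
    # Assumed erf-based smooth maximum (see the earlier sketch); differentiable in a and b.
    return 0.5 * ((a + b) + (b - a) * torch.erf(mu * (b - a)))


class MixedOp(nn.Module):
    """Candidate operations mixed by multiplicative connection weights (alpha)."""

    def __init__(self, dim: int, num_ops: int = 3):
        super().__init__()
        # Parametric connection weights: the weights inside each candidate operation.
        self.ops = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_ops)])
        # Multiplicative connection weights: one scalar per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(num_ops))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.alpha, dim=0)
        return sum(w_i * op(x) for w_i, op in zip(w, self.ops))


def training_step(model: MixedOp, x, y, optimizer, reg_strength: float = 1e-3):
    optimizer.zero_grad()
    task_loss = F.mse_loss(model(x), y)
    # Illustrative "smooth maximum unit regularization value": penalize the
    # smooth maximum over pairs of mixing weights to favor a dominant operation.
    w = torch.softmax(model.alpha, dim=0)
    reg = sum(smooth_max(w[i], w[j])
              for i in range(len(w)) for j in range(i + 1, len(w)))
    loss = task_loss + reg_strength * reg
    loss.backward()    # gradients with respect to both weight groups
    optimizer.step()   # adjust weights in a direction that reduces the training loss
    return loss.item()


if __name__ == "__main__":
    model = MixedOp(dim=16)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.randn(8, 16), torch.randn(8, 16)
    print(training_step(model, x, y, opt))  # one step during the first epoch of training
```

In this sketch a single optimizer updates both weight groups in the same step; the abstract leaves open whether the multiplicative and parametric weights are updated jointly or alternately.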