18279754. PARALLELIZING MOMENT-BASED OPTIMIZATIONS WITH BLOCKWISE MODEL-UPDATE FILTERING simplified abstract (Microsoft Technology Licensing, LLC)

From WikiPatents

PARALLELIZING MOMENT-BASED OPTIMIZATIONS WITH BLOCKWISE MODEL-UPDATE FILTERING

Organization Name

Microsoft Technology Licensing, LLC

Inventor(s)

Kai Chen of Beijing (CN)

Qiang Huo of Beijing (CN)

Haisong Ding of Beijing (CN)

PARALLELIZING MOMENT-BASED OPTIMIZATIONS WITH BLOCKWISE MODEL-UPDATE FILTERING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18279754 titled 'PARALLELIZING MOMENT-BASED OPTIMIZATIONS WITH BLOCKWISE MODEL-UPDATE FILTERING'.

Simplified Explanation

The patent application describes a method for parallelizing moment-based optimization with blockwise model-update filtering. For each training cycle, a master node shares a global model parameter and a global moment parameter with worker nodes, which perform moment-based optimizations in parallel; the master then uses the local model and moment parameters returned by the workers to update the global parameters. This process yields better and faster training convergence.

  • Efficient parallelization of moment-based optimization
  • Blockwise model-update filtering
  • Master node shares global parameters with worker nodes
  • Local parameters from workers used to update global parameters
  • Improved training convergence
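The master/worker exchange summarized above can be sketched as a single training cycle. This is a minimal illustration, not the patented implementation: the heavy-ball optimizer, the plain averaging of local parameters, and all function names are assumptions made for the sketch.

```python
def local_momentum_step(params, moment, grads, lr=0.1, beta=0.9):
    """Hypothetical worker-side moment-based update (heavy-ball momentum):
    the worker refines the global parameters it received from the master."""
    new_moment = [beta * m + g for m, g in zip(moment, grads)]
    new_params = [p - lr * m for p, m in zip(params, new_moment)]
    return new_params, new_moment

def average(vectors):
    """Element-wise mean of a list of equal-length parameter vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def master_cycle(global_params, global_moment, per_worker_grads):
    """One training cycle: broadcast the global model and moment parameters,
    let each worker optimize in parallel (simulated sequentially here),
    then aggregate the local results back into new global parameters."""
    local_results = [local_momentum_step(global_params, global_moment, g)
                     for g in per_worker_grads]
    new_params = average([p for p, _ in local_results])
    new_moment = average([m for _, m in local_results])
    return new_params, new_moment
```

Running `master_cycle([1.0, 2.0], [0.0, 0.0], [[0.5, 0.5], [1.5, -0.5]])` averages the two workers' locally updated models and moments into the next cycle's global parameters.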

Potential Applications

This technology can be applied in various fields such as machine learning, artificial intelligence, data analytics, and optimization algorithms.

Problems Solved

This technology addresses the challenges of optimizing large-scale models efficiently and achieving faster convergence during training processes.

Benefits

  • Improved efficiency in optimizing large-scale models
  • Faster convergence during training processes
  • Enhanced performance of machine learning algorithms

Commercial Applications

Optimizing large-scale machine learning models in industries such as finance, healthcare, e-commerce, and autonomous vehicles can benefit from this technology, leading to improved accuracy and efficiency in data analysis and decision-making processes.

Prior Art

Prior research in parallel optimization algorithms and distributed computing can provide insights into similar approaches to improving training processes in machine learning.

Frequently Updated Research

Stay updated on advancements in parallel optimization algorithms, distributed computing, and machine learning techniques to enhance the efficiency and performance of training processes.

Questions about Parallelizing Moment-Based Optimization

How does parallelizing moment-based optimization improve training convergence?

Distributing moment-based optimization across worker nodes lets each training cycle process more data in parallel, so the global model and moment parameters are refreshed with richer local information per cycle, leading to quicker convergence.

What are the key benefits of blockwise model-update filtering in this context?

Blockwise model-update filtering helps streamline the updating process of global parameters based on local information, enhancing the overall efficiency of the training process.
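The filtering step described above can be illustrated with a BMUF-style update rule, in which the raw block-level model change is smoothed by a block momentum term before being applied to the global parameter. The coefficient names and values below are illustrative assumptions, not values from the patent.

```python
def filtered_update(global_param, avg_local_param, prev_update,
                    block_momentum=0.9, block_lr=1.0):
    """Blockwise model-update filtering for one scalar parameter:
    low-pass filter the block-level change before applying it globally."""
    raw_change = avg_local_param - global_param            # change proposed by this block
    update = block_momentum * prev_update + block_lr * raw_change  # filtered update
    new_global = global_param + update                     # next global parameter
    return new_global, update
```

For example, `filtered_update(1.0, 1.2, 0.05)` combines the new block change (0.2) with the carried-over update (0.05), damping cycle-to-cycle noise while preserving the overall direction of progress.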


Original Abstract Submitted

In embodiments of the present disclosure, there is provided a solution for parallelizing moment-based optimization with blockwise model-update filtering. A master node provides a global model parameter and a global moment parameter to a plurality of worker nodes for a training cycle, and receives, from the worker nodes, a plurality of local model parameters and a plurality of local moment parameters generated by the worker nodes performing parallel moment-based optimizations. The global model parameter and the global moment parameter are updated based on the corresponding received local parameters and model update information for the training cycle. The updated global model parameter and the updated global moment parameter are then provided to the worker nodes for performing moment-based optimizations in parallel for a succeeding training cycle. Embodiments of the present disclosure can achieve better and faster convergence of the training process.