18075757. DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATES(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S) simplified abstract (Google LLC)


Organization Name

Google LLC

Inventor(s)

Andrew Hard of Menlo Park, CA (US)

Sean Augenstein of San Mateo, CA (US)

Rohan Anil of Lafayette, CA (US)

Rajiv Mathews of Sunnyvale, CA (US)

Lara Mcconnaughey of San Francisco, CA (US)

Ehsan Amid of Mountain View, CA (US)

Antonious Girgis of Los Angeles, CA (US)

DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATES(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S) - A simplified explanation of the abstract

This abstract first appeared for US patent application 18075757 titled 'DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATES(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S)'.

Simplified Explanation

The abstract describes a method for updating a global machine learning (ML) model through decentralized learning, in which remote processors transmit primary weights to computing devices that each generate an update to the model; the key point is that updates arriving late from straggler devices are still put to use.

  • Remote processors transmit primary weights for the primary version of the global ML model to a population of computing devices.
  • Each computing device generates a corresponding update for the primary version of the global ML model.
  • The primary version of the global ML model is updated based on the updates received during the round of decentralized learning.
  • Updates that arrive after the round has closed (e.g., from straggler devices) are not discarded; techniques such as FARe-DUST and FeAST on MSG incorporate these stale updates into the final version of the global ML model (see the sketch after this list).
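To make the round structure concrete, here is a minimal Python sketch of one decentralized learning round. It is an illustration under stated assumptions, not the patented method: the model is a plain NumPy weight vector, local training is a stand-in one-liner, and every name in it (local_update, run_round, the deadline and arrival times) is hypothetical.

```python
# Minimal sketch of one decentralized learning round. NOT the patented
# method: the model is a plain weight vector, local training is a stand-in,
# and every name here (local_update, run_round, deadline) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def local_update(primary_weights, device_data):
    # Stand-in for real on-device training: step toward the device's data mean.
    return 0.1 * (device_data.mean(axis=0) - primary_weights)

def run_round(primary_weights, device_datasets, arrival_times, deadline):
    """Average updates that arrive by the deadline; buffer stale ones."""
    on_time, stale_buffer = [], []
    for data, arrival in zip(device_datasets, arrival_times):
        update = local_update(primary_weights, data)
        (on_time if arrival <= deadline else stale_buffer).append(update)
    if on_time:
        # Update the primary version with the in-round updates only.
        primary_weights = primary_weights + np.mean(on_time, axis=0)
    return primary_weights, stale_buffer

weights = np.zeros(4)
datasets = [rng.normal(loc=i, size=(32, 4)) for i in range(5)]
arrivals = [0.2, 0.5, 0.9, 1.7, 2.4]  # the last two devices are stragglers
weights, stale = run_round(weights, datasets, arrivals, deadline=1.0)
print(weights, len(stale))  # stale straggler updates are kept for later
```

The only point of the sketch is the bookkeeping: updates that arrive by the deadline move the primary model immediately, while straggler updates are buffered rather than dropped.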

Potential Applications

This technology can be applied in fields such as healthcare, finance, and e-commerce, where machine learning models can be improved through decentralized learning.

Problems Solved

1. Efficiently updating a global machine learning model with contributions from multiple computing devices.
2. Ensuring that late (stale) updates from straggler devices are still incorporated into the final version of the model.

Benefits

1. Improved accuracy and performance of the global machine learning model.
2. Scalability in updating the model with a large number of computing devices.
3. Enhanced collaboration and knowledge sharing among remote processors and computing devices.

Potential Commercial Applications

Optimizing advertising algorithms, enhancing recommendation systems, and improving fraud detection in financial transactions are potential commercial applications of this technology.

Possible Prior Art

One possible prior art in this field is the use of federated learning techniques for updating machine learning models across distributed devices.

Unanswered Questions

How does this technology ensure data privacy and security during decentralized learning?

This article does not address the specific mechanisms or protocols used to protect sensitive data during the decentralized learning process.

What are the computational requirements for implementing this decentralized learning approach on a large scale?

The article does not provide information on the computational resources needed to support decentralized learning with a significant number of computing devices.


Original Abstract Submitted

During a round of decentralized learning for updating of a global machine learning (ML) model, remote processor(s) of a remote system may transmit, to a population of computing devices, primary weights for a primary version of the global ML model, and cause each of the computing devices to generate a corresponding update for the primary version of the global ML model. Further, the remote processor(s) may cause the primary version of the global ML model to be updated based on the corresponding updates that are received during the round of decentralized learning. However, the remote processor(s) may receive other corresponding updates subsequent to the round of decentralized learning. Accordingly, various techniques described herein (e.g., FARe-DUST, FeAST on MSG, and/or other techniques) enable the other corresponding updates to be utilized in achieving a final version of the global ML model.
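The abstract names FARe-DUST and FeAST on MSG but does not define them, so the following Python sketch shows only one generic way to fold buffered stale updates into a final model, with each contribution discounted by how many rounds late it arrived. The function name incorporate_stale, the alpha parameter, and the exponential-decay rule are all assumptions for illustration, not the patented techniques.

```python
# Hedged illustration: a generic staleness-discounted merge of late updates.
# This is NOT FARe-DUST or FeAST on MSG (the abstract does not define them);
# incorporate_stale, alpha, and the decay rule are hypothetical choices.
import numpy as np

def incorporate_stale(final_weights, stale_updates, staleness_rounds, alpha=0.5):
    """Fold buffered straggler updates into the model, discounted by age."""
    for update, staleness in zip(stale_updates, staleness_rounds):
        # An update that is k rounds late is scaled by alpha ** k, so older
        # contributions count for less but are still not thrown away.
        final_weights = final_weights + (alpha ** staleness) * update
    return final_weights

weights = np.ones(4)                        # model after the final round
stale = [np.full(4, 0.2), np.full(4, 0.4)]  # straggler updates, 1 and 2 rounds late
final = incorporate_stale(weights, stale, staleness_rounds=[1, 2])
print(final)  # -> [1.2 1.2 1.2 1.2]: 0.5*0.2 + 0.25*0.4 = 0.2 added per weight
```

The decay keeps a straggler's contribution bounded even when its update was computed against a much older primary version of the model.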