Google LLC (20240095582). DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATE(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S) simplified abstract

DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATE(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S)

Organization Name

Google LLC

Inventor(s)

Andrew Hard of Menlo Park CA (US)

Sean Augenstein of San Mateo CA (US)

Rohan Anil of Lafayette CA (US)

Rajiv Mathews of Sunnyvale CA (US)

Lara McConnaughey of San Francisco CA (US)

Ehsan Amid of Mountain View CA (US)

Antonious Girgis of Los Angeles CA (US)

DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATE(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S) - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240095582 titled 'DECENTRALIZED LEARNING OF MACHINE LEARNING MODEL(S) THROUGH UTILIZATION OF STALE UPDATE(S) RECEIVED FROM STRAGGLER COMPUTING DEVICE(S)'.

Simplified Explanation

In a round of decentralized learning for updating a global machine learning (ML) model, remote processors transmit primary weights for the current version of the model to a population of computing devices, each of which generates a corresponding update. The global model is then updated based on the updates received during the round. Some devices ("stragglers") return their updates only after the round has closed; the described techniques incorporate these stale updates when producing the final version of the model. A minimal sketch of such a round appears after the list below.

  • Remote processors transmit primary weights to a population of computing devices
  • Each computing device generates a corresponding update for the global ML model
  • The global model is updated based on the updates received during the round
  • Stale updates arriving after the round, from straggler devices, are still utilized in achieving the final version of the model
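
The following is a minimal Python sketch of one such round, assuming a simple least-squares model and federated-averaging-style aggregation. The function names (client_update, run_round) and the learning-rate choice are illustrative assumptions, not the patent's actual implementation.

 import numpy as np

 def client_update(primary_weights, local_data, lr=0.1):
     # Hypothetical client step: one gradient-descent step on local data
     # for a least-squares model, returning a weight delta (the "update").
     xs, ys = local_data
     grad = xs.T @ (xs @ primary_weights - ys) / len(ys)
     return -lr * grad

 def run_round(primary_weights, client_datasets):
     # One round of decentralized learning: broadcast the primary weights,
     # collect each device's update, and average them into the global model.
     updates = [client_update(primary_weights, d) for d in client_datasets]
     return primary_weights + np.mean(updates, axis=0)

 # Example: three devices, each holding a private shard of data.
 rng = np.random.default_rng(0)
 true_w = np.array([2.0, -1.0])
 shards = []
 for _ in range(3):
     xs = rng.normal(size=(32, 2))
     shards.append((xs, xs @ true_w + 0.1 * rng.normal(size=32)))

 weights = np.zeros(2)
 for _ in range(100):  # repeated rounds refine the global model
     weights = run_round(weights, shards)
 print(weights)  # converges toward true_w

In this sketch every device responds within the round; the patent's contribution concerns what to do when some do not, which is illustrated further below.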

Potential Applications

This technology can be applied in fields such as healthcare, finance, and e-commerce, where machine learning models must be improved in a decentralized manner without centralizing the underlying training data.

Problems Solved

1. Efficient updating of global machine learning models in a decentralized environment
2. Utilizing stale updates that arrive after a round closes in achieving a final version of the model

Benefits

1. Improved accuracy and performance of machine learning models
2. Scalability and flexibility in updating global models
3. Enhanced collaboration among distributed systems

Potential Commercial Applications

Optimizing advertising algorithms in digital marketing for better targeting and conversion rates.

Possible Prior Art

One potential prior art is the use of federated learning techniques to update machine learning models across distributed systems, including asynchronous variants that tolerate delayed or stale gradient updates.

What are the potential security implications of transmitting model weights to computing devices for updates?

Transmitting model weights to computing devices for updates could pose security risks such as data breaches, unauthorized access to sensitive information, and potential model poisoning attacks.

How can the efficiency of receiving and utilizing subsequent updates be further improved in decentralized learning systems?

Efficiency in receiving and utilizing subsequent updates can be enhanced by implementing advanced synchronization techniques, optimizing communication protocols, and prioritizing updates based on relevance and impact on the global model.
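
As one concrete illustration of prioritizing updates by relevance, a late update can be folded in with a staleness discount. This is a sketch under assumed semantics; the rounds_late parameter and the exponential decay schedule are illustrative and are not the FARe-DUST or FeAST on MSG techniques named in the abstract.

 import numpy as np

 def apply_stale_update(global_weights, stale_update, rounds_late, decay=0.5):
     # Fold a late-arriving (stale) update into the global model, discounted
     # exponentially by how many rounds late it arrived, so that stale
     # information refines the final model without dominating fresher updates.
     return global_weights + (decay ** rounds_late) * np.asarray(stale_update)

 # Example: a straggler's update arriving two rounds late is scaled by 0.25.
 final = apply_stale_update(np.array([0.5, -0.2]), [0.1, 0.1], rounds_late=2)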


Original Abstract Submitted

During a round of decentralized learning for updating of a global machine learning (ML) model, remote processor(s) of a remote system may transmit, to a population of computing devices, primary weights for a primary version of the global ML model, and cause each of the computing devices to generate a corresponding update for the primary version of the global ML model. Further, the remote processor(s) may cause the primary version of the global ML model to be updated based on the corresponding updates that are received during the round of decentralized learning. However, the remote processor(s) may receive other corresponding updates subsequent to the round of decentralized learning. Accordingly, various techniques described herein (e.g., FARe-DUST, FeAST on MSG, and/or other techniques) enable the other corresponding updates to be utilized in achieving a final version of the global ML model.