GOOGLE LLC (20240296834). UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS simplified abstract


UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS

Organization Name

GOOGLE LLC

Inventor(s)

Françoise Beaufays of Mountain View CA (US)

Khe Chai Sim of Dublin CA (US)

Johan Schalkwyk of Scarsdale NY (US)

UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240296834, titled 'UNSUPERVISED FEDERATED LEARNING OF MACHINE LEARNING MODEL LAYERS'.

The abstract of this patent application describes a method for unsupervised federated training of global machine learning (ML) model layers, which are then combined with additional layers to create a combined ML model. The method proceeds as follows (a client-side sketch appears after the list):

  • Detect audio data capturing a user's spoken utterance on a client device.
  • Process the audio data locally using a local ML model to generate predicted outputs.
  • Use unsupervised learning locally to generate a gradient based on the predicted outputs.
  • Transmit the gradient to a remote system for further processing.
  • Update the weights of the global ML model layers at the remote system based on the gradient.
  • Train, using supervised learning at the remote system, a combined ML model that includes the updated global ML model layers and additional layers.
  • Transmit the combined ML model back to the client device for prediction purposes.
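
The client-side portion of this loop can be illustrated with a minimal sketch. The snippet below assumes PyTorch, a hypothetical LocalModel whose first layers stand in for the shared global ML model layers, and an autoencoding-style reconstruction loss as the unsupervised objective; the patent application does not commit to a specific architecture, feature dimension, or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalModel(nn.Module):
    """Hypothetical on-device model: shared global layers plus a local head."""
    def __init__(self, feature_dim: int = 80, hidden_dim: int = 256):
        super().__init__()
        # Layers whose weights are shared with the remote system.
        self.global_layers = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Small head kept on-device to produce the predicted output(s).
        self.local_head = nn.Linear(hidden_dim, feature_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.local_head(self.global_layers(features))

def client_generate_gradient(model: LocalModel, audio_features: torch.Tensor):
    """Generate a gradient locally with unsupervised learning.

    A reconstruction (autoencoding) loss is used here purely for
    illustration; no labels or transcriptions are required.
    """
    model.zero_grad()
    predicted = model(audio_features)             # predicted output(s)
    loss = F.mse_loss(predicted, audio_features)  # unsupervised objective
    loss.backward()                               # gradient via backprop
    # Only the gradients of the shared global layers are transmitted.
    return {name: p.grad.clone()
            for name, p in model.global_layers.named_parameters()}

# Example: features extracted from detected audio (4 frames, 80-dim each).
gradient = client_generate_gradient(LocalModel(), torch.randn(4, 80))
```

In this sketch the raw audio never leaves the device; only the computed gradient is sent to the remote system.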

Potential Applications
  • Speech recognition technology
  • Personalized virtual assistants
  • Voice-controlled devices

Problems Solved
  • Efficient training of machine learning models in a distributed environment
  • Real-time processing of audio data on client devices

Benefits
  • Improved accuracy of speech recognition
  • Reduced latency in processing audio data
  • Enhanced user experience with voice-controlled devices

Commercial Applications
  • Smart speakers
  • Voice-enabled applications
  • Customer service chatbots

Questions about the technology
  1. How does unsupervised federated training differ from traditional supervised training methods?
  2. What are the potential privacy implications of processing audio data locally on client devices?

Frequently Updated Research
  • Ongoing advancements in federated learning techniques
  • Research on improving the efficiency of unsupervised learning algorithms in machine learning


Original Abstract Submitted

implementations disclosed herein are directed to unsupervised federated training of global machine learning (“ml”) model layers that, after the federated training, can be combined with additional layer(s), thereby resulting in a combined ml model. processor(s) can: detect audio data that captures a spoken utterance of a user of a client device; process, using a local ml model, the audio data to generate predicted output(s); generate, using unsupervised learning locally at the client device, a gradient based on the predicted output(s); transmit the gradient to a remote system; update weight(s) of the global ml model layers based on the gradient; subsequent to updating the weight(s), train, using supervised learning remotely at the remote system, a combined ml model that includes the updated global ml model layers and additional layer(s); transmit the combined ml model to the client device; and use the combined ml model to make prediction(s) at the client device.
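
As a rough counterpart to the client-side sketch above, the snippet below outlines what the remote system could do with the received gradients: average them, step the global layer weights, and attach additional layer(s) to form the combined model that is then trained with supervised learning and returned to client devices. The function names, learning rate, averaging rule, and layer dimensions are illustrative assumptions, not details fixed by the application.

```python
import torch
import torch.nn as nn

def update_global_layers(global_layers: nn.Module, client_gradients, lr: float = 0.1):
    """Update global ML model layer weights from client-submitted gradients.

    Gradients from several clients are averaged before the update step
    (a simple federated-averaging-style rule, assumed here for illustration).
    """
    with torch.no_grad():
        for name, param in global_layers.named_parameters():
            avg_grad = torch.mean(
                torch.stack([grads[name] for grads in client_gradients]), dim=0)
            param -= lr * avg_grad  # in-place gradient descent step

def build_combined_model(global_layers: nn.Module, num_classes: int = 64) -> nn.Module:
    """Combine the updated global layers with additional layer(s).

    The combined model is then trained with supervised learning at the
    remote system and transmitted back to client devices for prediction.
    """
    return nn.Sequential(
        global_layers,                # updated via unsupervised federated training
        nn.Linear(256, num_classes),  # additional layer(s), e.g. a task head
    )
```

The dictionary of gradients produced by the client-side sketch matches the parameter names expected here, so a list of such dictionaries from multiple clients can be passed directly to update_global_layers.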