17994850. SYSTEMS AND METHODS FOR COMMUNICATION-AWARE FEDERATED LEARNING simplified abstract (TOYOTA JIDOSHA KABUSHIKI KAISHA)


SYSTEMS AND METHODS FOR COMMUNICATION-AWARE FEDERATED LEARNING

Organization Name

TOYOTA JIDOSHA KABUSHIKI KAISHA

Inventor(s)

Yitao Chen of Mountain View CA (US)

Dawei Chen of Mountain View CA (US)

Haoxin Wang of Mountain View CA (US)

Kyungtae Han of Mountain View CA (US)

SYSTEMS AND METHODS FOR COMMUNICATION-AWARE FEDERATED LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 17994850, titled 'SYSTEMS AND METHODS FOR COMMUNICATION-AWARE FEDERATED LEARNING'.

Simplified Explanation

The abstract describes a communication-aware federated learning system in which edge nodes and a server cooperate to train machine learning models.

  • Edge nodes train machine learning models on local data obtained from their sensors.
  • Each edge node measures the network bandwidth of its channel to the server and determines a compression level accordingly.
  • The compressed trained models are transmitted to the server.
  • The server decompresses the received models and aggregates them into a single aggregated model.
  • The aggregated model is transmitted back to the edge nodes for further rounds of training.
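The round described above can be sketched in code. This is a minimal illustration, not the patent's implementation: the bandwidth thresholds, the linear keep-ratio mapping, and the choice of top-k sparsification as the compression method are all assumptions made for the example.

```python
import numpy as np

def choose_compression_ratio(bandwidth_mbps, low=2.0, high=20.0):
    """Map measured channel bandwidth to a fraction of weights to keep.

    The thresholds (2 and 20 Mbps) are illustrative, not from the patent.
    """
    if bandwidth_mbps >= high:
        return 1.0   # ample bandwidth: send the full model
    if bandwidth_mbps <= low:
        return 0.1   # congested channel: send only the top 10% of weights
    # linear interpolation between the two operating points
    return 0.1 + 0.9 * (bandwidth_mbps - low) / (high - low)

def compress(weights, keep_ratio):
    """Top-k sparsification: keep the largest-magnitude entries."""
    k = max(1, int(len(weights) * keep_ratio))
    idx = np.argsort(np.abs(weights))[-k:]
    return idx, weights[idx]

def decompress(idx, values, size):
    """Server side: scatter the kept values back into a dense vector."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

def aggregate(models):
    """Server side: FedAvg-style element-wise mean of the models."""
    return np.mean(models, axis=0)

# One illustrative round with two edge nodes on different channels.
rng = np.random.default_rng(0)
size = 10
local_models = [rng.normal(size=size) for _ in range(2)]
bandwidths = [25.0, 3.0]  # Mbps, one measurement per edge node

uploads = []
for w, bw in zip(local_models, bandwidths):
    ratio = choose_compression_ratio(bw)
    uploads.append((compress(w, ratio), len(w)))

decompressed = [decompress(idx, vals, n) for (idx, vals), n in uploads]
global_model = aggregate(decompressed)  # broadcast back to edge nodes
```

The node on the fast channel uploads its full model, while the node on the 3 Mbps channel uploads only a sparsified subset, trading accuracy of its contribution for a smaller transmission.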

Potential Applications

This technology can be applied in various fields such as healthcare, finance, and manufacturing where data privacy and communication efficiency are crucial.

Problems Solved

This system addresses the challenges of training machine learning models on distributed data sources while considering communication constraints and privacy concerns.

Benefits

The system allows for efficient training of machine learning models on decentralized data sources while minimizing communication costs and ensuring data privacy.

Potential Commercial Applications

Potential commercial applications of this technology include personalized healthcare services, financial risk analysis, and predictive maintenance in manufacturing.

Possible Prior Art

One possible prior art for this technology is the concept of federated learning, where machine learning models are trained across multiple decentralized devices while preserving data privacy.

What are the specific compression algorithms used in this system?

The specific compression algorithms used in this system are not mentioned in the abstract.
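Since the abstract leaves the algorithm open, any standard model-compression scheme could plausibly fill the role. As one hypothetical instance, uniform quantization reduces each weight from 64-bit floats to a small integer; the bit width here would be the "level of compression" the edge node selects.

```python
import numpy as np

def quantize(weights, num_bits):
    """Uniform affine quantization of float weights to num_bits-wide integers."""
    lo, hi = float(weights.min()), float(weights.max())
    levels = (1 << num_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((weights - lo) / scale).astype(np.uint32)
    return q, lo, scale

def dequantize(q, lo, scale):
    """Reconstruct approximate float weights from the integer codes."""
    return q.astype(np.float64) * scale + lo

rng = np.random.default_rng(1)
w = rng.normal(size=100)
q, lo, scale = quantize(w, 8)        # 8 bits per weight instead of 64
w_hat = dequantize(q, lo, scale)
max_err = np.max(np.abs(w - w_hat))  # bounded by scale / 2 (round-to-nearest)
```

A lower bit width shrinks the upload further at the cost of a coarser reconstruction, which is exactly the bandwidth/fidelity trade-off the system's compression-level choice would govern.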

How does the system handle edge nodes with varying levels of computational power?

The abstract does not provide information on how the system handles edge nodes with varying levels of computational power.


Original Abstract Submitted

A system for communication-aware federated learning includes a server and edge nodes. Each of the edge nodes trains a machine learning model using first local data obtained by sensors of the corresponding edge node. Each of the edge nodes obtains network bandwidth for a channel between the corresponding edge node and the server. One or more of the edge nodes determines a level of compression based on the bandwidth for the channel, compresses the trained machine learning model based on the determined level of compression, and transmits the compressed trained machine learning model to the server. The server decompresses the compressed trained machine learning models, aggregates the decompressed trained machine learning models to obtain the aggregated machine learning model, and transmits the aggregated machine learning model to each of the edge nodes. Each of the edge nodes receives the aggregated machine learning model from the server.