18493136. FEDERATED LEARNING METHOD AND APPARATUS simplified abstract (Huawei Technologies Co., Ltd.)


FEDERATED LEARNING METHOD AND APPARATUS

Organization Name

Huawei Technologies Co., Ltd.

Inventor(s)

Qi Zhang of Hangzhou (CN)

Peichen Zhou of Shenzhen (CN)

Gang Chen of Shenzhen (CN)

Dongsheng Chen of Hangzhou (CN)

FEDERATED LEARNING METHOD AND APPARATUS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18493136, titled 'FEDERATED LEARNING METHOD AND APPARATUS'.

Simplified Explanation

The patent application describes a federated learning method and apparatus in which a first server coordinates the training of a global model using updates from multiple clients and aggregation information from a second server. Each round of iteration proceeds as follows (a code sketch follows the list):

  • The first server receives a request message from at least one first client.
  • The first server sends a training configuration parameter and the global model to the first client(s).
  • The first server receives first model update parameters fed back by each first client.
  • The first server aggregates the first model update parameters to obtain first aggregation information for the current round.
  • The first server obtains second aggregation information from a second server.
  • The first server updates the global model based on the aggregation information from both servers.
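The sketch below illustrates one such round in Python. It is a minimal illustration, not the patent's implementation: the local training step, the plain averaging rule, and all function names are assumptions, since the abstract does not specify an aggregation rule or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(global_model, data, config):
    # Placeholder local update: one step toward the client data mean.
    # Real clients would run SGD under the training configuration parameter.
    lr = config["lr"]
    return global_model - lr * (global_model - data.mean(axis=0))

def federated_round(global_model, client_datasets, second_agg, config):
    """One round on the first server, following the claimed steps."""
    # Steps 1-3: participating clients receive the training configuration
    # parameter and the global model, then feed back model update parameters.
    updates = [local_train(global_model, data, config)
               for data in client_datasets]
    # Step 4: aggregate into first aggregation information (a plain
    # average here; the patent does not fix the aggregation rule).
    first_agg = np.mean(updates, axis=0)
    # Steps 5-6: combine with the second server's aggregation
    # information and update the stored global model.
    return (first_agg + second_agg) / 2.0

# Toy usage: three clients, a 4-dimensional model, and a simulated
# second-server aggregate.
model = np.zeros(4)
clients = [rng.normal(loc=i, size=(10, 4)) for i in range(3)]
second_server_agg = rng.normal(size=4)
model = federated_round(model, clients, second_server_agg, {"lr": 0.5})
print(model)
```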

---

Potential Applications

  • Collaborative machine learning projects
  • Privacy-preserving data analysis
  • Distributed model training in IoT devices

Problems Solved

  • Privacy concerns in centralized machine learning
  • Scalability issues with large datasets
  • Communication overhead in distributed learning systems

Benefits

  • Improved data privacy and security
  • Efficient distributed model training
  • Scalability for large-scale machine learning projects


Original Abstract Submitted

This application provides a federated learning method and apparatus. The method includes: A first server receives a request message sent by at least one first client. The first server sends a training configuration parameter and a global model to the at least one first client. The first server receives first model update parameters separately fed back by the at least one first client. The first server aggregates the first model update parameters, to obtain first aggregation information in a current round of iteration. The first server obtains second aggregation information sent by the second server. The first server updates, based on the first aggregation information and the second aggregation information, the global model stored on the first server.
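The abstract leaves open how the first and second aggregation information are combined. One plausible reading is a weighted average, where each server's aggregate counts in proportion to the number of client updates it summarizes; the sketch below shows that assumption, with all names hypothetical.

```python
import numpy as np

def merge_aggregation_info(first_agg, n_first, second_agg, n_second):
    # Weight each server's aggregate by the number of client updates it
    # summarizes, so the merged result equals the average over all
    # contributing clients (an assumption, not the patent's stated rule).
    return (n_first * first_agg + n_second * second_agg) / (n_first + n_second)

# Example: the first server aggregated 3 client updates, the second server 5.
first = np.array([1.0, 2.0])
second = np.array([3.0, 4.0])
print(merge_aggregation_info(first, 3, second, 5))  # [2.25 3.25]
```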