18518753. FEDERATED LEARNING METHOD, APPARATUS, AND SYSTEM simplified abstract (Huawei Technologies Co., Ltd.)
Organization Name: Huawei Technologies Co., Ltd.
This abstract first appeared for US patent application 18518753, titled 'FEDERATED LEARNING METHOD, APPARATUS, AND SYSTEM'.
Simplified Explanation
The application describes a federated learning method in which a server retrains the models it receives, achieving a degree of depersonalization and improving the precision of the model's output.
- A first server receives models from at least one downstream device (another server or a connected client).
- The first server trains the received models to obtain trained models.
- The first server aggregates the trained models and uses the aggregation result to update a locally stored model.
- The process yields an updated model with higher output precision.
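The steps above can be sketched in code. This is an illustrative toy, not Huawei's implementation: model "weights" are NumPy vectors, the server-side retraining step is a single gradient-descent step of linear regression on server-held data (standing in for the depersonalization retraining described in the abstract), and aggregation is a FedAvg-style element-wise mean. The function names (`retrain`, `aggregate`, `federated_round`) are hypothetical.

```python
import numpy as np

def retrain(weights, X, y, lr=0.1):
    """One gradient-descent step of linear regression on server-held data."""
    pred = X @ weights
    grad = X.T @ (pred - y) / len(y)
    return weights - lr * grad

def aggregate(models):
    """FedAvg-style aggregation: element-wise mean of the model weights."""
    return np.mean(models, axis=0)

def federated_round(received_models, X_server, y_server):
    """Retrain each received first model, then aggregate the trained
    models to produce the updated second model."""
    trained = [retrain(w, X_server, y_server) for w in received_models]
    return aggregate(trained)

# Toy usage: two downstream models and a tiny server-side dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
received = [np.zeros(3), np.ones(3)]
updated = federated_round(received, X, y)
print(updated.shape)  # updated second model has the same shape: (3,)
```

Note that only model weights cross the network in this sketch; the raw server data never leaves the server, which is the privacy property federated learning is built around.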
Potential Applications
The technology can be applied in industries such as healthcare, finance, and telecommunications to improve data privacy and model accuracy.
Problems Solved
1. Enhances model precision through depersonalization processing.
2. Facilitates collaborative learning without compromising data privacy.
3. Enables efficient model updates without centralized data storage.
Benefits
1. Improved model accuracy and precision.
2. Enhanced data privacy protection.
3. Collaborative learning across multiple devices and servers.
Potential Commercial Applications
Enhancing Data Privacy in Healthcare Analytics
Original Abstract Submitted
This application provides a federated learning method, apparatus, and system, so that a server retrains a received model in a federated learning process to implement depersonalization processing to some extent, to obtain a model with higher output precision. The method includes: First, a first server receives information about at least one first model sent by at least one downstream device, where the at least one downstream device may include another server or a client connected to the first server; the first server trains the at least one first model to obtain at least one trained first model; and then the first server aggregates the at least one trained first model, and updates a locally stored second model by using an aggregation result, to obtain an updated second model.