18392502. METHOD, APPARATUS, AND SYSTEM FOR GENERATING NEURAL NETWORK MODEL, DEVICE, MEDIUM, AND PROGRAM PRODUCT simplified abstract (Huawei Technologies Co., Ltd.)

From WikiPatents

METHOD, APPARATUS, AND SYSTEM FOR GENERATING NEURAL NETWORK MODEL, DEVICE, MEDIUM, AND PROGRAM PRODUCT

Organization Name

Huawei Technologies Co., Ltd.

Inventor(s)

Mi Luo of Singapore (SG)

Fei Chen of Hong Kong (CN)

Zhenguo Li of Hong Kong (CN)

Jiashi Feng of Singapore (SG)

METHOD, APPARATUS, AND SYSTEM FOR GENERATING NEURAL NETWORK MODEL, DEVICE, MEDIUM, AND PROGRAM PRODUCT - A simplified explanation of the abstract

This abstract first appeared for US patent application 18392502 titled 'METHOD, APPARATUS, AND SYSTEM FOR GENERATING NEURAL NETWORK MODEL, DEVICE, MEDIUM, AND PROGRAM PRODUCT'.

Simplified Explanation

The abstract describes a method, apparatus, and system for generating a neural network model through federated learning among multiple devices. A subnetwork model is determined by adjusting the structure of a hypernetwork model; a first device receives the subnetwork's parameters from a second device, trains the subnetwork locally, and returns the trained parameters so that the second device can update the hypernetwork model.

  • Efficient federated learning scheme between multiple devices
  • Adjusting hypernetwork model to determine subnetwork model
  • Exchanging parameters between devices for training
  • Updating hypernetwork model based on trained subnetwork model
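The relationship between the hypernetwork and a subnetwork can be pictured with a minimal sketch. This is an illustrative assumption, not the patent's implementation: here the hypernetwork holds a maximal weight tensor, the "structure adjustment" is simply a chosen layer width that slices out a subnetwork's weights, and local training is a stand-in perturbation. All class and method names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class Hypernetwork:
    """Toy hypernetwork: a maximal weight tensor from which subnetworks are sliced."""

    def __init__(self, in_dim=8, max_width=16):
        self.weights = rng.standard_normal((max_width, in_dim))

    def subnetwork_params(self, width):
        """Return parameters of the subnetwork indicated by `width` (structure adjustment)."""
        return self.weights[:width].copy()

    def update_from_subnetwork(self, width, trained):
        """Fold trained subnetwork parameters back into the hypernetwork."""
        self.weights[:width] = trained

hyper = Hypernetwork()
sub = hyper.subnetwork_params(width=4)   # structure adjustment: keep 4 units
sub += 0.1                               # stand-in for local training
hyper.update_from_subnetwork(4, sub)     # hypernetwork now reflects the trained subnetwork
```

The key design point the bullets describe is that only the subnetwork's parameters move between devices, while the full hypernetwork stays in one place.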

Potential Applications

The technology described in the patent application could be applied in various fields such as:

  • Distributed machine learning systems
  • Collaborative AI research projects
  • Secure data sharing platforms

Problems Solved

This technology addresses several issues, including:

  • Efficient model training across multiple devices
  • Secure parameter exchange in federated learning
  • Improved collaboration in AI development

Benefits

The benefits of this technology include:

  • Enhanced model accuracy through collaborative training
  • Increased privacy and security in data sharing
  • Scalability for large-scale machine learning projects

Potential Commercial Applications

The technology could be utilized in commercial applications such as:

  • Healthcare for collaborative medical research
  • Finance for secure data analysis
  • Manufacturing for predictive maintenance systems

Possible Prior Art

One possible prior art for this technology could be the concept of federated learning in machine learning, where models are trained across multiple devices without sharing raw data. This patent application appears to improve upon existing federated learning techniques by introducing a structured approach to model generation and parameter exchange.

Unanswered Questions

How does this technology ensure data privacy and security during parameter exchange between devices?

The patent abstract does not provide specific details on the mechanisms used to secure data during parameter exchange. It would be beneficial to understand the encryption methods or protocols employed to protect sensitive information.

What are the potential limitations of this federated learning scheme in terms of scalability and model complexity?

While the abstract highlights the efficiency of the federated learning scheme, it does not address potential challenges related to scalability or complex neural network structures. It would be important to investigate how this technology performs in scenarios with a large number of devices or intricate model architectures.


Original Abstract Submitted

A method, an apparatus, and a system for generating a neural network model, a device, a medium, and a program product are provided. In an embodiment, a first device sends an indication about a structure of a subnetwork model to a second device, where the subnetwork model is determined by adjusting a structure of a hypernetwork model. The first device receives a parameter of the subnetwork model from the second device, where the parameter of the subnetwork model is determined by the second device based on the indication and the hypernetwork model. The first device trains the subnetwork model based on the received parameter of the subnetwork model. The first device sends a parameter of the trained subnetwork model to the second device for the second device to update the hypernetwork model. In the foregoing manner, an efficient federated learning scheme between a plurality of devices is provided.
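The four message steps in the abstract (indication, parameter delivery, local training, hypernetwork update) can be sketched as a round trip between two devices. This is a hedged illustration under the same assumptions as above: the hypernetwork is a plain weight array, the indication is a width, and training is a single stand-in gradient step. Class names (`FirstDevice`, `SecondDevice`) and method names are invented for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

class SecondDevice:
    """Holds the hypernetwork and serves subnetwork parameters on request."""

    def __init__(self, in_dim=8, max_width=16):
        self.hyper = rng.standard_normal((max_width, in_dim))

    def params_for(self, indication):
        # Step 2: derive subnetwork parameters from the indication and hypernetwork.
        return self.hyper[:indication].copy()

    def update(self, indication, trained_params):
        # Step 4: update the hypernetwork from the trained subnetwork.
        self.hyper[:indication] = trained_params

class FirstDevice:
    """Chooses a subnetwork structure and trains it on local data."""

    def __init__(self, indication):
        self.indication = indication  # structure of the chosen subnetwork

    def train(self, params):
        # Step 3: local training on private data (a single stand-in update).
        return params - 0.01 * np.sign(params)

server = SecondDevice()
client = FirstDevice(indication=4)
params = server.params_for(client.indication)   # steps 1-2: indication out, parameters back
trained = client.train(params)                  # step 3: local training
server.update(client.indication, trained)       # step 4: hypernetwork update
```

Note that, as in the abstract, raw training data never leaves the first device; only the structure indication and subnetwork parameters are exchanged.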