17770049. COOPERATIVE TRAINING MIGRATION simplified abstract (RAKUTEN MOBILE, INC.)

COOPERATIVE TRAINING MIGRATION

Organization Name

RAKUTEN MOBILE, INC.

Inventor(s)

Rehmat Ullah of Belfast (GB)

Di Wu of Belfast (GB)

Paul Harvey of Tokyo (JP)

Peter Kilpatrick of Belfast (GB)

Ivor Spence of Belfast (GB)

Blesson Varghese of Belfast (GB)

COOPERATIVE TRAINING MIGRATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 17770049 titled 'COOPERATIVE TRAINING MIGRATION'.

Simplified Explanation

The patent application describes a method for cooperatively training a neural network model with a computational device connected over a network. During training iterations, a data checkpoint is created containing the gradient values, weight values, loss value, and optimizer state of the server partition. A migration notice received during training identifies a second edge server, to which the data checkpoint is then transferred.

  • Training a neural network model cooperatively with a computational device through a network
  • Creating a data checkpoint during training iterations containing gradient values, weight values, loss value, and optimizer state
  • Receiving a migration notice during training iterations for transferring the data checkpoint to a second edge server
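The three steps above can be sketched in minimal Python. The field names (`second_edge_server_id`), the dictionary layout of the checkpoint, and the `transfer` callback are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass

@dataclass
class DataCheckpoint:
    # Contents named after the abstract: gradient values, weight values,
    # loss value, and optimizer state of the server partition.
    gradients: dict
    weights: dict
    loss: float
    optimizer_state: dict

def make_checkpoint(gradients, weights, loss, optimizer_state):
    """Snapshot the server partition's training state at one iteration."""
    return DataCheckpoint(dict(gradients), dict(weights), loss, dict(optimizer_state))

def handle_migration_notice(checkpoint, notice, transfer):
    """On a migration notice, send the checkpoint to the identified edge server.

    `transfer(target, checkpoint)` is a placeholder for the actual network send.
    """
    target = notice["second_edge_server_id"]
    transfer(target, checkpoint)
    return target
```

For example, `handle_migration_notice(cp, {"second_edge_server_id": "edge-2"}, send)` would push the current checkpoint to the server named in the notice.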

Key Features and Innovation

  • Cooperative training migration of neural network models
  • Creation of data checkpoints during training iterations
  • Transfer of data checkpoints to a second edge server based on migration notices

Potential Applications

  • Edge computing
  • Distributed machine learning systems
  • Network optimization

Problems Solved

  • Efficient training of neural network models across multiple edge servers
  • Seamless migration of training data checkpoints
  • Improved network performance during cooperative training

Benefits

  • Faster training of neural network models
  • Enhanced scalability of machine learning systems
  • Optimized network utilization

Commercial Applications

Edge Computing: Enhancing the efficiency of distributed machine learning systems in edge computing environments

Prior Art

No specific prior art is summarized here; related work spans cooperative and split training methods in distributed systems and edge computing environments.

Frequently Updated Research

Stay updated on advancements in cooperative training methods for neural networks in distributed systems.

Questions about Cooperative Training Migration

How does cooperative training migration improve the efficiency of neural network training?

Cooperative training migration allows for the seamless transfer of data checkpoints between edge servers, optimizing network utilization and improving training speed.

What are the key challenges in implementing cooperative training migration in distributed systems?

The main challenges include ensuring data consistency, minimizing latency during data transfer, and maintaining network stability during migration processes.
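One way to address the data-consistency challenge is to verify checkpoint integrity on receipt. The sketch below is a generic pattern (serialize, digest, verify), not a mechanism described in the patent:

```python
import hashlib
import json

def serialize(checkpoint: dict) -> bytes:
    """Serialize a checkpoint deterministically (sorted keys) for hashing."""
    return json.dumps(checkpoint, sort_keys=True).encode()

def send_with_digest(checkpoint: dict):
    """Return the wire payload plus a SHA-256 digest for the receiver to check."""
    payload = serialize(checkpoint)
    return payload, hashlib.sha256(payload).hexdigest()

def receive_and_verify(payload: bytes, digest: str) -> dict:
    """Reject a checkpoint whose digest does not match, preserving consistency."""
    if hashlib.sha256(payload).hexdigest() != digest:
        raise ValueError("checkpoint corrupted in transit")
    return json.loads(payload)
```

A receiving edge server would only resume training from a checkpoint that passes verification; a failed check could trigger retransmission.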


Original Abstract Submitted

Cooperative training migration is performed by training, cooperatively with a computational device through a network, the neural network model, creating, during the iterations of training, a data checkpoint, the data checkpoint including the gradient values and the weight values of the server partition, the loss value, and an optimizer state, receiving, during the iterations of training, a migration notice, the migration notice including an identifier of a second edge server, and transferring, during the iterations of training, the data checkpoint to the second edge server.
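The abstract's sequence (iterate, checkpoint, receive a migration notice, transfer) can be illustrated with a toy training loop. The one-parameter model, learning rate, and `notices` mapping from iteration to target server are all assumptions for the sake of a runnable sketch; the device-side half of the cooperative computation is omitted:

```python
def cooperative_training(steps, notices, transfer):
    """Toy server partition: minimize (w - 3)^2 by gradient descent.

    `notices` maps an iteration index to a second edge server identifier;
    `transfer(target, checkpoint)` stands in for the network send.
    """
    w, lr = 0.0, 0.1
    for step in range(steps):
        grad = 2.0 * (w - 3.0)      # gradient of the loss w.r.t. w
        w -= lr * grad              # server-side update of the cooperative step
        loss = (w - 3.0) ** 2
        # Create a data checkpoint for this iteration.
        checkpoint = {"gradients": {"w": grad}, "weights": {"w": w},
                      "loss": loss, "optimizer_state": {"lr": lr}}
        # If a migration notice arrived, transfer the checkpoint mid-training.
        if step in notices:
            transfer(notices[step], checkpoint)
    return w
```

Because the checkpoint carries the optimizer state as well as the weights and gradients, the second edge server can resume the loop from the exact iteration at which migration occurred.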