17838445. VERTICAL FEDERATED LEARNING WITH SECURE AGGREGATION simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)

From WikiPatents

VERTICAL FEDERATED LEARNING WITH SECURE AGGREGATION

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Shiqiang Wang of White Plains NY (US)

Timothy John Castiglia of Troy NY (US)

Nathalie Baracaldo Angel of San Jose CA (US)

Stacy Elizabeth Patterson of Troy NY (US)

Runhua Xu of Pittsburgh PA (US)

Yi Zhou of San Jose CA (US)

VERTICAL FEDERATED LEARNING WITH SECURE AGGREGATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 17838445, titled 'VERTICAL FEDERATED LEARNING WITH SECURE AGGREGATION'.

Simplified Explanation

The patent application describes a method for analyzing the connections between layers of a neural network model used for vertical federated learning. The method involves generating an undirected graph of nodes, where nodes with multiple child nodes perform aggregation operations. The model's output corresponds to a node in the graph.

  • The method analyzes the model to identify a layer where the sum of lower layer outputs is computed.
  • This identified layer is then partitioned into two parts: one part is applied to multiple entities, and the other part acts as an aggregator for the outputs of the first part.
  • Aggregation operations are performed between pairs of lower layer outputs.
  • Multiple forward and backward passes of the neural network model are executed, incorporating secure aggregation and maintaining the model's partitioning.
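The partitioning described above can be sketched in a few lines of Python. This is an illustrative toy, not the patented implementation: two entities hold disjoint (vertical) feature partitions of the same sample, each applies its own slice of the identified layer, and a pairwise aggregation operation sums the lower-layer outputs. All function names, weights, and shapes are hypothetical.

```python
# Hypothetical sketch of vertical model partitioning with pairwise aggregation.
def entity_forward(weights, features):
    # Entity-side part: apply this entity's slice of the identified layer
    # (a simple linear map here) to its private feature partition.
    return [sum(w * x for w, x in zip(row, features)) for row in weights]

def aggregate(outputs_a, outputs_b):
    # Pairwise aggregation operation: element-wise sum of two lower-layer outputs.
    return [a + b for a, b in zip(outputs_a, outputs_b)]

# Two entities hold disjoint vertical feature partitions of one sample.
features_1 = [0.5, -1.0]               # entity 1's private features
features_2 = [2.0]                     # entity 2's private features
weights_1 = [[0.1, 0.2], [0.3, 0.4]]   # entity 1's slice of the layer
weights_2 = [[0.5], [0.6]]             # entity 2's slice of the layer

h1 = entity_forward(weights_1, features_1)
h2 = entity_forward(weights_2, features_2)
hidden = aggregate(h1, h2)  # equals the full layer applied to all features

print(hidden)
```

Because the layer computes a sum over lower-layer outputs, aggregating the entity-side partial sums reproduces exactly what the unpartitioned layer would compute on the full feature vector, which is what makes this split point a natural place to insert secure aggregation.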

Potential Applications

  • Vertical federated learning: The method is specifically designed for vertical federated learning, where multiple entities collaborate to train a shared model without sharing their private data.
  • Privacy-preserving machine learning: By performing secure aggregation and maintaining model partitioning, the method ensures privacy protection during the training process.
  • Collaborative AI: The method enables multiple entities to contribute to the training of a neural network model while preserving data privacy.

Problems Solved

  • Privacy concerns in federated learning: The method addresses the challenge of training a shared model without exposing sensitive data from individual entities.
  • Efficient aggregation in distributed learning: By partitioning the model and performing aggregation operations, the method optimizes the aggregation process in a distributed learning setting.

Benefits

  • Enhanced privacy: The method incorporates secure aggregation and maintains model partitioning, ensuring that sensitive data remains private during the training process.
  • Improved collaboration: Multiple entities can contribute to the training of a shared model, enabling collaborative learning without compromising data privacy.
  • Efficient distributed learning: By partitioning the model and performing aggregation operations, the method optimizes the training process in a distributed learning environment.


Original Abstract Submitted

The method provides for analyzing input and output connections of layers of a received neural network model configured for vertical federated learning. An undirected graph of nodes is generated in which a node having two or more child nodes includes an aggregation operation, based on the analysis of the model in which a model output corresponds to a node of the graph. A layer of the model is identified in which a sum of lower layer outputs are computed. The identified model layer is partitioned into a first part applied respectively to the multiple entities and a second part applied as an aggregator of the output of the first part. The aggregation operation is performed between pairs of lower layer outputs, and multiple forward and backward passes of the neural network model are performed that include secure aggregation and maintain model partitioning in forward and backward passes.
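The secure aggregation referred to in the abstract can be illustrated with the classic pairwise-mask construction (in the spirit of Bonawitz et al.); the patent's exact protocol may differ, and every name and parameter below is an assumption. Each pair of parties shares a random mask that one adds and the other subtracts, so the aggregator sees only masked uploads, yet the masks cancel in the sum.

```python
# Hypothetical sketch of secure aggregation with pairwise additive masks.
import random

rng = random.Random(42)  # fixed seed for a reproducible demo

def pairwise_masks(n_parties, modulus):
    # For each pair (i, j) with i < j, draw a shared random mask m:
    # party i adds m, party j subtracts m, so all masks cancel in the sum.
    masks = [[0] * n_parties for _ in range(n_parties)]
    for i in range(n_parties):
        for j in range(i + 1, n_parties):
            m = rng.randrange(modulus)
            masks[i][j] = m
            masks[j][i] = -m
    return masks

MOD = 2**16
values = [5, 11, 7]  # each party's private lower-layer output (a scalar here)
masks = pairwise_masks(len(values), MOD)

# Each party uploads only its masked value; individual values stay hidden.
masked = [(values[i] + sum(masks[i])) % MOD for i in range(len(values))]

# The aggregator sums the masked uploads; the masks cancel, revealing
# only the total -- the sum of lower-layer outputs the model layer needs.
total = sum(masked) % MOD
print(total)
```

In the patented setting this masked sum would stand in for the plain sum at the identified layer, so forward and backward passes can proceed while each entity's lower-layer outputs remain private.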