18493571. Systems and Methods for Differentially Private Federated Machine Learning for Large Models and a Strong Adversary simplified abstract (The Regents of the University of California)

From WikiPatents

Systems and Methods for Differentially Private Federated Machine Learning for Large Models and a Strong Adversary

Organization Name

The Regents of the University of California

Inventor(s)

Trinabh Gupta of Goleta CA (US)

Kunlong Liu of Santa Barbara CA (US)

Richa Wadaskar of Santa Barbara CA (US)

Systems and Methods for Differentially Private Federated Machine Learning for Large Models and a Strong Adversary - A simplified explanation of the abstract

This abstract first appeared for US patent application 18493571, titled 'Systems and Methods for Differentially Private Federated Machine Learning for Large Models and a Strong Adversary'.

Simplified Explanation

The abstract describes a federated learning method in which separate committees of devices cooperate to update a model's parameters from encrypted aggregation results:

  • Identifying one set of devices as the master committee and another as the differential privacy (DP)-noise committee
  • Receiving encrypted noise values from the DP-noise committee and encrypted update values from a third set of devices
  • Aggregating the encrypted noise and update values to produce encrypted aggregation results
  • Receiving decrypted aggregation results from a fourth set of devices that hold shares of the master committee's private cryptographic key
  • Updating the model parameters based on the decrypted results
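The aggregation-and-decryption flow in these steps can be sketched with a toy additively homomorphic scheme. This is an illustration only: the modulus, committee sizes, noise values, and the scheme itself (ciphertext = plaintext + shared key, which is not semantically secure when the key is reused) are assumptions for readability, not the patent's actual cryptography, which would use a proper threshold additively homomorphic scheme.

```python
import random

P = 2 ** 61 - 1  # modulus for the toy additive scheme (illustrative choice)

def make_key_shares(n_shares):
    """Master committee: each member holds one additive share of the key."""
    return [random.randrange(P) for _ in range(n_shares)]

def encrypt(m, key):
    """Toy 'encryption': add the key mod P (insecure; for illustration only)."""
    return (m + key) % P

def partial_decrypt(n_ciphertexts, share):
    """One key-share holder's contribution to removing n copies of the key."""
    return (n_ciphertexts * share) % P

random.seed(0)
shares = make_key_shares(3)          # master committee of 3 devices
key = sum(shares) % P                # no single device ever learns this sum

noise = [random.randint(-2, 2) for _ in range(2)]   # DP-noise committee (toy integer noise)
updates = [5, 7, 11]                                # model-update devices

# Server sees only ciphertexts and aggregates them homomorphically.
ciphertexts = [encrypt(v % P, key) for v in noise + updates]
enc_aggregate = sum(ciphertexts) % P

# Decryption devices each contribute a partial using only their key share.
n = len(ciphertexts)
partials = [partial_decrypt(n, s) for s in shares]
plaintext_sum = (enc_aggregate - sum(partials)) % P

print(plaintext_sum == sum(noise) + sum(updates))  # True
```

The server never sees individual updates or the full key, only the noisy aggregate, which mirrors the division of trust among the committees described above.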

Potential Applications

Federated learning can be applied in various fields such as healthcare, finance, and manufacturing to train machine learning models without sharing sensitive data.

Problems Solved

This technology solves the problem of maintaining data privacy and security while allowing multiple devices to collaborate on training machine learning models.

Benefits

The benefits of this technology include improved data privacy, enhanced model accuracy through collaborative learning, and reduced communication overhead.

Potential Commercial Applications

One potential commercial application of this technology is in the development of personalized recommendation systems for e-commerce platforms, where user data privacy is crucial.

Possible Prior Art

One possible piece of prior art is secure multi-party computation, in which multiple parties jointly compute a function over their inputs without revealing those inputs to one another.
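As a minimal illustration of that prior-art idea, the sketch below computes a joint sum with additive secret sharing; the party names, input values, and modulus are hypothetical.

```python
import random

P = 2 ** 31 - 1  # field modulus (illustrative)

def share(secret, n):
    """Split `secret` into n additive shares that sum to it mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

random.seed(1)
inputs = {"alice": 40, "bob": 2}     # private inputs (hypothetical parties)
n = len(inputs)

# Each party distributes shares; no single share reveals an input.
all_shares = {name: share(v, n) for name, v in inputs.items()}

# Each party locally sums the shares it received...
local_sums = [sum(s[i] for s in all_shares.values()) % P for i in range(n)]
# ...and publishing only these local sums reveals just the total.
total = sum(local_sums) % P
print(total)  # 42
```

No party ever sees another's raw input, yet the published local sums combine to the correct total, which is the property the patent's encrypted aggregation generalizes.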

What are the potential limitations of this federated learning method?

One potential limitation of this federated learning method is the increased computational overhead required for encrypting and decrypting data during the aggregation process.

How does this federated learning method ensure data privacy and security?

This federated learning method ensures data privacy and security by using encryption techniques to protect sensitive information while allowing devices to collaborate on model training.
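For the differential-privacy half of that guarantee, the standard building block is Laplace noise with scale calibrated to sensitivity/epsilon. The sketch below is a generic illustration of that mechanism; the epsilon, sensitivity, and seed are assumed values, not parameters taken from the patent.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

rng = random.Random(7)
epsilon = 1.0       # privacy budget (assumed)
sensitivity = 1.0   # max change one device can cause in the sum (assumed)

true_sum = 120.0                                        # aggregate of device updates
noisy_sum = true_sum + laplace_noise(sensitivity / epsilon, rng)
```

Releasing only `noisy_sum` (rather than `true_sum`) bounds how much any single device's contribution can be inferred, which is the role the DP-noise committee plays in the protocol.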


Original Abstract Submitted

Systems and methods for federated learning are illustrated. A method for federated learning includes steps for identifying a first set of one or more devices as members of a master committee, identifying a second set of one or more devices as members of a differential privacy (DP)-noise committee, receiving a set of encrypted noise values for differential privacy from the members of the DP-noise committee, receiving, from a third set of one or more devices, a set of encrypted update values, and aggregating the encrypted noise values and the encrypted update values to produce encrypted aggregation results. The method further includes steps for receiving, from a fourth set of one or more devices, decrypted aggregation results based on cryptographic key shares of a private cryptographic key from the master committee, and updating model parameters of the model based on the decrypted aggregation results.