Google LLC (20240095594). LEVERAGING INTERMEDIATE CHECKPOINTS TO IMPROVE THE PERFORMANCE OF TRAINED DIFFERENTIALLY PRIVATE MODELS simplified abstract


LEVERAGING INTERMEDIATE CHECKPOINTS TO IMPROVE THE PERFORMANCE OF TRAINED DIFFERENTIALLY PRIVATE MODELS

Organization Name

Google LLC

Inventor(s)

Om Dipakbhai Thakkar of Fremont CA (US)

Arun Ganesh of Seattle WA (US)

Virat Vishnu Shejwalkar of Amherst MA (US)

Abhradeep Guha Thakurta of Los Gatos CA (US)

Rajiv Mathews of Sunnyvale CA (US)

LEVERAGING INTERMEDIATE CHECKPOINTS TO IMPROVE THE PERFORMANCE OF TRAINED DIFFERENTIALLY PRIVATE MODELS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240095594, titled 'LEVERAGING INTERMEDIATE CHECKPOINTS TO IMPROVE THE PERFORMANCE OF TRAINED DIFFERENTIALLY PRIVATE MODELS'.

Simplified Explanation

The method described in the patent application involves training a differentially private model using a private training set, generating intermediate checkpoints during the training process, and determining a second differentially private model using an aggregate of the first model and the checkpoints.

  • Training a differentially private model using a private training set
  • Generating intermediate checkpoints during the training process
  • Determining a second differentially private model using an aggregate of the first model and the checkpoints
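The aggregation step above relies on the post-processing property of differential privacy: any function computed only from DP outputs satisfies the same privacy budget. A minimal sketch of one plausible aggregate, uniform parameter averaging (the specific aggregation function and the `average_checkpoints` helper are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Aggregate a list of model parameter vectors by uniform averaging.

    Because each checkpoint already satisfies the same differential
    privacy budget, averaging them is pure post-processing, so the
    result satisfies that budget as well.
    """
    stacked = np.stack(checkpoints)  # shape: (num_checkpoints, num_params)
    return stacked.mean(axis=0)      # parameters of the "second DP model"

# Hypothetical intermediate checkpoints from a DP training run
ckpts = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
aggregate = average_checkpoints(ckpts)  # → array([3., 4.])
```

Other aggregates (e.g., exponentially weighted moving averages of checkpoints) would also qualify as post-processing under the same reasoning.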

Potential Applications

This technology could be applied in industries where privacy and data security are crucial, such as healthcare, finance, and government.

Problems Solved

This technology addresses the challenge of maintaining privacy and confidentiality while training machine learning models on sensitive data.

Benefits

  • Ensures differential privacy throughout the training process
  • Allows for the creation of accurate models without compromising individual data privacy

Potential Commercial Applications

"Privacy-Preserving Machine Learning Models in Sensitive Industries"

Possible Prior Art

One possible prior art in this field is the work on differential privacy in machine learning, which focuses on developing techniques to protect sensitive data during model training.

Unanswered Questions

How does this method compare to existing techniques for ensuring differential privacy in machine learning models?

This article does not provide a direct comparison with existing techniques for ensuring differential privacy in machine learning models.

What are the potential limitations or drawbacks of using this method in practice?

This article does not address the potential limitations or drawbacks of using this method in practice.


Original Abstract Submitted

A method includes training a first differentially private (DP) model using a private training set, the private training set including a plurality of training samples, the first DP model satisfying a differential privacy budget, the differential privacy budget defining an amount of information about individual training samples of the private training set that may be revealed by the first DP model. The method also includes, while training the first DP model, generating a plurality of intermediate checkpoints, each intermediate checkpoint of the plurality of intermediate checkpoints representing a different intermediate state of the first DP model, each of the intermediate checkpoints satisfying the same differential privacy budget. The method further includes determining an aggregate of the first DP model and the plurality of intermediate checkpoints, and determining, using the aggregate, a second DP model, the second DP model satisfying the same differential privacy budget.
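To make the abstract's training-plus-checkpointing flow concrete, here is a toy DP-SGD-style loop that clips per-example gradients, adds Gaussian noise, and saves one checkpoint per epoch before aggregating. This is a simplified sketch under assumed hyperparameters; the noise scale is a placeholder and does not implement a calibrated privacy budget, and none of the function names come from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_with_checkpoints(X, y, epochs=5, lr=0.1, clip=1.0, noise_mult=1.0):
    """Illustrative DP-SGD loop: per-example gradient clipping plus
    Gaussian noise, recording one intermediate checkpoint per epoch."""
    w = np.zeros(X.shape[1])
    checkpoints = []
    for _ in range(epochs):
        grads = []
        for xi, yi in zip(X, y):
            g = 2.0 * (xi @ w - yi) * xi                  # per-example gradient
            g = g / max(1.0, np.linalg.norm(g) / clip)    # clip to norm <= clip
            grads.append(g)
        # Noisy average gradient (noise scale here is a toy placeholder)
        noisy = np.mean(grads, axis=0) + rng.normal(
            0.0, noise_mult * clip / len(X), size=w.shape)
        w = w - lr * noisy
        checkpoints.append(w.copy())                      # intermediate checkpoint
    return w, checkpoints

X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5])
final_w, ckpts = dp_sgd_with_checkpoints(X, y)

# Second DP model: aggregate of the first model and its checkpoints
second_model = np.mean(np.stack([final_w] + ckpts), axis=0)
```

Since every checkpoint is released under the same DP mechanism as the final model, the averaged `second_model` satisfies the same budget by post-processing, at no extra privacy cost.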