18497576. PRIVACY PRESERVING CENTROID MODELS USING SECURE MULTI-PARTY COMPUTATION simplified abstract (Google LLC)

From WikiPatents

PRIVACY PRESERVING CENTROID MODELS USING SECURE MULTI-PARTY COMPUTATION

Organization Name

Google LLC

Inventor(s)

Gang Wang of Frederick MD (US)

Marcel M. Moti Yung of New York NY (US)

PRIVACY PRESERVING CENTROID MODELS USING SECURE MULTI-PARTY COMPUTATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18497576 titled 'PRIVACY PRESERVING CENTROID MODELS USING SECURE MULTI-PARTY COMPUTATION'.

Simplified Explanation

The patent application describes a method for a privacy-preserving machine learning platform that identifies the user groups to which a user should be added, based on the user's profile data and the centroids of those groups.

  • The method involves receiving a request for user group identifiers, including a model identifier, user profile data, and a threshold distance.
  • Centroids for user groups corresponding to the model identifier are identified using a centroid model.
  • A user group result is determined from the user profile data, the centroids, and the threshold distance, indicating which user groups the user should be added to (a minimal sketch follows this list).
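The sketch below illustrates the group-selection step described above. The function and variable names, and the use of Euclidean distance, are assumptions for illustration; the application does not specify the exact distance computation.

  import math
  
  def select_user_groups(user_profile, group_centroids, threshold_distance):
      """Return identifiers of groups whose centroid lies within the
      threshold distance of the user's profile vector."""
      selected = []
      for group_id, centroid in group_centroids.items():
          distance = math.dist(user_profile, centroid)  # Euclidean distance
          if distance <= threshold_distance:
              selected.append(group_id)
      return selected
  
  # Hypothetical request contents: profile data and a threshold distance
  # for the centroids associated with one model identifier.
  centroids = {"group_a": [0.1, 0.9], "group_b": [0.8, 0.2]}
  result = select_user_groups([0.15, 0.85], centroids, threshold_distance=0.2)
  print(result)  # ['group_a']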

Potential Applications

This technology could be applied in various industries such as healthcare, finance, and marketing for targeted user group analysis and personalized services.

Problems Solved

This technology addresses privacy concerns by securely processing user data in a distributed manner without compromising individual user profiles.

Benefits

The platform ensures data privacy by using multi-party computation systems and centroid models to make accurate user group determinations while protecting sensitive information.

Potential Commercial Applications

Potential commercial applications include customer segmentation for targeted marketing campaigns, fraud detection in financial transactions, and personalized healthcare recommendations.

Possible Prior Art

One possible prior art could be the use of federated learning techniques in machine learning to protect user privacy while training models on decentralized data sources.

Unanswered Questions

1. How does the platform handle updates to user profiles and centroids over time?

2. What measures are in place to prevent unauthorized access to user group identifiers and profile data?


Original Abstract Submitted

This disclosure relates to a privacy preserving machine learning platform. In one aspect, a method includes receiving, from a client device and by a computing system of multiple multi-party computation (MPC) systems, a first request for user group identifiers that identify user groups to which to add a user. The first request includes a model identifier for a centroid model, first user profile data for a user profile of the user, and a threshold distance. For each user group in a set of user groups corresponding to the model identifier, a centroid for the user group that is determined using a centroid model corresponding to the model identifier is identified. The computing system determines a user group result based at least on the first user profile data, the centroids, and the threshold distance. The user group result is indicative of user group(s) to which to add the user.
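The abstract does not detail the cryptographic protocol used between the multiple MPC systems. One common building block for two-server MPC is additive secret sharing, in which each computing system receives only a random-looking share of the user profile. The sketch below shows that idea with hypothetical names; it is not presented as the patented protocol.

  import secrets
  
  PRIME = 2**61 - 1  # shares live in a finite field of this size
  
  def share(value):
      """Split an integer into two additive shares; neither share alone
      reveals the original value."""
      share_a = secrets.randbelow(PRIME)
      share_b = (value - share_a) % PRIME
      return share_a, share_b
  
  def reconstruct(share_a, share_b):
      """Combine both shares to recover the original value."""
      return (share_a + share_b) % PRIME
  
  # A client could split each user-profile feature and send one share to
  # each MPC computing system, so no single system sees the raw profile.
  profile_feature = 42
  a, b = share(profile_feature)
  assert reconstruct(a, b) == profile_feature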