International Business Machines Corporation (20240291633). VERIFICATION OF TRUSTWORTHINESS OF AGGREGATION SCHEME USED IN FEDERATED LEARNING simplified abstract

From WikiPatents
Revision as of 09:44, 5 September 2024 by Wikipatents (Creating a new page)

VERIFICATION OF TRUSTWORTHINESS OF AGGREGATION SCHEME USED IN FEDERATED LEARNING

Organization Name

International Business Machines Corporation

Inventor(s)

Giulio Zizzo of Dublin (IE)

Stefano Braghin of Dublin (IE)

Ambrish Rawat of Dublin (IE)

Mark Purcell of Naas (IE)

VERIFICATION OF TRUSTWORTHINESS OF AGGREGATION SCHEME USED IN FEDERATED LEARNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240291633 titled 'VERIFICATION OF TRUSTWORTHINESS OF AGGREGATION SCHEME USED IN FEDERATED LEARNING'.

Simplified Explanation:

This patent application describes a method for verifying the trustworthiness of the aggregation scheme used in federated learning. Each client sends an encrypted bit mask indicating which of its parameters are reflected in the global model; the masks are combined under an additively homomorphic encryption scheme, and the aggregator's trustworthiness is determined from the values in the resulting combined mask.

  • The method verifies the trustworthiness of the aggregation scheme used in federated learning.
  • Each client submits an encrypted bit mask.
  • The masks are combined using an additively homomorphic encryption scheme.
  • The aggregator is deemed trustworthy only if the combined mask contains the expected values at every position.
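The steps above can be sketched end to end. The sketch below is an illustration under stated assumptions, not the patented method: it uses a toy Paillier cryptosystem (tiny, deliberately insecure primes; a real system would use a hardened library) as the additively homomorphic scheme, and it reads "the combined mask contains only ones" as "every position decrypts to the number of clients", i.e. every client reported a 1 there.

```python
import random
from math import gcd

# Toy Paillier setup -- additively homomorphic: multiplying two
# ciphertexts yields an encryption of the SUM of their plaintexts.
# The primes are tiny and for illustration only, NOT secure.
p, q = 1789, 1847
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # modular inverse mod n

def encrypt(m: int) -> int:
    """Encrypt one bit-mask entry (an integer) under Paillier."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover the plaintext from a Paillier ciphertext."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def verify_aggregator(encrypted_masks: list) -> bool:
    """Homomorphically sum the clients' encrypted bit masks (ciphertext
    product = plaintext sum) and deem the aggregator trustworthy only if
    every position decrypts to the number of clients, meaning every
    client reported a 1 there."""
    num_clients = len(encrypted_masks)
    combined = [1] * len(encrypted_masks[0])
    for mask in encrypted_masks:
        combined = [(c * m) % n2 for c, m in zip(combined, mask)]
    return all(decrypt(c) == num_clients for c in combined)
```

Because ciphertext multiplication corresponds to plaintext addition in Paillier, the verifier can sum the bit masks without ever decrypting any individual client's mask.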

Key Features and Innovation:

  • Verification of trustworthiness in federated learning aggregation schemes.
  • Use of encrypted bit masks from clients.
  • Homomorphic encryption for combining masks.
  • Trustworthiness determination based on mask values.

Potential Applications:

  • Federated learning systems.
  • Secure machine learning collaborations.
  • Privacy-preserving data aggregation.

Problems Solved:

  • Ensuring trustworthiness in federated learning.
  • Secure data aggregation.
  • Privacy protection in collaborative machine learning.

Benefits:

  • Enhanced security in federated learning.
  • Improved trust in aggregation schemes.
  • Protection of sensitive data.

Commercial Applications:

Secure Federated Learning Aggregation Verification: Enhancing trust in collaborative machine learning systems for businesses and organizations.

Prior Art:

Prior research in homomorphic encryption and secure aggregation techniques in machine learning.

Frequently Updated Research:

Stay updated on advancements in homomorphic encryption and federated learning security protocols.

Questions about Federated Learning Aggregation Verification:

1. How does the method ensure the privacy of client data during the trustworthiness verification process?

2. What are the potential implications of using this verification method in large-scale machine learning collaborations?


Original Abstract Submitted

A computer-implemented method, system and computer program product for verifying the trustworthiness of an aggregation scheme utilized by an aggregator in the federated learning technique. A bit mask is received from each client used for training a machine learning algorithm using the federated learning technique. Such a bit mask contains values of ones and zeros, where a value of one indicates that the updated parameter of the global model corresponds to a parameter used by the local model trained on the client and a value of zero indicates that is not the case. These bit masks, which are encrypted, may then be combined using a homomorphic additive encryption scheme into a mask containing a matrix of values. If the mask contains a matrix of values of only the value of one, then the aggregator is deemed to be trustworthy. Otherwise, the aggregator is deemed to be untrustworthy.
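Under one plausible reading of the abstract, each client could build its bit mask by comparing the published global update against the parameters it expects its contribution to reflect. The elementwise comparison and the `tol` tolerance below are assumptions for illustration; the abstract does not specify how "corresponds" is tested.

```python
def build_bit_mask(global_params, local_params, tol=1e-9):
    """Return a bit mask with 1 where the aggregator's updated global
    parameter matches the parameter the local model expects, 0 where it
    does not. The tolerance-based elementwise comparison is an assumed
    interpretation of 'corresponds' in the abstract."""
    return [1 if abs(g - l) <= tol else 0
            for g, l in zip(global_params, local_params)]
```

A client that finds its update faithfully incorporated would produce an all-ones mask, e.g. `build_bit_mask([0.5, 1.0], [0.5, 1.0])` yields `[1, 1]`, while any mismatched position yields a 0.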