17937842. DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING simplified abstract (Dell Products L.P.)

From WikiPatents


Organization Name

Dell Products L.P.

Inventor(s)

Maira Beatriz Hernandez Moran of Rio de Janeiro (BR)

Paulo Abelha Ferreira of Rio de Janeiro (BR)

Pablo Nascimento Da Silva of Niteroi (BR)

DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING - A simplified explanation of the abstract

This abstract first appeared for US patent application 17937842 titled 'DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING'.

Simplified Explanation

The abstract describes a method for detecting and mitigating client isolation attacks in a federated machine-learning system:

  • Receive a global machine-learning model at a client node within a federation.
  • Determine, using a validation dataset, whether the model is trending toward an overfitted state.
  • If overfitting is detected, the client node leaves the federation, since the trend suggests a client isolation attack.
  • If no overfitting is detected, the client node trains the model on its local dataset to update it.
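The steps above can be sketched in code. This is a minimal illustrative sketch, not the patented implementation: the function names, the patience-based trend test, and the threshold values are all assumptions chosen to make the idea concrete (the patent does not specify how the overfitting trend is measured).

```python
# Hypothetical sketch of the client-side decision described above.
# A rising validation loss over several rounds is used here as a
# simple proxy for "trending toward an overfitted state"; this
# specific heuristic is an assumption, not taken from the patent.

def is_trending_overfitted(val_losses, patience=3, tol=1e-4):
    """Return True if validation loss rose for `patience` consecutive rounds."""
    if len(val_losses) <= patience:
        return False
    recent = val_losses[-(patience + 1):]
    return all(recent[i + 1] > recent[i] + tol for i in range(patience))

def handle_global_model(val_losses):
    """Decide the client's action for the current federation round."""
    if is_trending_overfitted(val_losses):
        # Overfitting trend: suspected client isolation attack, so exit.
        return "leave_federation"
    # No trend detected: train the global model on the local dataset.
    return "train_locally"
```

For example, a client whose validation loss climbs round after round would return "leave_federation", while one whose loss keeps falling would continue local training as usual.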

Potential Applications

This technology can be applied in secure federated machine learning systems to prevent client isolation attacks and ensure the integrity of the global model.

Problems Solved

1. Detection of client isolation attacks in federated machine learning systems.
2. Ensuring the security and reliability of global machine learning models.

Benefits

1. Improved security in federated machine learning environments.
2. Enhanced trust in global machine learning models.
3. Prevention of data poisoning attacks.

Potential Commercial Applications

Securing federated machine learning platforms for industries such as healthcare, finance, and e-commerce.

Possible Prior Art

Prior research has focused on detecting model poisoning attacks in federated learning systems, but specific methods for detecting and mitigating client isolation attacks may not have been extensively explored.

Unanswered Questions

How does the method handle false positives in detecting overfitting?

The abstract does not mention how the system distinguishes between actual overfitting and false alarms, which could lead to unnecessary client node exits.

What impact does client node departure have on the overall performance of the federated learning system?

It is unclear how the departure of client nodes due to suspected client isolation attacks affects the training process and performance of the global model.


Original Abstract Submitted

One example method includes receiving at a client node of a federation a global machine-learning model that is to be trained by the client node using a training dataset that is local to the client node. In response to receiving the global machine-learning model, determining at the client node if the global machine-learning model is trending toward an overfitted state using a validation dataset. The overfitted state indicates that the global machine-learning model has not been received from a server that is part of the federation because of a client isolation attack. In response to determining that the global machine-learning model is trending towards the overfitting state, causing the client node to leave the federation. In response to determining that the global machine-learning model is not trending towards the overfitted state, training the global machine-learning model using the training dataset to thereby update the global machine-learning model.