Dell Products L.P. (20240111903): DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING (simplified abstract)


DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING

Organization Name

Dell Products L.P.

Inventor(s)

Maira Beatriz Hernandez Moran of Rio de Janeiro (BR)

Paulo Abelha Ferreira of Rio de Janeiro (BR)

Pablo Nascimento Da Silva of Niteroi (BR)

DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240111903, titled 'DETECTING CLIENT ISOLATION ATTACKS IN FEDERATED LEARNING THROUGH OVERFITTING MONITORING.'

Simplified Explanation

The abstract describes a method for detecting and mitigating client isolation attacks in a federated machine learning system. Here is a simplified explanation of the abstract:

  • The client node receives a global machine-learning model from the federation.
  • Using a validation dataset, the client node determines whether the model is trending toward overfitting; such a trend suggests the model was not received from a legitimate federation server and may indicate a client isolation attack.
  • If an overfitting trend is detected, the client node leaves the federation.
  • Otherwise, the client node trains the model on its local dataset to update it.
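The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the patented method: all names (`is_overfitting_trend`, `handle_global_model`) and the specific trend test (validation loss rising over several consecutive rounds) are assumptions made for the example.

```python
# Illustrative sketch of the client-side flow; names and the
# trend heuristic are hypothetical, not taken from the patent.

def is_overfitting_trend(val_losses, window=3):
    """Flag a model whose validation loss has risen for `window`
    consecutive rounds -- one simple proxy for an overfitting trend."""
    if len(val_losses) < window + 1:
        return False
    recent = val_losses[-(window + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

def handle_global_model(val_losses, train_fn):
    """Return 'leave' if the received global model trends toward
    overfitting (possible client isolation attack); otherwise train
    it on the local dataset and return 'trained'."""
    if is_overfitting_trend(val_losses):
        return "leave"   # suspected isolation attack: exit the federation
    train_fn()           # update the global model with local data
    return "trained"

# Example: validation loss rose in each of the last three rounds -> leave.
action = handle_global_model([0.9, 0.8, 0.85, 0.9, 0.95], lambda: None)
```

Here `val_losses` stands in for the per-round validation metric the client tracks; in practice it would be computed from the held-out validation dataset the abstract mentions.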

---

1. Potential Applications

This technology could be applied in secure federated machine learning systems to prevent client isolation attacks and ensure the integrity of the global model.

2. Problems Solved

This technology addresses the issue of client isolation attacks in federated machine learning systems, which can compromise the integrity of the global model and lead to inaccurate predictions.

3. Benefits

  • Enhanced security in federated machine learning systems.
  • Improved accuracy of global machine-learning models.
  • Protection against malicious attacks targeting client nodes.

4. Potential Commercial Applications

Companies that need to protect federated machine-learning deployments from attacks could apply this technology commercially, for example as an "Enhancing Security in Federated Machine Learning Systems" offering.

5. Possible Prior Art

One possible prior art could be research papers or patents related to federated learning security mechanisms or techniques for detecting overfitting in machine learning models.

---

Unanswered Questions

1. How does the method determine if the global machine-learning model is trending toward overfitting?

The method likely uses metrics such as validation loss or accuracy on a separate dataset to assess the generalization performance of the model.
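One such signal can be sketched concretely. The gap-based check below is an assumption for illustration (the patent does not specify the metric or threshold): a large gap between training and validation accuracy is a common overfitting indicator.

```python
# Hypothetical overfitting signal: the train/validation accuracy gap.
# The 0.15 threshold is illustrative, not from the patent.

def generalization_gap(train_acc, val_acc):
    """How much better the model does on training data than on the
    held-out validation set."""
    return train_acc - val_acc

def looks_overfitted(train_acc, val_acc, gap_threshold=0.15):
    """Flag the model when the generalization gap exceeds a threshold."""
    return generalization_gap(train_acc, val_acc) > gap_threshold
```

For instance, a model scoring 0.99 on training data but only 0.70 on the validation set would be flagged, while a 0.90/0.85 split would not.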

2. What measures are taken to ensure the client node's departure from the federation does not disrupt the overall training process?

The method may involve redistributing the training workload to other nodes in the federation to compensate for the departing client node.


Original Abstract Submitted

One example method includes receiving at a client node of a federation a global machine-learning model that is to be trained by the client node using a training dataset that is local to the client node. In response to receiving the global machine-learning model, determining at the client node if the global machine-learning model is trending toward an overfitted state using a validation dataset. The overfitted state indicates that the global machine-learning model has not been received from a server that is part of the federation because of a client isolation attack. In response to determining that the global machine-learning model is trending towards the overfitting state, causing the client node to leave the federation. In response to determining that the global machine-learning model is not trending towards the overfitted state, training the global machine-learning model using the training dataset to thereby update the global machine-learning model.