18045253. MITIGATING THE INFLUENCE OF BIASED TRAINING INSTANCES WITHOUT REFITTING simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)

From WikiPatents

MITIGATING THE INFLUENCE OF BIASED TRAINING INSTANCES WITHOUT REFITTING

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Prasanna Sattigeri of Acton MA (US)

Soumya Ghosh of Boston MA (US)

Inkit Padhi of White Plains NY (US)

Pierre L. Dognin of White Plains NY (US)

Kush Raj Varshney of Chappaqua NY (US)

MITIGATING THE INFLUENCE OF BIASED TRAINING INSTANCES WITHOUT REFITTING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18045253, titled 'MITIGATING THE INFLUENCE OF BIASED TRAINING INSTANCES WITHOUT REFITTING'.

Simplified Explanation

The abstract describes a system and method for mitigating the influence of biased training instances on a machine learning model without refitting the model.

  • The system includes a memory storing computer executable components and a processor executing these components.
  • The components consist of a training data influence estimation component and an influence mitigation component.
  • The training data influence estimation component calculates a fairness influence score of training instances on group fairness metrics associated with a pre-trained machine learning model.
  • The influence mitigation component performs post-hoc unfairness mitigation by removing the effect of biased training instances based on the fairness influence score.
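The components above can be illustrated with a small, hypothetical sketch. The patent abstract does not specify how the fairness influence score is computed; the code below assumes a classical influence-function estimate, I(z_i) = -∇f(θ)ᵀ H⁻¹ ∇loss(z_i, θ), applied to a demographic-parity gap f for a logistic-regression model. All function names, the choice of metric, and the model are illustrative assumptions, not the patented method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, l2=1e-2, steps=500, lr=0.5):
    """Plain gradient-descent fit; stands in for any pre-trained model."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / n + l2 * w)
    return w

def fairness_influence_scores(X, y, groups, w, l2=1e-2):
    """Score each instance's influence on the demographic-parity gap
    f(w) = mean score of group 1 - mean score of group 0,
    via the influence-function formula I(z_i) = -grad_f^T H^{-1} grad_i."""
    n, d = X.shape
    p = sigmoid(X @ w)
    s = p * (1 - p)  # sigmoid derivative at each instance
    # Gradient of the fairness metric with respect to the parameters.
    grad_f = (X[groups == 1] * s[groups == 1, None]).mean(axis=0) \
           - (X[groups == 0] * s[groups == 0, None]).mean(axis=0)
    # Hessian of the L2-regularized logistic loss.
    H = (X * s[:, None]).T @ X / n + l2 * np.eye(d)
    v = np.linalg.solve(H, grad_f)        # H^{-1} grad_f
    per_grad = X * (p - y)[:, None]       # per-instance loss gradients
    return -per_grad @ v                  # one influence score per instance
```

Instances with large positive scores are the ones whose presence most worsens the chosen group-fairness metric, so they are natural candidates for the mitigation step.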

Potential Applications

This technology could be applied in various fields where machine learning models are used, such as finance, healthcare, and marketing, to ensure fair and unbiased decision-making processes.

Problems Solved

This innovation addresses the issue of biased training data affecting the performance of machine learning models, leading to unfair outcomes and potential discrimination.

Benefits

The system mitigates the influence of biased training instances without retraining the entire machine learning model, saving time and compute while improving fairness.
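To see why no retraining is needed, consider the standard leave-out approximation from the influence-function literature: removing a set of instances can be approximated by a single Newton-style parameter correction, w_new ≈ w + (1/n) H⁻¹ Σ ∇loss(z_i, w) over the removed instances. The sketch below applies this to a logistic-regression model; it is an illustrative assumption, not the patented procedure, and all names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def downweight_instances(X, y, w, flagged, l2=1e-2):
    """Return parameters approximating a refit with `flagged` removed:
    w_new ~= w + (1/n) H^{-1} sum_{i in flagged} grad_loss(z_i, w).
    One linear solve replaces a full retraining pass."""
    n, d = X.shape
    p = sigmoid(X @ w)
    s = p * (1 - p)
    H = (X * s[:, None]).T @ X / n + l2 * np.eye(d)  # loss Hessian
    g = (X[flagged] * (p[flagged] - y[flagged])[:, None]).sum(axis=0)
    return w + np.linalg.solve(H, g) / n
```

In practice the `flagged` indices would be the instances with the worst fairness influence scores; the correction costs one Hessian solve rather than a full optimization run.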

Potential Commercial Applications

A potential commercial application of this technology could be in the development of AI-driven tools for recruitment processes, where unbiased decision-making is crucial to ensure equal opportunities for all candidates.

Possible Prior Art

Possible prior art includes fairness-aware machine learning algorithms that mitigate bias in training data to improve the fairness of predictive models.

Unanswered Questions

How does the system handle complex interactions between multiple biased training instances in the dataset?

The abstract does not provide details on how the system addresses complex interactions between multiple biased training instances and whether it can effectively mitigate their combined influence on the machine learning model.

What are the potential limitations or challenges of implementing this system in real-world applications?

The abstract does not mention any potential limitations or challenges that may arise when implementing this system in real-world applications, such as scalability issues, computational complexity, or the need for domain-specific adjustments.


Original Abstract Submitted

One or more systems, devices, computer program products and/or computer implemented methods of use provided herein relate to a process of mitigating biased training instances associated with a machine learning model without additional refitting of the machine learning model. A system can comprise a memory that stores computer executable components, and a processor that executed the computer executable components stored in the memory, wherein the computer executable components can comprise a training data influence estimation component and an influence mitigation component. The training data influence estimation component can receive a pre-trained machine learning model and calculate a fairness influence score of training instances on group fairness metrics associated with the pre-trained machine learning model. The influence mitigation component can perform post-hoc unfairness mitigation by removing the effect of at least one training instance based on the fairness influence score to mitigate biased training instances without refitting the pre-trained machine learning model.