18274526. METHODS, APPARATUS AND MACHINE-READABLE MEDIUMS RELATING TO MACHINE LEARNING MODELS simplified abstract (Telefonaktiebolaget LM Ericsson (publ))


METHODS, APPARATUS AND MACHINE-READABLE MEDIUMS RELATING TO MACHINE LEARNING MODELS

Organization Name

Telefonaktiebolaget LM Ericsson (publ)

Inventor(s)

Konstantinos Vandikas of Solna (SE)

Aneta Vulgarakis Feljan of Stockholm (SE)

Athanasios Karapantelakis of Solna (SE)

Marin Orlic of Bromma (SE)

Selim Ickin of Stocksund (SE)

METHODS, APPARATUS AND MACHINE-READABLE MEDIUMS RELATING TO MACHINE LEARNING MODELS - A simplified explanation of the abstract

This abstract first appeared for US patent application 18274526 titled 'METHODS, APPARATUS AND MACHINE-READABLE MEDIUMS RELATING TO MACHINE LEARNING MODELS'.

Simplified Explanation

The patent application abstract describes a method for determining whether a remote machine learning model is biased: a local model is trained to approximate the remote model's input-output behaviour, and the local model is then interrogated for bias in its place.

  • The method involves forming a training dataset from input data samples provided to the remote machine learning model and the corresponding output data samples it returns.
  • A local machine learning model is trained on this dataset so that it approximates the remote model.
  • The trained local model is then interrogated to determine whether the remote model is biased with respect to specific data parameters (a code sketch of these steps follows this list).
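As a rough illustration of the first two steps, the Python sketch below queries a remote model to build a training dataset and fits a local surrogate to it. The query_remote_model function, the synthetic probe inputs, and the choice of a shallow decision tree as the local model are assumptions made for the sake of a runnable example; the patent application does not prescribe any particular interface, data, or learning algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def query_remote_model(inputs: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the remote model's prediction interface.

    In practice this might be an HTTP call to a hosted model; here it is
    only a placeholder so that the sketch is self-contained.
    """
    # Placeholder rule: the label depends on the last feature (a stand-in
    # for whatever the remote model actually computes).
    return (inputs[:, -1] > 0.5).astype(int)


# Step 1: form a training dataset of (input, output) pairs by sending
# probe inputs to the remote model and recording its responses.
rng = np.random.default_rng(0)
X_probe = rng.random((1000, 4))          # probe inputs (4 features, assumed)
y_remote = query_remote_model(X_probe)   # corresponding remote outputs

# Step 2: train a local model that approximates the remote model's
# input-output behaviour on the collected dataset.
local_model = DecisionTreeClassifier(max_depth=5, random_state=0)
local_model.fit(X_probe, y_remote)

# Sanity check: how closely does the local model reproduce the remote
# model's outputs on fresh probe inputs?
X_check = rng.random((200, 4))
agreement = (local_model.predict(X_check) == query_remote_model(X_check)).mean()
print(f"local/remote agreement: {agreement:.2%}")
```

An interpretable surrogate such as a shallow tree can make the later interrogation step easier, but any model class that approximates the remote model's behaviour would fit the description in the abstract.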

Potential Applications

This technology could be applied in various industries where machine learning models are used, such as finance, healthcare, and marketing, to ensure fairness and accuracy in decision-making processes.

Problems Solved

This technology addresses the issue of bias in machine learning models, which can lead to unfair outcomes and inaccurate predictions. By helping to identify such bias so that it can be mitigated, the reliability and trustworthiness of these models can be improved.

Benefits

The benefits of this technology include increased transparency and accountability in machine learning systems, leading to more ethical and equitable decision-making. It also helps in improving the overall performance and reliability of machine learning models.

Potential Commercial Applications

One potential commercial application of this technology could be in the development of AI-powered tools for recruitment and hiring processes, where bias detection and mitigation are crucial for ensuring equal opportunities for all candidates.

Possible Prior Art

Bias detection and mitigation in machine learning models is an active research area, and a variety of techniques has been proposed. However, the specific method described in this patent application, approximating a remote model with a locally trained surrogate and interrogating that surrogate, may offer a novel and effective way to tackle bias in machine learning models.

Unanswered Questions

How does this method compare to existing bias detection techniques in machine learning models?

This article does not provide a direct comparison with other bias detection techniques, leaving the reader wondering about the unique advantages and limitations of this particular method.

What are the potential limitations or challenges in implementing this method in real-world applications?

The article does not address the practical considerations or obstacles that may arise when applying this method in different industries or settings, leaving room for further exploration and discussion.


Original Abstract Submitted

A method is provided for determining bias of machine learning models. The method includes: forming a training dataset including input data samples provided to a remote machine learning model developed using a machine learning process, and corresponding output data samples obtained from the remote machine learning model; training a local machine learning model which approximates the remote machine learning model using a machine learning process and the training dataset; and interrogating the trained local machine learning model to determine whether the remote machine learning model is biased with respect to one or more biasing data parameters.
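The interrogation step in the abstract above is not tied to a specific test. One plausible, hypothetical realisation is a counterfactual-flip check on a suspected biasing data parameter: toggle that parameter while holding all other inputs fixed and measure how often the surrogate's prediction changes, optionally alongside group-wise prediction rates. The synthetic data, the binary group column, and the flip test itself are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assumed setup: a local surrogate already trained on probe data in which
# column 0 is a binary "group" attribute suspected of being a biasing
# data parameter. The data-generating rule below is synthetic and only
# serves to make the sketch runnable end to end.
rng = np.random.default_rng(1)
X = rng.random((1000, 4))
X[:, 0] = rng.integers(0, 2, size=1000)                      # binary group attribute
y_remote = ((X[:, -1] > 0.5) | (X[:, 0] == 1)).astype(int)   # deliberately biased rule

local_model = DecisionTreeClassifier(max_depth=5, random_state=0)
local_model.fit(X, y_remote)

# Counterfactual-flip interrogation: toggle the group attribute and see
# how often the surrogate's prediction changes.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
changed = (local_model.predict(X) != local_model.predict(X_flipped)).mean()
print(f"predictions changed by flipping 'group': {changed:.2%}")

# Group-wise positive rates offer a complementary view of the same bias.
preds = local_model.predict(X)
for g in (0, 1):
    rate = preds[X[:, 0] == g].mean()
    print(f"positive-prediction rate for group {g}: {rate:.2%}")
```

A large prediction shift from the flip, or a large gap between group-wise positive rates, would suggest that the remote model is biased with respect to that parameter, which is the determination the claimed method sets out to make.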