International Business Machines Corporation (20240161015). RANKING MACHINE LEARNING PIPELINES USING JOINT COMPUTATIONS simplified abstract


RANKING MACHINE LEARNING PIPELINES USING JOINT COMPUTATIONS

Organization Name

International Business Machines Corporation

Inventor(s)

Dhavalkumar C. Patel of White Plains NY (US)

Srideepika Jayaraman of White Plains NY (US)

Shuxin Lin of White Plains NY (US)

Anuradha Bhamidipaty of Yorktown Heights NY (US)

Jayant R. Kalagnanam of Briarcliff Manor NY (US)

RANKING MACHINE LEARNING PIPELINES USING JOINT COMPUTATIONS - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240161015, titled 'RANKING MACHINE LEARNING PIPELINES USING JOINT COMPUTATIONS'.

Simplified Explanation

The abstract describes systems and methods for optimizing and training machine learning (ML) models. A group of ML pipelines is executed together using a subset of the training data, producing a trained ML model for each pipeline; performance metrics are then generated for each model, the models are ranked by those metrics, and the ranked list is output to a user. A minimal sketch of this flow follows the list below.

  • Common data transformations are implemented only once and shared between ML pipelines during group execution.
  • Performance metrics are generated for each trained ML model based on validation data.
  • Trained ML models are ranked based on performance metrics to generate a list of ranked models.
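The following sketch illustrates that flow under stated assumptions; it is not the patent's implementation. It assumes scikit-learn, a single StandardScaler standing in for the common data transformations, accuracy as the performance metric, and three off-the-shelf classifiers standing in for the distinct tails of the ML pipelines.

# Minimal sketch (assumptions noted above): the shared transformation is fitted
# and applied once, each candidate model is trained on the transformed subset,
# scored on held-out validation data, and the models are ranked by that score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Common data transformation: computed only once and shared between pipelines.
scaler = StandardScaler().fit(X_train)
X_train_t, X_val_t = scaler.transform(X_train), scaler.transform(X_val)

# Candidate models standing in for the pipeline-specific steps.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Group execution: train each candidate on the shared output and record a
# validation metric (accuracy here, as an assumption).
results = []
for name, model in candidates.items():
    model.fit(X_train_t, y_train)
    results.append((name, model.score(X_val_t, y_val)))

# Rank the trained models by the metric and output the ranked list.
ranked = sorted(results, key=lambda r: r[1], reverse=True)
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: validation accuracy = {score:.3f}")

Because the scaler is fitted and applied only once, each additional candidate pipeline pays only for its own model training, which is the efficiency the joint computation targets.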

Potential Applications

This technology could be applied in industries such as healthcare, finance, and marketing to optimize machine learning models and improve their performance.

Problems Solved

This technology addresses the inefficiency of training and optimizing many machine learning pipelines independently: common data transformations are computed once and shared across pipelines, avoiding redundant work, and performance metrics are generated so the resulting models can be ranked effectively.

Benefits

The benefits of this technology include improved efficiency in training ML models, easier selection of the best-performing model through metric-based ranking, and a more streamlined process for optimizing machine learning pipelines.

Potential Commercial Applications

One potential commercial application of this technology is the development of software tools that help data scientists and machine learning engineers train and optimize ML models effectively.

Possible Prior Art

One possible piece of prior art for this technology is the use of ensemble learning techniques that combine multiple ML models for improved performance.

Unanswered Questions

How does this technology compare to existing methods for optimizing machine learning models?

This article does not provide a direct comparison to existing methods for optimizing machine learning models.

What are the specific performance metrics used to rank the trained ML models?

The article does not specify the exact performance metrics used to rank the trained ML models.


Original Abstract Submitted

Systems and methods for optimizing and training machine learning (ML) models are provided. In embodiments, a computer implemented method includes: performing, by a processor set, a group execution of ML pipelines using a first subset of a training data set as input data for the ML pipelines, thereby generating a trained ML model for each of the ML pipelines, wherein data transformations that are common between the ML pipelines are implemented only once to generate an output, and the output is shared between the ML pipelines during the group execution of the ML pipelines; generating, by the processor set, performance metrics for each of the trained ML models based on validation data; ranking, by the processor set, the trained ML models based on the performance metrics, thereby generating a list of ranked ML models; and outputting, by the processor set, the list of ranked ML models to a user.
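To illustrate the "implemented only once ... and shared" clause of the abstract, here is a hypothetical sketch of a group execution that caches transformation outputs keyed by the prefix of transform names, so a transformation common to several pipelines runs a single time. The pipeline descriptions, the stateless scaling function, and accuracy as the metric are assumptions made for the example, not the patent's actual interfaces.

# Hypothetical group execution with joint computation of shared transforms.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def group_execute(pipelines, X_train, y_train, X_val, y_val, metric):
    """Run a group of pipelines, caching outputs of shared transform prefixes
    so each common data transformation is executed only once."""
    cache, ranked = {}, []
    for name, transforms, model in pipelines:
        tr, va, prefix = X_train, X_val, ()
        for t_name, t_fn in transforms:
            prefix += (t_name,)
            if prefix not in cache:            # shared transform: run once, reuse output
                cache[prefix] = (t_fn(tr), t_fn(va))
            tr, va = cache[prefix]
        model.fit(tr, y_train)                 # pipeline-specific model training
        ranked.append((name, metric(y_val, model.predict(va))))
    ranked.sort(key=lambda item: item[1], reverse=True)
    return ranked                              # (pipeline name, validation score), best first

# Two pipelines sharing the same (illustrative, stateless) scaling transform.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
scale = lambda A: (A - A.mean(axis=0)) / A.std(axis=0)
pipelines = [
    ("scaled-logreg", [("scale", scale)], LogisticRegression(max_iter=1000)),
    ("scaled-tree",   [("scale", scale)], DecisionTreeClassifier(random_state=0)),
]
print(group_execute(pipelines, X_tr, y_tr, X_va, y_va, accuracy_score))

Keying the cache on the sequence of transform names is just one possible way to detect sharing; the essential point is that the second pipeline reuses the cached output of the scaling step instead of recomputing it.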