18197224. OBJECTIVE FUNCTION OPTIMIZATION IN TARGET BASED HYPERPARAMETER TUNING simplified abstract (Oracle International Corporation)

From WikiPatents

OBJECTIVE FUNCTION OPTIMIZATION IN TARGET BASED HYPERPARAMETER TUNING

Organization Name

Oracle International Corporation

Inventor(s)

Ying Xu of Albion (AU)

Vladislav Blinov of Melbourne (AU)

Ahmed Ataallah Ataallah Abobakr of Geelong (AU)

Thanh Long Duong of Melbourne (AU)

Mark Edward Johnson of Sydney (AU)

Elias Luqman Jalaluddin of Seattle WA (US)

Xin Xu of San Jose CA (US)

Srinivasa Phani Kumar Gadde of Fremont CA (US)

Vishal Vishnoi of Redwood City CA (US)

Poorya Zaremoodi of Melbourne (AU)

Umanga Bista of Southbank (AU)

OBJECTIVE FUNCTION OPTIMIZATION IN TARGET BASED HYPERPARAMETER TUNING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18197224, titled 'OBJECTIVE FUNCTION OPTIMIZATION IN TARGET BASED HYPERPARAMETER TUNING'.

Simplified Explanation

The abstract describes a method for optimizing objective functions in hyperparameter tuning for machine learning algorithms. Here is a simplified explanation of the patent application:

  • Initializing a machine learning algorithm with a set of hyperparameter values
  • Obtaining a hyperparameter objective function whose per-domain scores are based on the number of correctly and incorrectly predicted instances in an evaluation dataset
  • For each trial, training the algorithm to produce a model, running that model in different domains, and evaluating its performance in each domain
  • Outputting at least one machine learning model once convergence is reached
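The steps above can be sketched as a simple tuning loop. This is an illustrative reconstruction, not the patented implementation: the function names (`tune`, `domain_score`), the random sampling of hyperparameters, the mean aggregation of domain scores, and the convergence test are all assumptions made for the example.

```python
import random

def domain_score(model, eval_data):
    """Fraction of a domain's evaluation instances the model predicts correctly."""
    correct = sum(1 for x, y in eval_data if model(x) == y)
    return correct / len(eval_data)

def tune(train_fn, domains, hyperparam_grid, n_trials=10, tol=1e-3):
    """Sketch of a trial-based tuning loop: train, score per domain,
    aggregate the domain scores, and stop when the objective converges."""
    best_model, best_obj = None, float("-inf")
    prev_obj = None
    for _ in range(n_trials):
        # Sample one hyperparameter value per name from the grid.
        hp = {k: random.choice(v) for k, v in hyperparam_grid.items()}
        model = train_fn(hp)  # train the algorithm with these hyperparameters
        # Evaluate the resulting model separately in each domain.
        scores = {d: domain_score(model, data) for d, data in domains.items()}
        objective = sum(scores.values()) / len(scores)  # mean over domains
        if objective > best_obj:
            best_model, best_obj = model, objective
        # Stop once successive trials no longer move the objective.
        if prev_obj is not None and abs(objective - prev_obj) < tol:
            break
        prev_obj = objective
    return best_model, best_obj
```

Here `train_fn` stands in for any training routine that maps a hyperparameter dictionary to a trained model; the aggregation could equally be a weighted sum or a distance to per-domain target scores.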

Potential Applications

This technology can be applied in various fields such as finance, healthcare, marketing, and more where machine learning models are used for prediction and classification tasks.

Problems Solved

1. Efficient hyperparameter tuning process for improving machine learning model performance.
2. Objective function optimization for better accuracy and generalization of machine learning models.

Benefits

1. Improved accuracy and performance of machine learning models.
2. Automated hyperparameter tuning process saves time and resources.
3. Enhanced generalization and robustness of machine learning models.

Potential Commercial Applications

Optimizing hyperparameters in machine learning models can benefit companies in industries such as e-commerce, cybersecurity, and autonomous vehicles by improving prediction accuracy and efficiency.

Possible Prior Art

One possible prior art could be the use of grid search or random search algorithms for hyperparameter tuning in machine learning models. Another could be the use of cross-validation techniques to evaluate model performance.
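For context, grid search exhaustively evaluates every combination of hyperparameter values. The sketch below is a generic illustration of that prior-art technique, not anything from the patent; `train_fn` and `score_fn` are hypothetical placeholders for a training routine and an evaluation metric.

```python
from itertools import product

def grid_search(train_fn, score_fn, grid):
    """Exhaustive grid search: train and score every combination of
    hyperparameter values, returning the best combination found."""
    best_hp, best_score = None, float("-inf")
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        hp = dict(zip(keys, values))       # one point on the grid
        score = score_fn(train_fn(hp))     # train, then evaluate
        if score > best_score:
            best_hp, best_score = hp, score
    return best_hp, best_score
```

Random search differs only in sampling points from the grid instead of enumerating all of them, which scales better when many hyperparameters matter little.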

Unanswered Questions

How does this method compare to existing hyperparameter tuning techniques?

This article does not provide a direct comparison with other hyperparameter tuning methods, leaving the reader to wonder about the specific advantages and disadvantages of this approach compared to traditional techniques.

What are the potential limitations or challenges of implementing this method in real-world scenarios?

The article does not address any potential obstacles or difficulties that may arise when applying this method in practical settings, leaving readers curious about the feasibility and scalability of the proposed approach.


Original Abstract Submitted

Techniques are disclosed herein for objective function optimization in target based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain that is calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. For each trial of a hyperparameter tuning process: training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain, and once the machine learning model has reached convergence, outputting at least one machine learning model.
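The abstract's objective function comprises a domain score per domain, calculated from the number of correctly or incorrectly predicted instances. A minimal sketch of that idea, assuming per-domain accuracy as the score and a mean across domains as the aggregate (the patent does not specify either choice):

```python
from collections import defaultdict

def hyperparameter_objective(predictions, labels, domain_tags):
    """Compute a score per domain from correct/incorrect prediction counts,
    then aggregate the domain scores into a single objective value."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, dom in zip(predictions, labels, domain_tags):
        total[dom] += 1
        if pred == label:
            correct[dom] += 1
    # Domain score: fraction of that domain's instances predicted correctly.
    scores = {d: correct[d] / total[d] for d in total}
    # Aggregate objective: here, the unweighted mean over domains.
    return scores, sum(scores.values()) / len(scores)
```

Tagging each evaluation instance with its domain keeps a single model's per-domain behavior visible to the tuner, so a hyperparameter setting that helps one domain at the expense of another is penalized in the aggregate.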