18060104. SYSTEM AND METHOD FOR MANAGING INFERENCE MODELS BASED ON INFERENCE GENERATION FREQUENCIES simplified abstract (Dell Products L.P.)



Organization Name

Dell Products L.P.

Inventor(s)

Ofir Ezrielev of Beer Sheva (IL)

Jehuda Shemer of Kfar Saba (IL)

Tomer Kushnir of Omer (IL)

SYSTEM AND METHOD FOR MANAGING INFERENCE MODELS BASED ON INFERENCE GENERATION FREQUENCIES - A simplified explanation of the abstract

This abstract first appeared for US patent application 18060104 titled 'SYSTEM AND METHOD FOR MANAGING INFERENCE MODELS BASED ON INFERENCE GENERATION FREQUENCIES'.

Simplified Explanation

The abstract describes methods and systems for managing the execution of an inference model hosted by data processing systems. The system includes an inference model manager and any number of data processing systems. The manager identifies the model's inference frequency capability and determines whether it meets a downstream consumer's requirement over a future period of time. If it does not, the manager modifies the model's deployment so that the consumer's requirement is met.

  • Inference model manager: Manages the execution of the inference model hosted by data processing systems.
  • Data processing systems: Host the inference model and are managed by the inference model manager.
  • Inference frequency capability: The ability of the inference model to perform inferences at a certain frequency.
  • Downstream consumer: The entity that requires inferences from the model.
  • Deployment modification: Adjusting the deployment of the inference model to meet the frequency requirement of the downstream consumer.
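The management flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's actual implementation: all names (`Deployment`, `InferenceModelManager`, `ensure_frequency`) and the scaling strategy (adding hosting systems until aggregate frequency suffices) are assumptions introduced here for clarity.

```python
import math
from dataclasses import dataclass


@dataclass
class Deployment:
    """A deployment of an inference model across data processing systems.

    Illustrative assumption: aggregate capability scales linearly with
    the number of hosting systems.
    """
    num_systems: int
    inferences_per_system_hz: float

    @property
    def frequency_capability_hz(self) -> float:
        # Aggregate inference frequency across all hosting systems.
        return self.num_systems * self.inferences_per_system_hz


class InferenceModelManager:
    """Compares a deployment's inference frequency capability against a
    downstream consumer's requirement and modifies the deployment if it
    falls short (hypothetical sketch of the described manager)."""

    def ensure_frequency(self, deployment: Deployment,
                         required_hz: float) -> Deployment:
        if deployment.frequency_capability_hz >= required_hz:
            # Capability already meets the consumer's requirement.
            return deployment
        # Deployment modification: add hosting systems until the
        # aggregate inference frequency meets the requirement.
        needed = math.ceil(required_hz / deployment.inferences_per_system_hz)
        return Deployment(num_systems=needed,
                          inferences_per_system_hz=deployment.inferences_per_system_hz)


# Example: two systems at 10 inferences/s cannot serve a consumer that
# requires 50 inferences/s, so the manager scales the deployment to five.
d = InferenceModelManager().ensure_frequency(Deployment(2, 10.0), 50.0)
print(d.num_systems)  # → 5
```

The key decision point mirrors the abstract: a capability check against the consumer's requirement, followed by a deployment modification only when the check fails.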

Potential Applications

This technology can be applied in various industries such as healthcare, finance, and e-commerce for real-time decision-making based on data analysis.

Problems Solved

1. Ensures that the inference model meets the frequency requirements of downstream consumers.
2. Optimizes the deployment of the inference model for efficient performance.

Benefits

1. Improved accuracy and timeliness of inferences.
2. Enhanced decision-making capabilities based on real-time data analysis.

Potential Commercial Applications

Optimizing inference model deployments for industries such as healthcare, finance, and e-commerce to improve decision-making processes.

Possible Prior Art

Prior art may include systems for managing the execution of machine learning models in data processing environments, but specific methods for modifying deployments based on frequency requirements may be novel.

Unanswered Questions

How does this technology impact the scalability of data processing systems?

This article does not address how the modification of inference model deployments based on frequency requirements affects the scalability of data processing systems.

What are the potential security implications of modifying inference model deployments?

The article does not discuss the security implications of modifying deployments of inference models to meet the requirements of downstream consumers.


Original Abstract Submitted

Methods and systems for managing execution of an inference model hosted by data processing systems are disclosed. To manage execution of the inference model, a system may include an inference model manager and any number of data processing systems. The inference model manager may identify an inference frequency capability of the inference model hosted by the data processing systems and may determine whether the inference frequency capability of the inference model meets an inference frequency requirement of a downstream consumer during a future period of time. If the inference frequency capability does not meet the inference frequency requirement of the downstream consumer, the inference model manager may modify a deployment of the first inference model to meet the inference frequency requirement of the downstream consumer.