18060130. SYSTEM AND METHOD FOR EXECUTING MULTIPLE INFERENCE MODELS USING INFERENCE MODEL PRIORITIZATION simplified abstract (Dell Products L.P.)

SYSTEM AND METHOD FOR EXECUTING MULTIPLE INFERENCE MODELS USING INFERENCE MODEL PRIORITIZATION

Organization Name

Dell Products L.P.

Inventor(s)

Ofir Ezrielev of Beer Sheva (IL)

Jehuda Shemer of Kfar Saba (IL)

Tomer Kushnir of Omer (IL)

SYSTEM AND METHOD FOR EXECUTING MULTIPLE INFERENCE MODELS USING INFERENCE MODEL PRIORITIZATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 18060130, titled 'SYSTEM AND METHOD FOR EXECUTING MULTIPLE INFERENCE MODELS USING INFERENCE MODEL PRIORITIZATION'.

Simplified Explanation

The patent application describes methods and systems for managing the execution of inference models across multiple data processing systems. Here is a simplified explanation of the abstract:

  • An inference model manager and multiple data processing systems work together to ensure timely execution of inference models.
  • The manager obtains operational capability data from the data processing systems to determine whether they have access to sufficient computing resources to complete the inference models on time.
  • If a system lacks sufficient resources, the manager can re-assign one or more data processing systems to re-balance the computing resource load and keep at least a portion of the inference models running (a minimal sketch of this re-balancing loop follows this list).
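
The following is a minimal, hypothetical sketch of that re-balancing loop in Python. The patent application does not disclose an implementation; all class names, fields (such as required_capacity, priority, and headroom), and the lowest-priority-first re-assignment policy are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch only: names and the re-assignment policy are assumptions,
# not the patent's disclosed implementation.
from dataclasses import dataclass, field


@dataclass
class InferenceModel:
    name: str
    required_capacity: float  # compute units needed for timely execution (assumed metric)
    priority: int             # lower value = higher priority


@dataclass
class DataProcessingSystem:
    name: str
    capacity: float                      # compute units reported as operational capability data
    assigned: list = field(default_factory=list)

    def load(self) -> float:
        return sum(m.required_capacity for m in self.assigned)

    def headroom(self) -> float:
        return self.capacity - self.load()


class InferenceModelManager:
    """Checks reported capability data and re-balances overloaded systems."""

    def __init__(self, systems):
        self.systems = systems

    def rebalance(self):
        """Move lower-priority models off systems that cannot finish on time."""
        for system in self.systems:
            # Re-assign lowest-priority models first so higher-priority models
            # keep running on their original system.
            for model in sorted(system.assigned, key=lambda m: m.priority, reverse=True):
                if system.headroom() >= 0:
                    break  # this system now has sufficient resources
                target = max(
                    (s for s in self.systems if s is not system),
                    key=lambda s: s.headroom(),
                    default=None,
                )
                if target is not None and target.headroom() >= model.required_capacity:
                    system.assigned.remove(model)
                    target.assigned.append(model)


if __name__ == "__main__":
    a = DataProcessingSystem("system-a", capacity=10.0)
    b = DataProcessingSystem("system-b", capacity=10.0)
    a.assigned = [
        InferenceModel("fraud-detector", required_capacity=6.0, priority=1),
        InferenceModel("recommender", required_capacity=6.0, priority=2),
    ]
    InferenceModelManager([a, b]).rebalance()
    print([m.name for m in a.assigned])  # ['fraud-detector']
    print([m.name for m in b.assigned])  # ['recommender']
```

In this sketch, operational capability data is reduced to a single reported capacity figure per system; a real deployment would presumably track richer metrics (memory, accelerator availability, completion deadlines), which the application leaves unspecified.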

Potential Applications

This technology could be applied in various industries such as healthcare, finance, and manufacturing for optimizing data processing and inference model execution.

Problems Solved

This technology solves the problem of inefficient resource allocation in executing inference models across multiple data processing systems, ensuring timely completion of tasks.

Benefits

The benefits of this technology include improved efficiency, optimized resource utilization, and enhanced performance of inference models.

Potential Commercial Applications

Potential commercial applications of this technology include cloud computing services, data analytics platforms, and AI-driven solutions for businesses.

Possible Prior Art

One possible example of prior art is the use of load-balancing algorithms in distributed computing systems to optimize resource allocation and improve performance.

Unanswered Questions

How does this technology handle security and privacy concerns in managing inference models across multiple data processing systems?

The patent application does not provide details on how security and privacy concerns are addressed in this technology. Additional information on encryption methods, access control mechanisms, and data protection measures would be helpful.

What are the specific criteria used by the inference model manager to determine if a data processing system has sufficient computing resources for timely execution of inference models?

The patent application does not specify the exact criteria or metrics used by the inference model manager to assess the computing resources of data processing systems. Understanding the factors considered in this evaluation process would provide more insights into the functionality of the technology.


Original Abstract Submitted

Methods and systems for managing execution of inference models across multiple data processing systems are disclosed. To manage execution of inference models across multiple data processing systems, a system may include an inference model manager and any number of data processing systems. The inference model manager may obtain operational capability data for the inference models from the data processing systems. The inference model manager may use the operational capability data to determine whether the data processing systems have access to sufficient computing resources to complete timely execution of the inference models. If the data processing systems do not have access to sufficient computing resources to complete timely execution of the inference models, the inference model manager may re-assign one or more data processing systems to re-balance the computing resource load and support continued operation of at least a portion of the inference models.