17869919. System and Method for Distributed Input/output (IO) Performance Forecasting Across Multiple Machine Learning Models simplified abstract (Dell Products L.P.)

From WikiPatents

System and Method for Distributed Input/output (IO) Performance Forecasting Across Multiple Machine Learning Models

Organization Name

Dell Products L.P.

Inventor(s)

Shaul Dar of Petach Tikva (IL)

Ramakanth Kanagovi of Bengaluru (IN)

Vamsi Vankamamidi of Hopkinton MA (US)

Guhesh Swaminathan of Tamil Nadu (IN)

Swati Smita Sitha of Bengaluru (IN)

System and Method for Distributed Input/output (IO) Performance Forecasting Across Multiple Machine Learning Models - A simplified explanation of the abstract

This abstract first appeared for US patent application 17869919, titled 'System and Method for Distributed Input/output (IO) Performance Forecasting Across Multiple Machine Learning Models'.

Simplified Explanation

The patent application describes a method, computer program product, and computing system for processing input/output (IO) operations on storage objects in a storage system. The storage objects are divided into groups based on the IO operations processed on them, and each group is associated with a specific IO machine learning model. This allows for the forecasting of IO performance data for the storage objects using the group-specific machine learning models.

  • The method involves processing IO operations on storage objects in a storage system.
  • The storage objects are divided into groups based on the IO operations processed on them.
  • Each group is associated with a specific IO machine learning model.
  • The machine learning models are used to forecast IO performance data for the storage objects.
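The steps above can be sketched in a few lines. Everything in this example is illustrative: the object names, the mean-IOPS grouping rule, and the linear-trend "model" are assumptions for clarity, not the patent's actual grouping criteria or machine learning models.

```python
# Illustrative sketch: storage objects are divided into groups based on
# observed IO load, each group gets its own model, and that model
# forecasts the group's next IOPS value. All names and data are hypothetical.

def fit_linear_trend(series):
    """Least-squares line through (t, value) points; returns (slope, intercept)."""
    n = len(series)
    mean_t = (n - 1) / 2
    mean_y = sum(series) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(series))
    var = sum((t - mean_t) ** 2 for t in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_t

def forecast(model, t):
    slope, intercept = model
    return slope * t + intercept

# Recent IOPS samples per storage object (hypothetical telemetry).
histories = {
    "lun-a": [100, 110, 120, 130],  # busy object, rising load
    "lun-b": [105, 115, 125, 135],  # busy object, rising load
    "lun-c": [10, 10, 11, 10],      # quiet object, flat load
}

# Step 1: divide objects into groups based on the IO processed on them
# (here a simple mean-IOPS threshold; a real system might cluster on
# richer IO features such as read/write mix or block size).
groups = {"hot": [], "cold": []}
for name, hist in histories.items():
    groups["hot" if sum(hist) / len(hist) > 50 else "cold"].append(name)

# Step 2: associate each group with its own model, trained on the
# group's average IOPS series.
models = {}
for label, members in groups.items():
    n = len(histories[members[0]])
    pooled = [sum(histories[m][t] for m in members) / len(members) for t in range(n)]
    models[label] = fit_linear_trend(pooled)

# Step 3: forecast the next time step for each group using its
# group-specific model.
next_t = len(next(iter(histories.values())))
forecasts = {label: forecast(model, next_t) for label, model in models.items()}
print(forecasts)  # rising forecast for "hot", near-flat for "cold"
```

The per-group split matters because a single model fit to all objects would blend the rising and flat workloads, underestimating the busy group and overestimating the quiet one.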

Potential applications of this technology:

  • Storage system optimization: The technology can be used to optimize the performance of storage systems by predicting IO performance for different storage object groups and making adjustments accordingly.
  • Resource allocation: By forecasting IO performance data, the technology can help in efficient resource allocation within a storage system, ensuring that resources are allocated based on the predicted needs of different storage object groups.
  • Capacity planning: The forecasting capabilities of the technology can aid in capacity planning for storage systems, allowing for better estimation of future storage needs based on the predicted IO performance.

Problems solved by this technology:

  • Performance optimization: The technology addresses the challenge of optimizing IO performance in storage systems by using machine learning models to forecast performance data for different storage object groups.
  • Resource allocation efficiency: By accurately predicting IO performance, the technology enables efficient resource allocation within a storage system, avoiding over-allocation or under-allocation of resources.
  • Capacity planning accuracy: The forecasting capabilities of the technology improve the accuracy of capacity planning for storage systems, preventing over-provisioning or under-provisioning of storage resources.

Benefits of this technology:

  • Improved storage system performance: By using machine learning models to forecast IO performance, the technology can help in optimizing the performance of storage systems, leading to faster and more efficient data access.
  • Efficient resource utilization: The accurate prediction of IO performance allows for efficient resource allocation within a storage system, ensuring that resources are allocated based on the specific needs of different storage object groups.
  • Cost savings: The technology enables better capacity planning, preventing unnecessary over-provisioning of storage resources and resulting in cost savings for organizations.


Original Abstract Submitted

A method, computer program product, and computing system for processing a plurality of input/output (IO) operations on a plurality of storage objects of a storage system. The plurality of storage objects may be divided into a plurality of storage object groups based upon, at least in part, the plurality of IO operations processed on the plurality of storage objects. Each storage object group may be associated with an IO machine learning model selected from a plurality of IO machine learning models, thus defining a plurality of storage object group-specific IO machine learning models. IO performance data may be forecasted for the plurality of storage objects using the plurality of storage object group-specific IO machine learning models.