International Business Machines Corporation (20240104396). DISTRIBUTED TRAINING PROCESS WITH BOTTOM-UP ERROR AGGREGATION simplified abstract

From WikiPatents

DISTRIBUTED TRAINING PROCESS WITH BOTTOM-UP ERROR AGGREGATION

Organization Name

International Business Machines Corporation

Inventor(s)

Arindam Jati of Bangalore (IN)

Vijay Ekambaram of Chennai (IN)

Sumanta Mukherjee of Bangalore (IN)

Brian Leo Quanz of Yorktown Heights, NY (US)

Pavithra Harsha of White Plains, NY (US)

DISTRIBUTED TRAINING PROCESS WITH BOTTOM-UP ERROR AGGREGATION - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240104396 titled 'DISTRIBUTED TRAINING PROCESS WITH BOTTOM-UP ERROR AGGREGATION'.

Simplified Explanation

The abstract describes a method for generating time-series forecasts using a distributed computing environment and hierarchical data sets.

  • Storing a hierarchical data set
  • Receiving predicted outputs from multiple nodes in a distributed computing environment, where each node runs the time-series forecasting model on a different subset of the lowest-level data
  • Combining the predicted outputs via bottom-up aggregation to generate forecasts for the levels above the lowest level
  • Determining error values for the forecasting model at each level of the hierarchical data set
  • Modifying model parameters based on the determined error values
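The aggregation and per-level error steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two-level hierarchy (two regional leaf series rolling up into a total), the node names, and the use of mean-squared error are all illustrative assumptions.

```python
import numpy as np

# Hypothetical two-level hierarchy: leaf series roll up into a parent node.
# This mapping is illustrative, not taken from the patent.
HIERARCHY = {
    "total": ["region_a", "region_b"],  # parent -> lowest-level children
}

def bottom_up_aggregate(leaf_forecasts):
    """Sum lowest-level forecasts to produce forecasts for each parent node."""
    return {
        parent: sum(leaf_forecasts[child] for child in children)
        for parent, children in HIERARCHY.items()
    }

def per_level_errors(leaf_forecasts, leaf_actuals, parent_actuals):
    """Mean-squared error at the lowest level and at the aggregated level."""
    leaf_err = np.mean([
        np.mean((leaf_forecasts[k] - leaf_actuals[k]) ** 2)
        for k in leaf_forecasts
    ])
    agg = bottom_up_aggregate(leaf_forecasts)
    parent_err = np.mean([
        np.mean((agg[k] - parent_actuals[k]) ** 2)
        for k in agg
    ])
    return {"leaf": leaf_err, "aggregated": parent_err}

# Example: forecasts from two (hypothetical) worker nodes, one per leaf series.
leaf_fc = {"region_a": np.array([10.0, 12.0]), "region_b": np.array([5.0, 6.0])}
leaf_act = {"region_a": np.array([11.0, 12.0]), "region_b": np.array([5.0, 7.0])}
parent_act = {"total": np.array([16.0, 19.0])}
errs = per_level_errors(leaf_fc, leaf_act, parent_act)
```

The per-level error dictionary `errs` would then drive the parameter update in the final step.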

Potential Applications

This technology could be applied in various industries such as finance, retail, and healthcare for demand forecasting, inventory management, and resource planning.

Problems Solved

This technology addresses the challenge of efficiently generating accurate time-series forecasts from large and complex data sets by leveraging distributed computing and hierarchical data structures.

Benefits

The benefits of this technology include improved forecast accuracy, scalability for large data sets, and the ability to adapt model parameters based on error values for continuous improvement.

Potential Commercial Applications

A potential commercial application of this technology could be in the development of advanced forecasting software for businesses looking to optimize their operations and make data-driven decisions.

Possible Prior Art

One example of possible prior art is the use of distributed computing for time-series forecasting in general; the specific approach of combining predicted outputs from different nodes via bottom-up aggregation over a hierarchical data set may be novel.

Unanswered Questions

How does this method compare to traditional time-series forecasting techniques in terms of accuracy and efficiency?

The article does not provide a direct comparison between this method and traditional forecasting techniques.

What are the limitations or constraints of implementing this technology in real-world applications?

The article does not discuss any potential limitations or constraints that may arise when implementing this technology in practical settings.


Original Abstract Submitted

an example operation may include one or more of storing a hierarchical data set, receiving a plurality of predicted outputs from a plurality of nodes in a distributed computing environment, respectively, wherein each predicted output is generated by a different node via execution of a time-series forecasting model on a different subset of lowest level data in the hierarchical data set, combining the plurality of predicted outputs via bottom-up aggregation to generate one or more additional predicted outputs for the time-series forecasting model based on one or more levels above the lowest level in the hierarchical time-series data set, determining error values for the time-series forecasting model at each level of the hierarchical data set based on the received and the one or more additional generated predicted outputs, and modifying a parameter of the time-series forecasting model based on the determined error values.
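The abstract's final step, modifying a model parameter based on the determined error values, can be sketched with a toy model. Everything here is an illustrative assumption: the one-parameter linear model `y_hat = w * x`, the equal weighting of the two levels' losses, and the numerical gradient are stand-ins for whatever model and update rule the patent actually covers.

```python
import numpy as np

def level_losses(w, x_leaves, y_leaves):
    """Mean leaf-level squared error plus the error of the bottom-up sum."""
    preds = {k: w * x for k, x in x_leaves.items()}
    leaf_loss = np.mean([np.mean((preds[k] - y_leaves[k]) ** 2) for k in preds])
    agg_pred = sum(preds.values())   # bottom-up aggregation to the top level
    agg_true = sum(y_leaves.values())
    agg_loss = np.mean((agg_pred - agg_true) ** 2)
    return leaf_loss, agg_loss

def update_parameter(w, x_leaves, y_leaves, lr=0.01, eps=1e-4):
    """One gradient-descent step on the combined multi-level loss,
    using a central-difference numerical gradient."""
    def total(wv):
        leaf, agg = level_losses(wv, x_leaves, y_leaves)
        return leaf + agg            # equal level weights (an assumption)
    grad = (total(w + eps) - total(w - eps)) / (2 * eps)
    return w - lr * grad
```

Iterating `update_parameter` drives `w` toward the value that minimizes the error summed across both hierarchy levels, which is the "modifying a parameter ... based on the determined error values" step in miniature.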