Tata Consultancy Services Limited (20240235961). SERVICE-LEVEL OBJECTIVE (SLO) AWARE EXECUTION OF CONCURRENCE INFERENCE REQUESTS ON A FOG-CLOUD NETWORK simplified abstract

From WikiPatents

SERVICE-LEVEL OBJECTIVE (SLO) AWARE EXECUTION OF CONCURRENCE INFERENCE REQUESTS ON A FOG-CLOUD NETWORK

Organization Name

Tata Consultancy Services Limited

Inventor(s)

Chetan Dnyandeo Phalak of Mumbai (IN)

Dheeraj Chahal of Pune (IN)

Rekha Singhal of Thane West (IN)

SERVICE-LEVEL OBJECTIVE (SLO) AWARE EXECUTION OF CONCURRENCE INFERENCE REQUESTS ON A FOG-CLOUD NETWORK - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240235961 titled 'SERVICE-LEVEL OBJECTIVE (SLO) AWARE EXECUTION OF CONCURRENCE INFERENCE REQUESTS ON A FOG-CLOUD NETWORK'.

The abstract discusses the use of cloud and fog computing for IoT applications, focusing on managing machine learning inference requests efficiently to minimize costs and avoid SLO violations.

  • Fog and cloud computing are complementary technologies for complex IoT application deployments.
  • The volume of data generated by internet-connected devices has increased significantly.
  • Managing data and workloads for real-time predictive decisions using fog computing is challenging.
  • The patent provides systems and methods for automating the execution of ML/DL inference requests using fog and various cloud instances.
  • The generated workflow helps reduce deployment costs and prevents SLO violations.
Potential Applications

This technology can be applied in various industries such as healthcare, manufacturing, smart cities, and agriculture for real-time data processing and decision-making.
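The core idea summarized above, automatically selecting an execution target (fog node, FaaS instance, or MLaaS instance) that meets a latency SLO at minimum cost, can be illustrated with a minimal sketch. The target names, latency estimates, and costs below are hypothetical placeholders, not values from the patent:

```python
from dataclasses import dataclass


@dataclass
class ExecutionTarget:
    """A hypothetical fog or cloud execution option for an inference request."""
    name: str
    est_latency_ms: float    # estimated inference latency on this target
    cost_per_request: float  # estimated cost per inference request


def choose_target(targets, slo_ms):
    """Pick the cheapest target whose estimated latency meets the SLO.

    Falls back to the fastest target if none meets the SLO, so the
    violation is minimized rather than strictly avoided.
    """
    feasible = [t for t in targets if t.est_latency_ms <= slo_ms]
    if feasible:
        return min(feasible, key=lambda t: t.cost_per_request)
    return min(targets, key=lambda t: t.est_latency_ms)


# Illustrative targets: an overloaded fog node, a FaaS instance, an MLaaS instance.
targets = [
    ExecutionTarget("fog-node", est_latency_ms=150.0, cost_per_request=0.0),
    ExecutionTarget("faas-instance", est_latency_ms=120.0, cost_per_request=0.002),
    ExecutionTarget("mlaas-instance", est_latency_ms=60.0, cost_per_request=0.01),
]

print(choose_target(targets, slo_ms=100.0).name)  # -> mlaas-instance
```

With a 100 ms SLO only the MLaaS instance qualifies; relaxing the SLO to 130 ms makes the cheaper FaaS instance feasible and it is chosen instead. The patent's actual workflow generation is more elaborate; this only sketches the cost-versus-SLO trade-off it targets.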

Problems Solved

The technology addresses the challenge of efficiently managing machine learning inference requests in fog computing environments to minimize costs and prevent SLO violations.

Benefits

The technology helps optimize the execution of ML/DL inference requests, reducing deployment costs and ensuring efficient real-time decision-making without violating service-level objectives.

Commercial Applications

Optimizing machine learning inference requests in fog computing environments can benefit companies in industries such as logistics, retail, and telecommunications by improving data processing speed and accuracy.

Prior Art

Prior research in fog and cloud computing, as well as machine learning optimization, can provide valuable insights into similar technologies and approaches.

Frequently Updated Research

Stay updated on the latest advancements in fog computing, cloud technologies, and machine learning optimization to enhance the efficiency and effectiveness of this technology.

Questions about Fog and Cloud Computing

1. How does fog computing differ from cloud computing in managing IoT applications?
2. What are the key challenges in optimizing machine learning inference requests in fog computing environments?


Original Abstract Submitted

Cloud and fog computing are complementary technologies used for complex Internet of Things (IoT) based deployment of applications. With an increase in the number of internet-connected devices, the volume of data generated and processed at higher speeds has increased substantially. Serving a large amount of data and workloads for predictive decisions in real-time using fog computing without service-level objective (SLO) violation is a challenge. The present disclosure provides systems and methods for inference management wherein a suitable execution workflow is automatically generated to execute machine learning (ML)/deep learning (DL) inference requests using fog with various types of instances (e.g., function-as-a-service (FaaS) instance, machine learning-as-a-service (MLaaS) instance, and the like) provided by cloud vendors/platforms. The generated workflow minimizes the cost of deployment as well as SLO violations.
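The abstract is about executing concurrent inference requests without SLO violations. A minimal sketch of that setting, with a stand-in for the actual inference call and a batch-level SLO check (both hypothetical, not the patent's mechanism), might look like:

```python
import concurrent.futures
import time


def run_inference(req_id, latency_ms):
    """Hypothetical stand-in for invoking a fog or cloud inference endpoint."""
    time.sleep(latency_ms / 1000.0)
    return req_id


def execute_concurrent(requests, slo_ms):
    """Run inference requests in parallel and count SLO violations.

    Sketch simplification: elapsed time is measured from batch start for
    every request, rather than from each request's individual arrival.
    """
    violations = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        start = time.monotonic()
        futures = {
            pool.submit(run_inference, rid, lat): rid for rid, lat in requests
        }
        for fut in concurrent.futures.as_completed(futures):
            fut.result()  # propagate any worker exception
            elapsed_ms = (time.monotonic() - start) * 1000.0
            if elapsed_ms > slo_ms:
                violations += 1
    return violations
```

A real SLO-aware system would decide, per request, where to run it (fog, FaaS, or MLaaS) before this point; the sketch only shows the concurrent execution and violation accounting the abstract refers to.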