Samsung Electronics Co., Ltd. (20240289654). ELECTRONIC DEVICE AND OPERATION METHOD OF ELECTRONIC DEVICE FOR PERFORMING CALCULATION USING ARTIFICIAL INTELLIGENCE MODEL simplified abstract

From WikiPatents

ELECTRONIC DEVICE AND OPERATION METHOD OF ELECTRONIC DEVICE FOR PERFORMING CALCULATION USING ARTIFICIAL INTELLIGENCE MODEL

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Jungbae Kim of Suwon-si (KR)

Mooyoung Kim of Suwon-si (KR)

Seungjin Kim of Suwon-si (KR)

Euntaik Lee of Suwon-si (KR)

Jungeun Lee of Suwon-si (KR)

Hyeonsu Lee of Suwon-si (KR)

ELECTRONIC DEVICE AND OPERATION METHOD OF ELECTRONIC DEVICE FOR PERFORMING CALCULATION USING ARTIFICIAL INTELLIGENCE MODEL - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240289654, titled 'ELECTRONIC DEVICE AND OPERATION METHOD OF ELECTRONIC DEVICE FOR PERFORMING CALCULATION USING ARTIFICIAL INTELLIGENCE MODEL'.

Simplified Explanation: The patent application describes an electronic device that loads stored artificial intelligence models, checks which operations its target processor supports, and partitions the model's nodes around any errors so that inference can run efficiently.

Key Features and Innovation:

  • Electronic device with memory storing AI models and programs containing executable instructions.
  • Processors that execute those instructions to load the AI models and run a framework's runtime engine.
  • Identification of whether an operation function is supported on the target processor.
  • Node-by-node checking for errors while executing inference on the AI models.
  • Partitioning of the nodes based on where errors occur: error-free nodes (the first through the n−1st) form one group, and the nth node, on which the error occurred, forms its own group.
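The partitioning scheme in the list above can be sketched in code. This is a hypothetical illustration, not the patent's implementation: the names (`Node`, `runs_without_error`, `partition_nodes`) are invented for clarity, and the error check is abstracted as a callback that would, in practice, attempt execution on the target processor.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Node:
    """One operation node in an AI model's execution graph (illustrative)."""
    name: str


def partition_nodes(
    nodes: List[Node],
    runs_without_error: Callable[[List[Node]], bool],
) -> List[List[Node]]:
    """Grow a partition one node at a time on the target processor.

    When adding the nth node causes an error, close the current partition
    (first through n-1st nodes) as one group and place the failing nth
    node in its own partition, then continue from the next node.
    """
    partitions: List[List[Node]] = []
    current: List[Node] = []
    for node in nodes:
        if runs_without_error(current + [node]):
            current.append(node)          # node runs cleanly: extend the group
        else:
            if current:                   # close the error-free group so far
                partitions.append(current)
            partitions.append([node])     # failing node gets its own group
            current = []
    if current:                           # flush the trailing error-free group
        partitions.append(current)
    return partitions
```

A small usage example, where a hypothetical `custom_op` node is the one the target processor cannot execute:

```python
nodes = [Node("conv1"), Node("relu"), Node("custom_op"), Node("fc")]
ok = lambda group: all(n.name != "custom_op" for n in group)
print([[n.name for n in g] for g in partition_nodes(nodes, ok)])
# [['conv1', 'relu'], ['custom_op'], ['fc']]
```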

Potential Applications: This technology can be applied in various industries such as healthcare, finance, and manufacturing for optimizing operations and improving efficiency.

Problems Solved: This technology addresses the need for efficient execution of AI models on different processors and nodes without errors.

Benefits:

  • Improved performance and efficiency in executing AI models.
  • Enhanced accuracy and reliability in processing data.
  • Streamlined operations and reduced errors in AI inference.

Commercial Applications: Optimizing AI model execution in industries like healthcare for faster diagnosis and treatment recommendations, in finance for fraud detection, and in manufacturing for quality control processes.

Prior Art: Readers can explore prior patents related to AI model execution, processor optimization, and error handling in AI inference to understand the background of this technology.

Frequently Updated Research: Stay updated on the latest advancements in AI model execution, processor optimization, and error handling techniques to enhance the efficiency of this technology.

Questions about AI Model Execution Optimization:

  • How does this technology improve the efficiency of AI model execution on different processors?
  • What are the key benefits of partitioning nodes based on error occurrence in AI inference processes?



Original Abstract Submitted

An electronic device is provided. The electronic device includes memory storing artificial intelligence models and one or more programs including instructions, and one or more processors, wherein the one or more programs including instructions, when executed by the one or more processors, cause the electronic device to: load the artificial intelligence models stored in memory and execute a runtime engine of a framework; identify whether an operation function is supported on a target processor; identify whether a first node for executing an inference on the artificial intelligence models operates without errors, based on the operation function being supported on the target processor; repeat the identification until a last node by adding one more node in case the first node operates without errors; form a first group by creating a partition from the first node to an identified n−1st node, based on the identification that an error occurred on an nth node; and form a second group by creating a partition for the nth node on which the error occurred.