18750655. ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE simplified abstract (SAMSUNG ELECTRONICS CO., LTD.)

From WikiPatents

ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE

Organization Name

SAMSUNG ELECTRONICS CO., LTD.

Inventor(s)

Jijoong Moon of Suwon-si (KR)

Parichay Kapoor of Suwon-si (KR)

Jihoon Lee of Suwon-si (KR)

Hyeonseok Lee of Suwon-si (KR)

Myungjoo Ham of Suwon-si (KR)

ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE - A simplified explanation of the abstract

This abstract first appeared for US patent application 18750655 titled 'ELECTRONIC DEVICE AND CONTROLLING METHOD OF ELECTRONIC DEVICE'.

The abstract describes an electronic apparatus that stores data related to a neural network model and divides the learning step performed through the model's layers into multiple steps: forward propagation, gradient calculation, and derivative calculation. The apparatus determines an execution order for these steps, integrates that order using information about which step each tensor is used in and whether tensors in neighboring layers can be shared, allocates data to the tensors so as to minimize the memory region required, and trains the neural network model according to the integrated execution order.

  • Memory stores data related to a neural network model
  • Processor divides the learning step into forward propagation, gradient calculation, and derivative calculation steps
  • Determines an execution order for the steps and obtains information about which step each tensor is used in
  • Integrates the execution order with information on whether tensors in neighboring layers can be shared
  • Allocates data to the tensors while minimizing the memory region required
  • Trains the neural network model according to the integrated execution order (see the illustrative sketch after this list)
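To make the flow above concrete, the following Python sketch enumerates per-layer forward, gradient, and derivative steps, fixes one possible execution order, and records which steps each tensor is used in. It is a minimal sketch under assumed tensor names and a toy layer structure; none of the identifiers come from the patent, and it is not the patented method.

```python
# Minimal sketch (not the patent's implementation): split training into
# per-layer forward, gradient (w.r.t. weights), and derivative (w.r.t. inputs)
# steps, pick an execution order, and record where each tensor is used.
from dataclasses import dataclass, field

@dataclass
class Step:
    layer: int
    kind: str                                    # "forward", "gradient", or "derivative"
    tensors: list = field(default_factory=list)  # names of tensors this step touches

def build_steps(num_layers):
    """Enumerate the three step kinds for every layer (hypothetical tensor names)."""
    steps = [Step(i, "forward", [f"act_{i}", f"w_{i}"]) for i in range(num_layers)]
    # The backward pass visits layers in reverse: the gradient step consumes the
    # stored activation and the incoming error to produce a weight gradient, and
    # the derivative step propagates the error to the previous layer.
    for i in reversed(range(num_layers)):
        steps.append(Step(i, "gradient", [f"act_{i}", f"err_{i}", f"grad_w_{i}"]))
        if i > 0:
            steps.append(Step(i, "derivative", [f"w_{i}", f"err_{i}", f"err_{i-1}"]))
    return steps                                 # one possible execution order

def tensor_usage(steps):
    """For every tensor, the step indices in which it is used."""
    usage = {}
    for idx, step in enumerate(steps):
        for name in step.tensors:
            usage.setdefault(name, []).append(idx)
    return usage

if __name__ == "__main__":
    order = build_steps(num_layers=3)
    for idx, s in enumerate(order):
        print(f"step {idx}: {s.kind:10s} layer {s.layer}  tensors={s.tensors}")
    print(tensor_usage(order))
```

The three step kinds mirror the split named in the abstract, and the usage map plays the role of the "first information" about when each tensor is needed during the determined execution order.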

Potential Applications:

  • Artificial intelligence
  • Machine learning
  • Data analysis
  • Pattern recognition
  • Robotics

Problems Solved:

  • Efficient memory allocation for neural network training
  • Optimized execution order for learning steps
  • Enhanced performance of neural network models

Benefits:

  • Faster training of neural network models
  • Improved accuracy in data analysis
  • Reduced memory usage for storing data
  • Enhanced performance in various applications

Commercial Applications: "Optimized Neural Network Training Apparatus for AI Applications"

This technology can be used in industries such as:

  • Healthcare, for medical image analysis
  • Finance, for fraud detection
  • Automotive, for autonomous driving systems
  • Retail, for customer behavior analysis
  • Manufacturing, for quality control processes

Prior Art: Related prior art can be found in research papers, patents, and academic publications on neural network training methods, memory optimization techniques, and artificial intelligence algorithms.

Frequently Updated Research: Researchers are constantly exploring new methods to optimize neural network training, improve memory allocation efficiency, and enhance the performance of AI systems in various applications.

Questions about Neural Network Training Optimization:

1. How does this technology improve the efficiency of memory allocation in neural network training?
2. What are the key factors considered in determining the execution order of learning steps in a neural network model?


Original Abstract Submitted

An electronic apparatus may include a memory configured to store data related to a neural network model and at least one processor configured to divide a learning step performed through a plurality of layers of the neural network model into a plurality of steps including a forward propagation step, a gradient calculation step, and a derivative calculation step, and determine an execution order of the plurality of steps, obtain first information regarding in which step of a plurality of steps according to the determined execution order a plurality of sensors used in the plurality of layers are used, based on the determined execution order, integrate the determined execution order based on the first information and second information regarding whether tensors used in neighboring layers from among the plurality of layers are able to be shared, allocate the data to the plurality of tensors by minimizing a region of the memory for allocating data corresponding to the plurality of tensors, based on the integrated execution order, and train the neural network model according to the integrated execution order using the plurality of tensors and the data allocated to the plurality of tensors. Various other embodiments are possible to be implemented.
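As a rough illustration of the memory-minimizing allocation the abstract describes, the sketch below assigns byte offsets to tensors from their sizes and step-index lifetimes, letting tensors whose lifetimes do not overlap share the same region of memory. It is an assumption-laden toy using a simple greedy heuristic, not the patented allocator, and the example sizes and lifetimes are made up.

```python
# Toy lifetime-based allocator (not the patent's method): tensors whose
# lifetimes do not overlap may be placed at the same offset, shrinking the
# total memory region needed for training.
def allocate(tensors):
    """tensors: dict name -> (size_bytes, (first_step, last_step)).
    Returns (name -> offset, total bytes of the shared memory region)."""
    placed = []                                   # (offset, size, first, last)
    offsets = {}
    # Place larger tensors first -- a common greedy heuristic, purely illustrative.
    for name, (size, (first, last)) in sorted(tensors.items(),
                                              key=lambda kv: -kv[1][0]):
        offset = 0
        while True:
            conflict_end = None
            for p_off, p_size, p_first, p_last in placed:
                lifetimes_overlap = not (last < p_first or first > p_last)
                regions_overlap = not (offset + size <= p_off or p_off + p_size <= offset)
                if lifetimes_overlap and regions_overlap:
                    conflict_end = p_off + p_size
                    break
            if conflict_end is None:
                break                             # found a free offset
            offset = conflict_end                 # skip past the conflicting block
        placed.append((offset, size, first, last))
        offsets[name] = offset
    total = max(offsets[n] + tensors[n][0] for n in offsets)
    return offsets, total

if __name__ == "__main__":
    # Hypothetical sizes and step-index lifetimes, e.g. derived from an
    # execution order like the one sketched earlier.
    demo = {
        "act_0": (1024, (0, 4)),
        "act_1": (1024, (1, 3)),
        "err_1": (1024, (3, 4)),
        "err_0": (1024, (4, 5)),
    }
    offsets, total = allocate(demo)
    print(offsets)
    print("peak memory region:", total, "bytes")
```

In this toy run, four 1 KB tensors fit in a 3 KB region because err_0 reuses the offset freed when act_1's lifetime ends; an allocator along the lines of the abstract would additionally use the second information about whether tensors in neighboring layers can be shared.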