18663438. METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING simplified abstract (Samsung Electronics Co., Ltd.)


METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING

Organization Name

Samsung Electronics Co., Ltd.

Inventor(s)

Sehwan Lee of Suwon-si (KR)

METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING - A simplified explanation of the abstract

This abstract first appeared for US patent application 18663438 titled 'METHOD AND APPARATUS WITH CONVOLUTION NEURAL NETWORK PROCESSING'.

Simplified Explanation

The neural network apparatus described in the patent application selects a shared operand, either a pixel value of the input feature map or a weight value of the kernel, and reuses that single value across parallelized operations performed by multiple processing units.
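The mechanism can be pictured as broadcasting one value to several multipliers at once. The minimal Python sketch below is illustrative only; the function name `parallel_multiply` and the lane model are assumptions, not the apparatus described in the application.

```python
from typing import List

def parallel_multiply(shared: float, lane_operands: List[float]) -> List[float]:
    """Model of parallelized operations: each list element stands for one
    processing unit, and every unit multiplies against the same shared operand."""
    return [shared * x for x in lane_operands]

# Weight-shared case: one kernel weight reused against several pixel values.
pixels = [1.0, 2.0, 3.0, 4.0]
print(parallel_multiply(0.5, pixels))    # one weight, four parallel products

# Pixel-shared case: one pixel value reused against several kernel weights.
weights = [0.25, 0.5, 0.75]
print(parallel_multiply(4.0, weights))   # one pixel, three parallel products
```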

Key Features and Innovation

  • A controller determines the shared operand, choosing either a pixel value of the input feature map or a weight value of the kernel, based on a feature of the input feature map and/or a feature of the kernel (see the sketch after this list).
  • One or more processing units perform the parallelized operations using the determined shared operand.
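This summary does not state the selection rule itself; the sketch below is a hypothetical heuristic that picks whichever operand would be reused more often, purely to illustrate how a controller might use features of the input feature map and the kernel. The class and function names and the reuse-count criterion are assumptions.

```python
from dataclasses import dataclass

@dataclass
class FeatureMap:
    height: int
    width: int

@dataclass
class Kernel:
    height: int
    width: int
    count: int  # number of filters

def choose_shared_operand(fmap: FeatureMap, kernel: Kernel) -> str:
    """Hypothetical controller: share the operand with the larger reuse count.

    A pixel value can be reused once per filter; a weight value can be reused
    once per output position the kernel slides over (unit stride assumed)."""
    pixel_reuse = kernel.count
    weight_reuse = (fmap.height - kernel.height + 1) * (fmap.width - kernel.width + 1)
    return "pixel" if pixel_reuse >= weight_reuse else "weight"

print(choose_shared_operand(FeatureMap(16, 16), Kernel(3, 3, count=64)))  # weight
print(choose_shared_operand(FeatureMap(4, 4), Kernel(3, 3, count=64)))    # pixel
```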

Potential Applications

This technology can be applied in image recognition, natural language processing, and other machine learning tasks.

Problems Solved

This technology addresses the need for efficient parallelized operations in neural networks.

Benefits

  • Improved performance of neural network operations through parallelized processing with a shared operand.
  • Faster execution of tasks such as image recognition and natural language processing.

Commercial Applications

  • This technology can be used in industries such as healthcare for medical image analysis, in autonomous vehicles for object recognition, and in finance for fraud detection.

Prior Art

Readers can explore prior art related to neural network parallelization techniques and shared operands in machine learning.

Frequently Updated Research

Stay updated on the latest research in neural network optimization and parallel processing techniques.

Questions about Neural Network Apparatus

How does the shared operand improve the efficiency of parallelized operations in neural networks?

Sharing one operand lets a single loaded value, either a pixel or a weight, be reused by every processing unit in the same set of parallel operations, reducing redundant operand fetches and keeping the units working in lockstep, which improves throughput.
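As a back-of-the-envelope illustration (an assumed cost model, not figures from the application), sharing removes one operand fetch per lane:

```python
# Assumed cost model: N parallel multiplies each need two operands.
# Without sharing, every lane fetches both, i.e. 2*N loads; with one operand
# shared across all lanes, that operand is fetched once, i.e. N + 1 loads.
N = 16
print("without sharing:", 2 * N)   # 32 operand loads
print("with sharing:   ", N + 1)   # 17 operand loads
```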

What are the potential limitations of using shared operands in neural networks?

Potential limitations include the extra control overhead of determining the shared operand and the need to select it carefully, since a poor choice can reduce parallel efficiency.


Original Abstract Submitted

A neural network apparatus includes one or more processors comprising: a controller configured to determine a shared operand to be shared in parallelized operations as being either one of a pixel value among pixel values of an input feature map and a weight value among weight values of a kernel, based on either one or both of a feature of the input feature map and a feature of the kernel; and one or more processing units configured to perform the parallelized operations based on the determined shared operand.