US Patent Application 18315072. METHOD AND ELECTRONIC DEVICE FOR DETERMINING OPTIMAL GLOBAL ATTENTION IN DEEP LEARNING MODEL simplified abstract

METHOD AND ELECTRONIC DEVICE FOR DETERMINING OPTIMAL GLOBAL ATTENTION IN DEEP LEARNING MODEL

Organization Name

Samsung Electronics Co., Ltd.


Inventor(s)

Eega Revanth Raj of Hyderabad (IN)

Sai Karthikey Pentapati of Gurgaon (IN)

Raj Narayana Gadde of Bangalore (IN)

Anushka Gupta of Bareilly (IN)

Dongkyu Kim of Suwon-si (KR)

Kwangpyo Choi of Suwon-si (KR)

METHOD AND ELECTRONIC DEVICE FOR DETERMINING OPTIMAL GLOBAL ATTENTION IN DEEP LEARNING MODEL - A simplified explanation of the abstract

  • This abstract first appeared for US patent application number 18315072, titled 'METHOD AND ELECTRONIC DEVICE FOR DETERMINING OPTIMAL GLOBAL ATTENTION IN DEEP LEARNING MODEL'.

Simplified Explanation

The abstract describes an electronic device that determines global attention in a deep learning model. The device comprises four main components: a hardware accelerator, a low-complex global attention generator, a parallel switch, and a series switch.

The hardware accelerator processes the full-frame image one tile at a time, while the low-complex global attention generator produces a channel attention map of the entire full-frame image.
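
The abstract does not disclose how the low-complex generator computes the channel attention map, but a squeeze-and-excitation style mechanism is a plausible low-cost candidate, since it reduces the whole frame to one scalar per channel before any learned layers run. The sketch below (PyTorch; all class and function names are hypothetical, not from the patent) shows such a generator together with the kind of tile iterator the hardware accelerator would consume.

    import torch
    import torch.nn as nn

    class LowComplexChannelAttention(nn.Module):
        # Hypothetical squeeze-and-excitation style generator; the patent
        # does not disclose the actual architecture.
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            # frame: (N, C, H, W). Global average pooling ("squeeze")
            # collapses the spatial dimensions, which keeps the cost low
            # compared with spatial self-attention over the full frame.
            squeezed = frame.mean(dim=(2, 3))   # (N, C)
            return self.fc(squeezed)            # (N, C) channel attention map

    def iter_tiles(frame: torch.Tensor, tile: int = 128):
        # Split the full frame into non-overlapping tiles, the unit the
        # hardware accelerator processes one at a time.
        _, _, h, w = frame.shape
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                yield frame[:, :, y:y + tile, x:x + tile]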

The parallel switch lets the channel attention map bypass the hardware accelerator, while the series switch gates (switches on or off) the connection between the channel attention map and the hardware accelerator.
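
One plausible reading of the two switches, written as control flow (hypothetical function and parameter names; the claims do not spell out the routing logic, and iter_tiles is reused from the sketch above):

    def run_pipeline(frame, accelerator, attention_gen,
                     parallel_switch: bool, series_switch: bool):
        # The generator runs once on the whole frame, independent of tiling.
        attn = attention_gen(frame)                    # (N, C)

        outputs = []
        for tile in iter_tiles(frame):
            y = accelerator(tile)                      # per-tile processing
            if series_switch:
                # Series switch closed: the channel attention map gates
                # (scales) the accelerator's per-tile output. Assumes the
                # accelerator preserves the channel count C.
                y = y * attn[:, :, None, None]
            outputs.append(y)

        if parallel_switch:
            # Parallel switch closed: the attention map bypasses the
            # accelerator and travels alongside the tile outputs instead
            # of through them.
            return outputs, attn
        return outputs, None

Under this reading, the series switch decides whether the global channel attention modulates each tile's features, while the parallel switch decides whether the map skips the accelerator entirely, for example so tile processing need not wait on the full-frame pass.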

In simpler terms, this device helps a deep learning model focus on the important parts of an image. It processes the image piece by piece and generates a map that highlights which channels (feature types) matter most across the whole frame. Two switches control whether and how that map is routed into the per-tile processing path.


Original Abstract Submitted

An electronic device for determining global attention in a deep learning model is provided. The electronic device includes a hardware accelerator, a low-complex global attention generator, a parallel switch, and a series switch. The hardware accelerator is configured to process each tile of a full-frame image and the low complex global attention generator is configured to generate a channel attention map of the full-frame image. The parallel switch is configured to bypass a connection of the channel attention map with the hardware accelerator and a series switch, configured to gate the connection of the channel attention map with the hardware accelerator.