18387908. LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM simplified abstract (NEC Corporation)

LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM

Organization Name

NEC Corporation

Inventor(s)

Toshinori Araki of Tokyo (JP)

Kazuya Kakizaki of Tokyo (JP)

Inderjeet Singh of Tokyo (JP)

LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM - A simplified explanation of the abstract

This abstract first appeared for US patent application 18387908 titled 'LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM'.

Simplified Explanation

The learning device described in the abstract trains a neural network on two groups of data: ordinary base data and adversarial examples that have been determined to induce errors in estimation. Both groups update the shared partial network, but each is normalized by its own dedicated layer; a minimal training-loop sketch follows the list below.

  • Base data group: a group of multiple data points, used to update the parameter values of the partial network and the second normalization layer.
  • Adversarial data group: adversarial examples generated from the base data; only those determined to induce an error in estimation are used to update the parameter values of the partial network and the first normalization layer.
  • Neural network components: a shared partial network, plus a first and a second normalization layer, each normalizing the data input to it.
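
This routing of clean and adversarial batches through separate normalization layers resembles auxiliary-batch-norm training schemes such as AdvProp, although the abstract does not name one. The following PyTorch sketch is a minimal illustration under that reading; DualNormBlock, head, and make_adv are assumed, illustrative names rather than the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualNormBlock(nn.Module):
    """Shared ('partial') network with two normalization layers.
    Adversarial batches route through the first normalization layer,
    base batches through the second; the conv weights are shared."""
    def __init__(self, channels):
        super().__init__()
        self.partial = nn.Conv2d(channels, channels, 3, padding=1)
        self.norm_first = nn.BatchNorm2d(channels)   # updated by adversarial data
        self.norm_second = nn.BatchNorm2d(channels)  # updated by base data

    def forward(self, x, adversarial=False):
        h = self.partial(x)
        return self.norm_first(h) if adversarial else self.norm_second(h)

def training_step(block, head, opt, x, y, make_adv):
    """One step: base data updates the partial net + second norm layer;
    only adversarial examples that actually flip the prediction
    ('determined to induce an error') update the partial net + first norm."""
    opt.zero_grad()
    loss = F.cross_entropy(head(block(x, adversarial=False)), y)
    x_adv = make_adv(x, y)  # assumed attack, e.g. PGD/FGSM; not specified here
    with torch.no_grad():
        wrong = head(block(x_adv, adversarial=True)).argmax(1) != y
    if wrong.any():  # keep only the error-inducing adversarial examples
        loss = loss + F.cross_entropy(
            head(block(x_adv[wrong], adversarial=True)), y[wrong])
    loss.backward()
    opt.step()
```

Here head is any classifier head (for example, pooling followed by a linear layer), and opt should cover the parameters of both block and head.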

Potential Applications

This technology could be applied in:

  • Cybersecurity, for detecting and preventing attacks that rely on adversarial examples.
  • Image recognition systems, to improve accuracy and robustness against adversarial attacks.

Problems Solved

This technology addresses:

  • The vulnerability of neural networks to adversarial attacks.
  • The limited accuracy and reliability of neural network estimations under adversarial input.

Benefits

The benefits of this technology include:

  • Enhanced security and robustness of neural networks.
  • Improved performance and accuracy in estimation tasks.

Potential Commercial Applications

This technology could be commercialized by:

  • Security software companies offering advanced threat detection solutions.
  • AI companies developing image recognition systems with improved accuracy and reliability.

Possible Prior Art

One possible piece of prior art in this field is the use of adversarial training techniques to enhance the robustness of neural networks against adversarial attacks.
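
For concreteness, the sketch below shows that prior art: a classic adversarial-training step with the fast gradient sign method (FGSM) and a single set of normalization statistics. The function names and the eps value are illustrative assumptions, included only to contrast with the dual-normalization scheme described above.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Fast gradient sign method: perturb x in the direction that
    increases the loss to craft an adversarial example."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def adversarial_training_step(model, opt, x, y, eps=0.03):
    """Classic adversarial training (the cited prior art): clean and
    adversarial data share one set of normalization statistics."""
    x_adv = fgsm(model, x, y, eps)
    opt.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    opt.step()
```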

Unanswered Questions

How does this technology compare to existing methods for defending against adversarial attacks in neural networks?

This article does not provide a direct comparison with other existing methods for defending against adversarial attacks in neural networks. It would be interesting to see a detailed analysis of the effectiveness and efficiency of this approach compared to traditional defense mechanisms.

What are the potential limitations or drawbacks of using adversarial examples to update parameter values in neural networks?

The article does not discuss any potential limitations or drawbacks of using adversarial examples in this context. It would be valuable to explore any potential downsides, such as increased computational complexity or the risk of overfitting, associated with this approach.


Original Abstract Submitted

A learning device for a neural network uses a base data group, which is a group including a plurality of data, to update a parameter value of the partial network and a parameter value of the second normalization layer, and uses an adversarial example determined to induce an error in estimation using the neural network, among adversarial examples included in an adversarial data group, which is a group including a plurality of adversarial examples with respect to the data included in the base data group, to update the parameter value of the partial network and a parameter value of the first normalization layer. The neural network includes a partial network, a first normalization layer normalizing data input to the first normalization layer itself, and a second normalization layer normalizing data input to the second normalization layer itself.