17545358. IDENTIFYING DIFFERENCES IN COMPARATIVE EXAMPLES USING SIAMESE NEURAL NETWORKS simplified abstract (INTERNATIONAL BUSINESS MACHINES CORPORATION)

IDENTIFYING DIFFERENCES IN COMPARATIVE EXAMPLES USING SIAMESE NEURAL NETWORKS

Organization Name

INTERNATIONAL BUSINESS MACHINES CORPORATION

Inventor(s)

Thai F. Le of West Palm Beach FL (US)

Supriyo Chakraborty of White Plains NY (US)

IDENTIFYING DIFFERENCES IN COMPARATIVE EXAMPLES USING SIAMESE NEURAL NETWORKS - A simplified explanation of the abstract

This abstract first appeared for US patent application 17545358 titled 'IDENTIFYING DIFFERENCES IN COMPARATIVE EXAMPLES USING SIAMESE NEURAL NETWORKS'.

Simplified Explanation

The patent application describes a method for identifying which features differ between two instances of data that have been classified differently. The first instance is input to a first neural network, which generates a first encoding, and the second instance is input to a second neural network, which generates a second encoding. Together the two networks form a Siamese architecture trained to learn similarities between input objects. By comparing the first and second encodings, the method identifies the specific features that contributed to the different classifications.

  • Data instances classified differently are received.
  • First instance is input to a first neural network, generating a first encoding.
  • Second instance is input to a second neural network, generating a second encoding.
  • Neural networks are trained to learn similarities in input objects.
  • Differences in features between the first and second instances are identified based on their encodings.
  • These differences contributed to the different classifications (see the sketch after this list).
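
The following is a minimal sketch, in PyTorch, of the comparison step summarized above. The abstract does not specify the encoder architecture or how encodings are compared, so the FeatureEncoder module, the identify_differences helper, and all dimensions here are illustrative assumptions rather than the applicant's implementation.

```python
# A minimal sketch of the comparison step, assuming a shared-weight Siamese
# encoder (a common choice; the application does not fix the architecture).
# FeatureEncoder, identify_differences, and the sizes below are hypothetical.
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Maps an input instance to a fixed-length encoding."""
    def __init__(self, in_features: int = 16, encoding_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32),
            nn.ReLU(),
            nn.Linear(32, encoding_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def identify_differences(enc_a: torch.Tensor, enc_b: torch.Tensor, top_k: int = 3):
    """Rank encoding dimensions by how much the two encodings disagree."""
    gap = (enc_a - enc_b).abs()
    return torch.topk(gap, k=top_k)

# Two instances that were classified differently.
encoder = FeatureEncoder()
first_instance = torch.randn(16)
second_instance = torch.randn(16)

# Each instance is encoded by (a copy of) the same trained network.
first_encoding = encoder(first_instance)
second_encoding = encoder(second_instance)

# Dimensions with the largest gap point at the features most responsible
# for the differing classifications.
values, dims = identify_differences(first_encoding, second_encoding)
print(f"Largest encoding gaps {values.tolist()} at dimensions {dims.tolist()}")
```

In a typical Siamese setup the two networks share weights, which is why a single encoder instance is reused for both inputs in this sketch.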

Potential Applications

  • Identifying key features that contribute to different classifications in various domains such as image recognition, natural language processing, and fraud detection.
  • Improving the accuracy and interpretability of machine learning models by understanding the specific features that drive different classifications.

Problems Solved

  • Difficulty in pinpointing which features caused two instances of data to be classified differently.
  • Lack of transparency in machine learning models, making it challenging to identify the specific features that drive different classifications.

Benefits

  • Provides insights into the specific features that contribute to different classifications, enhancing interpretability of machine learning models.
  • Enables targeted improvements in classification accuracy by focusing on the identified differences in features.
  • Enhances transparency and trust in machine learning systems by explaining the reasons behind different classifications.


Original Abstract Submitted

A first instance of data and a second instance of data can be received, which have been classified differently. The first instance can be input to a first neural network, the first neural network generating a first encoding associated with the first instance. The second instance can be input to a second neural network, the second neural network generating a second encoding associated with the second instance. The first neural network and the second neural network form a neural network architecture trained to learn similarities in a given pair of input objects. Based on the first encoding and the second encoding, a difference can be identified in features of the first instance and the second instance, which contributed to the first instance and the second instance being classified differently.
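
The abstract states that the paired networks are trained to learn similarities in a given pair of input objects, but it does not name the training objective. One common way to realize this in a Siamese setup is a contrastive loss over encoding distances; the sketch below assumes that objective purely for illustration and works on encodings like those produced by the hypothetical FeatureEncoder above.

```python
# A hedged sketch of one possible training objective for the paired networks:
# a standard contrastive loss that pulls encodings of same-class pairs together
# and pushes different-class pairs apart. The margin value and batch layout are
# illustrative assumptions, not taken from the application.
import torch
import torch.nn.functional as F

def contrastive_loss(enc_a: torch.Tensor,
                     enc_b: torch.Tensor,
                     same_class: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    """same_class is 1.0 for pairs with the same label, 0.0 otherwise."""
    distance = F.pairwise_distance(enc_a, enc_b)
    pull_together = same_class * distance.pow(2)
    push_apart = (1.0 - same_class) * F.relu(margin - distance).pow(2)
    return (pull_together + push_apart).mean()

# Example batch of two encoding pairs: the first pair shares a label, the second does not.
enc_a = torch.randn(2, 8, requires_grad=True)
enc_b = torch.randn(2, 8)
same_class = torch.tensor([1.0, 0.0])

loss = contrastive_loss(enc_a, enc_b, same_class)
loss.backward()  # during training these gradients would update the shared encoder weights
print(f"contrastive loss: {loss.item():.4f}")
```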