US Patent Application 18211352. MACHINE LEARNING MULTIPLE FEATURES OF DEPICTED ITEM simplified abstract

From WikiPatents

MACHINE LEARNING MULTIPLE FEATURES OF DEPICTED ITEM

Organization Name

Microsoft Technology Licensing, LLC


Inventor(s)

Oren Barkan of Rishon Lezion (IL)


Noam Razin of Jerusalem (IL)


Noam Koenigstein of Tel Aviv (IL)


Roy Hirsch of Ramat Yishai (IL)


Nir Nice of Salit (IL)


MACHINE LEARNING MULTIPLE FEATURES OF DEPICTED ITEM - A simplified explanation of the abstract

  • This abstract appeared for US patent application number 18211352, titled 'MACHINE LEARNING MULTIPLE FEATURES OF DEPICTED ITEM'

Simplified Explanation

The abstract describes a method of using machine learning to analyze multiple images of an item and extract several features from them. A neural network is trained on these images to generate an embedding vector for each feature. In each training iteration, the embedding vector is converted into a probability vector representing the likelihood of each possible value for that feature. This probability vector is then compared with the actual value of the feature in the images, and an error between the two is calculated. The error is used to adjust the parameters of the neural network, improving the embedding vectors generated in the next iteration. This iterative process trains the neural network on the multiple features of the item depicted in the images.
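The per-feature iteration described above can be sketched in code. This is a minimal illustration rather than the patent's implementation: a softmax, a one-hot vector, cross-entropy, and a gradient step are assumed stand-ins for the "probability vector", "value vector", "error", and "parameter adjustment" the abstract names, and all identifiers (`feature_training_step`, `W`, and so on) are hypothetical.

```python
import numpy as np

def softmax(z):
    """Convert a vector of scores into a probability vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def feature_training_step(W, embedding, true_value_index, lr=0.1):
    """One illustrative iteration for a single feature.

    W is a hypothetical (num_values, embed_dim) parameter matrix that
    maps the feature's embedding vector to a score per possible value.
    """
    probs = softmax(W @ embedding)            # embedding -> probability vector
    value_vec = np.zeros_like(probs)
    value_vec[true_value_index] = 1.0         # one-hot vector for the actual value
    error = probs - value_vec                 # compare the two vectors
    W -= lr * np.outer(error, embedding)      # adjust parameters from the error
    loss = -np.log(probs[true_value_index] + 1e-12)
    return W, loss
```

Repeating this step with the same inputs drives the error down, which is the iterative improvement the explanation refers to.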


Original Abstract Submitted

Machine learning multiple features of an item depicted in images. Upon accessing multiple images that depict the item, a neural network is used to machine train on the plurality of images to generate embedding vectors for each of multiple features of the item. For each of multiple features of the item depicted in the images, in each iteration of the machine learning, the embedding vector is converted into a probability vector that represents probabilities that the feature has respective values. That probability vector is then compared with a value vector representing the actual value of that feature in the depicted item, and an error between the two vectors is determined. That error is used to adjust parameters of the neural network used to generate the embedding vector, allowing for the next iteration in the generation of the embedding vectors. These iterative changes continue thereby training the neural network.