Video Anchors: abstract simplified (18334648)

  • This abstract appeared for patent application number 18334648, titled 'Video Anchors'.

Simplified Explanation

The abstract describes a method for analyzing videos. For each video, the method obtains a set of anchors (time-stamped text) and identifies entities mentioned in the text generated from the video's audio, each associated with the timestamp at which it is mentioned. A language model then assigns an importance value to each entity. Finally, human rater data describing how accurately each anchor's text reflects the video's subject matter is used, together with the importance values, the text, and the entities, to train an anchor model that predicts an entity label for a video anchor.
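The data flow can be pictured with a few simple structures. The sketch below is purely illustrative: the names Anchor, Entity, score_importance, and nearest_entity are assumptions made here, not terms from the application, and a crude mention-frequency count stands in for the language-model importance scoring described in the abstract.

# Illustrative sketch only; not the claimed implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class Anchor:
    playback_time: float    # seconds into the video where the anchor begins
    text: str               # the anchor text


@dataclass
class Entity:
    name: str
    timestamp: float        # when the entity is mentioned in the audio
    importance: float = 0.0 # assigned by a (here, stand-in) language model


def score_importance(transcript: str, entities: List[Entity]) -> None:
    # Stand-in for language-model scoring: use mention frequency as a proxy.
    for entity in entities:
        entity.importance = transcript.lower().count(entity.name.lower())


def nearest_entity(anchor: Anchor, entities: List[Entity]) -> Entity:
    # Illustrative heuristic: pick the entity whose mention timestamp
    # is closest to the anchor's playback time.
    return min(entities, key=lambda e: abs(e.timestamp - anchor.playback_time))


if __name__ == "__main__":
    transcript = "Today we cover gradient descent, then gradient descent variants."
    entities = [Entity("gradient descent", timestamp=12.0)]
    anchors = [Anchor(playback_time=10.0, text="Intro to optimization")]
    score_importance(transcript, entities)
    print(nearest_entity(anchors[0], entities))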


Original Abstract Submitted

In one aspect, a method includes obtaining videos and for each video: obtaining a set of anchors for the video, each anchor beginning at a playback time and including anchor text; identifying, from text generated from audio of the video, a set of entities specified in the text, wherein each entity in the set of entities is associated with a time stamp at which the entity is mentioned; determining, by a language model and from the text generated from the audio of the video, an importance value for each entity; for a subset of the videos, receiving rater data that describes, for each anchor, the accuracy of the anchor text in describing subject matter of the video; and training, using the human rater data, the importance values, the text, and the set of entities, an anchor model that predicts an entity label for an anchor for a video.
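For readers who want a concrete picture of the training step, the following toy sketch shows how rater accuracy judgments, importance values, and extracted entities might be assembled into training examples for an anchor model. The class name ToyAnchorModel, the example tuple layout, and the frequency-voting scheme are assumptions made for illustration only; the application does not specify the model architecture.

# Hypothetical sketch of assembling training examples; not the patented method.
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

# Each training example: (anchor_text, candidate_entity, importance, rated_accurate)
Example = Tuple[str, str, float, bool]


class ToyAnchorModel:
    # Predicts an entity label for an anchor by remembering which entities
    # raters judged accurate for anchors containing the same words.

    def __init__(self) -> None:
        self._label_counts: Dict[str, Counter] = defaultdict(Counter)

    def train(self, examples: List[Example]) -> None:
        for anchor_text, entity, importance, accurate in examples:
            if not accurate:
                continue  # keep only anchors raters judged accurate
            for token in anchor_text.lower().split():
                # Weight each vote by the entity's importance value.
                self._label_counts[token][entity] += importance

    def predict(self, anchor_text: str) -> str:
        votes: Counter = Counter()
        for token in anchor_text.lower().split():
            votes.update(self._label_counts[token])
        return votes.most_common(1)[0][0] if votes else "unknown"


if __name__ == "__main__":
    examples: List[Example] = [
        ("Intro to optimization", "gradient descent", 2.0, True),
        ("Cooking tips", "gradient descent", 2.0, False),
    ]
    model = ToyAnchorModel()
    model.train(examples)
    print(model.predict("optimization basics"))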